The Genetic and Lexical Stacks
Many complex phenomena can be decomposed into a stack of layers. For example, one might decompose contemporary scientific theory as follows: physics -- chemistry -- biology -- psychology -- sociology.
I just learned about Knuth's up-arrow notation yesterday. Basically, the up-arrow answers the question "What comes next in the sequence \((+, \times, \wedge)\)?" The next operator is iterated exponentiation, also known as tetration. Later operators in the sequence are called "higher-order", and each may be defined in terms of the operator of the previous order.
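To make the recursion concrete, here is a minimal Python sketch of the up-arrow operators (my own illustration, not taken from any particular source): one arrow is exponentiation, and each additional arrow iterates the operator one level below.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow a ↑^n b, defined recursively.

    n = 1 is ordinary exponentiation (a ** b); each higher n
    iterates the previous operator, so n = 2 is tetration, etc.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 2 ↑↑ 3 = 2 ** (2 ** 2) = 16; 2 ↑↑ 4 = 2 ** 16 = 65536
print(up_arrow(2, 2, 3))  # 16
print(up_arrow(2, 2, 4))  # 65536
```

Even tiny inputs blow up quickly at higher orders, which is exactly what the notation was invented to express.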
K-nearest neighbor (KNN) regression is a popular machine learning algorithm. Without visualization, however, one might not notice some of its quirks, such as the jagged, piecewise-constant shape of its predictions. Below I give a visualization of KNN regression that shows this quirkiness.
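As a rough sketch of the kind of visualization I mean (assuming scikit-learn and matplotlib are available; the data and parameters here are invented purely for illustration):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsRegressor

# Noisy samples of a smooth function
rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 40)

# Predict on a dense grid to expose the step-like prediction curve
grid = np.linspace(0, 5, 500).reshape(-1, 1)
for k in (1, 5, 15):
    model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
    plt.plot(grid.ravel(), model.predict(grid), label=f"k = {k}")

plt.scatter(X, y, c="k", s=15, label="training data")
plt.legend()
plt.show()
```

With uniform weights, the prediction at any point is the average of a fixed number of neighbors, so the curve jumps wherever the neighbor set changes; that is what produces the flat steps the plot reveals.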
My interests in evolutionary algorithms on one hand and language on the other have led me to ponder the evolution of language.
I've put together a PDF containing the revised HSK vocabulary for levels 1--6, sorted to maximize word-learning rate. The list was sourced from Lingomi, and the sorting was done with the MaxRank method. I have found this presentation of the list especially useful, so I'm posting it here in the hope that others can benefit too.
Studying vocabulary in thematic groups is an effective way to learn. As is often the case, though, good learning materials are hard to find. For thematic vocabulary, we want sources that simultaneously do the following:
Specifically for Chinese, I've found two excellent resources thus far.
While taking notes for ai-class, I found myself in a conundrum that any amateur mathematician can relate to: I ran out of appropriate letters in the Roman and Greek alphabets.
The history of English fascinates me. Here follows my very brief but hopefully reasonably factual account.
I'm curious about unsupervised word sense disambiguation, and about unsupervised machine learning in general. For that, Manning and Schütze tell us we need clustering, so I jumped ahead to Chapter 14 to experiment with clustering algorithms.
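For a flavor of the flat clustering covered there, here is a minimal k-means (Lloyd's algorithm) sketch in NumPy. This is my own toy implementation, not code from the book, and the example data is made up.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Cluster `points` (an n x d array) into k groups via Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    # Initialize centroids as k distinct random data points
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned points
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # converged
        centroids = new_centroids
    return labels, centroids

# Two well-separated clusters in 2-D
data = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                 [5.0, 5.0], [5.1, 4.9], [4.9, 5.2]])
labels, centroids = kmeans(data, k=2)
print(labels)  # e.g. [0 0 0 1 1 1] (cluster ids may be permuted)
```

For word sense disambiguation one would cluster context vectors of a word's occurrences rather than raw 2-D points, but the algorithmic skeleton is the same.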
The other day I pulled out the Chinese copy of Alice in Wonderland that I picked up in Beijing last year. My intention was to lie in the sun by the lake until I had finished the first page, using the dictionary as needed for basic comprehension. The result: a bad sunburn and only two of four paragraphs finished. What went wrong?