Archive for category mathematics
Given vectors $u$ and $v$, the three quantities $\langle u, u\rangle$, $\langle v, v\rangle$ and $\langle u, v\rangle$ are very interesting indeed.
In fact, the concept of an inner product vector space revolves largely around those quantities. The underlying mathematical fact is the Cauchy-Schwarz inequality $|\langle u, v\rangle|^2 \le \langle u, u\rangle \, \langle v, v\rangle$, which is a corollary of the axioms of an inner product vector space.
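The inequality is easy to check numerically. Here is a minimal sketch in Python (NumPy assumed available); the function name `cauchy_schwarz_gap` is mine, chosen for illustration:

```python
import numpy as np

def cauchy_schwarz_gap(u, v):
    """Return <u,u><v,v> - <u,v>^2, which Cauchy-Schwarz says is >= 0."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.dot(u, u) * np.dot(v, v) - np.dot(u, v) ** 2

# Probe the inequality with random vectors.
rng = np.random.default_rng(0)
gaps = [cauchy_schwarz_gap(rng.standard_normal(5), rng.standard_normal(5))
        for _ in range(1000)]
print(min(gaps) >= -1e-12)  # True: the inequality held in every trial
```

Note that the gap vanishes exactly when $u$ and $v$ are parallel, which is the equality case of the inequality.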
The concept of Concentration of Measure (CoM) quantifies the tendency of an $n$-dimensional probability distribution to “lump” or concentrate around an $(n-1)$-dimensional submanifold. The phenomenon is that this tendency is especially strong in high dimensions. Viewed another way, this is about the interaction between probability and distance in high dimensions.
This is (hopefully) the start of a series of posts that will explain this concept. I would like to review a few concepts and definitions first though.
We denote by $(X, d, \mu)$ a metric measure space with metric $d$ and probability measure $\mu$. A probability measure is in effect a normalised measure, such that the measure of the whole space is 1. In such a space, the $\epsilon$-extension of a set $A$ is denoted by $A_\epsilon$ and is defined as $A_\epsilon = \{x \in X : d(x, A) \le \epsilon\}$. Of course, the $\epsilon$-extension of $A$ includes all of $A$. What it does beyond that is fatten $A$ by a width of $\epsilon$. Among other things, this means that $A_\epsilon \setminus A$ is a shell enveloping $A$, and that for small values of $\epsilon$, the volume (measure, if you want to be pedantic) of this shell is approximately the surface area (surface measure) of the set multiplied by $\epsilon$.
Now we can define the concentration rate of a space. The basic idea is: of all the sets that span half of the total volume of the space, find the one whose $\epsilon$-extension spans the smallest volume. We then declare the concentration rate of the space to be the volume outside that extension. In math terms: $\alpha(\epsilon) = \sup\{\, 1 - \mu(A_\epsilon) : A \subseteq X,\ \mu(A) \ge 1/2 \,\}$.
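A toy worked example: take $[0,1]$ with the usual distance and Lebesgue measure. The half-interval $[0, 1/2]$ leaves mass $1/2 - \epsilon$ outside its extension, while more fragmented half-volume sets leave less, so $\alpha(\epsilon) = 1/2 - \epsilon$ here. The sketch below (my own grid-based discretization, NumPy assumed) illustrates this numerically:

```python
import numpy as np

def mass_outside_extension(indicator, eps, n=2001):
    """Fraction of grid mass on [0, 1] outside the eps-extension of a set."""
    x = np.linspace(0.0, 1.0, n)
    pts = x[indicator(x)]                       # grid points of the set A
    # distance from each grid point to the nearest point of A
    d = np.min(np.abs(x[:, None] - pts[None, :]), axis=1)
    return np.mean(d > eps)

eps = 0.1
half_interval = mass_outside_extension(lambda x: x <= 0.5, eps)
two_pieces = mass_outside_extension(lambda x: (x <= 0.25) | (x >= 0.75), eps)
print(round(half_interval, 2))  # ~0.40 = 1/2 - eps
print(round(two_pieces, 2))     # ~0.30 = 1/2 - 2*eps: a smaller remainder
```

Splitting the half-volume set into two pieces grows the extension on more boundaries, which is exactly the isoperimetric intuition mentioned below.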
A few points can be made here. First, one would expect the set whose extension spans the smallest volume to be the one which has the smallest surface area, by virtue of the relation between them. And in fact, one would be right; this is the subject of the Isoperimetric Inequalities. Second, the concentration rate is the maximum possible volume outside an extended half-volume set, so using this property amounts to utilizing certain inequalities to bound, say, the probability of error. This is useful, for example, in the analysis of randomized algorithms. Third, the current definition yields one-sided inequalities, but it is very easy to see that two-sided inequalities can be derived which bound the volume outside a shell of width $\epsilon$ around the surface of any set of half volume.
The more the measure is concentrated around the extensions of half-volume sets, the smaller the concentration rate is. What we really want, and what CoM is about, is a very quick decay of the concentration rate as $\epsilon$ increases. This will be useful for us to decide, in a learning algorithm, which value of $\epsilon$ to cut learning off at with a small probable error. But that is for another day.
And yes, it would be good if Posterous supported LaTeX.
This Posterous service looks like a great thing.
The picture attached is the ratio of the volume of a hyperball to its circumscribing hypercube in N dimensions.
The mathematical relation is actually $\dfrac{V_{\mathrm{ball}}}{V_{\mathrm{cube}}} = \dfrac{\pi^{N/2}}{2^N \, \Gamma(N/2 + 1)}$.
I need to review both the relation and the LaTeX text as I am writing from memory, but this should be good enough for a test post.
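The ratio is straightforward to tabulate. A short Python sketch using only the standard library (`math.gamma` gives the Gamma function); it shows how quickly the ball's share of the cube collapses as the dimension grows:

```python
import math

def ball_to_cube_ratio(n):
    """Volume of the unit n-ball over that of its circumscribing cube (side 2)."""
    return math.pi ** (n / 2) / (2 ** n * math.gamma(n / 2 + 1))

for n in (1, 2, 3, 10):
    print(n, ball_to_cube_ratio(n))
# n=1 gives 1, n=2 gives pi/4, n=3 gives pi/6, and by n=10 the
# ratio is already tiny -- the cube's volume lives in its corners.
```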
This is simply brilliant. Humans have always been good at visualisation, and puzzles always generate more interest in study than dry math symbols. I was just going to comment on it there, but there is more to this than meets the eye, so I will have to expand this entry. Stay tuned!
The best approach I have come across so far. Unsurprisingly, it does have a Knuth-like quality. At least, I think this is a much more sensible way to explain the definition to beginners.
I will probably copy all of the important proofs and concepts here as a personal exercise.
Hi, I am back.
Things have progressed reasonably well. I am reasonably happy with my math skills now, though I haven't pinned down every concept. I am now crossing my fingers for no relapse.
For the time being, my definition of mathematics is recognizing patterns, abstracting them, and using the results in conjunction with previous abstractions. It remains to be seen whether this will survive.
Back to Space: the issue of Vector Space has undergone a good treatment recently, in Prof Gowers’ weblog. Do pay attention to Terence Tao’s comment and his linked notes. I personally find the linear transformation approach more useful than matrices, to the extent that I sometimes think of a matrix entry as $(T e_j)_i$, the $i$-th coordinate of the transform of the $j$-th basis vector, just to avoid breaking the conceptual transformation approach. I probably find that easier as a result of using SciPy, where up to a certain extent you gain in performance by thinking of matrices as black boxes, akin to vectorization. Matlab users probably experience the same effect. By the way, Tao’s comment about the idea of a linear transformation being a multidimensional generalisation of a ratio did not help at first, but it is now resonating with another idea that I have. More on that later. In short, the concept of a linear transformation is the one embedded in that of a vector space.
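The black-box view can be made concrete: given any linear map, its matrix can be recovered column by column by feeding it the standard basis vectors. A small sketch of that idea (NumPy assumed; `matrix_of` is my own illustrative helper):

```python
import numpy as np

def matrix_of(T, n):
    """Build the n x n matrix of a linear map T on R^n, column by column:
    the j-th column is T applied to the j-th standard basis vector."""
    eye = np.eye(n)
    return np.column_stack([T(eye[:, j]) for j in range(n)])

A = np.array([[2.0, 1.0], [0.0, 3.0]])
T = lambda v: A @ v                     # the transformation as a black box
print(np.allclose(matrix_of(T, 2), A))  # True: every entry is recovered
```

This is just the statement that the $(i, j)$ entry is the $i$-th coordinate of the transform of the $j$-th basis vector, turned into code.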
Ok. So the last post wasn’t very successful (I am not going to argue about the definition of successful here). My explanation of space wasn’t so good, because almost everything can be described as a set with structure, according to mathematicians. Now I am probably irritating some Category Theory proponents as well. Oh well.
As a corrective remedy, I am going to spend some time in a mathematical rehab. You will be able to follow that here. I will probably start by using the spaces mentioned in that post to explain various mathematical concepts.
And, as for the definition of space, let’s just say that a space is a space because a mathematician says so. And I am happy that Wikipedia’s take is not that much better than my post.