
Concentration of Measure I

The concept of Concentration of Measure (CoM) quantifies the tendency of an N-dimensional probability distribution to “lump”, or concentrate, around an (N-1)-dimensional submanifold. The phenomenon is that this tendency becomes especially strong in high dimensions. Viewed another way, this is about the interaction between probability and distance in high dimensions.

This is (hopefully) the start of a series of posts that will explain this concept. I would like to review a few concepts and definitions first though.

We denote by X a metric measure space with metric d and probability measure \mu. A probability measure is in effect a normalised measure, such that the measure of the whole space is 1. In such a space, the \epsilon-extension of a set A is denoted by A_\epsilon and is defined as A_\epsilon=\left\{ x|d(x,A)<\epsilon \right\}. Of course, the \epsilon-extension of A includes all of A. Beyond that, it fattens A by a width of \epsilon. Among other things, this means that A_\epsilon \setminus A is a shell enveloping A, and that for small values of \epsilon, the volume of this shell (its measure, if you want to be pedantic) is approximately the surface area (surface measure) of the set A multiplied by \epsilon.
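
A toy illustration of the last point: take X=[0,1] with the usual distance and Lebesgue measure, and A=\left[ 0,\frac{1}{2} \right]. Then A_\epsilon=\left[ 0,\frac{1}{2}+\epsilon \right), and the shell A_\epsilon \setminus A has measure exactly \epsilon, i.e. the width \epsilon times a “surface measure” of 1 contributed by the single exposed boundary point at \frac{1}{2}.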

Now we can define the concentration rate of a space. The basic idea is: of all the sets A that span half of the total volume of the space, find the one whose \epsilon-extension spans the smallest volume. We then declare the concentration rate of the space to be the volume outside that extension. In math terms: \alpha(\epsilon)=\sup\left\{ \mu\left( X \setminus A_\epsilon \right) | \mu(A)=\frac{1}{2} \right\}.
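
To get a feel for the dimension dependence, here is a minimal numerical sketch (Python/NumPy; the sample size and the value of \epsilon are chosen only for illustration). It draws points uniformly from the unit sphere S^{n-1} equipped with the geodesic distance and uniform measure, takes A to be a hemisphere, which has measure \frac{1}{2}, and estimates \mu\left( X \setminus A_\epsilon \right); since by Lévy's isoperimetric inequality the hemisphere is extremal among half-measure sets on the sphere, this is also an estimate of \alpha(\epsilon) itself.

    import numpy as np

    rng = np.random.default_rng(0)
    eps = 0.2          # geodesic width of the extension (radians)
    samples = 100_000  # Monte Carlo sample size

    for n in (3, 30, 300, 3000):
        # Uniform points on the unit sphere S^{n-1}: normalise Gaussian vectors.
        x = rng.standard_normal((samples, n))
        x /= np.linalg.norm(x, axis=1, keepdims=True)

        # A = hemisphere {x : x_1 <= 0}, so mu(A) = 1/2.  Under the geodesic
        # distance, X \ A_eps = {x : x_1 >= sin(eps)}, and the fraction of
        # samples landing there estimates mu(X \ A_eps).
        outside = np.mean(x[:, 0] >= np.sin(eps))
        print(f"n = {n:5d}   estimated mu(X \\ A_eps) = {outside:.5f}")

With \epsilon=0.2 the estimated fraction falls from roughly 0.4 at n=3 to numerically zero by n=3000, which is the concentration phenomenon in miniature.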

A few points can be made here. First, one would expect the set whose extension spans the smallest volume to be the one with the smallest surface area, by virtue of the relation between them. And in fact, one would be right; this is the subject of the Isoperimetric Inequalities. Second, the concentration rate is the largest possible volume outside the extension of a half-volume set, so bounds on it translate into inequalities that limit, say, the volume on which an error can occur. This is useful, for example, in the analysis of randomized algorithms. Third, the current definition yields one-sided inequalities, but it is very easy to see that two-sided inequalities can be derived which limit the volume outside a shell of width 2\epsilon around the surface of any set of half volume.
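
To spell out that last point: if \mu(A)=\frac{1}{2} then \mu\left( X \setminus A \right)=\frac{1}{2} as well, so the definition bounds both tails, \mu\left( X \setminus A_\epsilon \right) \le \alpha(\epsilon) and \mu\left( X \setminus \left( X \setminus A \right)_\epsilon \right) \le \alpha(\epsilon). Any point that avoids the shell A_\epsilon \cap \left( X \setminus A \right)_\epsilon, which straddles the surface of A to depth \epsilon on each side and so has width 2\epsilon, must lie in one of those two complements, so the measure outside the shell is at most 2\alpha(\epsilon).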

The more the measure is concentrated around the half-volume sets, the smaller the concentration rate is. What we really want, and what CoM is about, is a very quick decay of the concentration rate as \epsilon increases. This will be useful when, in a learning algorithm, we need to decide at which value of \epsilon to cut off learning while keeping the probable error small. But that is for another day.
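
To anticipate the canonical example of such decay: on the n-dimensional unit sphere with the geodesic distance and the uniform (normalised surface) measure, Lévy's inequality gives \alpha(\epsilon) \le C e^{-(n-1)\epsilon^2/2} for an absolute constant C, so the concentration rate decays like a Gaussian in \epsilon, and the faster the higher the dimension.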

And yes, it would be good if Posterous supported LaTeX...



Normal Law of Errors

Everybody firmly believes in it because the mathematicians imagine it is a fact of observation, and observers that it is a theory of mathematics.

Henri Poincaré

So, which is it? To apportion the blame, I suggest that the fact of observation be that “errors” are independent and identically distributed with finite variances. Observers may as well vote for Lyapunov’s set of conditions. I am sure that mathematicians are more than happy to shoulder the responsibility for the resulting Central Limit Theorem.
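
For the record, the two verdicts: in the i.i.d. version (Lindeberg–Lévy), X_1, X_2, \dots are independent and identically distributed with mean \mu and variance \sigma^2<\infty, and \frac{1}{\sigma\sqrt{n}}\sum_{i=1}^{n}\left( X_i-\mu \right) converges in distribution to N(0,1). Lyapunov's conditions drop the identical-distribution requirement: writing s_n^2=\sum_{i=1}^{n}\sigma_i^2, it is enough that \frac{1}{s_n^{2+\delta}}\sum_{i=1}^{n}E\left| X_i-\mu_i \right|^{2+\delta}\to 0 for some \delta>0, and then \frac{1}{s_n}\sum_{i=1}^{n}\left( X_i-\mu_i \right) converges in distribution to N(0,1).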
