Archive for category mathematics

The three musketeers

Given vectors \mathbf{x} and \mathbf{y}, the three quantities \mathbf{x}\cdot\mathbf{x}, \mathbf{x}\cdot\mathbf{y} and \mathbf{y}\cdot\mathbf{y} are very interesting indeed.

In fact, the concept of an inner product space revolves largely around these quantities. The underlying mathematical fact is the Cauchy-Schwarz inequality (\mathbf{x}\cdot\mathbf{y})^2 \le  (\mathbf{x}\cdot\mathbf{x})\times (\mathbf{y}\cdot\mathbf{y}), which is a corollary of the axioms of an inner product space.
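Here is a quick numerical check of that inequality (a minimal NumPy sketch of my own; the dimension and the vectors are arbitrary):

    import numpy as np

    # Two arbitrary random vectors; the dimension does not matter here.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5)
    y = rng.standard_normal(5)

    xx = x @ x   # x . x
    xy = x @ y   # x . y
    yy = y @ y   # y . y

    # Cauchy-Schwarz: (x . y)^2 <= (x . x)(y . y)
    print(xy**2, "<=", xx * yy, ":", xy**2 <= xx * yy)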


Concentration of Measure I

The concept of Concentration of Measure (CoM) quantifies the tendency of an N-dimensional probability distribution to “lump” or concentrate around an (N-1)-dimensional submanifold. The phenomenon is that this tendency is especially large in high dimensions. Viewed another way, this is about the interaction between probability and distance in high dimensions.

This is (hopefully) the start of a series of posts that will explain this concept. I would like to review a few concepts and definitions first though.

We denote by X a metric measure space with metric d and probability measure \mu. A probability measure is in effect a normalised measure: the measure of the whole space is 1. In such a space, the \epsilon-extension of a set A is denoted by A_\epsilon and is defined as A_\epsilon=\left\{ x|d(x,A)<\epsilon \right\}. Of course, the \epsilon-extension of A includes all of A; beyond that, it fattens A by a width of \epsilon. Among other things, this means that A_\epsilon \setminus A is a shell enveloping A, and that for small values of \epsilon, its volume (measure, if you want to be pedantic) is approximately the surface area (surface measure) of A multiplied by \epsilon.
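To make the \epsilon-extension concrete, here is a minimal sketch of my own, taking A to be a finite set of points in the plane with the Euclidean metric assumed:

    import numpy as np

    # A: a finite set of points in the plane; the Euclidean metric is assumed.
    A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

    def dist_to_A(x):
        """d(x, A): the distance from a point x to the set A."""
        return np.min(np.linalg.norm(A - np.asarray(x), axis=1))

    def in_extension(x, eps):
        """Membership in the eps-extension A_eps = {x : d(x, A) < eps}."""
        return dist_to_A(x) < eps

    print(in_extension([0.05, 0.05], 0.1))   # True: within 0.1 of a point of A
    print(in_extension([0.5, 0.5], 0.1))     # False: more than 0.1 away from all of A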

Now we can define the concentration rate of a space. The basic idea is: of all the sets A that span half of the total volume of the space, find the one whose \epsilon-extension spans the smallest volume. We then declare the concentration rate of the space to be the volume outside that extension. In math terms: \alpha(\epsilon)=\sup\left\{ \mu\left( X \setminus A_\epsilon \right) | \mu(A)=\frac{1}{2} \right\}.
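As an illustration (my own, and a standard one): take X to be the real line with the standard Gaussian measure. The half-line A = (-\infty, 0] has \mu(A)=\frac{1}{2}, its \epsilon-extension is (-\infty, \epsilon), and \mu\left( X \setminus A_\epsilon \right)=1-\Phi(\epsilon). By the Gaussian isoperimetric inequality (not proved here), half-lines are extremal, so \alpha(\epsilon)=1-\Phi(\epsilon), which is bounded above by \frac{1}{2}e^{-\epsilon^2/2}. A small SciPy sketch:

    import numpy as np
    from scipy.stats import norm

    # Gaussian measure on the line: A = (-inf, 0] has measure 1/2, its
    # eps-extension is (-inf, eps), and the mass left outside is 1 - Phi(eps).
    # Half-lines are extremal here, so this is alpha(eps) for this space.
    for eps in [0.5, 1.0, 2.0, 3.0]:
        alpha = 1.0 - norm.cdf(eps)
        bound = 0.5 * np.exp(-eps**2 / 2.0)   # standard Gaussian tail bound
        print(f"eps={eps}: alpha={alpha:.4f} <= {bound:.4f}")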

A few points can be made here. First, one would expect the set whose extension spans the smallest volume to be the one with the smallest surface area, by virtue of the relation between them. And in fact, one would be right; this is the subject of the isoperimetric inequalities. Second, the concentration rate is the maximum possible volume outside an extended half-volume set, so using this property amounts to invoking certain inequalities to bound, say, the volume of error. This is useful, for example, in the analysis of randomized algorithms. Third, the current definition yields one-sided inequalities, but it is very easy to see that two-sided inequalities can be derived which bound the volume outside a shell of width 2\epsilon around the boundary of any set of half volume.

The more the measure concentrates around any set of half volume, the smaller the concentration rate. What we really want, and what CoM is about, is a very quick decay of the concentration rate as \epsilon increases. This will let us decide, in a learning algorithm, at which value of \epsilon to cut off learning while keeping the probable error small. But that is for another day.

And yes, it would be good if Posterous supported LaTeX…


Testing Posterous — and the ratio of the volumes of hyperballs and hypercubes in high dimensions

This Posterous service looks like a great thing. The picture attached shows the ratio of the volume of a hyperball to that of its circumscribing hypercube in N dimensions. The mathematical relation is actually \frac{V_b}{V_c}(N)=\frac{\pi^\frac{N}{2}}{2^N \Gamma(\frac{N}{2}+1)}. I need to review both the relation and the LaTeX as I am writing from memory, but this should be good enough for a test post.
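For what it is worth, the relation as written does agree with the standard formula: the unit N-ball has volume \frac{\pi^{N/2}}{\Gamma(\frac{N}{2}+1)} and its circumscribing cube of side 2 has volume 2^N. A small Python sketch to tabulate the ratio (the dimensions shown are arbitrary):

    import math

    # Ratio of the volume of the unit N-ball to that of its circumscribing
    # hypercube of side 2: pi^(N/2) / (2^N * Gamma(N/2 + 1)).
    def ball_to_cube_ratio(N):
        return math.pi ** (N / 2) / (2 ** N * math.gamma(N / 2 + 1))

    for N in (1, 2, 3, 5, 10, 20):
        print(N, ball_to_cube_ratio(N))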


Factoring by Stealth

This is simply brilliant. Humans have always been good at visualisation, and puzzles always generate more interest in study than dry math symbols. I was just going to comment on it there, but there is more to this than meets the eye, so I will have to expand this entry. Stay tuned!


Calculus – a Brilliant Approach

The best approach I have come across so far. Unsurprisingly, it does have a Knuth-like quality. At least, I think this is a much more sensible way to explain an \epsilon-\delta definition to beginners.
I will probably copy all of the important proofs and concepts here as a personal exercise.


Spaces – First Round

Hi, I am back.

Things have progressed reasonably well, and I am reasonably happy with my math skills now, though I haven’t pinned down every concept. I am now crossing my fingers for no relapse.

For the time being, my definition of mathematics is recognizing patterns, abstracting them, and using the results in conjunction with previous abstractions. It remains to be seen whether this will survive.

Back to Space: the issue of Vector Space has received a good treatment recently on Prof Gowers's weblog. Do pay attention to Terence Tao's comment and his linked notes. I personally find the linear transformation approach more useful than matrices, to the extent that I sometimes think of a matrix entry as a_{ij}=e_i^\prime A e_j, just to avoid breaking the conceptual transformation approach. I probably find that easier as a result of using SciPy, where, up to a point, you gain performance by thinking of matrices as black boxes, akin to vectorization. Matlab users probably have the same experience. By the way, Tao's comment about a linear transformation being a multidimensional generalisation of a ratio did not help at first, but it is now resonating with another idea that I have. More on that later. In short, the concept of a linear transformation is the one embedded in that of a vector space.
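To make that habit concrete, here is a tiny NumPy sketch of my own (the matrix is an arbitrary random one) showing that a_{ij} really is e_i^\prime A e_j: apply the transformation to the j-th basis vector, then project onto the i-th, without ever indexing into the array.

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))   # an arbitrary linear transformation on R^3

    def entry(A, i, j):
        """a_ij = e_i' A e_j: apply A to the j-th basis vector, project onto the i-th."""
        n = A.shape[0]
        e_i, e_j = np.eye(n)[i], np.eye(n)[j]
        return e_i @ (A @ e_j)

    print(np.isclose(entry(A, 1, 2), A[1, 2]))   # True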


A Mathematical Rehab

Ok. So the last post wasn’t very successful (I am not going to argue about the definition of successful here). My explanation of space wasn’t so good, because almost everything can be described as a set with structure, according to mathematicians. Now I am probably irritating some Category Theory proponents as well. Oh well.

As a corrective remedy, I am going to spend some time in a mathematical rehab. You will be able to follow that here. I will probably start by using the spaces mentioned in that post to explain various mathematical concepts.

And, as for the definition of space, let’s just say that a space is a space because a mathematician says so. And I am happy to note that Wikipedia does not do much better than my post.


Space

Space for mathematicians is an interesting set. Or we can say a set with structure. Trying to get used to this concept, I am listing a few spaces, with some explanation, here. By the way, members of such sets are usually called points.

  • Vector Space: members of such a space can be scaled and added. Here the definition is dependent on properties of the members.
  • Topological Space: in this space points have neighbourhoods. Or: a topological space is a set on which a collection of open sets (closed under arbitrary unions and finite intersections) is defined.
  • State Space: This means different things to computer scientists, physicists, and control engineers. For control engineers, state space is the space in which members are tuples of state variables.
  • Metric Space: a metric or distance is defined between members of such a space (a small numerical check of the metric axioms follows right after this list).
  • Euclidean Space: for most of us, this is the space acquired by age eighteen.
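Here is that small check, tying the vector, metric, and Euclidean entries together (a sketch of my own, assuming NumPy and the usual Euclidean distance on R^3; the points are arbitrary random ones):

    import numpy as np

    rng = np.random.default_rng(0)
    x, y, z = rng.standard_normal((3, 3))   # three arbitrary points in R^3

    def d(a, b):
        """The Euclidean metric on R^3."""
        return np.linalg.norm(a - b)

    # The metric axioms, checked on this particular triple of points:
    print(d(x, y) >= 0 and np.isclose(d(x, x), 0.0))   # non-negativity and d(x, x) = 0
    print(np.isclose(d(x, y), d(y, x)))                # symmetry
    print(d(x, z) <= d(x, y) + d(y, z))                # triangle inequality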

Of course, there are many more spaces. I mean, if you are so inclined, you can look up Hilbert Space, Minkowski Space, Anti de Sitter Space, etc. Just remember, a space is a set you can talk about.


Dimension is not easy

Dimension is not easy to understand. At the turn of the century it was one of the major problems in mathematics to determine what dimension means and which properties it has. And since then the situation has become somewhat worse because mathematicians have come up with some ten different notions of dimension: topological dimension, Hausdorff dimension, fractal dimension, self-similarity dimension, box-counting dimension, capacity dimension, information dimension, Euclidean dimension, and more. They are all related. Some of them, however, make sense in certain situations, but not at all in others, where alternative definitions are more helpful. Sometimes they all make sense and are the same. Sometimes several make sense but do not agree. The details can be confusing even for a research mathematician.

Fractals for the Classroom


Normal Law of Errors

Everybody firmly believes in it because the mathematicians imagine it is a fact of observation, and observers that it is a theory of mathematics.

Henri Poincaré

So, which is it? To apportion the blame, I suggest that the fact of observation be that “errors” are independent and identically distributed with finite variances. Observers may as well vote for Lyapunov’s set of conditions. I am sure that mathematicians are more than happy to shoulder the responsibility for the resulting Central Limit Theorem.
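To put my apportionment in code (a minimal simulation sketch of my own, assuming NumPy and SciPy; the uniform distribution and the sample sizes are arbitrary choices): sums of i.i.d. draws with finite variance do come out looking convincingly Gaussian, which is all the "fact of observation" really amounts to.

    import numpy as np
    from scipy.stats import kstest

    rng = np.random.default_rng(0)

    # "Errors": i.i.d. uniform draws on (-1, 1), which have finite variance 1/3.
    n, trials = 1000, 5000
    sums = rng.uniform(-1.0, 1.0, size=(trials, n)).sum(axis=1)

    # Standardise the sums (the variance of each sum is n/3) and compare with a
    # standard normal; a large p-value means the Gaussian fit is hard to reject.
    z = sums / np.sqrt(n / 3.0)
    print(kstest(z, "norm"))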
