Personal Workflow

Looks like I could use some order.

With many balls to juggle, I decided that some streamlining is needed. My first approach is to try a variation of the Getting Things Done approach, using Things.

I am easing myself into this: I will start by using it as a glorified to-do list. I hope to be able to report progress here (I guess this is the first to-do :).

I am also looking at other software to simplify software development workflow. I will start by using the Mercurial version control system.



Concentration of Measure I

The concept of Concentration of Measure (CoM) quantifies the tendency of an N-dimensional probability distribution to “lump”, or concentrate, around an (N-1)-dimensional submanifold. The phenomenon is that this tendency becomes especially strong in high dimensions. Viewed another way, this is about the interaction between probability and distance in high dimensions.

This is (hopefully) the start of a series of posts that will explain this concept. I would like to review a few concepts and definitions first though.

We denote by X a metric measure space with metric d and probability measure \mu. A probability measure is in effect a normalised measure, such that the measure of the whole space is 1. In such a space, the \epsilon-extension of a set A is denoted by A_\epsilon and is defined as A_\epsilon=\left\{ x \,|\, d(x,A)<\epsilon \right\}. Of course, the \epsilon-extension of A includes all of A; beyond that, it fattens A by a width of \epsilon. Among other things, this means that A_\epsilon \setminus A is a shell enveloping A, and that for small values of \epsilon, its volume (measure, if you want to be pedantic) is approximately the surface area (surface measure) of A multiplied by \epsilon.

Now we can define the concentration rate of a space. The basic idea is: of all the sets A that span half of the total volume of the space, find the one whose \epsilon-extension spans the smallest volume. We then declare the concentration rate of the space to be the volume outside that extension. In math terms: \alpha(\epsilon)=\sup\left\{ \mu\left( X \setminus A_\epsilon \right) | \mu(A)=\frac{1}{2} \right\}.

A few points can be made here. First, one would expect the set whose extension spans the smallest volume to be the one with the smallest surface area, by virtue of the relation between them. And in fact, one would be right; this is the subject of the isoperimetric inequalities. Second, the concentration rate is the maximum possible volume outside an extended half-volume set, so using this property amounts to utilizing certain inequalities to limit, say, the volume of error. This is useful, for example, in the analysis of randomized algorithms. Third, the current definition yields one-sided inequalities, but it is very easy to see that two-sided inequalities can be derived which limit the volume outside a shell of width 2\epsilon around the surface of any set of half volume.

The more the measure is concentrated around a half-volume set, the smaller the concentration rate. What we really want, and what CoM is about, is a very quick decay of the concentration rate as \epsilon increases. This will later be useful, in a learning algorithm, for deciding at which value of \epsilon to cut off learning while keeping the probable error small. But that is for another day.
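To make that decay concrete, here is a small Monte Carlo sketch (my own illustration, not from any particular text) on the unit sphere S^{n-1}, taking A to be the closed hemisphere \left\{ x | x_n \le 0 \right\}, which has measure exactly \frac{1}{2}. Under the geodesic metric, a point with last coordinate x_n > 0 lies at distance \arcsin(x_n) from that hemisphere, so the mass outside A_\epsilon is simply the probability that x_n > \sin\epsilon:

```python
import numpy as np

rng = np.random.default_rng(0)

def mass_outside_extension(n, eps, samples=200_000):
    """Estimate the measure outside the eps-extension of the hemisphere
    A = {x : x_n <= 0} on the unit sphere S^(n-1): under the geodesic
    metric this is the probability that the last coordinate exceeds sin(eps)."""
    x = rng.standard_normal((samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform points on the sphere
    return float(np.mean(x[:, -1] > np.sin(eps)))

# The same eps leaves less and less mass outside A_eps as n grows:
for n in (3, 30, 300):
    print(n, mass_outside_extension(n, 0.2))
```

For n = 3 the exact value is (1 - \sin 0.2)/2 \approx 0.40, while by n = 300 essentially nothing is left outside the extension: exactly the fast decay described above.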

And yes, it would be good if Posterous supported LaTeX…


Testing Posterous — and the ratio of the volumes of hyperballs and hypercubes in high dimensions

This Posterous service looks like a great thing. The picture attached shows the ratio of the volume of a hyperball to that of its circumscribing hypercube in N dimensions. The mathematical relation is actually \frac{V_b}{V_c}(N)=\frac{\pi^\frac{N}{2}}{2^N \Gamma(\frac{N}{2}+1)}. I need to review both the relation and the LaTeX as I am writing from memory, but this should be good enough for a test post.
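The relation is easy to check numerically; a quick Python sketch (the function name is mine):

```python
from math import gamma, pi

def ball_to_cube_ratio(n):
    """Volume of the unit n-ball divided by that of its circumscribing
    hypercube of side 2: pi^(n/2) / (2^n * Gamma(n/2 + 1))."""
    return pi ** (n / 2) / (2 ** n * gamma(n / 2 + 1))

for n in range(1, 11):
    print(n, ball_to_cube_ratio(n))
```

The first few values come out as expected: 1 for n = 1 (an interval inside itself), \pi/4 for the circle in a square, \pi/6 for the ball in a cube, and the ratio then falls towards zero rapidly.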


Factoring by Stealth

This is simply brilliant. Humans have always been good at visualisation, and puzzles always generate more interest in study than dry math symbols. I was just going to comment on it there, but there is more to this than meets the eye, so I will have to expand this entry. Stay tuned!


Multidimensional Separation of Concerns – Bad Command or File Name

There should really be a sarcasm tag in HTML.

Hyperspaces look to be a hugely advanced method for Multidimensional Separation of Concerns. Unfortunately, separation of concerns is one of the most slippery concepts in computer science. I will certainly remember the day when somebody actually explains what a concern is. And no, vacuous definitions don’t count.

Interestingly, the concept in Dijkstra’s “On the role of scientific thought” makes a lot of sense. The words aspect and concern are used in their natural English meaning, and the whole piece is essentially a call for divide and conquer, which is essential in all science: the use of ceteris paribus assumptions. Otherwise, can someone explain Dijkstra’s “It is being one- and multiple-track minded simultaneously.” from a computer science perspective?

The best-worded definition so far is this: “So what is a cross-cutting concern? A concern is a particular concept or area of interest. For example, in an ordering system the core concerns could be order processing and manufacturing, while the system concerns could be transaction handling and security management. A cross-cutting concern is a concern that affects several classes or modules, a concern that is not well localized and modularized.” At least, when a concept is not directly well defined, examples give you something to go by.
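By that definition, logging is the canonical example: every module calls it and none owns it. A minimal Python sketch (the class and function names are hypothetical, for illustration only) of localizing such a concern in one place with a decorator, rather than scattering log calls through every method:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """Localize the logging concern here, instead of repeating
    log calls inside every method of every class."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s", func.__name__)
        return func(*args, **kwargs)
    return wrapper

# Two unrelated "core concern" classes; the cross-cutting logging
# concern touches both, but lives only in the decorator above.
class OrderProcessing:
    @logged
    def place_order(self, item):
        return f"order placed for {item}"

class Manufacturing:
    @logged
    def schedule(self, item):
        return f"{item} scheduled"
```

This is of course only the easy case; the whole difficulty of the field is that most concerns do not factor out this cleanly.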


Calculus – a Brilliant Approach

The best approach I have come across so far. Unsurprisingly, it does have a Knuth-like quality. At least, I think this is a much more sensible way to explain to beginners an \epsilon-\delta definition.
I will probably copy all of the important proofs and concepts here as a personal exercise.


Spaces – First Round

Hi, I am back.

Things have progressed reasonably well. I am reasonably happy with my math skills now, though I haven’t fixed every concept in place. I am now crossing my fingers for no relapse.

For the time being, my definition of mathematics is: recognizing patterns, abstracting them, and using the results in conjunction with previous abstractions. It remains to be seen whether this will survive.

Back to spaces: the subject of vector spaces has received a good treatment recently on Prof Gowers’ weblog. Do pay attention to Terence Tao‘s comment and his linked notes. I personally find the linear-transformation approach more useful than matrices, to the extent that I sometimes think of a matrix entry as a_{ij}=e_i^\prime A e_j, just to avoid breaking the conceptual transformation approach. I probably find that easier as a result of using SciPy, where up to a certain extent you gain in performance by thinking of matrices as black boxes, akin to vectorization; Matlab users probably have the same experience. By the way, Tao’s comment about a linear transformation being a multidimensional generalisation of a ratio did not help at first, but it is now resonating with another idea that I have. More on that later. In short, the concept of a linear transformation is the one embedded in that of a vector space.
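As a concrete (and admittedly trivial) sketch of that habit, here is a_{ij}=e_i^\prime A e_j in NumPy; the helper name is mine, not anything standard:

```python
import numpy as np

def entry(A, i, j):
    """a_ij = e_i' A e_j: read a matrix entry through the transformation
    view, by applying A to the basis vector e_j and projecting onto e_i."""
    e_i = np.zeros(A.shape[0])
    e_i[i] = 1.0
    e_j = np.zeros(A.shape[1])
    e_j[j] = 1.0
    return e_i @ (A @ e_j)

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(entry(A, 0, 1))  # → 2.0, i.e. A[0, 1]
```

Slower than indexing, obviously, but it keeps the black-box picture of A intact.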
