Sunday, January 3, 2010

Successive minima and Gram-Schmidt Orthogonalization

In the previous post, we discussed a non-trivial lower bound on the first successive minimum, i.e., the length of the shortest non-zero lattice vector, which we denote by $\lambda_1$.

(Please note that in this post and subsequent ones, when I ask you to consider a vector $\lambda_i$, I actually mean a lattice vector whose length is $\lambda_i$ (normally in the $l_2$ norm).)

I told you that there is an alternative method to achieve the same ends. In fact, now I want to say a little bit more: I would like to prove that $\lambda_i \geq \min_{j=i,...,n}||b_j*||$.

The previous post just discussed the corollary you can draw from this fact, namely that $\lambda_1 \geq \min_{i=1,...,n}||b_i*||$.

In this post, I will tackle a few other interesting problems. These problems are mostly taken from the first homework of Oded Regev's course on the subject.

The purpose of these problems is to tell you about the nice interplay between the properties of Successive minima and Gram Schmidt Orthogonalization.

Some of these solutions are due to Roy Kasher and Nir Bitansky at Tel Aviv Computer Science.

For convenience, we will assume that we are talking about full-rank lattices of rank $n$ unless otherwise specified.

Consider the representation of the basis vectors in the orthogonalized system as shown in the previous post.

Notice that the $i$th co-ordinate of the $i$th column is $||b_i*||$. All entries below it are $0$, and the entry above it in row $j$ (for $j < i$) is $\mu_{i,j}||b_j*||$.

For your convenience, I reproduce the matrix below


$\begin{pmatrix}||b_1*||&\mu_{2,1}||b_1*||&\cdots&\mu_{n,1}||b_1*||\\0&||b_2*||&\cdots&\mu_{n,2}||b_2*||\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&||b_n*||\end{pmatrix}$
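
If you want to see this representation concretely, here is a small Python/numpy sketch (the function name, variable names and the toy basis are my own illustrative choices) that runs Gram Schmidt orthogonalization and rebuilds the upper-triangular matrix above.

import numpy as np

def gram_schmidt(B):
    # Columns of B are the basis vectors b_1, ..., b_n.  Returns the orthogonalized
    # vectors b_i* (as columns) and the coefficients mu[i, j] = <b_i, b_j*>/<b_j*, b_j*>.
    n = B.shape[1]
    Bstar = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu[i, j] * Bstar[:, j]
    return Bstar, mu

# A toy basis (columns are b_1, b_2, b_3); any non-singular matrix will do.
B = np.array([[2.0, 1.0, 3.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])
Bstar, mu = gram_schmidt(B)
print(np.linalg.norm(Bstar, axis=0))       # ||b_1*||, ||b_2*||, ||b_3*||

# Writing B in the orthonormal frame b_i*/||b_i*|| reproduces the matrix shown above.
Q = Bstar / np.linalg.norm(Bstar, axis=0)
print(np.round(Q.T @ B, 3))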


What will you say about lattice vectors attaining $\lambda_1$, $\lambda_2$, ..., $\lambda_j$ when the following question is asked: writing each of them as an integer combination $\sum_k x_k b_k$ of the basis vectors, does at least one of them have a non-zero coefficient $x_i$ for some $i \geq j$? So, what is it? Yes or no?

Yes! If it were not so, all $j$ of these vectors would lie in the span of $b_1, ..., b_{j-1}$, and you wouldn't have $j$ linearly independent vectors, as the definition of the $j^{th}$ successive minimum requires.

Okay, what then? Awesomeness. You know that at least one of the vectors attaining $\lambda_1$, $\lambda_2$, ..., $\lambda_j$ - call it $v$ - has a non-zero coefficient $x_i$ for some $i \geq j$.

You also know that any lattice vector is an integer combination of the columns of the above matrix. Thus any lattice vector (including the $j$ minima-attaining ones) is formed by "integral helpings" from the vectors of the above basis. Take the vector $v$ from above, and let $t$ be the largest index whose helping $x_t$ is non-zero; by the previous paragraph, $t \geq j$. Looking at the matrix, the $t^{th}$ coordinate of $v$ is exactly $x_t||b_t*||$ (the columns to the right of the $t^{th}$ one are not used at all), so $||v|| \geq |x_t|\cdot||b_t*|| \geq ||b_t*||$. Since $||v|| \leq \lambda_j$, it follows that $\lambda_j \geq \min_{i=j,...,n}||b_i*||$.
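
For the skeptical, here is a rough brute-force sketch in Python/numpy (only sensible in tiny dimensions; the toy basis and the enumeration radius are arbitrary choices of mine) that computes the successive minima of a small lattice and compares them against $\min_{i=j,...,n}||b_i*||$.

import numpy as np
from itertools import product

B = np.array([[2.0, 1.0, 3.0],
              [0.0, 2.0, 1.0],
              [1.0, 0.0, 2.0]])            # columns are the basis vectors (same toy basis as above)
n = B.shape[1]

# The absolute diagonal of R in a QR factorization equals the Gram-Schmidt norms ||b_i*||.
gs_norms = np.abs(np.diag(np.linalg.qr(B)[1]))

# Brute force: enumerate lattice vectors with small coefficients, sort by length,
# and greedily pick linearly independent ones; their lengths are the successive minima
# (the radius 4 is enough for this small example).
vectors = [B @ np.array(c) for c in product(range(-4, 5), repeat=n) if any(c)]
vectors.sort(key=np.linalg.norm)
chosen = []
for v in vectors:
    if np.linalg.matrix_rank(np.array(chosen + [v]).T) > len(chosen):
        chosen.append(v)
    if len(chosen) == n:
        break
minima = [np.linalg.norm(v) for v in chosen]

for j in range(n):
    print(minima[j], ">=", gs_norms[j:].min())   # lambda_j versus min over i >= j of ||b_i*||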

There is more to come. We will tackle some more problems before we say goodbye today. So, grab a coke or pepsi (but no popcorn!), and dive right in.

(Do not dive into the coke; it gives a sticky feeling!)

This problem is for testing your understanding of the Gram Schmidt Orthogonalization process. Can you, dear reader, disprove the following?

"$\lambda_n \geq max_{i=1...n}||b_i*||$"

Hmmm...how do we proceed? You can think of it this way: since we are asked to disprove it, any counter-example will do.

Next, we know that Gram Schmidt Orthogonalization depends on the order of the vectors in the input basis.

Let's talk $2 \times 2$ matrices.

What if I take a basis in which the lattice actually has vectors shorter than the given $b_1$? (In particular, I would like $b_2$ to be one of the minima-attaining vectors; this will make my life cool!)

And sure enough, a little tweaking by that pen of yours gives a counterexample: the basis $\begin{pmatrix}1&0\\1&1\end{pmatrix}$, whose columns are $b_1$ and $b_2$. Here $||b_1*|| = ||b_1|| = \sqrt{2}$, but the lattice generated is just $Z^2$, so $\lambda_1 = \lambda_2 = 1 < \sqrt{2} = \max_{i=1,2}||b_i*||$.
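
Just to see the numbers, here is a quick numpy check of this counterexample (columns of $B$ are the basis vectors; the brute-force range is an arbitrary small choice):

import numpy as np
from itertools import product

B = np.array([[1.0, 0.0],
              [1.0, 1.0]])                       # columns b_1 = (1,1), b_2 = (0,1)

gs_norms = np.abs(np.diag(np.linalg.qr(B)[1]))   # ||b_1*||, ||b_2*||
print(gs_norms)                                  # approx [1.414, 0.707]

# The lattice is just Z^2 (it contains (1,1) and (0,1), hence (1,0)), so both
# successive minima equal 1, which is smaller than max ||b_i*|| = sqrt(2).
vecs = sorted((B @ np.array(c) for c in product(range(-3, 4), repeat=2) if any(c)),
              key=np.linalg.norm)
lam1 = np.linalg.norm(vecs[0])
lam2 = next(np.linalg.norm(v) for v in vecs
            if abs(np.linalg.det(np.column_stack([vecs[0], v]))) > 1e-9)
print(lam1, lam2, "< max ||b_i*|| =", gs_norms.max())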

The following problem tests your understanding of successive minima and independence. So, for a full rank lattice you have been provided with $n$ linearly independent lattice vectors attaining the successive minima. I ask - do they necessarily generate the whole lattice?

Some are tempted to say yes, for the simple reason that we are talking about full rank lattices and have $n$ independent vectors. Nothing could be further from the truth. The answer is no, and the reason is simple - independence only guarantees that the spans agree; getting the same lattice would be a coincidence. For example, consider the following lattice basis vectors. This solution is due to Nir Bitansky.


$v_1$ = $\begin{pmatrix}2\\0\\0\\...\\0\\0\end{pmatrix}$ $v_2$ = $\begin{pmatrix}0\\2\\0\\...\\0\\0\end{pmatrix}$ $v_3$ = $\begin{pmatrix}0\\0\\2\\...\\0\\0\end{pmatrix}$ ... $v_{n-1}$ = $\begin{pmatrix}0\\0\\0\\...\\2\\0\end{pmatrix}$ AND $v_n$ = $\begin{pmatrix}1\\1\\1\\...\\1\\1\end{pmatrix}$

Look at $v_n$ again; it is the all $1$'s vector. Take $n \geq 5$, so that $||v_n|| = \sqrt{n} > 2$. Any lattice vector is $c\cdot v_n$ plus a vector with all coordinates even: if $c$ is odd, every coordinate is odd and the length is at least $\sqrt{n} > 2$; if $c$ is even, the vector lies in $2Z^n$ and, if non-zero, has length at least $2$. So all the successive minima equal $2$, and the only vectors attaining them are $\pm 2e_i$, i.e., the vectors with $\pm 2$ at the $i$th position and all other entries $0$. These vectors generate only $2Z^n$, which does not even include the all $1$'s vector. And this proves the assertion in question.
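
Here is a rough brute-force check of this example for $n = 5$ (numpy again; the small coefficient range is an arbitrary choice of mine, enough to see what is going on):

import numpy as np
from itertools import product

n = 5
# Basis: 2e_1, ..., 2e_{n-1} and the all-ones vector (as columns).
B = np.column_stack([2 * np.eye(n)[:, :n - 1], np.ones(n)])

# Enumerate lattice vectors with small coefficients and look at the shortest ones.
vecs = [B @ np.array(c) for c in product(range(-2, 3), repeat=n) if any(c)]
norms = sorted(np.linalg.norm(v) for v in vecs)
print(norms[0], norms[2 * n - 1], norms[2 * n])   # 2.0, 2.0, 2.236...: exactly 2n vectors of length 2

# The length-2 vectors found are exactly +-2e_i, so every successive minimum equals 2 ...
print(sorted({tuple(v.astype(int)) for v in vecs if abs(np.linalg.norm(v) - 2) < 1e-9}))
# ... and any integer combination of them has all coordinates even, so the all-ones
# basis vector (odd coordinates) is not in the sublattice they generate.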

For $2$-dimensional full rank lattices, however, we have a nice result: the successive minima attaining vectors do form a basis. Do you see why? Let $\lambda_1$ and $\lambda_2$ be your minima-attaining vectors (recall the convention above). Being independent, they span all of $R^2$. Thus, any lattice point $u$ can be written as $u = x\lambda_1 + y\lambda_2$ for some reals $x$ and $y$; we need to show that $x$ and $y$ are integers.

$\lfloor x\rceil\lambda_1 + \lfloor y\rceil\lambda_2$ clearly belongs to the lattice, and so does the difference $v = u - \lfloor x\rceil\lambda_1 - \lfloor y\rceil\lambda_2 = (x - \lfloor x\rceil)\lambda_1 + (y - \lfloor y\rceil)\lambda_2$. Both coefficients have absolute value at most $1/2$, so $v$ satisfies

$||v|| \leq (||\lambda_1|| + ||\lambda_2||)/2$.

And the right side of the above inequality is at most $||\lambda_2||$ (since $||\lambda_1|| \leq ||\lambda_2||$). In fact, if $v \neq 0$, we get $||v|| < ||\lambda_2||$ strictly: either only one of the two terms in $v$ is present, in which case $||v||$ is at most half of $||\lambda_1||$ or of $||\lambda_2||$, or both are present, and then the triangle inequality is strict because $\lambda_1$ and $\lambda_2$ are not parallel. But a non-zero lattice vector strictly shorter than $||\lambda_2||$ must be a multiple of $\lambda_1$: otherwise it would, together with $\lambda_1$, give two independent lattice vectors contradicting the choice of $\lambda_2$ (and if $\lambda_1 = \lambda_2$, a non-zero vector shorter than $\lambda_2$ already contradicts the first minimum). So $v$ would be a multiple of $\lambda_1$ of length at most $||\lambda_1||/2$, again contradicting the first minimum. Hence $v = 0$, the coefficients $x$ and $y$ are integers, and $\lambda_1$, $\lambda_2$ indeed form a basis.
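
If you like, here is a rough numpy sanity check of this fact (not a proof, and the random bases and ranges are arbitrary choices of mine): brute-force the two minima of a few random planar integer lattices and compare determinants - equal absolute determinants mean the sublattice generated by the minima vectors is the whole lattice.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)
for _ in range(5):
    # Random non-singular 2x2 integer basis (columns are the basis vectors).
    while True:
        B = rng.integers(-4, 5, size=(2, 2)).astype(float)
        if abs(np.linalg.det(B)) > 0.5:
            break
    # Brute force the two shortest linearly independent lattice vectors (the minima).
    vecs = sorted((B @ np.array(c) for c in product(range(-10, 11), repeat=2) if any(c)),
                  key=np.linalg.norm)
    v1 = vecs[0]
    v2 = next(v for v in vecs if abs(np.linalg.det(np.column_stack([v1, v]))) > 1e-9)
    # Equal absolute determinants => the minima vectors generate the same lattice.
    print(abs(np.linalg.det(np.column_stack([v1, v2]))), abs(np.linalg.det(B)))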

Next, we try the following problem from Oded Regev's homework. This one is not really related to the properties of successive minima or Gram Schmidt Orthogonalization, but it is an exercise you should solve after an exposure to unimodular matrices.

The problem is -

Show that any unimodular matrix $U \in Z^{n \times n}$ can be transformed to the identity matrix by the following three basic column operations: $a_i \leftrightarrow a_j$, $a_i \leftarrow -a_i$, and $a_i \leftarrow a_i+k*a_j$ for some integer $k$.

There is a hint that goes with the problem - use the Euclidean algorithm. I tried the problem but was unable to solve it. (I had not seen the hint though.)

Anyway, Roy Kasher's solution was mind blowing. Roy, if you are reading this, get a webpage fast. I would like to link your page with this blog and let people have a look at how humans would look when they evolve (going brain-wise).

I was confused about how to apply Gaussian elimination, as it does not guarantee integrality, which is what I needed.

Anyway, without much ado, here is the solution that Roy gave.

First, expand the determinant along some arbitrary row. Every term in the expansion is a multiple of an entry of this row, so the value of the determinant is a multiple of the $gcd$ of the numbers in this row.

And, since the determinant is known to be $\pm 1$ (the matrix is unimodular), the $gcd$ of the elements in this row is $1$. And since this row was an arbitrary row, every row has $gcd$ $1$.

Now run the Euclidean algorithm across the first row using the operation $a_i \leftarrow a_i + k*a_j$: repeatedly subtract a suitable multiple of the column whose first-row entry is smallest in absolute value from the other columns, just as in computing a $gcd$. Since these are column operations, they automatically "copy over" to the rest of the entries in those columns. The 3 operations allowed give us enough room to do all of this.

Thus, sooner or later, you get a $\pm 1$ (the row's $gcd$) in one of these positions. Make it $+1$ with a negation, swap it into the first column, and then zero out the rest of the first row by subtracting multiples of the first column.

Next, you move on to the second row (now working only with columns $2$ through $n$), knowing, like the perfect hunter knows his prey, that the $gcd$ of its entries there is again $1$ (the determinant is still $\pm 1$!).

You repeat this row by row until the matrix is lower triangular with $1$'s on the diagonal; a final pass of column operations (subtracting multiples of the later columns) clears everything below the diagonal, and you arrive at the Identity Matrix!
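
Here is a rough Python transcription of this procedure (my own sketch, not Roy's; the function name and the toy matrix are made up). It reduces an integer matrix of determinant $\pm 1$ to the identity using only the three column operations, and applies the same operations to a copy of $I$ on the side, which - as discussed next - accumulates the inverse.

def reduce_unimodular(U):
    # Reduce an integer matrix with determinant +-1 to the identity using only the
    # three column operations.  The same operations are applied to a copy of I;
    # since each operation is a right-multiplication by an elementary matrix E_i,
    # U E_1 ... E_k = I means that copy ends up holding E_1 ... E_k = U^{-1}.
    n = len(U)
    A = [row[:] for row in U]                                   # working copy of U
    inv = [[int(i == j) for j in range(n)] for i in range(n)]   # will become U^{-1}

    def swap(i, j):                        # a_i <-> a_j
        for M in (A, inv):
            for row in M:
                row[i], row[j] = row[j], row[i]

    def negate(i):                         # a_i <- -a_i
        for M in (A, inv):
            for row in M:
                row[i] = -row[i]

    def add_multiple(i, j, k):             # a_i <- a_i + k * a_j
        for M in (A, inv):
            for row in M:
                row[i] += k * row[j]

    for r in range(n):
        # Euclidean algorithm on row r, columns r..n-1: their gcd divides det = +-1,
        # so it is 1 and eventually sits alone in one column.
        while True:
            nz = [c for c in range(r, n) if A[r][c] != 0]
            pivot = min(nz, key=lambda c: abs(A[r][c]))
            others = [c for c in nz if c != pivot]
            if not others:
                break
            for c in others:               # reduce every other entry modulo the pivot
                add_multiple(c, pivot, -(A[r][c] // A[r][pivot]))
        if pivot != r:
            swap(pivot, r)
        if A[r][r] < 0:
            negate(r)
    # A is now lower triangular with 1's on the diagonal; clear below the diagonal.
    for i in range(n - 1, 0, -1):
        for j in range(i):
            if A[i][j]:
                add_multiple(j, i, -A[i][j])
    return A, inv

U = [[2, 3],
     [1, 2]]                               # determinant 1
identity, U_inv = reduce_unimodular(U)
print(identity)                            # [[1, 0], [0, 1]]
print(U_inv)                               # [[2, -3], [-1, 2]]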

What else can you state about unimodular matrices? What, for instance, can you say about their inverses?

It turns out their inverses are also unimodular. This is easy to see, as the foregoing proof tells us the key thing: elementary column operations alone carry a unimodular matrix $U$ to $I$. A column operation is a right multiplication by an elementary matrix, so $U*E_1*E_2*...*E_{final} = I$, and hence $U^{-1} = E_1*E_2*...*E_{final}$, which is clearly a unimodular matrix: all the $E_i$'s are integral and each has determinant $\pm 1$.

This also gives you the proof of the quite "obvious" fact that two bases $B_1$ and $B_2$ generate the same lattice iff one is derivable from the other using these column operations.

Had enough? We are just getting warmed up. Here is another:

"Show that for any full-rank integer lattice $\Lambda$, $det(\Lambda).Z^n \subseteq \Lambda$"

That reminds me - I did not tell you that the determinant of a lattice, $det(\Lambda)$, is also something you can talk about: it is the absolute value of the determinant of any basis matrix, and it is not basis specific.

It's clear if you keep in mind that two bases of the same lattice differ by multiplication by a unimodular matrix, whose determinant is $\pm 1$.

Okay. So, this question asks you to show that every vector of $Z^n$, when multiplied by $det(\Lambda)$, gives a vector which is in the lattice.

To do this, the most natural way seems to be the following. Let $det(\Lambda) = d$. Now we will try to show that the following vectors are integer combinations of the given basis vectors:

$v_1$ = $\begin{pmatrix}d\\0\\0\\...\\0\end{pmatrix}$ $v_2$ = $\begin{pmatrix}0\\d\\0\\...\\0\end{pmatrix}$ $v_3$ = $\begin{pmatrix}0\\0\\d\\...\\0\end{pmatrix}$ ... $v_n$ = $\begin{pmatrix}0\\0\\0\\...\\d\end{pmatrix}$


Can I do that?

Why not? Let us expand the determinant along the first row (assume for convenience that the determinant of the basis matrix is $+d$; otherwise negate one of the basis vectors first). Denote the minor obtained by deleting row $i$ and column $j$ by $C_{ij}$, and the $ij^{th}$ entry by $a_{ij}$.

Then we know $a_{11}C_{11} - a_{12}C_{12} + \cdots + (-1)^{n+1}a_{1n}C_{1n} = d$.

Thus, we can get a $d$ in the first coordinate of a lattice vector $v$ by taking $C_{11}$ helpings of the first basis vector, $-C_{12}$ of the second, and so on - all integer amounts. Hmm... looks like this will have some bearing upon $v_1$. What does this combination do to the other components of $v$?

We would really like it if this makes $v$ the same as $v_1$; let's keep our fingers crossed. The $i^{th}$ component (for $i \neq 1$) is

$a_{i1}C_{11} - a_{i2}C_{12} + \cdots + (-1)^{n+1}a_{in}C_{1n}$.

This sum is $0$, just as we would have hoped! Can you see why? It's just because the minors $C_{1j}$ are obtained with the first row removed, so this sum is exactly the expansion, along the first row, of the determinant of the matrix whose first row has been replaced by its $i^{th}$ row. That matrix has two equal rows, and such a determinant is universally known to be zero. To reiterate,


$a_{i1}C_{11} - a_{i2}C_{12} + \cdots + (-1)^{n+1}a_{in}C_{1n} = 0$

for all $i \neq 1$.

Similarly (expanding along the other rows), you can obtain $v_2$, $v_3$, and all of the $v_i$'s. Once you have them, you can get any integral combination you want in this world, i.e., all of $d\cdot Z^n$. This proves the statement in question.
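
As a quick numerical illustration (a numpy sketch with an arbitrary random basis of my choosing, not part of the homework solution), the coefficients $d \cdot B^{-1}e_i$ - which are, up to an overall sign, exactly the cofactor combinations used above - come out integral:

import numpy as np

rng = np.random.default_rng(1)
while True:
    B = rng.integers(-3, 4, size=(4, 4))          # columns are the lattice basis vectors
    d = int(round(abs(np.linalg.det(B))))         # d = det(Lambda)
    if d != 0:
        break

# The coordinates of d*e_i with respect to the basis are d * B^{-1} e_i, so they
# should come out (numerically) integral.
X = d * np.linalg.inv(B)
print(np.allclose(X, np.round(X)))                   # True: integral coefficients
print(np.allclose(B @ np.round(X), d * np.eye(4)))   # and they produce d*e_1, ..., d*e_n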

There are some more problems which I will work out once I get some more time.

Till then, good bye and have a wonderful 2010 ahead.
