Tuesday, December 29, 2009

Lattice Based Cryptography - Getting Inside 3

Hello again,

Getting Inside is sounding pretty lame now (even to me). I will probably change the name from my next post onwards.

(Start skipping)

But here is a good defense for retaining the name, though: I have not really talked about all that a first chapter of a book on lattice-based cryptography would cover.

Worse, the amount of material covered so far is far less than that. As much as I would like to cover everything up to the boundary of what I know of the subject (which is only a trifle), I would also like to write up an intuitive account of it. Since I am building upon the work of giants in the field, I would love to "build a ladder" which, in my opinion, helps an average student understand the subject more readily. I will try to motivate the topics and fill in the gaps other authors have left for lesser mortals to fill in.


(Stop Skipping - or maybe skip the next para!)

In my previous post, I talked about the condition under which two bases generate the same lattice. While I provided the intuitive picture, and would have provided the actual proof as well, watchmath beat me to it by proving the statement in his comment.
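To make that criterion concrete: two full-rank bases $B_1$ and $B_2$ generate the same lattice exactly when $B_2 = B_1 U$ for some unimodular integer matrix $U$ (integer entries, determinant $\pm 1$). Here is a toy sketch of a numerical check, in Python with numpy; the function name and the example bases are my own choices for illustration.

```python
import numpy as np

def same_lattice(B1, B2, tol=1e-9):
    """Check whether two square, full-rank bases (columns = basis vectors)
    generate the same lattice, i.e. B2 = B1 @ U with U integral and
    det(U) = +/-1 (unimodular).  Toy code: assumes B1 is invertible."""
    U = np.linalg.inv(B1) @ B2              # candidate change-of-basis matrix
    if not np.allclose(U, np.round(U), atol=tol):
        return False                        # U is not an integer matrix
    return np.isclose(abs(np.linalg.det(np.round(U))), 1.0, atol=tol)

B1 = np.array([[1.0, 0.0], [0.0, 1.0]])     # standard basis of Z^2
B2 = np.array([[1.0, 1.0], [0.0, 1.0]])     # sheared basis, det(U) = 1
B3 = np.array([[2.0, 0.0], [0.0, 1.0]])     # generates a proper sublattice

print(same_lattice(B1, B2))  # True
print(same_lattice(B1, B3))  # False
```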

Great! So what next?

I would like you to keep the focus in sight. We would like to use lattice-based primitives in a cryptographic scheme and prove that it is secure (or at least as secure as the underlying lattice problem, which we assume is hard).

In short, I will first build up the foundations with which we can prove that some lattice problems are hard.

For that, we will need to examine lattices more closely. What I would like to know is: given a lattice, how far do I need to travel from the origin, at a minimum, to make sure that I have seen $i$ linearly independent lattice vectors? Further, a more basic point: since a lattice can have many different bases, there must exist a way to talk about lattices irrespective of the underlying basis. We would like to do that; abstraction always saves a lot of trouble in any branch of science.

Without further ado, therefore, I will tell you straightaway that some lattice problems are horribly tough. In fact, one of them (assume you have a full-rank lattice), called the Shortest Vector Problem, is so tough that even approximating it to within any constant factor is NP-hard (under randomized reductions)!

So, below I will help develop the tools (or help the reader understand the developed tools) to this end. First of all, we need the notions of what we call successive minima and Gram-Schmidt orthogonalization.


So, a few minutes ago I was talking about the minimum distance I need to travel outward from the origin to ensure that I have seen $i$ linearly independent lattice vectors. This is the $i$-th minimum of the lattice.

Thus, we can think of the $1$st, $2$nd, $3$rd, ..., $n$th successive minima of the lattice (denoted $\lambda_1$, $\lambda_2$, $\lambda_3$, ..., $\lambda_n$). Note that these are constants of the lattice (and won't change if you pick a different basis for it).

Formally, we say that for a lattice $\Lambda$ the $i$-th minimum is the radius of the smallest $n$-dimensional ball centered at the origin that contains at least $i$ linearly independent lattice vectors:


$\lambda_i = \inf \{\, r : \dim(\operatorname{span}(\Lambda \cap B(0, r))) \geq i \,\}$


where $B(0,r) = \{\, x \in \mathbb{R}^m : \|x\| < r \,\}$ is the $m$-dimensional "open" ball of radius $r$.


(It is open because the defining inequality is strict: the ball excludes points at distance exactly $r$ and contains everything closer.) So much for a definition, but it's quite useful. (You will see the uses in future posts; let me remind you that we will look into the hardness of the Shortest Vector Problem (SVP) and its close cousins, and there this notion of shortest independent vectors will be really handy.)
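Since the definition is constructive in spirit, here is a brute-force sketch (Python with numpy; the function name, the coefficient bound, and the toy basis are my own, and the enumeration is only honest for small, full-rank examples) that approximates the successive minima:

```python
import itertools
import numpy as np

def successive_minima(B, coeff_bound=5):
    """Brute-force the successive minima of the small, full-rank lattice
    with basis matrix B (columns = basis vectors) by enumerating integer
    coefficient vectors with entries in [-coeff_bound, coeff_bound].
    Toy code: the bound must be large enough for the lattice at hand."""
    n = B.shape[1]
    vecs = [B @ np.array(c, dtype=float)
            for c in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                       repeat=n)
            if any(c)]                          # skip the zero vector
    vecs.sort(key=np.linalg.norm)               # shortest first
    minima, independent = [], []
    for v in vecs:
        trial = independent + [v]
        if np.linalg.matrix_rank(np.array(trial)) == len(trial):
            independent.append(v)               # v adds a new dimension
            minima.append(np.linalg.norm(v))
        if len(minima) == n:
            break
    return minima

B = np.array([[2.0, 1.0], [0.0, 2.0]])          # columns b_1 = (2,0), b_2 = (1,2)
print(successive_minima(B))                     # [2.0, 2.236...]
```

For this basis it reports $\lambda_1 = 2$ (achieved by $b_1$ itself) and $\lambda_2 = \sqrt{5}$ (achieved by $b_2$).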


Here are a few remarks on the definition. Note that I never said that the $\lambda_i$'s are lengths of lattice vectors; they are just radii of some open balls. Does any lattice vector achieve length $\lambda_i$? For how many $i$'s does that happen? You might intuitively feel that it should happen for all $i$'s, and your intuition is right. But the proof involves compactness and some other things I won't get into. Let me just rephrase the intuitive picture you should carry in mind: "You are looking for the $i$-th successive minimum. You pick a ball centered at the origin with initial radius $\lambda_{i-1}$, and you blow it up until it attains radius $\lambda_i$. Surely, when your ball attains this radius $\lambda_i$, you must have hit a lattice point lying along a vector independent of the previous successive-minima vectors (this is the heart of the intuition). This tells you that $\lambda_i$, the ball's radius at that moment, 'happens' exactly when you hit some lattice point."


The proof can be made more precise, but I won't do that here. I am again looking at SVP. In case you were wondering, SVP asks you to determine the length of the shortest non-zero lattice vector, which is nothing but $\lambda_1$. The problem has quite a few variations that are as hard as the problem itself; we will see them later. For now, I would like to tell you about a non-trivial lower bound on the length of the shortest non-zero vector in the lattice. That is, we are ready to see a lower bound on $\lambda_1$.


Can you guess any such bound? It has to do with Gram-Schmidt orthogonalization. In case you do not know it, here is a quick tour through the concept (if you do know it, you are invited to try to develop the lower bound yourself). As the name indicates, this is a technique used to orthogonalize a given system of vectors. In particular, we will invoke it to answer some questions about $m$-dimensional lattices of rank $n$. To this end, we define the orthogonalized "sequence" of vectors $b_1^*, b_2^*, \ldots, b_n^*$ by $b_i^* = b_i - \sum_{j=1}^{i-1}\mu_{i,j}\,b_j^*$, where $\mu_{i,j} = \langle b_i, b_j^* \rangle / \langle b_j^*, b_j^* \rangle$. In other words, $b_i^*$ is the component of $b_i$ orthogonal to $b_1^*, \ldots, b_{i-1}^*$.

By the way, notice the word "sequence" there. It is a reminder to the astute reader (and to others!) that the output of Gram-Schmidt orthogonalization depends on the order in which the input vectors are given.

Further, it is clear that $\operatorname{span}(b_1, b_2, \ldots, b_i) = \operatorname{span}(b_1^*, b_2^*, \ldots, b_i^*)$ for all $i$ between $1$ and $n$. Also, notice that this orthogonalized system need not be a basis for your original lattice. In fact, it is quite likely that the vectors $b_i^*$ you obtain during orthogonalization are not even in the lattice generated by the initial basis.
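Here is a minimal numpy sketch of the recurrence above (no normalization, which is the form used for lattices; the function name and the toy basis are my own choices):

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization (without normalization) of the
    columns b_1..b_n of B.  Returns (Bstar, mu) where
    b_i^* = b_i - sum_{j<i} mu[i,j] * b_j^*  and
    mu[i,j] = <b_i, b_j^*> / <b_j^*, b_j^*>."""
    n = B.shape[1]
    Bstar = np.zeros_like(B, dtype=float)
    mu = np.zeros((n, n))
    for i in range(n):
        Bstar[:, i] = B[:, i]
        for j in range(i):
            mu[i, j] = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu[i, j] * Bstar[:, j]
    return Bstar, mu

B = np.array([[2.0, 1.0], [0.0, 2.0]])   # columns b_1 = (2,0), b_2 = (1,2)
Bstar, mu = gram_schmidt(B)
print(Bstar)   # b_1^* = (2,0), b_2^* = (0,2): orthogonal, as promised
# Note: (0,2) is NOT in the lattice generated by (2,0) and (1,2);
# writing (0,2) = c_1*(2,0) + c_2*(1,2) forces c_2 = 1 and c_1 = -1/2,
# which is not an integer.  This illustrates the remark above.
```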

Now, the bound. Yes, the bound. And the Oscar goes to $\min_{i=1,\ldots,n} \|b_i^*\|$. Do you see why? First let us understand what this statement says. It conveys that after you are done orthogonalizing, the length of the shortest vector in that orthogonal system is a non-trivial lower bound on the length of the shortest non-zero lattice vector. And now for the "why is that" part. Consider an arbitrary non-zero lattice vector $Bx$. We will project it onto a suitable $b_j^*$ and show that the projection forces $\|Bx\|$ to be at least $\|b_j^*\|$. Let $j$ be the largest index such that $x_j$ is non-zero. Then the corresponding $b_j^*$ does the trick. For, $|\langle Bx, b_j^* \rangle| = |x_j| \cdot \|b_j^*\|^2$ (why? because $b_j^*$ is orthogonal to $b_1, \ldots, b_{j-1}$, the coefficients $x_{j+1}, \ldots, x_n$ are zero by the choice of $j$, and $\langle b_j, b_j^* \rangle = \|b_j^*\|^2$),

and we know by the Cauchy-Schwarz inequality that $|\langle Bx, b_j^* \rangle| \leq \|Bx\| \cdot \|b_j^*\|$.

Combine these two inequalities to get $\|Bx\| \geq |x_j| \cdot \|b_j^*\| \geq \|b_j^*\| \geq \min_i \|b_i^*\|$ (the middle step uses $|x_j| \geq 1$, since $x_j$ is a non-zero integer),

which settles the result, as it says that every non-zero lattice vector $Bx$ satisfies $\|Bx\| \geq \min_i \|b_i^*\|$.
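A quick numerical sanity check of the bound on the same toy lattice as before (again my own throwaway code, with the Gram-Schmidt step inlined for the $2$-dimensional case):

```python
import itertools
import numpy as np

# Verify lambda_1 >= min_i ||b_i^*|| on a toy 2-dimensional lattice.
B = np.array([[2.0, 1.0], [0.0, 2.0]])   # columns b_1 = (2,0), b_2 = (1,2)

# Gram-Schmidt lengths ||b_i^*||, computed directly for n = 2.
b1, b2 = B[:, 0], B[:, 1]
b2_star = b2 - (b2 @ b1 / (b1 @ b1)) * b1
gs_min = min(np.linalg.norm(b1), np.linalg.norm(b2_star))

# Brute-force lambda_1 over small non-zero integer coefficient vectors.
lam1 = min(np.linalg.norm(B @ np.array(c, dtype=float))
           for c in itertools.product(range(-5, 6), repeat=2) if any(c))

print(lam1, gs_min, lam1 >= gs_min)   # 2.0  2.0  True
```

Here the bound is actually tight ($\lambda_1 = \min_i \|b_i^*\| = 2$), though in general it is only a lower bound.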

There is another interesting proof, though. I will expand upon it in my next post. For now, I will only mention that it involves rewriting your original basis in the orthogonalized system.

In this system, the original basis can be written in the following way (as you can easily verify):

$\begin{pmatrix}\|b_1^*\| & \mu_{2,1}\|b_1^*\| & \cdots & \mu_{n,1}\|b_1^*\| \\ 0 & \|b_2^*\| & \cdots & \mu_{n,2}\|b_2^*\| \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \|b_n^*\|\end{pmatrix}$


Again, I have assumed a full-rank lattice while writing the original basis in the orthogonalized system. For non-full-rank lattices (rank $n$ in $m > n$ dimensions), you get rows of zeroes after the $n$th row, as you might have expected.
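The triangular matrix is easy to reproduce numerically. Below is a sketch (my own, reusing the toy basis from earlier) that builds it from the $\mu_{i,j}$ and the $\|b_j^*\|$ and checks that, together with the orthonormalized vectors, it recovers the basis:

```python
import numpy as np

# Rewrite a basis B (columns b_i) in the orthonormal system
# q_j = b_j^* / ||b_j^*||: then B = Q R with R upper triangular,
# R[j, j] = ||b_j^*|| and R[j, i] = mu_{i,j} * ||b_j^*|| for j < i,
# which is exactly the matrix displayed above.
B = np.array([[2.0, 1.0], [0.0, 2.0]])         # toy basis, my choice
n = B.shape[1]

Bstar = np.zeros_like(B)
mu = np.eye(n)                                  # mu[i, i] = 1 by convention
for i in range(n):
    Bstar[:, i] = B[:, i]
    for j in range(i):
        mu[i, j] = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
        Bstar[:, i] -= mu[i, j] * Bstar[:, j]

norms = np.linalg.norm(Bstar, axis=0)
Q = Bstar / norms                               # orthonormal columns
R = mu.T * norms[:, None]                       # R[j, i] = mu[i, j] * ||b_j^*||

print(R)                                        # [[2. 1.], [0. 2.]]
print(np.allclose(Q @ R, B))                    # True: B = Q R
```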

So, in the next post we will talk a little more about successive minima and Gram-Schmidt orthogonalization, and we will develop a little more background to study SVP.

Till then good bye!
