## Tuesday, December 29, 2009

### Lattice Based Cryptography - Getting Inside 3

Hello again,

Getting Inside is sounding pretty lame now (even to me). I will probably change the name from my next post onwards.

(Start skipping)

But here is a defense for retaining the name though: I have not really talked about everything a first chapter of a book on lattice based cryptography would cover.

Worse, the material covered so far falls well short of that. As much as I would like to cover everything up to the boundary of what I know of the subject (which is only a trifle), I would also like to write up an intuitive account of it. Since I am building on the work of other giants in the field, I would love to "build a ladder" which, in my opinion, helps an average student understand the subject more readily. I will try to motivate the topics and fill in the gaps other authors have left for lesser mortals to fill in.

(Stop Skipping - or maybe skip the next para!)

In my previous post, I talked about the condition under which two bases generate the same lattice. While I provided the intuitive picture, and would have provided the actual proof as well, watchmath beat me to it by proving the statement in his comment.

Great! So what next?

I would like you to keep the goal in sight. We want to use lattice based primitives in a cryptographic scheme and prove that it is secure (or as secure as the underlying lattice problem, which we assume is hard).

In short, I will first build up the foundations with which we can prove that some lattice problems are hard.

For that, we will need to examine lattices more closely. What I would like to know is: given a lattice, how far do I need to travel from the origin, at a minimum, to make sure that I have seen $i$ linearly independent vectors? Further, a more basic question: since a lattice can have many different bases, there should be a way to talk about its properties irrespective of the underlying basis. We would like to do that; abstraction always saves a lot of trouble in any branch of science.

Without further ado, therefore, I will tell you straightaway that some lattice problems are horribly tough. In fact, one of these (assume you have a full rank lattice), called the Shortest Vector Problem, is so tough that even approximating it to within any constant factor is NP-hard (under randomized reductions)!

So, below I will help develop the tools (or help the reader understand the developed tools) to this end. First of all, we need the notions of what we call successive minima and Gram-Schmidt orthogonalization.

So, a few minutes ago I was talking about the minimum distance that I need to travel outward from the origin to ensure that I have seen $i$ linearly independent lattice vectors. This is the $i$-th minimum of the lattice.

Thus, we can think of the $1$st, $2$nd, $3$rd, ..., $n$th successive minima of the lattice (denoted $\lambda_1$, $\lambda_2$, $\lambda_3$, ..., $\lambda_n$). Note that these are constants of the lattice (and won't change if you pick a different basis for it).

Formally, we say that for a lattice $\Lambda$ the $i$th minimum is the radius of the smallest $n$-dimensional ball centered at the origin containing $i$ linearly independent lattice vectors:

$\lambda_i = \inf \{ r : \dim(\mathrm{span}(\Lambda \cap B(0, r))) \geq i \}$

where $B(0,r) = \{ x \in \mathbb{R}^m : \|x\| < r \}$ is the $m$-dimensional "open" ball of radius $r$.

(It is open because the inequality is strict: it excludes $r$ and contains everything less.) So much for a definition, but it is quite useful. (You will see the uses in future posts; but let me remind you that we will try looking into the hardness of the Shortest Vector Problem (SVP) and its close cousins, and there this notion of shortest independent vectors will be really handy.)
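To make the definition concrete, here is a brute-force sketch (not an efficient algorithm!) that approximates the successive minima of a small full-rank lattice by enumerating lattice vectors with small coefficients. The basis and the enumeration range below are made-up choices for illustration.

```python
# Brute-force the successive minima of a small full-rank lattice:
# sort short lattice vectors by length and greedily keep those that
# increase the dimension of the span.
import itertools
import numpy as np

def successive_minima(B, coeff_range=5):
    """Approximate lambda_1..lambda_n by enumerating coefficient vectors x
    with |x_i| <= coeff_range (a heuristic cutoff, fine for tiny examples)."""
    n = B.shape[1]
    coeffs = itertools.product(range(-coeff_range, coeff_range + 1), repeat=n)
    vecs = [B @ np.array(x) for x in coeffs if any(x)]   # non-zero vectors only
    vecs.sort(key=lambda v: np.linalg.norm(v))

    minima, chosen = [], []
    for v in vecs:
        # Keep v only if it is linearly independent of what we have so far.
        if np.linalg.matrix_rank(np.column_stack(chosen + [v])) > len(chosen):
            chosen.append(v)
            minima.append(np.linalg.norm(v))
            if len(chosen) == n:
                break
    return minima

B = np.array([[1.0, 2.0],
              [1.0, 0.0]])          # columns are the basis vectors (1,1), (2,0)
print(successive_minima(B))
```

For this basis both minima come out to $\sqrt{2}$: the vectors $(1,1)$ and $(1,-1)$ are both in the lattice, have length $\sqrt{2}$, and are linearly independent.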

Here are a few remarks on the definition. Note that I never said that the $\lambda_i$'s are lengths of lattice vectors; they are just radii of some open balls. Does any lattice vector achieve length $\lambda_i$? For how many $i$'s does that happen? You might intuitively feel that it should happen for all $i$'s, and your intuition is damn right. But the proof involves talking about compactness and some other things I won't get into. Let me just rephrase the intuitive picture you should carry in mind: you are looking for the $i$th successive minimum. You pick a ball centered at the origin with initial radius $\lambda_{i-1}$ and blow it up until it attains radius $\lambda_i$. Surely, when your ball attains radius $\lambda_i$, you must have hit a lattice point which lies along a vector independent of the previous successive minima vectors (this is the heart of the intuition). This tells you that $\lambda_i$, the ball's radius at that moment, "happens" exactly when you hit some lattice point.

The proof can be made more precise, but I won't do that here. I am again looking at SVP. Oh, in case you were wondering, SVP asks you to determine the length of the shortest non-zero lattice vector (which is nothing but $\lambda_1$). The problem has quite a few variations that are as hard as the problem itself; we will see them later. For now, I would like to tell you about a non-trivial lower bound on the length of the shortest non-zero vector in the lattice. That is, we are ready to see a lower bound on $\lambda_1$.

Can you guess any such bound? It has to do with Gram-Schmidt orthogonalization. In case you do not know it, here is a quick tour through the concept; if you do know it, you are invited to try to develop the lower bound yourself. As the name indicates, Gram-Schmidt orthogonalization is a technique used to orthogonalize a given system of vectors. In particular, we will invoke it to answer some questions about $m$-dimensional lattices of rank $n$. To this end, we define the orthogonalized "sequence" of vectors $b_1^*, b_2^*, \ldots, b_n^*$, where $b_i^* = b_i - \sum_{j=1}^{i-1}\mu_{i,j}b_j^*$ and $\mu_{i,j} = \langle b_i, b_j^* \rangle / \langle b_j^*, b_j^* \rangle$. In other words, $b_i^*$ is the component of $b_i$ orthogonal to $b_1^*, \ldots, b_{i-1}^*$.
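The recurrence above translates almost line for line into code. Here is a minimal Python sketch (the example basis is made up):

```python
# Gram-Schmidt orthogonalization:
#   b_i* = b_i - sum_{j<i} mu_{i,j} b_j*,
#   mu_{i,j} = <b_i, b_j*> / <b_j*, b_j*>.
import numpy as np

def gram_schmidt(B):
    """Columns of B are b_1..b_n; returns the orthogonalized b_1*..b_n*."""
    n = B.shape[1]
    Bstar = np.zeros_like(B, dtype=float)
    for i in range(n):
        Bstar[:, i] = B[:, i]
        for j in range(i):
            mu = B[:, i] @ Bstar[:, j] / (Bstar[:, j] @ Bstar[:, j])
            Bstar[:, i] -= mu * Bstar[:, j]   # strip off the b_j* component
    return Bstar

B = np.array([[1.0, 2.0],
              [1.0, 0.0]])          # columns b_1 = (1,1), b_2 = (2,0)
Bstar = gram_schmidt(B)
print(Bstar[:, 0] @ Bstar[:, 1])    # inner product of b_1*, b_2* is ~0
```

Here $b_1^* = (1,1)$ and $b_2^* = (2,0) - 1 \cdot (1,1) = (1,-1)$; note that $(1,-1)$ happens to lie in this lattice, but in general the $b_i^*$ need not.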

By the way, notice the word sequence there. It is a reminder to the astute reader (and to others!) that the output of Gram-Schmidt orthogonalization depends on the order in which the input vectors are provided.

Further, it is clear that $span(b_1, b_2, \ldots, b_i) = span(b_1^*, b_2^*, \ldots, b_i^*)$ for all $i$ between $1$ and $n$. Also, notice that this orthogonalized system need not be a basis for your original lattice. In fact, it is quite likely that the vectors $b_i^*$ you obtain during orthogonalization are not even in the lattice generated by the initial basis.

Now, the bound. Yes, the bound. And the Oscar goes to $\min_{i=1,\ldots,n} \|b_i^*\|$. Do you see why? First let us understand what the statement says. It conveys that after you are done orthogonalizing, the length of the shortest vector in that orthogonal system is a non-trivial lower bound on the length of the shortest non-zero lattice vector. And now for the "why" part. Consider an arbitrary non-zero lattice vector $Bx$. We will project it onto some $b_j^*$ and show that its length is at least $\|b_j^*\|$. Let $j$ be the largest index such that $x_j$ is non-zero. Then the corresponding $b_j^*$ will do the trick. For, $|\langle Bx, b_j^* \rangle| = |x_j| \cdot \|b_j^*\|^2$ (why?)

and we know by the Cauchy-Schwarz inequality that $|\langle Bx, b_j^* \rangle| \leq \|Bx\| \cdot \|b_j^*\|$.

Combine these two inequalities to get $\|Bx\| \geq |x_j| \cdot \|b_j^*\| \geq \|b_j^*\| \geq \min_i \|b_i^*\|$,

which settles the result, as it says that an arbitrary non-zero lattice vector satisfies $\|Bx\| \geq \min_i \|b_i^*\|$.
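As a quick numerical sanity check (a Python sketch on a made-up 2D basis), we can compare a brute-forced $\lambda_1$ against $\min_i \|b_i^*\|$:

```python
# Check numerically that the shortest non-zero lattice vector is
# at least min_i ||b_i*|| for one small example basis.
import itertools
import numpy as np

B = np.array([[1.0, 2.0],
              [1.0, 0.0]])          # columns b_1 = (1,1), b_2 = (2,0)

# Two-vector Gram-Schmidt by hand: b_1* = b_1, b_2* = b_2 - mu * b_1*.
b1s = B[:, 0]
b2s = B[:, 1] - (B[:, 1] @ b1s) / (b1s @ b1s) * b1s
lower_bound = min(np.linalg.norm(b1s), np.linalg.norm(b2s))

# Brute-force lambda_1 over a small coefficient box (heuristic range).
lam1 = min(np.linalg.norm(B @ np.array(x))
           for x in itertools.product(range(-5, 6), repeat=2) if any(x))

print(lower_bound, lam1)
assert lam1 >= lower_bound - 1e-9
```

For this particular basis the bound is tight: both quantities equal $\sqrt{2}$, though in general the inequality can be strict.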

There is another interesting proof though. I will expand upon it in my next blog. For now, I will only mention that it involves rewriting your original basis in the orthogonalized system.

In this system, the original basis can be written the following way (as you can easily verify):

$\begin{pmatrix}\|b_1^*\|&\mu_{2,1}\|b_1^*\|&\cdots&\mu_{n,1}\|b_1^*\|\\0&\|b_2^*\|&\cdots&\mu_{n,2}\|b_2^*\|\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&\|b_n^*\|\end{pmatrix}$

Again, I have assumed that I was working with a full rank lattice while writing the original basis in the orthogonalized system. For non-full-rank lattices, you get rows of zeroes after the $n$th row, as you might have expected.
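If you know the QR decomposition, you may recognize the matrix above: writing the basis in the orthonormalized system is exactly the $R$ factor of $B = QR$, where $Q$'s columns are the normalized $b_i^*$. A quick check on the same toy basis (library conventions may flip signs on the diagonal, so we compare absolute values):

```python
# The upper triangular matrix from the post is the R of a QR decomposition:
# its diagonal carries the Gram-Schmidt lengths ||b_i*|| (up to sign).
import numpy as np

B = np.array([[1.0, 2.0],
              [1.0, 0.0]])
Q, R = np.linalg.qr(B)

print(np.abs(np.diag(R)))                 # the ||b_i*||'s, here sqrt(2) each
assert np.allclose(Q @ R, B)              # reconstructs the basis
assert np.allclose(np.tril(R, -1), 0.0)   # R is upper triangular
```

This viewpoint also makes the lower bound easy to remember: $\min_i \|b_i^*\|$ is just the smallest diagonal entry of $R$ in magnitude.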

So, in the next blog we will talk a little more about successive minima and Gram-Schmidt orthogonalization, and we will develop a little more background to study SVP.

Till then good bye!

## Monday, December 28, 2009

### Lattice Based Cryptography - Getting Inside 2

Hello Again,

So did you try out the problems listed at the end of the previous post?

That was fun, right?

Anyway, in case you did not get the solution (or did not try the problem at all) here is the solution.

(Please note that the following material draws heavily from Oded Regev's lecture notes.)

We will solve the more general problem of determining basis equivalence first. That is, given two bases $B_1$ and $B_2$, we will determine whether they generate the same lattice. This will also help with the problem of determining whether a given lattice hits all points in $Z^n$, because we know that $I$ hits all points in $Z^n$, and we will know the condition under which a basis $B$ is equivalent to $I$. Actually we get more. We not only learn how to determine whether a given $B$ is equivalent to $I$; we also learn how to create such a $B$.

First of all, we notice that a given basis $B_1$ and another basis $B_2$ obtained by a permutation of its columns are equivalent (why? hint: it's too easy).

Any other basis $B_2$ equivalent to $B_1$ will have the same number of "effectively independent vectors". Further, we would expect that the two sets of vectors are effectively the same: each is clearly an integer combination of the other.

Something tells us that column operations on $B_2$ can convert it into the equivalent matrix $B_1$.

Further, if $B_1$ is full rank so that $\det(B_1)$ is defined, we hope that $\det(B_2)$ has the same magnitude as $\det(B_1)$. So that looks like the rule: perhaps both matrices should have determinants of the same magnitude. Hmm. So we can say that if $B_1 = B_2 U$ (and $B_2 = B_1 V$) for some unimodular matrix $U$ (and unimodular $V$), then $B_1$ and $B_2$ are equivalent. By the way, unimodular matrices are those which have determinant $+1$ or $-1$ and all entries integral.

But wait. We talked about lattices with full rank bases. What if the rank is not full? Well, you can prove that these rules hold in general. This helps you see that if you want your basis to hit $Z^n$, your matrix needs to be a unimodular matrix times $I$; in short, your matrix needs to be unimodular itself.

Now let's actually prove the following proposition: two bases $B_1, B_2 \in \mathbb{R}^{m \times n}$ are equivalent if and only if $B_2 = B_1 U$ for some unimodular matrix $U$.
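For full rank bases, the proposition suggests an immediate equivalence test: compute $U = B_1^{-1} B_2$ and check that it is unimodular. Here is a Python sketch of that test (the bases below are made-up examples, and the tolerance-based integrality check is a numerical convenience, not part of the math):

```python
# Test whether two full-rank bases generate the same lattice by checking
# that U = B1^{-1} B2 has integer entries and determinant +-1.
import numpy as np

def equivalent(B1, B2, tol=1e-9):
    U = np.linalg.solve(B1, B2)                    # U such that B2 = B1 U
    integral = np.allclose(U, np.round(U), atol=tol)
    det_ok = abs(abs(np.linalg.det(U)) - 1.0) < tol
    return integral and det_ok

B1 = np.array([[1.0, 2.0],
               [1.0, 0.0]])
B2 = np.array([[3.0, 2.0],        # B2 = B1 @ [[1, 0], [1, 1]], a unimodular U
               [1.0, 0.0]])
print(equivalent(B1, B2))         # same lattice
print(equivalent(B1, np.eye(2)))  # this lattice is not all of Z^2
```

The first call returns `True` since $U = \begin{pmatrix}1&0\\1&1\end{pmatrix}$ is unimodular; the second returns `False` since $B_1^{-1}$ is not integral.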

## Sunday, December 27, 2009

### Lattice Based Cryptography - Getting Inside

Hello again,

While this might not be the best title for this blog, I guess I will have to make do with this to begin with.

Lattice Based Cryptographic systems are currently an active area of research in Cryptology and in Theoretical Computer Science as well. Many researchers, including Oded Regev, Miklos Ajtai, Daniele Micciancio and Chris Peikert, have contributed to this field.

The field was officially kicked off by a ground-breaking work of Ajtai, in which he discovered that lattices could be used to construct cryptographic primitives.

But I have got ahead of myself already. I guess, I am tremendously handicapped in my writing skills.

Let's back up. I need to tell you what lattices are. Further, I will have to tell you why we care about this new mechanism. There are loads of other questions you could think of. I will have to start slowly, and that's what I do now.

A lattice is a set of points in $n$-dimensional space with a periodic structure. (I will give a more formal definition later.) Let's just say for the moment that they look like a neatly laid out grid. I would like you to think about a simple lattice: the set of points with integral coordinates in the $2$-dimensional plane. This is the lattice $Z^2$.

You might ask why in heaven's name anyone studies these structures. If you are interested in exploring the historical context, I would only say that they were extremely useful in solving Diophantine equations. Basically, here you are looking for "integral" solutions to a set of equations where the number of unknowns is more than the number of equations. The desired "set of points" (all with integral coordinates) is a subset of some lattice determined by the input equations. Those of you who are familiar with linear algebra can already look ahead and feel that linear algebra will certainly have some bearing on where we are headed. You can also feel a vague hint of a connection with integer (and linear) programming.

Okay, enough. If you want to study more of the historical background behind lattices, I would suggest that you go and read some text like this on the geometry of numbers.

Now, given the number-theoretic importance of lattices, you might be ready for the connection with cryptography, which is what this blog will mainly dwell on.

But first, let me address a very basic question: what is so nice about these lattices that they are, at the moment, on the radar of researchers in cryptography? Towards the end of my previous blog, I talked about worst case hardness and average case hardness.

I said that lattice based schemes are based on the worst case hardness of lattice problems. What this means: unlike factoring, where by saying "factoring integers is hard" we merely mean that there exist integers which are hard to factor, lattices are more fun.

To illustrate the point further, consider this: some integers are easy to factor (even integers, say, or multiples of small primes). So, saying "factoring is hard" is not enough; you need to concoct a number whose factorization is hard. Thus, the factoring assumption is only a worst case hardness assumption. That is, the average case is not as hard as the worst case.

Aha! That's where lattices steal the ball. With lattice based schemes, the average case is every bit as hard as the worst case.

The above discussion about worst case hardness of lattice problems has been taken from Scott Aaronson's insightful discussion of the topic on his blog.

So, finally, the moment. Let's throw in the formal definition of a lattice.

Given a bunch of $n$ linearly independent vectors from $\mathbb{R}^m$ (kept as the columns of a matrix $B$), the lattice generated by them is the set of vectors $v = Bx$, where $x$ is an integer vector whose entry $x_i$ records what integral amount of vector $b_i$ we want to keep in $v$.

Notice that the basis vectors above come from $m$-dimensional space. Further, their components are not "restricted" to be integers. It is the coefficients in the linear combination that need to be integral (that is, your $x_i$'s are integral).

$m$ is the dimension of the lattice and $n$ is its rank. Full rank lattices have $m = n$.

It's not necessary that a full rank lattice of rank $n$ will hit all points in $Z^n$, as you can verify for yourself. For example, consider $\begin{pmatrix}1\\1\end{pmatrix}$ and $\begin{pmatrix}2\\0\end{pmatrix}$ as your basis vectors. Clearly they hit only those points the sum of whose coordinates is even.
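You can convince yourself of that claim mechanically. Here is a small Python check that enumerates lattice points over a finite coefficient box and compares them against the even-coordinate-sum points (a finite-box sanity check, not a proof):

```python
# Verify on a finite box that the lattice generated by (1,1) and (2,0)
# consists exactly of the integer points whose coordinates sum to an
# even number.
import itertools
import numpy as np

B = np.array([[1, 2],
              [1, 0]])               # columns are the basis vectors

points = {tuple(int(v) for v in B @ np.array(x))
          for x in itertools.product(range(-10, 11), repeat=2)}

# Every generated point has an even coordinate sum ...
assert all((a + b) % 2 == 0 for a, b in points)
# ... and every small even-sum point is generated
# (x = b, y = (a - b)/2 solves x*(1,1) + y*(2,0) = (a,b)).
for p in itertools.product(range(-5, 6), repeat=2):
    if sum(p) % 2 == 0:
        assert p in points
print("claim checked on a finite box")
```

The comment inside the loop is the actual characterization: for any even-sum target $(a,b)$, the integer coefficients $x = b$, $y = (a-b)/2$ produce it.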

Can you characterize the condition on the basis vectors of an $n$-dimensional full rank lattice so that they hit $Z^n$?

Better still, can you figure out when two bases will be equivalent?

Perhaps that's the topic for my next blog. But do try it in the meantime.

### Some snapshots of Theoretical Computer Science from my Camera

I am no authority on the subject. All that I will say below is based on my brief (and ongoing!) tour of this fascinating field.

If you want a more rigorous account, I would suggest that you look into places like Dick Lipton's blog, Luca Trevisan's blog, or some of the many other beautiful sources which Wikipedia and Google will return.

This is supposed to be a relaxed introductory pathway into some aspects of the field. I will include what I feel are really amazing insights that many theorists have come up with over the years, and will hopefully share my enthusiasm for the field with you. I hope that you will also provide some nice and refreshing insights which I will benefit from. That might sound like me being selfish; but that's me.

Anyway, where to begin?

I do not intend to keep the discussions very light. That would defeat the purpose of making an effort to explain the nice things I am studying. Nor do I want to elaborate on research frontiers, for the simple reason that I cannot do that at this moment.

Therefore, I will just blabber about topics I think I can share some refreshing insights about.

Let's begin the tour with the (probably) esoteric field of Lattice Based Cryptography.

I guess you all know what a good cryptosystem is. One popular cryptosystem is RSA, which is based on the assumption that factoring integers is hard. Please note that factoring integers is believed not to be NP-complete (it is possibly easier).

NP-complete problems involve searching for a solution in an exponentially sized solution space, which forms the core of their difficulty.

(Certainly there is a possibility that some fiendishly clever technique is looming around the corner which enables a quick scan through the solution space, but I discount that possibility. In other words, I am assuming that P != NP.)

Also, their search versions include a "report if none exists" clause, which basically says that an algorithm looking for an object with a particular property in the given setting should either find one or report that the structure does not contain the object of interest.

Factoring does not involve a "report if none exists" clause (every integer has a factorization). Further, factorisation can be solved in polynomial time using quantum computation.

Thus, we have a fair amount of evidence to believe that factorisation is not NP-complete, and if you ask me, I am indeed going to bet against its being NP-complete.

So far so good.

What is the natural next step? We would like to base our cryptographic schemes on the hardness of some problem X "harder than factoring". We would like some X which is NP-hard, but as of now that seems too hard.

We are still trying though - the fight is on.

We have some lattice based cryptographic schemes which we think are safe from quantum attacks (so far).

Looking ahead, I will also tell you that whereas with factoring the security of the scheme using it depends on what we call average case hardness, with lattice based schemes you can have what we call worst case hardness.

I will elaborate upon these in my later blogs.

Thanks and have a good day.

## Monday, November 30, 2009

### Welcome to a yet another theory blog

Hello Everybody,

This is Akash Kumar and I am a Computer Science Student at Georgia Tech.

I find theory extremely compelling, and hence this blog. I will take you on this tour as I am taking it myself right now, hopefully conveying the beauty of the subject.

Feel free to comment - I hope an exchange of ideas will help me (and maybe you) immensely.

That's it for now.
Bye and have a nice day.

Some good theory blogs to follow

1) Dick Lipton's blog
2) Lance Fortnow's blog
3) Scott Aaronson's blog