**Fully Homomorphic Encryption Part Two: Lattice-based Crypto and the LWE Problem**

Last time, we went through the overview of what FHE is, the different stages towards FHE, and the brief history of it. I think at this point, we should be pretty comfortable with understanding what FHE is and its potential applications.

Although these topics might sound pretty archaic and distant, I claim that **it’s actually not that hard to understand**. In fact, with just simple knowledge in linear algebra, we can fully grasp the idea of the GSW FHE Scheme.

In this post, let’s together **review some fundamental concepts of Lattice-based Crypto and the LWE Problem** so we can build up our knowledge to the actual GSW Scheme later.

# A Brief Intro to Lattice-based Cryptography

**So what the heck is Lattice-based Crypto**? According to Wikipedia:

> Lattice-based cryptography is the generic term for constructions of cryptographic primitives that involve lattices, either in the construction itself or in the security proof. Lattice-based constructions are currently important candidates for post-quantum cryptography.

Word. Lattice-based Crypto is basically the Crypto that uses **lattices**. Sounds like a very convenient explanation.

However, for those of us who don’t really know what **lattices** are, this concept sounds very confusing. To be honest, I still haven’t fully grasped the formal definition of lattices. If we look at the definition on Wikipedia again, it involves geometry and group theory concepts that are also hard to comprehend. I think this is probably why this topic scares away a lot of prospective readers.

Therefore, I decided to take a more **practical** approach to introduce this concept using my own understanding of it.

*Disclaimer: To more hard-core readers, my interpretation of this concept might not be entirely correct. Here I’m just trying to provide a more friendly perspective of this problem.*

# Integer Lattices from Basis Vectors

At this point, I would assume that we all have some linear algebra background. (If not, I highly recommend watching the YouTube series on Linear Algebra by **3Blue1Brown**.)

If we have some basis vectors `b_0, b_1` that span a linear space, any vector `v` in this space can then be expressed as:

`v = c_0 * b_0 + c_1 * b_1`

We know that since the coefficients `c_0, c_1` can be any number, all the possible choices of these numbers will eventually **create an area which represents the span of the basis vectors**.

However, what if we were to add one more limitation: **all the coefficients `c_0, c_1` must be integers**? Then, all the possible choices of these coefficients no longer create an area. Instead, they create a discrete, sparse grid of dots, where **every dot represents a unique integer linear combination of the basis vectors**.

To imagine this, check out the image down below:

Basically, instead of having an area of infinite vector choices, now we have a grid of finite vector choices that are linear combinations of the basis vectors of this linear space where the coefficients are all integers. We refer to this grid-like system as an **Integer Lattice**.

Although we are describing a two-dimensional lattice structure, it’s perfectly fine to extend it into multiple dimensions. In fact, in order to make this into a hard problem that’s suitable for cryptography, we need to have plenty of dimensions.
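To make the grid concrete, here is a tiny Python sketch. The basis vectors are made-up illustrative values; enumerating small integer coefficients produces a discrete set of points rather than a filled area.

```python
# Hypothetical 2-D basis vectors (made-up values for illustration).
b0 = (2, 1)
b1 = (1, 3)

def combo(c0, c1):
    """Integer linear combination c0*b0 + c1*b1."""
    return (c0 * b0[0] + c1 * b1[0], c0 * b0[1] + c1 * b1[1])

# Enumerating small integer coefficients yields a discrete grid of points,
# not a filled area: this is (a finite window of) the integer lattice
# generated by b0 and b1.
lattice = {combo(c0, c1) for c0 in range(-3, 4) for c1 in range(-3, 4)}

print((3, 4) in lattice)   # True: c0 = 1, c1 = 1 gives (3, 4)
print((0, 1) in lattice)   # False: no integer combination hits (0, 1)
```

With these particular basis vectors, `(0, 1)` would require `c_0 = -1/5`, so no integer choice of coefficients ever reaches it, no matter how large a window we enumerate.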

# Interesting Problems in Integer Lattices

So what’s special about Integer Lattices? Turns out it’s the “Integer” part that makes this interesting.

Now since we limit the coefficients to be integers only, we can **no longer represent every single element in the original linear space using these basis vectors**. Suppose we have basis vectors:

And we would like to represent the vector:

There isn’t a valid pair of integer coefficients that allows us to do this.


However, here comes the interesting part: since we cannot perfectly represent the vector with a pair of integer coefficients, **what about a very close approximation**? The problem becomes **whether we can find a pair of integer coefficients `c_0, c_1` such that the approximation of the vector in the Integer Lattice has the shortest distance to the vector itself**. In the example above, the closest we can get is when we set `c_0 = 3, c_1 = 3`. The approximation vector is `[3, 3]`. This task of finding the lattice point closest to a target vector is known as the **Closest Vector Problem (CVP)**, and it is believed to be hard in high dimensions.
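In low dimensions, we can even solve this closest-approximation problem by brute force. A rough Python sketch, where the basis vectors and the target are made-up illustrative values:

```python
# Brute-force closest-vector search over a small coefficient window.
# Basis vectors and target are hypothetical illustrative values.
b0 = (2, 1)
b1 = (1, 3)
target = (4.5, 3.2)

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

best = None
for c0 in range(-5, 6):
    for c1 in range(-5, 6):
        point = (c0 * b0[0] + c1 * b1[0], c0 * b0[1] + c1 * b1[1])
        if best is None or dist2(point, target) < dist2(best, target):
            best = point

print(best)  # the closest lattice point to the target in this window
```

This search tries `11^2` candidate coefficient pairs; in `n` dimensions the same window holds `11^n` candidates, which is why brute force stops being an option very quickly and why the problem becomes suitable for cryptography.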

# The Learning With Errors (LWE) Problem

Now we should have a pretty good understanding of integer lattices! Let’s switch gears for a bit and talk about the main topic of this post: **Learning With Errors**.

Before we get into the specifics, I want to quickly do a warmup detour — **a retrospection of high school algebra**.

In high school algebra class, we might have encountered problems where we need to **solve a system of equations**. Namely, we are given a set of equations such as:

In order to solve this kind of problem, we are taught the **Row Reduction Method**. Basically, the idea is to subtract one equation from another to end up with new, simpler equations. In the example above, if we subtract the third row from the first row, we obtain `2x_1 + 3x_3 = -1`.

Eventually, after enough of these row reductions, if the problem is set up correctly, we end up with some pretty nice expressions like `x_1 = 1, x_2 = -1`, etc. We can then plug these numbers back into the existing equations to solve for the other unknown variables in the system.

The row reduction operation can therefore be applied to the matrix `A` and the vector `b` to obtain the solution for the vector `x`. This method is commonly called **Gaussian Elimination**. We can formally define the Gaussian Elimination Problem as: **given a matrix `A` and its product `b = Ax`, can we recover the unknown vector `x`?**

Since solving systems of equations is a problem taught in high school, we all know that it’s not really a hard one to solve. If the problem is well-defined, then after a few row reductions, we can easily obtain the results. Any computer can solve a well-defined Gaussian Elimination problem pretty efficiently.
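As a sanity check, here is a minimal Python/NumPy sketch (the entries of `A` and `x` are made-up values) showing that exact recovery from `b = Ax` is easy:

```python
import numpy as np

# A small, well-defined system b = A x (entries are made-up values).
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
x = np.array([1.0, -1.0, 2.0])
b = A @ x

# Gaussian elimination (performed internally by the solver) recovers x
# exactly from A and b -- this is the "easy" noiseless problem.
recovered = np.linalg.solve(A, b)
print(np.allclose(recovered, x))  # True
```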

However, **what if the original system of equations is somehow noisy**?

Namely, what if we also add an additional noise vector to the `Ax` product, so instead we get something like:

`b~ = Ax + e`

Where `e` is some noise vector that has the same dimension as the output `b~`. **If we are given the matrix `A` and its product with errors `b~ = Ax + e`, can we still efficiently recover the unknown vector `x`?**

This problem, where we are trying to “learn” the unknown vector `x` from its product with the matrix `A` with some error offset `e`, is commonly called the **Learning With Errors (LWE) problem**.
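To see the difference concretely, here is a toy Python sketch of an LWE instance (tiny, wildly insecure parameters, chosen only for illustration). With noise, row reduction no longer applies; the generic fallback is to search candidate secrets, and the candidate count `q^n` blows up exponentially in `n`:

```python
import random
from itertools import product

random.seed(1)

# Toy LWE instance over Z_q (insecure parameters, illustration only).
q, n, m, B = 31, 2, 6, 2
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
x = [random.randrange(q) for _ in range(n)]
e = [random.randrange(-B, B + 1) for _ in range(m)]
b = [(sum(A[i][j] * x[j] for j in range(n)) + e[i]) % q for i in range(m)]

def centered(v):
    """Map v mod q into the centered range around 0."""
    v %= q
    return v - q if v > q // 2 else v

# Brute-force search: keep every candidate secret whose residuals
# b - A*t are all within the error bound B. This tries q**n candidates.
candidates = [
    list(t) for t in product(range(q), repeat=n)
    if all(abs(centered(b[i] - sum(A[i][j] * t[j] for j in range(n)))) <= B
           for i in range(m))
]

print(x in candidates)  # True: the real secret always passes the check
```

The search is feasible here only because `q**n = 961`; for realistic parameters it is astronomically large, which is the intuition behind LWE’s hardness.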

Now, let’s formally define LWE again.

First, we need to introduce a set of notations for the LWE problem:

- We view `Z_q`, the integers in the range `(-q/2, q/2)`, as a **finite field**.
- We define `||e||_infinity` as the **infinity norm** operation on a vector `e` with `m` elements. The operation returns the element in vector `e` that has the **greatest absolute value**. Namely: `||e||_infinity = max_i |e_i|`.
- Finally, we define `x_B` as a **random variable** that has its **distribution bounded by a number `B`**. Namely, every sample of this random variable will definitely be smaller than `B` in absolute value.
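These notations translate directly into code. A small sketch of the two helpers (the modulus value is hypothetical):

```python
q = 97  # a hypothetical modulus

def centered(v, q):
    """Represent v mod q as an integer in the centered range (-q/2, q/2)."""
    v %= q
    return v - q if v > q // 2 else v

def infinity_norm(e):
    """||e||_infinity: the largest absolute value among the entries of e."""
    return max(abs(v) for v in e)

print(centered(95, q))             # -2, since 95 = -2 (mod 97)
print(infinity_norm([3, -7, 5]))   # 7
```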

Now, here is the formal definition of LWE:

**LWE (Search Version)**: Sample a matrix `A <- Z_q^{m x n}`, a secret vector `s <- Z_q^n`, and an error vector `e <- x_B^m`. Given `(A, As + e)`, recover `s`.

Allow me to paraphrase with my own words again.

Basically, in an LWE problem (Search Version), we need to define the **dimensions** of the matrix `A` to be `m x n`, where `m` represents the number of equations and `n` represents the number of unknowns in the system. We also need to define the **size of the finite field** as `q`, where `q` should be a large enough integer, larger than any of the computations we’ll need to do. Finally, we need to define the error upper bound `B`, so the random error we sample can be bounded in a reasonable range.

The LWE problem is basically asking us to **recover the unknown vector `s` using the knowledge of the matrix `A` and the product with error `As + e`**.

Notice that we have a lot of parameters here. LWE has a lot more parameters than the other common problems we encounter in cryptography, such as the **Discrete Log Problem** in the Diffie-Hellman key exchange protocol. Here, let’s see what these parameters are really used for again:

- `n` is commonly referred to as the security parameter (`\lambda`). The more unknown variables we have in the system, the harder the LWE problem becomes.
- `m` is usually a polynomial of `n`. Namely, `m = poly(n)` and `m >> n`. The larger `m` grows, the easier the LWE problem gets.
- `q` is usually `poly(n)`. For example, we can set `q` to be `O(n²)`.
- `B << q`: we want the error bound to be negligible relative to the size of `q`, so we can properly recover the unknown vector.

Usually, when we use LWE in practice, we will predefine `m, B, q` as functions of `n`, so we don’t need to manually set all these tedious parameters. Also, **correct initialization of these parameters can guarantee that our LWE problem has a unique solution with high probability**.

Notice that we keep calling the previous LWE construction the **Search Version**. This is because in the problem we are given the matrix `A` and the product with error `As + e`, and asked to recover `s`. However, to prove security in cryptography, we usually tend to use the **Decisional Version of LWE**.

The **Decisional LWE (DLWE)** problem has basically the same setup as the Search Version. However, instead of trying to recover `s`, our task becomes much simpler: to **distinguish between a valid LWE problem instance versus some randomly sampled values**.

**The DLWE Assumption** basically states that, given an LWE problem instance `(A, As + e)`, it is **computationally infeasible to distinguish** it from `(A, v)`, where `v <- Z_q^m` is a uniformly random vector.

The reasoning behind the assumption is fairly straightforward: since it’s hard to recover the unknown vector `s` from the LWE problem instance, the vector `As + e` **just looks like a randomly sampled vector** to us. Therefore, we can’t tell the difference even if the `As + e` term is actually replaced by a real random vector `v`.
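The two distributions in the DLWE game can be sketched like this (toy parameters, purely illustrative). The assumption says no efficient algorithm can tell which sampler produced a given pair:

```python
import random

random.seed(2)

# Toy parameters (illustration only; real instances use large n).
q, n, m, B = 97, 4, 8, 2

def rand_matrix(rows, cols):
    return [[random.randrange(q) for _ in range(cols)] for _ in range(rows)]

def sample_lwe():
    """A genuine LWE pair (A, As + e)."""
    A = rand_matrix(m, n)
    s = [random.randrange(q) for _ in range(n)]
    e = [random.randrange(-B, B + 1) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return A, b

def sample_random():
    """A pair (A, v) with v uniformly random -- no secret involved."""
    return rand_matrix(m, n), [random.randrange(q) for _ in range(m)]

# Both samplers output pairs with identical shapes and entry ranges;
# the DLWE assumption is that they are computationally indistinguishable.
A, b = sample_lwe()
print(len(b) == m and all(0 <= v < q for v in b))  # True
```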

Actually, this is not the first time we have seen both the search version and the decisional version of the same problem in cryptography. Similar concepts appear when we evaluate the security of the **Diffie-Hellman Key Exchange Protocol**.

When proving the semantic security of the DH protocol, we usually prefer the decisional version: the **Decisional Diffie-Hellman Problem (DDH)**. In this problem, we are presented with either `(g, g^a, g^b, g^{ab})` or `(g, g^a, g^b, g^r)`, where `g^r` is a random group element. Our task is to distinguish whether the last element is `g^{ab}` or `g^r`. If we cannot efficiently distinguish these two candidates when the problem is instantiated with reasonable security parameters, the DDH problem is considered **hard**.

With the help of pairings, we can easily crack DDH: we can simply apply the pairing operation `e(., g)` to the last unknown group element and compare the result with `e(g^a, g^b)`. If the last element is indeed `g^{ab}`, then the equality will hold. **With the help of pairings, solving DDH is trivial**. Therefore, we need to make sure that the Diffie-Hellman protocol is conducted in a safe group where no pairing operation can be performed.

Something worth noticing here is that **the DLWE problem is as hard as the SLWE problem**. Both problems rely on the same assumption that we cannot recover the unknown vector `s`, and there is no known shortcut attack like the pairing operation here. This is an amazing property for cryptographic constructions. Now, **we can characterize the hardness of DLWE by worst-case hardness**!

# Putting it all together: Regev Encryption

If you made it to this line, you should now fully understand how LWE works. **Yay!**

For the next step, let’s actually put together all the knowledge we have learned so far and use lattice-based problems to **actually do some cool lattice-based encryption**.

One of the most prominent examples of Lattice-based Encryption Schemes is **Regev Encryption**. It was invented by Oded Regev in 2005, and it relies on the standard DLWE assumption.

Regev Encryption is **an “ElGamal” style public-key encryption scheme**. Let’s see its formal definition.

- `KeyGen(1^\lambda)`: The KeyGen algorithm first creates an LWE problem instance. Namely, it samples `A <- Z_q^{m x n}`, `s <- Z_q^n`, `e <- x_B^m`, and sets the product-with-error term to `b = As + e`. We also need to enforce that `q/4 > mB` for the correctness property. Finally, it sets the private key `sk = s` and the public key `pk = (A, b)`.
- `Encrypt(pk, x \in {0, 1})`: The encryption algorithm **encrypts a single bit at a time**. Just like ElGamal, it first samples a **random binary nonce vector** `r <- {0,1}^m`, then sets the first part of the ciphertext `c_0` to `r^T * A`, and the second part `c_1` to `r^T * b + floor(q/2) * x`. Finally, it outputs both parts `(c_0, c_1)` as the final ciphertext.
- `Decrypt(sk, ct = (c_0, c_1))`: To decrypt the ciphertext, first compute `x~ = c_1 - c_0 * s`. Then check whether `|x~| < q/4`. If so, output `x = 0`; otherwise, output `x = 1`.
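The three algorithms above can be sketched directly in Python. This is a toy implementation with tiny, insecure parameters, chosen only for illustration; real instantiations derive `m, q, B` from a large security parameter `n`:

```python
import random

random.seed(0)

# Toy parameters (insecure; for illustration only). We pick q so that
# q/4 > m*B, the correctness condition from KeyGen.
n, m, B = 4, 10, 2
q = 8 * m * B  # q = 160, so q/4 = 40 > m*B = 20

def keygen():
    """Sample an LWE instance; sk = s, pk = (A, b = As + e)."""
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    s = [random.randrange(q) for _ in range(n)]
    e = [random.randrange(-B, B + 1) for _ in range(m)]
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q
         for i in range(m)]
    return s, (A, b)

def encrypt(pk, x):
    """Encrypt a single bit x using a random binary nonce vector r."""
    A, b = pk
    r = [random.randrange(2) for _ in range(m)]
    c0 = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    c1 = (sum(r[i] * b[i] for i in range(m)) + (q // 2) * x) % q
    return c0, c1

def decrypt(sk, ct):
    """Recover the bit: c1 - <c0, s> lands near 0 or near q/2."""
    c0, c1 = ct
    xt = (c1 - sum(c0[j] * sk[j] for j in range(n))) % q
    if xt > q // 2:          # map into the centered range (-q/2, q/2)
        xt -= q
    return 0 if abs(xt) < q // 4 else 1

sk, pk = keygen()
print(all(decrypt(sk, encrypt(pk, bit)) == bit for bit in (0, 1)))  # True
```

Because `|r^T e| <= m*B < q/4` always holds with these parameters, decryption here is correct for every choice of randomness, not just with high probability.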

The **Correctness** property of the encryption scheme is pretty straightforward. We can expand the computation of `x~` to:

`x~ = c_1 - c_0 * s = r^T * (As + e) + floor(q/2) * x - r^T * A * s = floor(q/2) * x + r^T * e`

Basically, the value of `x~` is the original bit `x` times `q/2`, overlaid with an error term `r^T * e`. Since every element of the error vector is bounded by the upper bound `B`, and `r` is a random binary vector, the worst-case error we can get is simply `mB`. Since we enforced that `q/4 > mB`, we know that this error is confined to a very small region relative to the size of `q`.

Therefore, **we can guarantee that the decryption will always succeed**. Namely, **the error term is tractable and bounded by an amount negligible relative to `q`**, which won’t impact decryption at all.
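The bound above is simple arithmetic, and checking it with some hypothetical toy parameters makes the margin visible:

```python
# Worst-case decryption error: r is binary with m entries and |e_i| <= B,
# so |r^T e| <= m * B. Decryption succeeds whenever this stays below q/4.
m, B, q = 10, 2, 160   # hypothetical toy parameters

worst_case_error = m * B
print(worst_case_error)          # 20
print(worst_case_error < q / 4)  # True: 20 < 40, so decryption never fails
```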

Next, we take a look at its **Security** property.

In order to prove that this scheme is secure, we need a very powerful tool in cryptographic proofs: **the Hybrid Argument**.

This is really nothing fancy. Hybrid arguments allow us to achieve a proof via incremental steps. Let’s first construct the first hybrid, which is basically **the view of an eavesdropping adversary that sees the entire encryption transcript**:

`H_0 = (A, b = As + e, c_0 = r^T * A, c_1 = r^T * b + floor(q/2) * x)`

Now, we can replace a part of the public key with a uniformly random vector `v` and arrive at the second hybrid:

`H_1 = (A, v, c_0 = r^T * A, c_1 = r^T * v + floor(q/2) * x)`

We know that one cannot efficiently distinguish between `H_0` and `H_1` because of the DLWE assumption. Now, let’s further modify `H_1` and arrive at the final hybrid `H_2`, where `u_0, u_1` are uniformly random samples:

`H_2 = (A, v, u_0, u_1)`

Now we have basically replaced all the ciphertext terms with random samples. Because of the **Leftover Hash Lemma**, `H_2` is statistically indistinguishable from `H_1`.

Finally, we **combine the hybrids and arrive at the conclusion** that one cannot efficiently distinguish between `H_0`, which is the Regev Encryption transcript, and `H_2`, which is basically random samples that can be simulated. This proves the **semantic security** of the encryption scheme.

# To be continued: Building Leveled FHE

To review, we first described what **Integer Lattices** are and talked about the hard **CVP** problem. Then we switched gears to talk about solving systems of equations, which can be generalized as **Gaussian Elimination**. After that, we added noise to the problem, which gave us the **Learning With Errors** problem. We gave a **formal definition of the LWE problem** along with its parameters. Next, we talked about how to **transition from SLWE to DLWE**, and why this equivalence is nicer than the situation with **DDH/CDH**. Finally, we concluded the post with an intro to and proof of the Lattice-based Encryption Scheme: **Regev Encryption**.

And now we are here. If you made it through, we have just gone through **all the fundamental building blocks** that will get us to FHE!

Since this post is probably already too long, I shall conclude here. For the next post, we will take a look at how to construct **a Leveled FHE scheme with noise**.

# Credits

Most contents of this post were summarized from my notes for CS355. You can locate all the awesome lecture notes from CS355 here.

Thanks again to Dima, Florian, and Saba for teaching this amazing course!

*Originally published at my personal blog: **Function Decomposition**.*