# Quantum-safe cryptography

## Introduction

In this lesson we will look at how moving to quantum-safe cryptography can protect data today against the threat of quantum algorithms that could break its protection in the future.

By the end of the lesson we will have covered:

- What quantum-safe cryptography is
- Standardization efforts including NIST competitions
- Mathematical basis of some new algorithms, such as lattices and the learning with errors (LWE) problem
- Some Python code examples demonstrating how these new mathematical challenges can be used for encryption
- A look at some specific quantum-safe algorithms including those in the CRYSTALS suite
- Hybrid cryptography

## Introduction to quantum-safe cryptography

Since quantum computers represent a fundamentally new computational capability beyond the classical computers in use today, cryptographic security needs to be reassessed for a world where quantum computers may be widely available, enabling new cryptographic attacks that would not be possible using classical computers.

These attacks may occur in the future on data that is being transmitted or stored now, a threat known as *harvest now, decrypt later*. It is therefore not sufficient to wait until the required quantum systems are available; changes must be made now.

The field of **quantum-safe cryptography** (QSC) encompasses efforts to develop cryptographic algorithms and protocols that can withstand attacks from both quantum and classical computing systems. This is also sometimes called **quantum-resistant** or **post-quantum cryptography**.

In earlier lessons, we discussed some of the potential security risks that the development of general-purpose quantum computers poses to a number of traditional cryptographic primitives such as symmetric key algorithms, cryptographic hash functions, and asymmetric key algorithms. Additionally, we mentioned cryptographically relevant quantum algorithms and the protocols they impact.

We see that the most significant impacts of quantum algorithms occur in the context of asymmetric key cryptography, where Shor's algorithm offers a polynomial-time solution to the integer factorization and discrete logarithm problems. Therefore, asymmetric cryptosystems based on factoring and discrete logarithms need to be replaced by new quantum-safe cryptography schemes.

This is in contrast to the symmetric key and cryptographic hashing protocols impacted by the Grover and BHT algorithms, where the quantum speedups are not super-polynomial. Therefore, in this latter case, existing algorithms such as AES and SHA-256 can be fortified, at least in the medium term, by ensuring sufficiently long key and hash lengths.

## Quantum algorithms and impacts to cryptography

Quantum Algorithm | Functionality | Complexity ($n$ = number of bits) | Impacted Cryptographic Protocols | Mitigation |
---|---|---|---|---|
Shor | Factoring | $poly(n)$ | RSA | Migrate to QSC |
Shor | Discrete logarithm | $poly(n)$ | Diffie-Hellman, DSA, Elliptic Curve Cryptography | Migrate to QSC |
Grover | Key search | $2^{n/2}$ | Symmetric key algorithms (e.g., AES) | Sufficient key length |
Grover | Pre-image attack | $2^{n/2}$ | Hash functions (e.g., SHA-256) | Sufficient hash length |
BHT | Collision attack | $2^{n/3}$ or $2^{2n/5}$ | Hash functions (e.g., SHA-256) | Sufficient hash length |

## Basic principles of quantum-safe cryptography

### Computational complexity

In cryptography, the *computational complexity class* **NP** plays an important role. This class consists of decision problems for which proposed solutions can be verified in polynomial time using a deterministic Turing machine (DTM). The importance of NP stems from the fact that it is conjectured to contain many computational problems that cannot be solved efficiently by either classical or quantum computers.

The first generation of successful asymmetric key cryptosystems developed in the 1970s based their security on mathematical problems such as prime factorization and discrete logarithms that are now conjectured to belong to the **NP-intermediate** subclass of NP. This subclass consists of problems that are believed not to have polynomial-time solutions on DTMs but at the same time are also not as hard as the hardest problems in NP.

The latter belong to the subclass of **NP-complete** problems. Following the discovery of Shor's algorithm in the 1990s, it became clear that at least some NP-intermediate problems are amenable to efficient solutions on quantum computers, which are not DTMs.

Therefore, modern quantum-safe cryptography schemes are based on NP-complete problems or related problems, which currently are not known to be solvable efficiently even on quantum computers.

### Average vs worst-case hardness

While there are many known NP-hard problems, not every such problem is suitable as a basis for cryptographic security. In this context, the notion of **average-case hardness** is useful for cryptography. A problem is **average-case hard** if most instances of the problem drawn randomly from some distribution are hard, whereas a problem is only **worst-case hard** if it is hard on some isolated *worst-case* instances. Quantum-safe cryptologists therefore search for mathematical problems that satisfy the assumption of average-case hardness, and they employ theoretical tools such as worst-case to average-case *reductions* to identify suitable protocols whose security and efficiency can be guaranteed.

### Mathematical structures

Cryptologists have put forth a number of different mathematical structures that satisfy the necessary hardness requirements as potential candidates for quantum-safe migration of asymmetric key cryptosystems. Some well-known families include:

**Lattice-based cryptography:** This class of algorithms relies on the hardness of problems such as the shortest vector problem (SVP) and the closest vector problem (CVP) in high-dimensional lattices. Notable lattice-based schemes include NTRU and Learning with Errors (LWE).

**Code-based cryptography:** This type of cryptography is based on the difficulty of decoding a general linear code. The most notable example is the McEliece cryptosystem.

**Multivariate cryptography:** These systems involve solving equations of multiple variables over a finite field. A well-known system in this category is the HFE (Hidden Field Equations) scheme.

**Hash-based cryptography:** These are cryptographic systems that use only cryptographic hash functions. They are often used for digital signatures, like the Merkle signature scheme.

**Isogeny-based cryptography:** These systems are based on the difficulty of certain problems in the algebraic structure of elliptic curves. Supersingular Isogeny Diffie-Hellman (SIDH) is an example.

## NIST standardization of quantum-safe cryptography

Recognizing the potential impact of quantum computing on current cryptographic systems, NIST initiated a program to standardize quantum-safe cryptographic algorithms in 2016, using a process similar to the one NIST followed to standardize the Advanced Encryption Standard (AES) in the early 2000s. In an open and transparent process involving security domain stakeholders, several QSC candidates were submitted for evaluation, and following a six-year review process, four finalists were selected to become part of NIST’s quantum-safe cryptographic standard.

### Finalists of the first NIST quantum-safe cryptography standardization effort

QSC Algorithm | Cryptographic family | Application |
---|---|---|
CRYSTALS-Kyber | Lattice-based | Key encapsulation mechanism |
CRYSTALS-Dilithium | Lattice-based | Digital signatures |
FALCON | Lattice-based | Lightweight digital signatures |
SPHINCS+ | Hash-based | Digital signatures |

Of the four NIST finalists, three are lattice-based and one, SPHINCS+, is hash-based. The Kyber and Dilithium algorithms, part of the CRYSTALS (Cryptographic Suite for Algebraic Lattices) family, were selected as general-purpose protocols for key encapsulation and digital signatures, respectively. FALCON was recommended for applications requiring smaller digital signatures than those provided by Dilithium. SPHINCS+, meanwhile, was primarily chosen as a backup option to the lattice-based approaches, as it uses a different mathematical structure. Lattice-based cryptography therefore seems well positioned to form the basis for the first generation of QSC standards.

In August 2023, NIST published three draft standards for public comment, which include the algorithms above:

- FIPS 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard (based on CRYSTALS-Kyber)
- FIPS 204, Module-Lattice-Based Digital Signature Standard (based on CRYSTALS-Dilithium)
- FIPS 205, Stateless Hash-Based Digital Signature Standard (based on SPHINCS+)

## Lattice-based cryptography

As the name suggests, **lattice-based cryptography** (LBC) is based on the hardness of certain problems defined on mathematical structures called *lattices*.

Of fundamental importance are two computational problems on lattices, namely the **shortest vector problem** and the **learning with errors problem**, which we discuss below after some preliminary definitions.

### Lattices

We can find examples of lattices in the world around us. Structures such as the Eiffel Tower (Paris) or the Bird's Nest stadium (Beijing) have many linked elements, which can be thought of as a series of points joined together along straight lines.

Diamonds are another form of lattice: a clear, repeating structure of carbon atoms, effectively joined together by atomic bonds, and this structure is what gives diamond its unique properties. Different crystal structures, however, can have very different layouts (which we can think of as different lattice structures) and therefore end up with distinct features. In the same way, lattices serve as the foundational mathematical framework that dictates the unique traits of the different structures we use in cryptography.

In simple mathematical terms, a lattice can be thought of as a collection of regularly spaced points that repeat at fixed intervals. To describe how these points are positioned relative to each other, we use vectors. The specific arrangement of these vectors, which defines the layout of the lattice, is referred to as the *basis.*

Imagine you have a box of toy construction bricks, and you want to build various objects with the same set of pieces. Each unique object you create requires a specific arrangement, and the way you choose and arrange the bricks serves as the *basis* for constructing different objects. If you change the selection or arrangement of bricks, you get a different object with distinct characteristics.

Similarly, in a lattice, the *basis* is like the set of atomic building blocks that define the lattice's structure. Depending on how you arrange these building blocks (atoms or points) in the lattice, you can create various lattice structures with different properties. Just as changing the toy construction brick pieces changes the object you build, altering the basis changes the lattice's characteristics and properties.

It's important to note that lattices are not limited to two or three dimensions; they can extend to higher dimensions, and in fields like cryptography they may involve thousands of dimensions or more.

## Mathematical description of lattices

- Given $n$ linearly independent vectors $A = \{\mathbf{v_1},...,\mathbf{v_n} \mid \mathbf{v_i} \in \mathbb{R}^m\}$, the *lattice* $\mathcal{L}$ generated by $A$ is the set of integer linear combinations $\mathcal{L} = \{ \sum_i c_i \mathbf{v_i} \mid c_i \in \mathbb{Z} \}$.
- A *basis* $B$ of a lattice $\mathcal{L}$ is any set of linearly independent vectors $B=\{b_1,...,b_n\}$ that spans the lattice, i.e., $\mathcal{L}(B) = \{ \sum_i z_i b_i \mid z_i \in \mathbb{Z} \}$.
- Importantly, the basis for a given lattice is not unique: two different basis sets $B, B^{\prime}$ can describe the same lattice.

A basis is therefore just one perspective on the lattice: two different bases may describe exactly the same structure.
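As a quick numerical illustration (a hypothetical two-dimensional example; the bases and helper function are my own), the following sketch checks that two different bases generate the same set of lattice points:

```python
import numpy as np

# Two different bases for the SAME lattice: B2's rows are obtained from B1's
# rows by a unimodular (integer, determinant +/-1) transformation.
B1 = np.array([[2, 0], [0, 1]])
B2 = np.array([[2, 1], [0, 1]])   # row1 = B1_row1 + B1_row2, row2 = B1_row2

def points_in_box(basis, crange, box):
    """Integer combinations c1*v1 + c2*v2 that fall inside [-box, box]^2."""
    pts = set()
    for c1 in range(-crange, crange + 1):
        for c2 in range(-crange, crange + 1):
            p = c1 * basis[0] + c2 * basis[1]
            if abs(p[0]) <= box and abs(p[1]) <= box:
                pts.add((int(p[0]), int(p[1])))
    return pts

# Both bases generate exactly the same lattice points
same = points_in_box(B1, 10, 5) == points_in_box(B2, 10, 5)
print("B1 and B2 generate the same lattice points:", same)
```

Note that points with an odd first coordinate, such as $(1, 0)$, belong to neither set: the lattice is a proper sublattice of $\mathbb{Z}^2$.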

This leads to an important concept in lattice mathematics: *lattice-basis reduction*. This is the process of taking a given integer lattice and attempting to find a *good* basis comprising short, nearly orthogonal vectors.

*Figure 1. Lattice-basis reduction in two dimensions from a "bad" basis {v1, v2} to a "good basis" {u1, u2}*

Lattice-basis reductions can be performed in polynomial time using the Lenstra-Lenstra-Lovász algorithm (LLL).
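In two dimensions, basis reduction can be done with the Lagrange-Gauss algorithm, of which LLL is the higher-dimensional generalization. A small illustrative sketch (the starting "bad" basis is my own choice):

```python
import numpy as np

def lagrange_gauss_reduce(v1, v2):
    """Two-dimensional lattice-basis reduction (the 2-D analogue of LLL)."""
    v1, v2 = np.array(v1, dtype=int), np.array(v2, dtype=int)
    if np.dot(v1, v1) > np.dot(v2, v2):   # keep v1 as the shorter vector
        v1, v2 = v2, v1
    while True:
        # subtract the nearest-integer projection of v2 onto v1
        mu = round(np.dot(v1, v2) / np.dot(v1, v1))
        v2 = v2 - mu * v1
        if np.dot(v2, v2) >= np.dot(v1, v1):
            return v1, v2                 # reduced: short, nearly orthogonal
        v1, v2 = v2, v1

# A "bad" (long, skewed) basis for the integer lattice Z^2
u1, u2 = lagrange_gauss_reduce([8, 5], [5, 3])
print("reduced basis:", u1, u2)
```

Starting from the long vectors $(8,5)$ and $(5,3)$, the algorithm recovers a pair of unit-length, orthogonal vectors spanning the same lattice.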

### The shortest vector problem

The **shortest vector problem** (SVP) is the challenge of finding the shortest non-zero vector in a lattice, or equivalently the shortest distance between any two distinct lattice points.

## Mathematical description of the SVP

Given a lattice $\mathcal{L}$ embedded in a vector space $\mathcal{V}$ with norm $N$, the SVP asks for the shortest non-zero vector in $\mathcal{L}$ as measured by $N$, that is, find a vector $v$ such that its length $\|v\|_N = \min\limits_{v^{\prime} \in \mathcal{L} \smallsetminus \{\mathbf{0}\}} \|v^{\prime}\|_N = \lambda(\mathcal{L})$.

*Figure 2. The SVP on a lattice specified by basis vectors {b1, b2}: The red vector is the shortest vector*

The SVP is known to be NP-hard in the worst case for certain random lattices, even when lattice-basis reduction to a *good* basis is possible.

Ajtai also demonstrated that there exists a worst-case to average-case reduction for the shortest vector problem, laying the foundation for its use in cryptography.

### The closest vector problem

In the **closest vector problem** (CVP), we are looking to find the closest lattice point to a given target location, which need not itself lie on the lattice.

The CVP is also known to be NP-hard.

*Figure 3. The CVP on a lattice specified by basis vectors {b1, b2}: The red point represents the closest lattice vector to the given external vector shown in green that is not on the lattice*

The *shortest vector problem* is a special case of the *closest vector problem*.

## Mathematical description of CVP

A lattice $\mathcal{L}$ embedded in a vector space $\mathcal{V}$ with metric $M$ is given, along with a vector $v$ in $\mathcal{V}$ but not necessarily in $\mathcal{L}$. The task then is to find the vector $u$ in $\mathcal{L}$ closest to $v$ as measured by $M$ i.e., $d_M (u, v) = \min\limits_{u^{\prime} \in \mathcal{L}} d_M(u^{\prime}, v)$ where $d_M$ is the distance function of the metric $M$.

### The bounded distance decoding problem

Very similar to the CVP is the **bounded distance decoding** (BDD) problem.

In this case we know that the supplied target vector is close to the lattice, and that its distance from the lattice is *bounded* by a known quantity.

## Mathematical description of the BDD problem

Compared to the CVP, it is additionally specified that the external vector $v$ lies within a distance $\sigma\lambda(\mathcal{L})$ of the lattice, where $\lambda(\mathcal{L})$ is the length of the shortest vector in the lattice and $\sigma$ is a small parameter.

### The learning with errors problem

The learning with errors (LWE) problem is based on the mathematics of lattices combined with the idea of introducing errors into an equation.

As with any mathematical basis for cryptographic algorithms, the operations are deliberately asymmetric: it is easy to introduce noise over a series of calculations, but very difficult to invert the process and remove it.

## Mathematics of LWE problem

Let ${\mathbb{Z}_{q}}$ be the ring of integers modulo $q$, and let ${\mathbb{Z}_{q}^{n}}$ be the set of $n$-dimensional vectors over $\mathbb{Z}_{q}$.

An adversary, say Alice, chooses a fixed vector ${\mathbf{s}} \in {\mathbb{Z}}_{q}^{n}$ and keeps it a secret. Alice also specifies $\chi$, a fixed "error" probability distribution over $\mathbb{Z}_{q}$.

Using the above ingredients, she constructs a distribution $A_{\mathbf{s},\chi}$ on ${\mathbb{Z}}_{q}^{n}\times{\mathbb{Z}_{q}}$ as follows:

- From the uniform distribution over ${\mathbb{Z}}_{q}^{n}$, Alice chooses a vector ${\mathbf{a}}\in {\mathbb{Z}}_{q}^{n}$.
- From the given distribution $\chi$, she picks a number $\varepsilon \in \mathbb{Z}_{q}$.
- She then evaluates $t=\langle \mathbf{a},\mathbf{s} \rangle + \varepsilon$, where $\langle\mathbf{a},\mathbf{s}\rangle =\sum_i a_{i}s_{i}$ is the standard inner product on ${\mathbb{Z}}_{q}^{n}$ and the addition is in $\mathbb{Z}_{q}$.
- Alice outputs $(\mathbf{a},t)$.

The ${\mathrm{LWE}_{q,\chi}}$ problem with modulus $q$ and error distribution $\chi$ is then: given access only to polynomially many samples $(\mathbf{a},t)$ drawn from Alice's distribution $A_{\mathbf{s},\chi}$, "learn" and output the secret vector ${\mathbf{s}}\in {\mathbb{Z}}_{q}^{n}$.

Note the error $\varepsilon$ drawn from $\chi$: it is this term that separates LWE samples from ordinary linear equations.

Early developments in the LWE problem employed the one-dimensional discrete Gaussian distribution on $\mathbb {Z} _{q}$ as the error distribution.

The remarkable feature of the LWE problem is that without the errors, it reduces to a set of linear equations that can be easily solved using a technique like Gaussian elimination.

However, the inclusion of a suitable error distribution renders it an extremely hard problem, with known reductions from worst-case lattice problems.
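To see concretely why the errors matter, the following sketch (illustrative, with a toy modulus and my own helper names) solves a noise-free system modulo $q$ by Gaussian elimination, then shows that the same procedure returns the wrong secret once small non-zero errors are added:

```python
import numpy as np

q = 97   # a small prime modulus (toy size)
n = 4

def solve_mod_q(A, b, q):
    """Solve A x = b (mod q) by Gaussian elimination; q must be prime."""
    A = A.astype(object) % q
    b = b.astype(object) % q
    m = len(b)
    for col in range(m):
        # find a row with a non-zero (hence invertible) pivot
        pivot = next(r for r in range(col, m) if A[r, col] % q != 0)
        A[[col, pivot]], b[[col, pivot]] = A[[pivot, col]], b[[pivot, col]]
        inv = pow(int(A[col, col]), -1, q)           # modular inverse of pivot
        A[col], b[col] = (A[col] * inv) % q, (b[col] * inv) % q
        for r in range(m):
            if r != col and A[r, col] % q != 0:
                f = A[r, col]
                A[r], b[r] = (A[r] - f * A[col]) % q, (b[r] - f * b[col]) % q
    return b

rng = np.random.default_rng(1)
s = rng.integers(0, q, size=n)                       # the "secret" vector

# retry until we draw a sample matrix that is invertible mod q
while True:
    A = rng.integers(0, q, size=(n, n))
    try:
        recovered = solve_mod_q(A, (A @ s) % q, q)   # noise-free system
        break
    except StopIteration:                            # singular matrix; resample
        pass

print("noise-free system recovers s:", np.array_equal(recovered, s % q))

# Adding small non-zero errors makes elimination return the WRONG vector
e = np.array([1, -1, 2, 1])
wrong = solve_mod_q(A, (A @ s + e) % q, q)
print("noisy system recovers s:", np.array_equal(wrong, s % q))
```

Since the matrix is invertible and the error vector is non-zero, the noisy solution differs from the true secret, which is exactly the asymmetry LWE exploits.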

### Encryption using the Learning with Errors cryptosystem

The LWE problem discussed above leads to a public key cryptosystem introduced by Regev.

The exact process will be covered in the Python example below.

## Mathematics of key generation, encryption, and decryption

**Parameters:** The cryptosystem is characterized by the vector dimension or security parameter $n$, modulus $q$, number of LWE samples $N$, and error distribution $\chi$. Typical choices that guarantee both security and correctness are:

- $q$ is a prime between $n^2$ and $2n^2$
- $N = 1.1\, n \log q$
- $\chi = \Psi_{\alpha}$, with $\alpha = 1/(\sqrt{n}\,\log^2 n)$, where $\Psi_{\alpha}$ is obtained by sampling the normal distribution $\mathcal{N}(0, \alpha^2/(2\pi))$ and reducing the result modulo 1

**Private key:** The private key is the secret vector ${\mathbf{s}} \in {\mathbb {Z}}_{q}^{n}$.

**Public key:** The public key consists of $N$ samples $(\mathbf{a}_i ,b_i)_{i=1}^N$ chosen from the LWE distribution $A_{\mathbf{s},\chi}$.

**Encryption:**

- Encryption utilizes the public key and is carried out bit by bit.
- For each bit of the message, randomly choose a binary vector $\mathbf{r} \in \{0,1\}^N$.
- If the message bit is 0, the encryption is $(\sum_i \mathbf{a}_i r_i, \sum_i b_i r_i)$.
- If the message bit is 1, the encryption is $(\sum_i \mathbf{a}_i r_i, \lfloor \frac{q}{2}\rfloor + \sum_i b_i r_i)$.

**Decryption:**

- The decryption utilizes the private key ${\mathbf{s}}$.
- The pair $(\mathbf{a}, b)$ decrypts to $0$ if $b-\langle \mathbf{a},\mathbf{s}\rangle$ is closer to $0$ than to $\lfloor\frac{q}{2}\rfloor$ modulo $q$; otherwise it decrypts to $1$.

The security of the LWE public key cryptosystem was analyzed by Regev:

- Firstly, it was shown that recovering the error/noise vector $\mathbf{e} = [\varepsilon_1,...,\varepsilon_N] \leftarrow \chi^N$ involved in the construction of the public key is as hard as the bounded distance decoding (BDD) problem, which, as noted above, is believed to be NP-hard. This establishes the correspondence between LWE and explicit lattice problems.
- Secondly, security against chosen-plaintext attacks (CPA) is based on the observation that an algorithm for distinguishing between encryptions of 0 and 1 in the above cryptosystem can also distinguish between the distribution $A_{\mathbf{s},\chi }$ and the uniform distribution over $\mathbb{Z}_{q}^{n} \times \mathbb{Z}_{q}$, which would imply a solution of LWE itself.

### Illustration of LWE encryption in Python

The following simple example shows the use of the LWE problem for encryption and decryption. Bob will send an encrypted message to Alice.

First, Alice and Bob agree on the **problem parameters**. These are explained in detail in the maths section above, but in summary we require $n, q, N, \chi$.

We start with some of the basic parameters:

- $N$ represents the number of samples
- $q$ is a modulus
- $n$ is known as the security parameter, or vector dimension

These parameters are all public information in principle.
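The code cell itself is not shown in this export; a minimal sketch that reproduces the output below (variable names are illustrative) is:

```python
import math

# Public problem parameters (toy sizes for illustration, not secure choices)
n = 8                             # security parameter / vector dimension
q = 127                           # modulus: a prime between n^2 and 2*n^2
sigma = 1.0                       # standard deviation of the error distribution
N = int(1.1 * n * math.log(q))    # number of LWE samples, N = 1.1 * n * log q

print(f"n={n},q={q},N={N},sigma={sigma}")
```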

Output:

```
n=8,q=127,N=42,sigma=1.0
```

We also need a way of introducing errors.

Here $\chi$ represents the errors we want to introduce; we use a discrete Gaussian distribution on $\mathbb{Z}_{q}$ characterized by mean 0 and standard deviation $\sigma$.

This definition of the errors to introduce is also public.

So in Python, we need a simple function that approximates random error/noise contributions $\varepsilon$ drawn from a discrete Gaussian distribution $\chi$ with standard deviation $\sigma$.

Some example values are printed so that you can see the distribution around 0.
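One way to write such a sampler (the function name `chi` is my own choice; the printed values below came from a different run, so yours will differ):

```python
import random

q = 127       # modulus, as chosen above
sigma = 1.0   # standard deviation of the error distribution

def chi(stddev, modulus):
    """Sample an error by rounding a draw from N(0, stddev^2), reduced mod q."""
    return round(random.gauss(0, stddev)) % modulus

for _ in range(10):
    print("chi =", chi(sigma, q))
```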

Output:

```
chi = 1
chi = 5
chi = 1
chi = 4
chi = 7
chi = 2
chi = 7
chi = 1
chi = 7
chi = 4
```

Next, Alice needs to generate her key pair.

She sets up her **private key** by choosing $n$ values between 0 and $q$.
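A sketch of this step (the variable name is illustrative; the key values below came from a different random run):

```python
import numpy as np

n, q = 8, 127
rng = np.random.default_rng()

# Alice's secret vector s, drawn uniformly from Z_q^n
alice_private_key = rng.integers(0, q, size=n)
print("Alice's private key:", alice_private_key)
```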

Output:

```
Alice's private key: [ 61 81 55 50 113 5 99 67]
```

Alice now sets up her **public key**, by choosing random vectors, which are then combined with the generated errors.

## Mathematics

More precisely, she needs to generate $N$ samples from the distribution $A_{\mathbf{s},\chi}$.

Accordingly, she randomly chooses vectors $\mathbf{a} \in \mathbb{Z}_{q}^{n}$ and errors $\varepsilon$ from $\chi$ to construct $N$ samples $(\mathbf{a}, b)$.
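A self-contained sketch of the public-key construction (names are illustrative; the sample values below came from a different run):

```python
import math
import random
import numpy as np

n, q, sigma = 8, 127, 1.0
N = int(1.1 * n * math.log(q))
rng = np.random.default_rng()

def chi(stddev, modulus):
    """Discrete Gaussian error sample, reduced mod q."""
    return round(random.gauss(0, stddev)) % modulus

s = rng.integers(0, q, size=n)            # Alice's private key

# N samples (a, b) with b = <a, s> + e  (mod q)
public_key = []
for _ in range(N):
    a = rng.integers(0, q, size=n)
    b = (int(np.dot(a, s)) + chi(sigma, q)) % q
    public_key.append((a, b))

print("Alice's public key:", public_key)
```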

Output:

```
Alice's public key: [(array([95, 3, 51, 87, 36, 74, 90, 88]), 51), (array([ 24, 5, 111, 82, 55, 38, 73, 121]), 30), (array([ 60, 13, 69, 18, 13, 102, 14, 116]), 100), (array([103, 43, 126, 65, 67, 0, 100, 6]), 100), (array([ 49, 11, 109, 15, 61, 110, 19, 99]), 39), (array([ 23, 48, 22, 72, 56, 41, 27, 122]), 49), (array([ 67, 8, 45, 62, 103, 43, 24, 45]), 124), (array([126, 71, 5, 66, 47, 20, 25, 68]), 115), (array([126, 92, 18, 125, 111, 15, 40, 96]), 49), (array([ 87, 11, 105, 75, 8, 113, 73, 37]), 101), (array([ 52, 112, 98, 108, 106, 69, 88, 43]), 88), (array([51, 55, 85, 18, 18, 59, 26, 81]), 104), (array([ 60, 69, 65, 120, 84, 104, 32, 35]), 58), (array([ 42, 111, 118, 61, 126, 113, 51, 22]), 2), (array([ 98, 55, 86, 103, 42, 7, 53, 3]), 64), (array([ 59, 113, 86, 30, 23, 68, 50, 35]), 6), (array([ 7, 112, 104, 122, 99, 29, 41, 126]), 67), (array([ 75, 101, 103, 55, 37, 45, 35, 41]), 39), (array([ 28, 48, 73, 67, 94, 45, 109, 69]), 104), (array([ 83, 21, 123, 100, 75, 47, 30, 50]), 30), (array([ 39, 3, 31, 108, 61, 113, 103, 13]), 59), (array([ 84, 30, 58, 122, 64, 102, 121, 2]), 122), (array([ 25, 50, 83, 69, 103, 60, 14, 99]), 19), (array([ 24, 45, 47, 56, 37, 110, 10, 109]), 23), (array([125, 25, 70, 76, 18, 57, 96, 74]), 45), (array([ 32, 97, 94, 19, 117, 33, 22, 23]), 15), (array([ 61, 120, 8, 4, 100, 46, 5, 59]), 89), (array([ 34, 61, 107, 108, 67, 0, 103, 37]), 66), (array([ 3, 48, 119, 30, 57, 124, 22, 65]), 55), (array([117, 51, 83, 101, 58, 80, 107, 43]), 36), (array([100, 32, 10, 93, 102, 48, 14, 54]), 54), (array([ 63, 16, 82, 40, 48, 53, 106, 120]), 56), (array([103, 60, 94, 88, 33, 116, 92, 57]), 102), (array([ 97, 65, 123, 110, 73, 4, 17, 80]), 24), (array([ 28, 116, 105, 79, 71, 52, 47, 64]), 82), (array([104, 25, 47, 24, 102, 118, 48, 3]), 13), (array([118, 15, 123, 20, 111, 19, 72, 75]), 75), (array([123, 79, 126, 19, 20, 20, 12, 91]), 56), (array([ 38, 97, 16, 57, 28, 105, 15, 93]), 36), (array([ 25, 35, 75, 15, 118, 50, 64, 108]), 68), 
(array([ 56, 90, 62, 58, 52, 45, 11, 104]), 59), (array([ 95, 2, 62, 115, 101, 124, 124, 15]), 45)]
```

Alice can now share her public key.

Bob can now use this to send Alice an encrypted message.

Bob's message is a single bit.

Output:

```
Bob's message bit=1
```

To encrypt the message, Bob selects a random subset of the samples in Alice's public key to form the ciphertext.

For this, he creates a mask, a random binary vector $r$ of length $N$.
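A sketch of the mask generation (illustrative names; the exact bits below came from a different run):

```python
import numpy as np

N = 42  # number of samples in Alice's public key

# Random binary mask selecting roughly half of the public-key samples
r = np.random.default_rng().integers(0, 2, size=N)
print(r)
```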

Output:

```
[1 1 1 1 1 0 1 0 1 0 0 1 1 0 0 0 0 0 0 1 0 1 1 1 1 1 0 0 1 1 0 0 1 0 0 1 1
0 0 1 1 0]
```

We now take this mask, apply it to the corresponding entries in Alice's public key, and compute the sum of the selected values, adding $\lfloor q/2 \rfloor$ to the second component when the message bit is 1.

Output:

```
ciphertext is: ([10, 2, 106, 100, 96, 109, 111, 102], 110)
```

Finally, Bob broadcasts the ciphertext, which Alice can decrypt using her private key.
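Since the individual cells build on one another, here is a single self-contained sketch of the whole single-bit exchange, consistent with the mathematics above (all names, and the exact form of the decision rule, are illustrative assumptions):

```python
import math
import random
import numpy as np

n, q, sigma = 8, 127, 1.0
N = int(1.1 * n * math.log(q))
rng = np.random.default_rng()

def chi(stddev, modulus):
    return round(random.gauss(0, stddev)) % modulus

# --- Alice: key generation ---
s = rng.integers(0, q, size=n)                       # private key
A = rng.integers(0, q, size=(N, n))                  # random vectors a_i
b = (A @ s + np.array([chi(sigma, q) for _ in range(N)])) % q  # b_i = <a_i,s>+e_i

# --- Bob: encrypt one bit ---
message_bit = 1
r = rng.integers(0, 2, size=N)                       # random binary mask
a_sum = (r @ A) % q                                  # sum of masked a_i
b_sum = (int(r @ b) + message_bit * (q // 2)) % q    # add floor(q/2) if bit is 1
ciphertext = (a_sum, b_sum)

# --- Alice: decrypt with her private key ---
d = (ciphertext[1] - int(ciphertext[0] @ s)) % q
# bit 0 if d is closer to 0 (mod q) than to floor(q/2), else bit 1
decrypted_bit = 0 if min(d, q - d) < abs(d - q // 2) else 1

print(f"original message bit={message_bit}, decrypted message bit={decrypted_bit}")
```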

Output:

```
original message bit=1, decrypted message bit=1
```

Since this protocol works bit by bit, encrypting longer bit strings simply means repeating the operations in a loop. The following shows a scenario where Bob wishes to transfer 16 encrypted bits.
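A self-contained sketch of the multi-bit case (illustrative names; your random bits will differ from the output below):

```python
import math
import random
import numpy as np

n, q, sigma = 8, 127, 1.0
N = int(1.1 * n * math.log(q))
rng = np.random.default_rng()

def chi(stddev, modulus):
    return round(random.gauss(0, stddev)) % modulus

# Alice's key pair, as before
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(N, n))
b = (A @ s + np.array([chi(sigma, q) for _ in range(N)])) % q

def encrypt_bit(bit):
    r = rng.integers(0, 2, size=N)                   # fresh mask per bit
    return (r @ A) % q, (int(r @ b) + bit * (q // 2)) % q

def decrypt_bit(a_sum, b_sum):
    d = (b_sum - int(a_sum @ s)) % q
    return 0 if min(d, q - d) < abs(d - q // 2) else 1

message_bits = rng.integers(0, 2, size=16)
decrypted = np.array([decrypt_bit(*encrypt_bit(int(m))) for m in message_bits])

print("Bob's message bits are :", message_bits)
print("Decrypted message bits =", decrypted)
```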

Output:

```
Bob's message bits are : [1 1 0 0 0 1 1 0 1 0 1 0 1 0 0 0]
Decrypted message bits = [1 1 0 0 0 1 1 0 1 0 1 0 1 0 0 0]
```

## Module-LWE and the CRYSTALS suite

The learning with errors (LWE) problem, introduced in a simplified form above and generally valid on arbitrary lattices, has been extended to algebraic *rings* within the so-called **Ring-LWE** framework, primarily to improve the efficiency of the resulting cryptosystems. However, the extra algebraic structure of Ring-LWE may be potentially exploitable, even though no such exploits are currently known.

Two of the four finalists in NIST's QSC standardization process, namely the CRYSTALS-Kyber key encapsulation mechanism (KEM) and the CRYSTALS-Dilithium digital signature protocol, are based on structures known as *module lattices* and the related **Module-LWE** problem.

We refer interested readers to specialized courses on these topics for the mathematical details but note here that Module-LWE is seen as a compromise between LWE and Ring-LWE. It provides more efficiency than LWE (though less than Ring-LWE) but also gives a higher level of security assurance than Ring-LWE because it does not rely as heavily on the algebraic structure of the problem.

### Key Encapsulation Mechanisms and CRYSTALS-Kyber

Traditional asymmetric key cryptosystems are most heavily deployed for their *key-exchange* and *digital signature* functionalities, and as such, the NIST standardization process sought to develop quantum-safe alternatives for these two functionalities. The Kyber protocol is therefore designed as a dedicated **key encapsulation mechanism** (KEM) rather than as a general-purpose encryption scheme such as RSA. A workflow illustrating the use of Kyber to establish a shared secret between two parties is shown below.

#### KEM example using liboqs (local setup required)

NOTE: This example requires some additional local installation and cannot be run online. Copy the cells to a suitable local environment if you wish to run these examples.

Here we make use of the algorithms provided by the Open Quantum Safe project:

- **Python library:** provides the Python language binding for liboqs - https://github.com/open-quantum-safe/liboqs-Python
- **liboqs implementation:** required by the Python library; provides the actual algorithm implementations and must be built - https://github.com/open-quantum-safe/liboqs

Alice wishes to create and securely communicate a shared secret to Bob using a quantum-safe encryption approach. She will use the Kyber KEM.

We begin by importing the relevant Python modules and listing the KEM algorithms supported by liboqs:
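The code cells for this example are not shown in this export; assuming the liboqs-python bindings described above, the full workflow might look like the following sketch (it requires a local liboqs build and will not run online):

```python
# NOTE: requires a local build of liboqs plus the liboqs-python bindings;
# this cell will not run without them.
import oqs

# List the key encapsulation mechanisms supported by this build of liboqs
print(oqs.get_enabled_kem_mechanisms())

kem_alg = "Kyber512"

with oqs.KeyEncapsulation(kem_alg) as bob, oqs.KeyEncapsulation(kem_alg) as alice:
    # Key generation: Bob creates a key pair and shares the public key
    bob_public_key = bob.generate_keypair()

    # Encapsulation: Alice derives a shared secret plus a ciphertext
    ciphertext, alice_shared_secret = alice.encap_secret(bob_public_key)

    # Decapsulation: Bob recovers the shared secret from the ciphertext
    bob_shared_secret = bob.decap_secret(ciphertext)

    print("Shared secrets coincide:", alice_shared_secret == bob_shared_secret)
```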

Output:

```
['BIKE-L1', 'BIKE-L3', 'BIKE-L5', 'Classic-McEliece-348864',
'Classic-McEliece-348864f', 'Classic-McEliece-460896',
'Classic-McEliece-460896f', 'Classic-McEliece-6688128',
'Classic-McEliece-6688128f', 'Classic-McEliece-6960119',
'Classic-McEliece-6960119f', 'Classic-McEliece-8192128',
'Classic-McEliece-8192128f', 'HQC-128', 'HQC-192', 'HQC-256', 'Kyber512',
'Kyber768', 'Kyber1024', 'sntrup761', 'FrodoKEM-640-AES', 'FrodoKEM-640-SHAKE',
'FrodoKEM-976-AES', 'FrodoKEM-976-SHAKE', 'FrodoKEM-1344-AES',
'FrodoKEM-1344-SHAKE']
```

We see that one of the options in the list is Kyber512, which we will use. The KEM workflow consists of three steps:

- Key generation
- Encapsulation
- Decapsulation

In the **key generation** step, the recipient, Bob, needs to generate an asymmetric key pair using the Kyber512 algorithm.


Next, in the **encapsulation** step, upon receiving Bob's public key, Alice generates both the shared secret and its corresponding ciphertext; the ciphertext, which *encapsulates* the shared secret, is then broadcast to Bob.


Finally, in the **decapsulation** step, Bob uses his private key to recover the shared secret from the ciphertext.

Output:

```
Shared secrets coincide: True
```

Note that in contrast to key exchange using RSA, no extraneous padding or key derivation functions are involved here. Kyber's design as a KEM allows for a very minimal user interface while providing the gold standard security for key exchange.

### IND-CPA and IND-CCA security in lattice-based cryptography

In the lesson on Symmetric Key Cryptography, we introduced the notion of **semantic security** or **IND-CPA** security, which refers to an adversary's inability to distinguish between the encryptions of two plaintexts of their choosing.

Traditional cryptosystems whether symmetric (such as AES) or asymmetric (such as RSA) use deterministic functions to implement encryption operations. This means that a given plaintext, combined with a given encryption key will always encrypt to the same ciphertext. Such deterministic cryptosystems are vulnerable to *chosen plaintext attack* whereby an adversary is able to extract information by requesting encryptions of arbitrary plaintexts of their choice from the deterministic encryption function.

To achieve IND-CPA security in this context, **additional randomness** is introduced at encryption time, either through *initialization vectors* or *padding*. For instance, AES is only IND-CPA secure when used in modes of operation that employ random initialization vectors. Similarly, with RSA, randomized padding (such as OAEP) is needed to ensure IND-CPA security.

In contrast, lattice-based schemes for encryption are inherently randomized due to the problem definition itself. In particular, in the LWE based encryption scheme outlined above, there are two distinct elements of randomness:

1. The error (or noise) $\varepsilon$ drawn from the distribution $\chi$.
2. The random binary vectors $\mathbf{r} \in \{0,1\}^N$ used for encrypting each bit in the message.

The errors $\varepsilon$ contribute to the security of the public key, ensuring that it's computationally hard to deduce the secret key $\mathbf{s}$. The random binary vectors $\mathbf{r}$ on the other hand provide the essential randomness needed for making repeated encryptions of the same plaintext bit non-deterministic. Thus LWE based schemes are considered IND-CPA secure without the need for external mechanisms such as padding.

Modern cryptosystems aim to achieve so-called **IND-CCA** security, which stands for **indistinguishability under chosen-ciphertext attack**. In this setting the adversary has the ability to obtain decryptions of a non-trivial set of ciphertexts of their choosing, with the aim of extracting information to subsequently break the cryptosystem. A scheme is IND-CCA secure if, even with this capability, the adversary cannot do better than random guessing when trying to distinguish encrypted messages. IND-CCA is a stronger security notion than IND-CPA and subsumes it.

Quantum safe KEMs such as **Kyber** are designed to be IND-CCA secure. This is achieved in two steps:

- An IND-CPA secure public key encryption (PKE) scheme is defined. In the case of Kyber, such a PKE is based on **Module-LWE**.
- A variant of the Fujisaki-Okamoto (FO) transformation is applied to obtain a CCA-secure KEM. The FO transformation is a generic method to convert encryption schemes that are IND-CPA secure into ones that are IND-CCA secure. For details we refer readers to the CRYSTALS-Kyber documentation.

For more information on the security features of **Kyber** and **Dilithium**, as well as reference implementations in various programming languages, you are encouraged to consult the CRYSTALS suite documentation.

## Hybrid cryptography

Hybrid cryptography refers to schemes in which quantum-safe public-key algorithms are combined with traditional public-key algorithms (like RSA or elliptic curves) such that the solution is at least no less secure than existing traditional cryptography.

This can help address two issues:

1. Existing standards may mandate that specific algorithms be used for encryption, or that such algorithms are in some way approved. To add quantum safety, encryption could first be done using the approved algorithm and then further protected by a quantum-safe algorithm, while still meeting the required standards.

2. Quantum-safe algorithms are new, and further analysis is needed to ensure they are indeed safe and correct. For example, one of the initial algorithms shortlisted by NIST was subsequently cracked and proven to be insecure. To mitigate this risk, encryption can be done both with a standard, well-reviewed algorithm that is secure against everything except quantum attacks, and with a post-quantum algorithm.
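As a sketch of the combining step (the inputs here are stand-in random bytes, not real key-exchange outputs), a hybrid scheme might hash the two shared secrets together so that the result stays secret as long as either component does:

```python
import hashlib
import os

# Hypothetical stand-ins for the two key-exchange outputs: in practice these
# would come from, e.g., an ECDH exchange and a Kyber KEM encapsulation.
classical_shared_secret = os.urandom(32)
quantum_safe_shared_secret = os.urandom(32)

# Derive one key from both secrets: an attacker must break BOTH the
# classical and the post-quantum scheme to recover the hybrid key.
hybrid_key = hashlib.sha256(
    classical_shared_secret + quantum_safe_shared_secret
).digest()
print("hybrid key:", hybrid_key.hex())
```

Real protocols use a proper key derivation function (such as HKDF) rather than a bare hash, but the principle of combining both secrets is the same.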

## Summary

The migration to quantum-safe cryptography (QSC) of asymmetric key cryptosystems is necessitated by the expected advent of quantum computers that can break traditional asymmetric key algorithms predicated on the classical hardness of NP-intermediate problems. QSC is conjectured to be robust to quantum attacks because its security is based on NP-hard problems, which generally cannot be efficiently solved even with quantum computers.

In this context, hard problems rooted in the theory of mathematical lattices, such as the learning with errors (LWE) problem, have emerged as leading contenders for QSC standardization. In particular, the CRYSTALS-Kyber and CRYSTALS-Dilithium algorithms, based on module lattices, are well positioned as near-term alternatives to popular asymmetric key protocols like RSA, Elliptic Curve, Diffie-Hellman, and DSA.

As we navigate the complex landscape of quantum-safe cryptography, embracing these innovative solutions will be pivotal in upholding the integrity and confidentiality of our digital interactions in the quantum era.