
Quantum code constructions


Introduction

We've seen a few examples of quantum error correcting codes in previous lessons of this course, including the 9-qubit Shor code, the 7-qubit Steane code, and the 5-qubit code. These codes are undoubtedly interesting and represent a natural place to begin an exploration of quantum error correction, but a problem with them is that they can only tolerate a very low error rate. Correcting an error on one qubit out of 5, 7, or 9 isn't bad, but in all likelihood we're going to need to be able to tolerate a lot more errors than that to make large-scale quantum computing a reality.

In this lesson, we'll take a first look at some more sophisticated quantum error correcting code constructions, including codes that can tolerate a much higher error rate than the ones we've seen so far and that are viewed as promising candidates for practical quantum error correction. We'll begin with a class of quantum error correcting codes known as CSS codes, named for Robert Calderbank, Peter Shor, and Andrew Steane, who first discovered them. The CSS code construction allows one to take certain pairs of classical error correcting codes and combine them into a quantum error correcting code. The second part of the lesson is on a code known as the toric code. This is a fundamental (and truly beautiful) example of a quantum error correcting code that can tolerate relatively high error rates. In fact, the toric code isn't a single example of a quantum error correcting code, but rather it's an infinite family of codes, one for each positive integer greater than one. Finally, in the last part of the lesson, we'll briefly discuss a couple of other families of quantum codes, including surface codes (which are closely connected to the toric code) and color codes.

CSS Codes

Classical linear codes

Classical error correcting codes were first studied in the 1940s, and many different codes are now known — with the most commonly studied and used codes falling into a category of codes known as linear codes. We'll see exactly what the word "linear" means in this context in just a moment, but a very simple way to express what linear codes are at this point is that they're stabilizer codes that happen to be classical. CSS codes are essentially pairs of classical linear codes that are combined together to create a quantum error correcting code. So, for the sake of the discussion that follows, we're going to need to understand a few basic things about classical linear codes.

Let $\Sigma$ be the binary alphabet for this entire discussion. When we refer to a classical linear code, we mean a non-empty set $\mathcal{C}\subseteq\Sigma^n$ of binary strings of length $n$, for some positive integer $n$, which must satisfy just one basic property: if $u$ and $v$ are binary strings in $\mathcal{C}$, then the string $u\oplus v$ is also in $\mathcal{C}$. Here, $u\oplus v$ refers to the bitwise exclusive-OR of $u$ and $v$, which we encountered multiple times in the Fundamentals of quantum algorithms course.

In essence, when we refer to a classical error correcting code as being linear, we're thinking about binary strings of length $n$ as being $n$-dimensional vectors, where the entries are all either $0$ or $1$, and demanding that the code itself forms a linear subspace. Instead of ordinary vector addition over the real or complex numbers, however, we're using addition modulo $2$, which is simply the exclusive-OR. That is, if we have two codewords $u$ and $v$, meaning that $u$ and $v$ are binary strings in $\mathcal{C}$, then $u + v$ modulo $2$, which is to say $u\oplus v$, must also be a codeword in $\mathcal{C}$. Notice in particular that this implication must be true even if $u = v$. This implies that $\mathcal{C}$ must include the all-zero string $0^n$, because the bitwise exclusive-OR of any string with itself is the all-zero string.

Example: the 3-bit repetition code

The 3-bit repetition code is an example of a classical linear code. In particular, we have $\mathcal{C} = \{000,111\}$, so with respect to the linearity condition there are just two possible choices for $u$ and two possible choices for $v$. It's a trivial matter to go through the four possible pairs to see that we always get a codeword when we take the bitwise exclusive-OR:

000 \oplus 000 = 000, \quad 000 \oplus 111 = 111, \quad 111 \oplus 000 = 111, \quad 111 \oplus 111 = 000.

Example: the $[7,4,3]$-Hamming code

Here's another example of a classical linear code called the $[7,4,3]$-Hamming code. It was one of the very first classical error correcting codes ever discovered, and it consists of these 16 binary strings of length 7. (Sometimes the $[7,4,3]$-Hamming code is understood to mean the code with these strings reversed, but we'll take it to be the code containing the strings shown here.)

\begin{array}{cccc} 0000000 & 1100001 & 1010010 & 0110011\\[1mm] 0110100 & 1010101 & 1100110 & 0000111\\[1mm] 1111000 & 0011001 & 0101010 & 1001011\\[1mm] 1001100 & 0101101 & 0011110 & 1111111 \end{array}

There is a very simple logic behind the selection of these strings, but it's secondary to the lesson and won't be explained here. For now it's enough to observe that this is a classical linear code: XORing any two of these strings together will always result in another string in the code.

The notation $[7,4,3]$ (in single square brackets) means something analogous to the double square bracket notation for stabilizer codes mentioned in the previous lesson, but here it's for classical linear codes. In particular, codewords have $7$ bits, we can encode $4$ bits using the code (because there are $16 = 2^4$ codewords), and it happens to be a distance $3$ code, which means that any two distinct codewords must differ in at least $3$ positions — so at least $3$ bits must be flipped to change one codeword into another. The fact that this is a distance $3$ code implies that it can correct for up to one bit-flip error.

Describing classical linear codes

The examples just mentioned are very simple examples of classical linear codes — but even the $[7,4,3]$-Hamming code looks somewhat mysterious when the codewords are simply listed. There are better, more efficient ways to describe classical linear codes, including the following two ways.

  1. Generators. One way to describe a classical linear code is with a minimal list of codewords that generates the code, meaning that by taking all of the possible subsets of these codewords and XORing them together, we get the entire code.

    In greater detail, the strings $u_1,\ldots,u_m\in\Sigma^n$ generate the classical linear code $\mathcal{C}$ if

    \mathcal{C} = \bigl\{\alpha_1 u_1 \oplus \cdots \oplus \alpha_m u_m\,:\,\alpha_1,\ldots,\alpha_m\in\{0,1\}\bigr\},

    with the understanding that $\alpha u = u$ when $\alpha = 1$ and $\alpha u = 0^n$ when $\alpha = 0$, and we say that this list is minimal if removing any one of the strings results in a strictly smaller code. A natural way to think about such a description is that the collection $\{u_1,\ldots,u_m\}$ forms a basis for $\mathcal{C}$ as a subspace, where we're thinking about strings as vectors with binary-valued entries, keeping in mind that we're working in a vector space where arithmetic is done modulo $2$.

  2. Parity checks. Another natural way to describe a classical linear code is by parity checks — meaning a minimal list of binary strings for which the strings in the code are precisely the ones whose binary dot product with every one of these parity check strings is zero. (Similar to the bitwise exclusive-OR, the binary dot product appeared several times in the Fundamentals of quantum algorithms course.)

    That is, the strings $v_1,\ldots,v_r\in\Sigma^n$ are parity check strings for the classical linear code $\mathcal{C}$ if

    \mathcal{C} = \bigl\{ u\in \Sigma^n\,:\, u\cdot v_1 = \cdots = u \cdot v_r = 0 \bigr\},

    and this set of strings is minimal if removing any one of them results in a strictly larger code. These are called parity check strings because $u$ has binary dot product equal to zero with $v$ if and only if the bits of $u$ in the positions where $v$ has 1s have even parity. So, to determine if a string $u$ is in the code $\mathcal{C}$, it suffices to check the parity of certain subsets of the bits of $u$.

    An important thing to notice here is that the binary dot product is not an inner product in a formal sense. In particular, when two strings have binary dot product equal to zero, it doesn't mean that they're orthogonal in the usual way we think about orthogonality. For example, the binary dot product of the string $11$ with itself is zero — so it is possible that a parity check string for a classical linear code is itself in the code.

Classical linear codes over the binary alphabet always include a number of strings that's a power of $2$ — and for a single classical linear code described in the two ways just mentioned, it will always be the case that $n = m + r$. In particular, if we have a minimal set of $m$ generators, then the code encodes $m$ bits and we'll necessarily have $2^m$ codewords; and if we have a minimal set of $r$ parity check strings, then we'll have $2^{n-r}$ codewords. So, each generator doubles the size of the code space while each parity check string halves it.

For example, the 3-bit repetition code is a linear code, so it can be described in both of these ways. In particular, there's only one choice for a generator that works: $111$. We can alternatively describe the code with two parity check strings, such as $110$ and $011$ — which should look familiar from our previous discussions of this code — or we could instead take the parity check strings to be $110$ and $101$, or $101$ and $011$. (Generators and parity check strings are generally not unique for a given classical linear code.)

For a second example, consider the $[7,4,3]$-Hamming code. Here's one choice for a list of generators that works.

\begin{array}{c} 1111000\\[1mm] 0110100\\[1mm] 1010010\\[1mm] 1100001 \end{array}

And here's a choice for a list of parity checks for this code.

\begin{array}{c} 1111000\\[1mm] 1100110\\[1mm] 1010101 \end{array}

Here, by the way, we see that all of our parity check strings are themselves in the code.
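
To make these two descriptions concrete, here's a short Python sketch (illustrative only; the helper functions and variable names are our own, not part of the lesson's materials) that generates the $[7,4,3]$-Hamming code from the four generators above, checks that the three parity check strings single out exactly the same 16 codewords, and computes the code's distance.

```python
from itertools import product

n = 7
generators = ["1111000", "0110100", "1010010", "1100001"]
parity_checks = ["1111000", "1100110", "1010101"]

def xor(u, v):
    # Bitwise exclusive-OR of two binary strings of equal length.
    return "".join(str(int(a) ^ int(b)) for a, b in zip(u, v))

def dot(u, v):
    # Binary dot product: parity of the positions where both strings have a 1.
    return sum(int(a) & int(b) for a, b in zip(u, v)) % 2

# Description 1: XOR together every subset of the generators.
code = set()
for alphas in product([0, 1], repeat=len(generators)):
    word = "0" * n
    for alpha, g in zip(alphas, generators):
        if alpha:
            word = xor(word, g)
    code.add(word)

# Description 2: keep exactly the strings that pass every parity check.
all_strings = ["".join(bits) for bits in product("01", repeat=n)]
checked = {u for u in all_strings if all(dot(u, p) == 0 for p in parity_checks)}

print(code == checked)                              # True: same 16 codewords
print(len(code) == 2 ** len(generators))            # True: 2^4 codewords
print(n == len(generators) + len(parity_checks))    # True: n = m + r

# For a linear code, the distance is the minimum weight of a nonzero codeword.
print(min(w.count("1") for w in code if w != "0" * n))   # 3
```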

One final remark about classical linear codes, which connects them to the stabilizer formalism, is that parity check strings are equivalent to stabilizer generators that only consist of $Z$ and identity Pauli matrices. For instance, the parity check strings $110$ and $011$ for the 3-bit repetition code correspond precisely to the stabilizer generators $Z\otimes Z\otimes \mathbb{I}$ and $\mathbb{I}\otimes Z\otimes Z$, which is consistent with the discussions of Pauli observables from the previous lesson.

Definition of CSS codes

CSS codes are stabilizer codes obtained by combining together certain pairs of classical linear codes. This doesn't work for two arbitrary classical linear codes — the two codes must have a certain relationship. Nevertheless, this construction opens up many possibilities for quantum error correcting codes, based in part on over 75 years of classical coding theory.

In the stabilizer formalism, stabilizer generators containing only $Z$ and identity Pauli matrices are equivalent to parity checks, as we just observed for the 3-bit repetition code. For another example, consider the following parity check strings for the $[7,4,3]$-Hamming code.

\begin{array}{c} 1111000\\[1mm] 1100110\\[1mm] 1010101 \end{array}

These parity check strings correspond to the following stabilizer generators (written without tensor product symbols), which we obtain by replacing each $1$ by a $Z$ and each $0$ by an $\mathbb{I}$. These are three of the six stabilizer generators for the 7-qubit Steane code.

\begin{array}{ccccccc} Z & Z & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] Z & Z & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I} \\[1mm] Z & \mathbb{I} & Z & \mathbb{I} & Z & \mathbb{I} & Z \end{array}

Let us give the name $Z$ stabilizer generators to stabilizer generators like this, meaning that they only have Pauli $Z$ and identity tensor factors — so $X$ and $Y$ Pauli matrices never occur in $Z$ stabilizer generators.

We can also consider stabilizer generators where only $X$ and identity Pauli matrices appear as tensor factors. Stabilizer generators like this can be viewed as being analogous to $Z$ stabilizer generators, except that they describe parity checks in the $\{\vert+\rangle,\vert-\rangle\}$ basis rather than the standard basis. Naturally, stabilizer generators of this form are called $X$ stabilizer generators — so no $Y$ or $Z$ Pauli matrices are allowed this time.

For example, consider the remaining three stabilizer generators from the 7-qubit Steane code.

\begin{array}{ccccccc} X & X & X & X & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] X & X & \mathbb{I} & \mathbb{I} & X & X & \mathbb{I} \\[1mm] X & \mathbb{I} & X & \mathbb{I} & X & \mathbb{I} & X \end{array}

They follow exactly the same pattern from the $[7,4,3]$-Hamming code as the $Z$ stabilizer generators, except this time we substitute $X$ for $1$ rather than $Z$. What we obtain from just these three stabilizer generators is a code that includes the 16 states shown here, which we get by applying Hadamard operations to the standard basis states that correspond to the strings in the $[7,4,3]$-Hamming code. Naturally, the code space for this code also includes linear combinations of these states.

\begin{array}{cccc} \vert {+++++++} \rangle \quad & \vert {--++++-} \rangle \quad & \vert {-+-++-+} \rangle \quad & \vert {+--++--} \rangle \\ \vert {+--+-++} \rangle \quad & \vert {-+-+-+-} \rangle \quad & \vert {--++--+} \rangle \quad & \vert {++++---} \rangle \\ \vert {----+++} \rangle \quad & \vert {++--++-} \rangle \quad & \vert {+-+-+-+} \rangle \quad & \vert {-++-+--} \rangle \\ \vert {-++--++} \rangle \quad & \vert {+-+--+-} \rangle \quad & \vert {++----+} \rangle \quad & \vert {-------} \rangle \end{array}

We can now define CSS codes in very simple terms.

Definition. A CSS code is a stabilizer code that can be expressed using only $X$ and $Z$ stabilizer generators.

That is, CSS codes are stabilizer codes for which we have stabilizer generators in which no Pauli $Y$ matrices appear, and for which $X$ and $Z$ never appear in the same stabilizer generator.

To be clear, by this definition, a CSS code is one for which it is possible to choose just $X$ and $Z$ stabilizer generators — but we must keep in mind that there is freedom in how we choose stabilizer generators for stabilizer codes. Thus, there will generally be different choices for the stabilizer generators of a CSS code that don't happen to be $X$ or $Z$ stabilizer generators (in addition to at least one choice for which they are).

Here's just about the simplest nontrivial example of a CSS code we can have that includes both a $Z$ stabilizer generator and an $X$ stabilizer generator:

\begin{array}{cc} Z & Z \\[1mm] X & X \end{array}

It's clear that this is a CSS code, because the first stabilizer generator is a $Z$ stabilizer generator and the second is an $X$ stabilizer generator. Of course, a CSS code must also be a valid stabilizer code — meaning that the stabilizer generators must commute, form a minimal generating set, and fix at least one quantum state vector. These requirements happen to be simple to observe for this code. As we noted in the previous lesson, the code space for this code is the one-dimensional space spanned by the $\vert\phi^+\rangle$ Bell state. The fact that both stabilizer generators fix this state is apparent by considering the following two expressions of an e-bit, together with an interpretation of these stabilizer generators as parity checks in the $\{\vert 0\rangle, \vert 1\rangle\}$ and $\{\vert +\rangle, \vert -\rangle\}$ bases.

\vert\phi^+\rangle = \frac{\vert 0\rangle\vert 0\rangle + \vert 1\rangle\vert 1\rangle}{\sqrt{2}} = \frac{\vert +\rangle\vert +\rangle + \vert -\rangle\vert -\rangle}{\sqrt{2}}

The 7-qubit Steane code is another example of a CSS code.

\begin{array}{ccccccc} Z & Z & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] Z & Z & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I} \\[1mm] Z & \mathbb{I} & Z & \mathbb{I} & Z & \mathbb{I} & Z \\[1mm] X & X & X & X & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] X & X & \mathbb{I} & \mathbb{I} & X & X & \mathbb{I} \\[1mm] X & \mathbb{I} & X & \mathbb{I} & X & \mathbb{I} & X \end{array}

Here we have three $Z$ stabilizer generators and three $X$ stabilizer generators, and we've already verified that this is a valid stabilizer code.

And the 9-qubit Shor code is another example.

\begin{array}{ccccccccc} Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I}\\[1mm] \mathbb{I} & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I}\\[1mm] \mathbb{I} & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I}\\[1mm] \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I}\\[1mm] \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I}\\[1mm] \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & \mathbb{I} & Z & Z\\[1mm] X & X & X & X & X & X & \mathbb{I} & \mathbb{I} & \mathbb{I}\\[1mm] \mathbb{I} & \mathbb{I} & \mathbb{I} & X & X & X & X & X & X \end{array}

This time we have six $Z$ stabilizer generators and just two $X$ stabilizer generators. This is fine; there doesn't need to be a balance or a symmetry between the two types of generators (though there often is).

Once again, it is critical that CSS codes are valid stabilizer codes, and in particular each $Z$ stabilizer generator must commute with each $X$ stabilizer generator. So, not every collection of $X$ and $Z$ stabilizer generators defines a valid CSS code.
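
To see what this compatibility amounts to in practice, here's a minimal Python check (illustrative only, not code from the lesson): a $Z$ stabilizer generator and an $X$ stabilizer generator commute exactly when their check strings have binary dot product zero, which is to say when they act nontrivially on an even number of common qubits.

```python
from itertools import product

# Check strings for the Steane code; a 1 marks where Z (or X) acts nontrivially.
z_checks = ["1111000", "1100110", "1010101"]
x_checks = ["1111000", "1100110", "1010101"]

def commute(z, x):
    # A Z-type and an X-type Pauli string commute if and only if their supports
    # overlap on an even number of qubits (binary dot product equal to zero).
    return sum(int(a) & int(b) for a, b in zip(z, x)) % 2 == 0

print(all(commute(z, x) for z, x in product(z_checks, x_checks)))   # True

# A pair that would not give a valid CSS code: ZZI and XII overlap on one qubit.
print(commute("110", "100"))   # False: these two generators anti-commute
```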

Error detection and correction

With regard to error detection and correction, CSS codes in general have a similar characteristic to the 9-qubit Shor code, which is that $X$ and $Z$ errors can be detected and corrected completely independently; the $Z$ stabilizer generators describe a code that protects against bit flips, and the $X$ stabilizer generators describe a code that independently protects against phase flips. This works because $Z$ stabilizer generators necessarily commute with $Z$ errors, as well as with $Z$ operations that are applied as corrections, so they're completely oblivious to both; and likewise for $X$ stabilizer generators, errors, and corrections.

As an example, let's consider the 7-qubit Steane code.

\begin{array}{ccccccc} Z & Z & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] Z & Z & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I} \\[1mm] Z & \mathbb{I} & Z & \mathbb{I} & Z & \mathbb{I} & Z \\[1mm] X & X & X & X & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] X & X & \mathbb{I} & \mathbb{I} & X & X & \mathbb{I} \\[1mm] X & \mathbb{I} & X & \mathbb{I} & X & \mathbb{I} & X \end{array}

The basic idea for this code is now apparent: it's a $[7,4,3]$-Hamming code for bit-flip errors and a $[7,4,3]$-Hamming code for phase-flip errors. The fact that the $X$ and $Z$ stabilizer generators commute is perhaps good fortune, for this wouldn't be a valid stabilizer code if they didn't. But there are, in fact, many known examples of classical linear codes that yield a valid stabilizer code when used in a similar way.

In general, suppose we have a CSS code for which the $Z$ stabilizer generators allow for the correction of up to $j$ bit-flip errors, and the $X$ stabilizer generators allow for the correction of up to $k$ phase-flip errors. For example, $j = 1$ and $k = 1$ for the Steane code, given that the $[7,4,3]$-Hamming code can correct one bit flip. It then follows, by the discretization of errors, that this CSS code can correct for any error on a number of qubits up to the minimum of $j$ and $k$. This is because, when the syndrome is measured, an arbitrary error on this number of qubits effectively collapses probabilistically into some combination of $X$ errors, $Z$ errors, or both — and then the $X$ errors and $Z$ errors are detected and corrected independently.

In summary, provided that we have two classical linear codes (or two copies of a single classical linear code) that are compatible, in that they define $X$ and $Z$ stabilizer generators that commute, the CSS code we obtain by combining them inherits the error correction properties of those two codes, in the sense just described.

Notice that there is a price to be paid, though, which is that we can't encode as many qubits as we could bits with the two classical codes. This is because the total number of stabilizer generators for the CSS code is the sum of the numbers of parity checks for the two classical linear codes, and each stabilizer generator cuts the dimension of the code space in half. For example, the $[7,4,3]$-Hamming code allows for the encoding of 4 classical bits, because we have just 3 parity check strings for this code, whereas the 7-qubit Steane code only encodes one qubit, because it has 6 stabilizer generators.

Code spaces of CSS codes

The last thing we'll do in this discussion of CSS codes is to consider the code spaces of these codes. This will give us an opportunity to examine in greater detail the relationship that must hold between two classical linear codes in order for them to be compatible, in the sense that they can be combined together to form a CSS code.

Consider any CSS code on $n$ qubits, and let $z_1, \ldots, z_s \in \Sigma^n$ be the $n$-bit parity check strings that correspond to the $Z$ stabilizer generators of this code. This means that the classical linear code described by just the $Z$ stabilizer generators, which we'll name $\mathcal{C}_Z$, takes the following form.

\mathcal{C}_Z = \bigl\{ u \in \Sigma^n \,:\, u \cdot z_1 = \cdots = u \cdot z_s = 0 \bigr\}

In words, the classical linear code $\mathcal{C}_Z$ contains every string whose binary dot product with every one of the parity check strings $z_1, \ldots, z_s$ is zero.

Along similar lines, let us take $x_1,\ldots,x_t\in\Sigma^n$ to be the $n$-bit parity check strings corresponding to the $X$ stabilizer generators of our code. Thus, the classical linear code corresponding to the $X$ stabilizer generators takes this form.

\mathcal{C}_X = \bigl\{ u \in \Sigma^n \,:\, u \cdot x_1 = \cdots = u \cdot x_t = 0 \bigr\}

The $X$ stabilizer generators alone therefore describe a code that's similar to this code, but in the $\{\vert {+}\rangle,\vert {-}\rangle\}$ basis rather than the standard basis.

Now we'll introduce two new classical linear codes that are derived from the same choices of strings, $z_1,\ldots,z_s$ and $x_1,\ldots,x_t$, but where we take these strings as generators rather than parity check strings. In particular, we obtain these two codes.

\begin{aligned} \mathcal{D}_Z & = \bigl\{ \alpha_1 z_1 \oplus \cdots \oplus \alpha_s z_s \,:\, \alpha_1,\ldots,\alpha_s \in \{0,1\} \bigr\}\\[1mm] \mathcal{D}_X & = \bigl\{ \alpha_1 x_1 \oplus \cdots \oplus \alpha_t x_t \,:\, \alpha_1,\ldots,\alpha_t \in \{0,1\} \bigr\} \end{aligned}

These are known as the dual codes of the codes defined previously: $\mathcal{D}_Z$ is the dual code of $\mathcal{C}_Z$ and $\mathcal{D}_X$ is the dual code of $\mathcal{C}_X$. It may not be clear at this point why these dual codes are relevant, but they turn out to be quite relevant for multiple reasons, including the two reasons explained in the following paragraphs.

First, the conditions that must hold for two classical linear codes $\mathcal{C}_Z$ and $\mathcal{C}_X$ to be compatible, in the sense that they can be paired together to form a CSS code, can be described in simple terms by referring to the dual codes. Specifically, it must be that $\mathcal{D}_Z\subseteq\mathcal{C}_X$, or equivalently, that $\mathcal{D}_X\subseteq\mathcal{C}_Z$. In words, the dual code $\mathcal{D}_Z$ includes the strings corresponding to $Z$ stabilizer generators, and their containment in $\mathcal{C}_X$ is equivalent to the binary dot product of each of these strings with the ones corresponding to the $X$ stabilizer generators being zero. That, in turn, is equivalent to each $Z$ stabilizer generator commuting with each $X$ stabilizer generator. Alternatively, by reversing the roles of the $X$ and $Z$ stabilizer generators and starting from the containment $\mathcal{D}_X\subseteq\mathcal{C}_Z$, we can reach the same conclusion.

Second, by referring to the dual codes, we can easily describe the code space of a given CSS code. In particular, the code space is spanned by vectors of the following form.

\vert u \oplus \mathcal{D}_X\rangle = \frac{1}{\sqrt{2^t}} \sum_{v\in\mathcal{D}_X} \vert u \oplus v\rangle \qquad \text{(for all $u\in\mathcal{C}_Z$)}

In words, these vectors are uniform superpositions over the strings in the dual code $\mathcal{D}_X$ of the code corresponding to the $X$ stabilizer generators, shifted by (i.e., bitwise XORed with) strings in the code $\mathcal{C}_Z$ corresponding to the $Z$ stabilizer generators. To be clear, different choices for the shift — represented by the string $u$ in this expression — can result in the same vector. So, these states aren't all distinct, but collectively they span the entire code space.

Here's an intuitive explanation for why such vectors are both in the code space and span it. Consider the $n$-qubit standard basis state $\vert u\rangle$, for some arbitrary $n$-bit string $u$, and suppose that we project this state onto the code space. That is to say, letting $\Pi$ denote the projection onto the code space of our CSS code, consider the vector $\Pi\vert u\rangle$. There are two cases:

  • Case 1: $u\in\mathcal{C}_Z$. This implies that each $Z$ stabilizer generator of our CSS code acts trivially on $\vert u\rangle$. The $X$ stabilizer generators, on the other hand, each simply flip some of the bits of $\vert u\rangle$. In particular, for each generator $v$ of $\mathcal{D}_X$, the $X$ stabilizer generator corresponding to $v$ transforms $\vert u\rangle$ into $\vert u\oplus v\rangle$. By characterizing the projection $\Pi$ as the average over the elements of the stabilizer (as we saw in the previous lesson), we obtain this formula:

    \Pi \vert u \rangle = \frac{1}{2^t} \sum_{v\in\mathcal{D}_{X}} \vert u \oplus v\rangle = \frac{1}{\sqrt{2^t}} \vert u \oplus \mathcal{D}_X\rangle.
  • Case 2: $u\notin\mathcal{C}_Z$. This implies that at least one of the parity checks corresponding to the $Z$ stabilizer generators fails, which is to say that $\vert u\rangle$ must be a $-1$ eigenvector of at least one of the $Z$ stabilizer generators. The code space of the CSS code is the intersection of the $+1$ eigenspaces of the stabilizer generators. So, as a $-1$ eigenvector of at least one of the $Z$ stabilizer generators, $\vert u\rangle$ is orthogonal to the code space:

    \Pi\vert u\rangle = 0.

And now, as we range over all $n$-bit strings $u$, discard the ones for which $\Pi\vert u\rangle = 0$, and normalize the remaining ones, we obtain the vectors described previously, which demonstrates that they span the code space.

We can also use the symmetry between $X$ and $Z$ stabilizer generators to describe the code space in a similar but different way. In particular, it is the space spanned by vectors having the following form.

H^{\otimes n} \vert u \oplus \mathcal{D}_Z\rangle = \frac{1}{\sqrt{2^s}} \sum_{v\in\mathcal{D}_Z} H^{\otimes n}\vert u \oplus v\rangle \qquad \text{(for $u\in\mathcal{C}_X$)}

In essence, $X$ and $Z$ have been swapped in each instance in which they appear — but we must also swap the standard basis for the $\{\vert+\rangle,\vert-\rangle\}$ basis, which is why the Hadamard operations are included.

As an example, let us consider the 7-qubit Steane code. The parity check strings for both the $X$ and $Z$ stabilizer generators are the same: $1111000$, $1100110$, and $1010101$. The codes $\mathcal{C}_X$ and $\mathcal{C}_Z$ are therefore the same; both are equal to the $[7,4,3]$-Hamming code.

\mathcal{C}_X = \mathcal{C}_Z = \{0000000, 0000111, 0011001, 0011110, 0101010, 0101101, 0110011, 0110100, 1001011, 1001100, 1010010, 1010101, 1100001, 1100110, 1111000, 1111111\}

The dual codes $\mathcal{D}_X$ and $\mathcal{D}_Z$ are therefore also the same. We have 3 generators, so we obtain 8 strings.

\mathcal{D}_X = \mathcal{D}_Z = \{0000000, 0011110, 0101101, 0110011, 1001011, 1010101, 1100110, 1111000\}

These strings are all contained in the $[7,4,3]$-Hamming code, and so the CSS condition is satisfied: $\mathcal{D}_Z \subseteq \mathcal{C}_X$, or equivalently, $\mathcal{D}_X \subseteq \mathcal{C}_Z$. In coding theory terminology, the $[7,4,3]$-Hamming code is weakly self-dual, meaning that its dual code is a subset of itself, which is what allows it to be used to create a CSS code in this way.

Given that $\mathcal{D}_X$ contains half of all of the strings in $\mathcal{C}_Z$, there are only two different vectors $\vert u\oplus \mathcal{D}_X\rangle$ that can be obtained by choosing $u\in\mathcal{C}_Z$. This is expected, because the 7-qubit Steane code has a two-dimensional code space. We can use the two states we obtain in this way to encode the logical states $\vert 0\rangle$ and $\vert 1\rangle$ as follows.

\begin{aligned} \vert 0\rangle & \mapsto \frac{ \vert 0000000\rangle + \vert 0011110\rangle + \vert 0101101\rangle + \vert 0110011\rangle + \vert 1001011\rangle + \vert 1010101\rangle + \vert 1100110\rangle + \vert 1111000\rangle }{\sqrt{8}}\\[2mm] \vert 1\rangle & \mapsto \frac{ \vert 0000111\rangle + \vert 0011001\rangle + \vert 0101010\rangle + \vert 0110100\rangle + \vert 1001100\rangle + \vert 1010010\rangle + \vert 1100001\rangle + \vert 1111111\rangle }{\sqrt{8}} \end{aligned}

As usual, this choice isn't forced on us — we're free to use the code space to encode qubits however we choose. This encoding is, however, consistent with the example of an encoding circuit for the 7-qubit Steane code in the previous lesson.
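
Here's a small Python sketch (illustrative, with our own helper names) of the coset description just given for the Steane code: it builds $\mathcal{C}_Z$ from its parity checks, builds $\mathcal{D}_X$ from the same strings used as generators, confirms the containment $\mathcal{D}_X \subseteq \mathcal{C}_Z$, and verifies that the cosets $u \oplus \mathcal{D}_X$ for $u \in \mathcal{C}_Z$ collapse to exactly two sets of strings, matching the two superpositions above.

```python
from itertools import product

n = 7
checks = ["1111000", "1100110", "1010101"]   # parity checks for C_Z, generators for D_X

def xor(u, v):
    # Bitwise exclusive-OR of two binary strings of equal length.
    return "".join(str(int(a) ^ int(b)) for a, b in zip(u, v))

def dot(u, v):
    # Binary dot product of two binary strings.
    return sum(int(a) & int(b) for a, b in zip(u, v)) % 2

# C_Z: the strings passing every parity check (the [7,4,3]-Hamming code).
all_strings = ["".join(bits) for bits in product("01", repeat=n)]
C_Z = {u for u in all_strings if all(dot(u, p) == 0 for p in checks)}

# D_X: all XORs of subsets of the generators (the dual code).
D_X = set()
for alphas in product([0, 1], repeat=len(checks)):
    word = "0" * n
    for alpha, g in zip(alphas, checks):
        if alpha:
            word = xor(word, g)
    D_X.add(word)

print(D_X <= C_Z)    # True: the CSS condition (weak self-duality of the Hamming code)

# The cosets u XOR D_X for u in C_Z: exactly two distinct sets appear.
cosets = {frozenset(xor(u, v) for v in D_X) for u in C_Z}
print(len(cosets))   # 2: one coset for the logical |0> and one for the logical |1>
```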

Toric code

Next we'll discuss a specific CSS code known as the toric code, which was discovered by Alexei Kitaev in 1997. In fact, the toric code isn't a single code, but rather it's a family of codes, one for each positive integer at least 2. These codes possess a few key properties:

  • The stabilizer generators have low weight, and in particular they all have weight 4. In coding theory parlance, the toric code is an example of a quantum low-density parity check code, or quantum LDPC code (where low means 4 in this case). This is nice because each stabilizer generator measurement doesn't need to involve too many qubits.

  • The toric code has geometric locality. This means that not only do the stabilizer generators have low weight, but it's also possible to arrange the qubits spatially so that each of the stabilizer generator measurements only involves qubits that are close together. In principle, this makes these measurements easier to implement than if they involved spatially distant qubits.

  • Members of the toric code family have increasingly large distance and can tolerate a relatively high error rate.

Code description

Let $L\geq 2$ be a positive integer, and consider an $L\times L$ lattice with so-called periodic boundaries. For example, this figure depicts an $L\times L$ lattice for $L=9$.

A 9-by-9 lattice with periodic boundaries

Notice that the lines on the right and on the bottom are dotted. This is meant to indicate that the dotted line on the right is the same line as the one all the way on the left, and similarly, the dotted line on the bottom is the same line as the one at the very top.

To realize this sort of configuration physically requires three dimensions. In particular, we could form the lattice into a cylinder by first matching up the left and right sides, and then bend the cylinder around so that the circles at the ends, which used to be the top and bottom edges of the lattice, meet. Or we could match up the top and bottom first and then the sides; it works both ways, and it doesn't matter which way it is done for the purposes of this discussion. What we get is a torus — or, in other words, a doughnut (although thinking about it as the inner tube of a tire is perhaps a better image to have in mind, because this isn't a solid: the lattice has become just the surface of a torus). This is where the name "toric code" comes from.

A 9-by-9 lattice wrapped into a torus

The way one can "move around" on a torus like this, between adjacent points on the lattice, will likely be familiar to those who have played old-school video games, where moving off the top of the screen causes you to emerge at the bottom, and likewise for the left and right edges of the screen. This is how we will view this lattice with periodic boundaries, as opposed to speaking specifically about a torus in 3-dimensional space.

Next, qubits are arranged on the edges of this lattice, as illustrated in the following figure, where qubits are indicated by solid blue circles.

Qubits on the edges of a 9-by-9 periodic lattice

Note that the qubits placed on the dotted lines aren't solid because they're already represented on the topmost and leftmost lines in the lattice. In total there are $2L^2$ qubits: $L^2$ qubits on horizontal lines and $L^2$ qubits on vertical lines.

To describe the toric code itself, it remains to describe the stabilizer generators:

  • For each tile formed by the lines in the lattice there is one $Z$ stabilizer generator, obtained by tensoring $Z$ matrices on the four qubits touching that tile along with identity matrices on all other qubits.

  • For each vertex formed by the lines in the lattice there is one $X$ stabilizer generator, obtained by tensoring $X$ matrices on the four qubits adjacent to that vertex along with identity matrices on all other qubits.

In both cases we obtain a weight-4 Pauli operation. Individually, these stabilizer generators may be illustrated as follows.

Stabilizer generator types for the toric code

Here's an illustration showing some examples of stabilizer generators in the context of the lattice itself. Notice that stabilizer generators that wrap around the periodic boundaries are included; these generators are not special or in any way distinguished from the ones that don't wrap around.

Examples of stabilizer generators on a lattice

The stabilizer generators must commute for this to be a valid stabilizer code. As usual, the $Z$ stabilizer generators all commute with one another, because $Z$ commutes with itself and the identity commutes with everything, and likewise for the $X$ stabilizer generators. The $Z$ and $X$ stabilizer generators clearly commute when they act nontrivially on disjoint sets of qubits, like for the examples shown in the previous figure. The remaining possibility is that a $Z$ stabilizer generator and an $X$ stabilizer generator overlap on the qubits upon which they act nontrivially, and whenever this happens the generators must always overlap on two qubits, like in the next figure.

Overlapping stabilizer generators for the toric code

Consequently, two stabilizer generators like this commute, just like $Z\otimes Z$ and $X\otimes X$ commute. The stabilizer generators therefore all commute with one another.

The second required condition on the stabilizer generators for a stabilizer code is that they form a minimal generating set. This condition is actually not satisfied by this collection: if we multiply all of the $Z$ stabilizer generators together we obtain the identity operation, and likewise for the $X$ stabilizer generators. Thus, any one of the $Z$ stabilizer generators can be expressed as the product of all of the remaining ones, and similarly, any one of the $X$ stabilizer generators can be expressed as the product of the remaining $X$ stabilizer generators. If we remove any one of the $Z$ stabilizer generators and any one of the $X$ stabilizer generators, however, we do obtain a minimal generating set.

To be clear about this, we do in fact care equally about all of the stabilizer generators, and in a strictly operational sense there isn't any need to select one stabilizer generator of each type to remove. But, for the sake of analyzing the code — and counting the generators in particular — we can imagine that one stabilizer generator of each type has been removed so that we get a minimal generating set, keeping in mind that we could always infer the results of these removed generators (thinking of them as observables) from the results of all of the other stabilizer generator observables of the same type. This leaves $L^2 - 1$ stabilizer generators of each type, or $2L^2 - 2$ in total, in a (hypothetical) minimal generating set. Given that there are $2L^2$ qubits in total, this means that the toric code encodes $2L^2 - 2 (L^2 - 1) = 2$ qubits.

The final condition required of stabilizer generators is that at least one quantum state vector is fixed by all of the stabilizer generators. We will see that this is the case as we proceed with the analysis of the code, but it's also possible to reason that there's no way to generate $-1$ times the identity operation on all $2L^2$ qubits from the stabilizer generators.
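
The combinatorial facts used in this discussion can be checked numerically for small lattices. The following Python sketch (illustrative only, with made-up helper names) represents each stabilizer generator by the set of edges it acts on, and confirms that every tile and vertex operator overlap on an even number of qubits, that every edge lies on exactly two tiles and touches exactly two vertices (so the product of all generators of either type is the identity), and that the resulting count of independent generators leaves exactly two encoded qubits.

```python
from itertools import product

L = 4   # lattice size; any L >= 2 works

def H(r, c):
    # Qubit on the horizontal edge to the right of vertex (r, c), with wrap-around.
    return ("H", r % L, c % L)

def V(r, c):
    # Qubit on the vertical edge below vertex (r, c), with wrap-around.
    return ("V", r % L, c % L)

def plaquette(r, c):
    # Z stabilizer generator: the four qubits touching the tile below and to the
    # right of vertex (r, c).
    return {H(r, c), H(r + 1, c), V(r, c), V(r, c + 1)}

def star(r, c):
    # X stabilizer generator: the four qubits on the edges meeting at vertex (r, c).
    return {H(r, c), H(r, c - 1), V(r, c), V(r - 1, c)}

plaquettes = [plaquette(r, c) for r, c in product(range(L), repeat=2)]
stars = [star(r, c) for r, c in product(range(L), repeat=2)]

# Every Z generator and X generator overlap on 0 or 2 qubits, so they commute.
print(all(len(p & s) in (0, 2) for p in plaquettes for s in stars))   # True

# Every edge lies on exactly two tiles and touches exactly two vertices, so the
# product of all Z generators is the identity, and likewise for the X generators.
edges = [H(r, c) for r, c in product(range(L), repeat=2)] \
      + [V(r, c) for r, c in product(range(L), repeat=2)]
print(all(sum(e in p for p in plaquettes) == 2 for e in edges))       # True
print(all(sum(e in s for s in stars) == 2 for e in edges))            # True

# 2L^2 qubits minus (2L^2 - 2) independent stabilizer generators = 2 encoded qubits.
print(len(edges) - (len(plaquettes) - 1 + len(stars) - 1))            # 2
```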

Errors and syndromes

The toric code has a simple and elegant description, but its quantum error-correcting properties may not be at all clear at first glance. As it turns out, it's an amazing code! To understand why and how it works, let's begin by considering different errors and the syndromes they generate.

The toric code is a CSS code, because all of our stabilizer generators are either $Z$ or $X$ stabilizer generators. This means that $X$ errors and $Z$ errors can be detected (and possibly corrected) separately. In fact, there's a simple symmetry between the $Z$ and $X$ stabilizer generators that allows us to analyze $X$ errors and $Z$ errors in essentially the same way. So, we shall focus on $X$ errors, which are possibly detected by the $Z$ stabilizer generators — but the entire discussion that follows can be translated from $X$ errors to $Z$ errors, which are analogously detected by the $X$ stabilizer generators.

The following diagram depicts the effect of an $X$ error on a single qubit. Here, the assumption is that our $2L^2$ qubits were previously in a state contained in the code space of the toric code, causing all of the stabilizer generator measurements to output $+1$. The $Z$ stabilizer generators detect $X$ errors, and there is one such stabilizer generator for each tile in the figure, so we can indicate the measurement outcome of the corresponding stabilizer generator with the color of that tile: $+1$ outcomes are indicated by white tiles and $-1$ outcomes are indicated by gray tiles. If a bit-flip error occurs on one of the qubits, the effect is that the stabilizer generator measurements corresponding to the two tiles touching the affected qubit now output $-1$.

The effect of a single bit-flip error on the toric code

This is intuitive when we consider $Z$ stabilizer generators and how they behave. In essence, each $Z$ stabilizer generator measures the parity of the four qubits that touch the corresponding tile (with respect to the standard basis). So, a $+1$ outcome doesn't indicate that no $X$ errors have occurred on these four qubits, but rather it indicates that an even number of $X$ errors have occurred on these qubits, whereas a $-1$ outcome indicates that an odd number of $X$ errors have occurred. A single $X$ error therefore flips the parity of the four bits on both of the tiles it touches, causing those stabilizer generator measurements to output $-1$.

Next let's introduce multiple $X$ errors to see what happens. In particular, we'll consider a chain of adjacent $X$ errors, where two $X$ errors are adjacent if they affect qubits touching the same tile.

The effect of a chain of bit-flip errors on the toric code

The two $Z$ stabilizer generators at the endpoints of the chain both give the outcome $-1$ in this case, because an odd number of $X$ errors have occurred on those two corresponding tiles. All of the other $Z$ stabilizer generators, on the other hand, give the outcome $+1$, including the ones touching the chain but not at the endpoints, because an even number of $X$ errors have occurred on the qubits touching the corresponding tiles.

Thus, as long as we have a chain of $X$ errors that has endpoints, the toric code will detect that errors have occurred, resulting in $-1$ measurement outcomes for the $Z$ stabilizer generators corresponding to the endpoints of the chain. Note that the actual chain of errors is not revealed, only the endpoints! This is OK — in the next subsection we'll see that we don't need to know exactly which qubits were affected by $X$ errors to correct them. (The toric code is an example of a highly degenerate code, in the sense that it generally does not uniquely identify the errors it corrects.)

It is, however, possible for a chain of adjacent $X$ errors not to have endpoints, which is to say that a chain of errors could form a closed loop, like in the following figure.

A closed loop of bit-flip errors for the toric code

In such a case, an even number of $X$ errors have occurred on every tile, so every stabilizer generator measurement results in a $+1$ outcome. Closed loops of adjacent $X$ errors are therefore not detected by the code.
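
As a small illustration of these two situations (again, a sketch with our own conventions rather than code from the lesson), we can compute syndromes directly: a tile reports $-1$ exactly when an odd number of the errored qubits touch it. An open chain lights up only its two endpoints, while closed loops produce the trivial syndrome.

```python
from itertools import product

L = 5   # lattice size for this example

def H(r, c):
    return ("H", r % L, c % L)   # qubit on the horizontal edge to the right of vertex (r, c)

def V(r, c):
    return ("V", r % L, c % L)   # qubit on the vertical edge below vertex (r, c)

def plaquette(r, c):
    # The four qubits touching the tile below and to the right of vertex (r, c).
    return {H(r, c), H(r + 1, c), V(r, c), V(r, c + 1)}

def syndrome(x_errors):
    # Tiles whose Z stabilizer generator measurement gives -1: those touched by
    # an odd number of X errors.
    return [(r, c) for r, c in product(range(L), repeat=2)
            if len(plaquette(r, c) & x_errors) % 2 == 1]

# An open chain of three X errors connecting tile (1, 0) to tile (1, 3):
chain = {V(1, 1), V(1, 2), V(1, 3)}
print(syndrome(chain))         # [(1, 0), (1, 3)]: only the endpoints are revealed

# A closed loop around a single vertex (exactly one X stabilizer generator):
small_loop = {H(1, 0), H(1, 1), V(0, 1), V(1, 1)}
print(syndrome(small_loop))    # []: undetected and, as explained next, harmless

# A closed loop wrapping all the way around the torus:
wrap_around = {V(1, c) for c in range(L)}
print(syndrome(wrap_around))   # []: also undetected, but this one is a genuine error
```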

This might seem disappointing, because we only need four $X$ errors to form a closed loop (and we're hoping for something a lot better than a distance 4 code). However, a closed loop of $X$ errors of the form depicted in the previous figure is not actually an error — because it's in the stabilizer! Recall that, in addition to the $Z$ stabilizer generators, we also have an $X$ stabilizer generator for each vertex in the lattice. And if we multiply adjacent $X$ stabilizer generators together, the result is that we obtain closed loops of $X$ operations. For instance, the closed loop in the previous figure can be obtained by multiplying together the $X$ stabilizer generators indicated in the following figure.

A closed loop of bit-flip errors generated by X stabilizer generators

This is not, however, the only type of closed loop of $X$ errors that we can have — and it is not the case that every closed loop of $X$ errors is included in the stabilizer. In particular, the different types of loops can be characterized as follows.

  1. Closed loops of $X$ errors with an even number of $X$ errors on every horizontal line and every vertical line of qubits. (The example shown above falls into this category.) Loops of this form are always contained in the stabilizer, as they can effectively be shrunk down to nothing by multiplying them by $X$ stabilizer generators.

  2. Closed loops of $X$ errors with an odd number of $X$ errors on at least one horizontal line or at least one vertical line of qubits. Loops of this form are never contained in the stabilizer and therefore represent nontrivial errors that go undetected by the code.

An example of a closed loop of $X$ errors in the second category is shown in the following diagram.

A closed loop of bit-flip errors not in the stabilizer

Such a chain of errors is not contained in the stabilizer because every $X$ stabilizer generator places an even number of $X$ operations on every horizontal line and every vertical line of qubits. This is therefore an actual example of a nontrivial error that the code fails to detect.

The key is that the only way to form a loop of the second sort is to go around the torus, meaning either around the hole in the middle of the torus, through it, or both. Intuitively speaking, a chain of $X$ errors like this cannot be shrunk down to nothing by multiplying it by $X$ stabilizer generators because the topology of a torus prohibits it. The toric code is sometimes categorized as a topological quantum error correcting code for this reason. The shortest that such a loop can be is $L$, and therefore this is the distance of the toric code: any closed loop of $X$ errors with length less than $L$ must fall into the first category, and is therefore contained in the stabilizer, while any chain of $X$ errors with endpoints is detected by the code. Given that the toric code uses $2L^2$ qubits to encode $2$ qubits and has distance $L$, it follows that it's a $[[2L^2,2,L]]$ stabilizer code.

Correcting errors

We've discussed error detection for the toric code, and now we'll briefly discuss how to correct errors. The toric code is a CSS code, so $X$ errors and $Z$ errors can be detected and corrected independently. Keeping our focus on $Z$ stabilizer generators, which detect $X$ errors, let us consider how a chain of $X$ errors can be corrected. ($Z$ errors are corrected in a symmetric way.)

If a syndrome different from the $(+1,\ldots,+1)$ syndrome appears when the $Z$ stabilizer generators are measured, the $-1$ outcomes reveal the endpoints of one or more chains of $X$ errors. We can attempt to correct these errors by pairing together the $-1$ outcomes and forming a chain of $X$ corrections between them. When doing this, it makes sense to choose shortest paths along which the corrections take place.

For instance, consider the following diagram, which depicts a syndrome with two $-1$ outcomes, indicated by gray tiles, caused by a chain of $X$ errors illustrated by the magenta line and circles. As we have already remarked, the chain itself is not revealed by the syndrome; only the endpoints are visible.

Correcting for X errors with a shortest path

To attempt to correct this chain of errors, a shortest path between the $-1$ measurement outcomes is selected and $X$ gates are applied as corrections to the qubits along this path (indicated in yellow in the figure). While the corrections may not match up with the actual chain of errors, the errors and corrections together form a closed loop of $X$ operations that is contained in the stabilizer of the code. The correction is therefore successful in this situation, as the combined effect of the errors and corrections is to do nothing to an encoded state.

This strategy won't always be successful. For example, a different explanation for the same syndrome as in the previous figure is shown in the following figure.

Completing a logical error with corrections

This time, the same chain of corrections as before fails to correct for this chain of errors, because the combined effect of the errors and corrections is that we obtain a closed loop of $X$ operations that wraps around the torus, and therefore has a nontrivial effect on the code space. So, there's no guarantee that the strategy just described, of choosing a shortest path of $X$ corrections between two $-1$ syndrome measurement outcomes, will properly correct the error that caused this syndrome.

Perhaps more likely, depending on the noise model, is that a syndrome with more than two $-1$ entries is measured, like the following figure suggests.

Multiple correction chains

In such a case, there are different known correction strategies. One natural strategy is to attempt to pair up the $-1$ measurement outcomes and perform corrections along shortest paths that connect the pairs, as is indicated in the figure in yellow. In particular, a minimum-weight perfect matching between the $-1$ measurement outcomes can be computed, and then the pairs are connected by shortest paths of $X$ corrections. The computation of a minimum-weight perfect matching can be done efficiently with a classical algorithm known as the blossom algorithm, which was discovered by Edmonds in the 1960s.
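
Here's a minimal, brute-force sketch of this pairing idea (not the blossom algorithm, and not a production decoder): given the positions of the $-1$ tiles on an $L \times L$ periodic lattice, it tries every way of pairing them up and returns a pairing minimizing the total length of the connecting paths, measured with the wrap-around Manhattan distance.

```python
L = 9   # lattice size

def torus_distance(a, b):
    # Shortest Manhattan distance between two tiles on an L x L periodic lattice.
    dr = abs(a[0] - b[0])
    dc = abs(a[1] - b[1])
    return min(dr, L - dr) + min(dc, L - dc)

def min_weight_pairing(defects):
    # Brute-force minimum-weight perfect matching of the -1 outcomes. Fine for a
    # handful of defects; real decoders use the blossom algorithm instead. (For
    # the toric code, the number of -1 outcomes of each type is always even.)
    defects = list(defects)
    if not defects:
        return []
    first, rest = defects[0], defects[1:]
    best_pairs, best_cost = None, None
    for partner in rest:
        remaining = [d for d in rest if d != partner]
        sub_pairs = min_weight_pairing(remaining)
        cost = torus_distance(first, partner) \
             + sum(torus_distance(x, y) for x, y in sub_pairs)
        if best_cost is None or cost < best_cost:
            best_pairs, best_cost = [(first, partner)] + sub_pairs, cost
    return best_pairs

# Example: four tiles reported a -1 outcome.
defects = [(0, 1), (2, 2), (3, 8), (5, 0)]
print(min_weight_pairing(defects))
# Chains of X corrections would then be applied along shortest paths (on the
# torus) between the two tiles in each matched pair.
```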

This approach is generally not optimal for the most typically studied noise models, but based on numerical simulations it works very well in practice below a noise rate of approximately 10%, assuming independent Pauli errors where $X$, $Y$, and $Z$ are equally likely. Increasing $L$ doesn't have a significant effect on the cross-over point at which the code starts to help, but it does lead to a faster decrease in the probability of a logical error as the error rate decreases past the cross-over point.

Other code families

It's been over 25 years since the toric code was discovered, and there's been a great deal of research into quantum error correcting codes since then, including the discovery of other topological quantum codes inspired by the toric code, as well as codes based on different ideas. A comprehensive list of known quantum error correcting code constructions would be impossible to include here — but we will scratch the surface just a bit to briefly examine a couple of prominent examples.

Surface codes

As it turns out, it isn't actually necessary that the toric code has periodic boundaries. That is to say, it's possible to cut out just a portion of the toric code and lay it flat on a two-dimensional surface, rather than a torus, to obtain a quantum error correcting code — provided that the stabilizer generators on the edges are properly defined. What we obtain is called a surface code.

For example, here's a diagram of a surface code, where the lattice is cut with so-called rough edges at the top and bottom and smooth edges at the sides. The edge cases for the stabilizer generators are defined in the natural way, which is that Pauli operations on "missing" qubits are simply omitted.

Diagram of a surface code

Surface codes of this form encode a single qubit, rather than two like the toric code. The stabilizer generators happen to form a minimal generating set in this case, without the need to remove one of each type as for the toric code — but despite these differences, the important characteristics of the toric code are inherited. In particular, nontrivial undetected errors for this code correspond to chains of errors that either stretch from the left edge to the right edge (for chains of $X$ errors) or from top to bottom (for chains of $Z$ errors).

It's also possible to cut the edges for a surface code diagonally to obtain what are sometimes called rotated surface codes, which are so-named not because the codes are rotated in a meaningful sense, but because the diagrams are rotated (by 45 degrees). For example, here is a diagram of a rotated surface code having distance 5.

Diagram of a rotated surface code

For this type of diagram, black tiles (including the rounded ones on the edges) indicate $X$ stabilizer generators, where $X$ operations are applied to the qubits at the (two or four) vertices of each tile, while white tiles represent $Z$ stabilizer generators. Rotated surface codes have similar properties to (non-rotated) surface codes but are more economical in terms of how many qubits are used.

Color codes

Color codes are another interesting class of codes, which also fall into the general category of topological quantum codes. They will only briefly be described here.

One way to think about color codes is to view them as a geometric generalization of the 7-qubit Steane code. With this in mind, let's consider the 7-qubit Steane code again, and suppose that the 7 qubits are named and ordered using Qiskit's numbering convention as $(\mathsf{Q}_6,\mathsf{Q}_5,\mathsf{Q}_4,\mathsf{Q}_3,\mathsf{Q}_2,\mathsf{Q}_1,\mathsf{Q}_0)$. Recall that the stabilizer generators for this code are as follows.

\begin{array}{ccccccc} Z & Z & Z & Z & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] Z & Z & \mathbb{I} & \mathbb{I} & Z & Z & \mathbb{I} \\[1mm] Z & \mathbb{I} & Z & \mathbb{I} & Z & \mathbb{I} & Z \\[1mm] X & X & X & X & \mathbb{I} & \mathbb{I} & \mathbb{I} \\[1mm] X & X & \mathbb{I} & \mathbb{I} & X & X & \mathbb{I} \\[1mm] X & \mathbb{I} & X & \mathbb{I} & X & \mathbb{I} & X \end{array}

If we associate these 7 qubits with the vertices of the following graph, we find that the stabilizer generators match up precisely with the faces formed by the edges of the graph.

Diagram illustrating geometric locality of the 7-qubit Steane code

That is, for each face, there's both a $Z$ stabilizer generator and an $X$ stabilizer generator that act nontrivially on those qubits found at the vertices of that face. The 7-qubit Steane code therefore possesses geometric locality, so in principle it's not necessary to move qubits over large distances to measure the stabilizer generators. The fact that the $Z$ and $X$ stabilizer generators always act nontrivially on exactly the same sets of qubits is also nice for reasons connected with fault-tolerant quantum computation, which is the topic for the next lesson.

Color codes are quantum error correcting codes (CSS codes, to be more precise) that generalize this basic pattern, except that the underlying graphs may be different. For example, here's a graph with 19 vertices that works. It defines a code that encodes one qubit into 19 qubits and has distance 5 (i.e., it's a $[[19,1,5]]$ stabilizer code).

Diagram of a color code

This can be done with many other graphs, including families of graphs that grow in size and have interesting structures.

Color codes are so-named because one of the required conditions on the graphs that define them is that the faces can be 3-colored, meaning that the faces can each be assigned one of three colors in such a way that no two faces of the same color share an edge (as we have in the previous diagram). The colors don't actually matter for the definition of the code itself — there are always $Z$ and $X$ stabilizer generators for each face, regardless of its color — but the colors are important for analyzing how the codes work.

Other codes

Quantum error correction is an active and rapidly advancing area of research. Those interested in exploring deeper may wish to consult the Error Correction Zoo, which lists numerous examples and categorizations of quantum error correcting codes.

Example: The gross code. The gross code is a recently discovered $[[144,12,12]]$ stabilizer code. It is similar to the toric code, except each stabilizer generator acts nontrivially on two additional qubits, slightly further away from the tile or vertex for that generator (so each stabilizer generator has weight 6). The advantage of this code is that it can encode 12 qubits, compared with just 2 for the toric code.
