Classical error correction is ubiquitous in modern computing. The basic idea is to pad out a message we want to send with some extra redundant data, which gives us enough information to recover the original message as long as no (catastrophically bad) errors occurred. In the 1990s, people started to notice that many of the same concepts could be applied to quantum data. Namely, by encoding some qubits into a higher-dimensional system and making careful choices of measurements, we can often detect whether errors occurred and even correct them (again, if they are not too bad). One of the key ideas behind quantum error correction is identifying analogous notions between classical linear error correction and stabiliser theory. We have already seen in previous chapters that there are several deep connections between stabiliser (i.e. Clifford) states and $\mathbb{F}_2$-linear structures, and it just so happens that $\mathbb{F}_2$-linear structures are the bread and butter of classical error correction. In this chapter, we will start to cash out these connections and show how quantum error correction can be done in a way that is, in many ways, similar to its classical counterpart.

For example, if I want to send you three bits $x_1 x_2 x_3$, and I actually send you six bits $x_1 x_2 x_3 (x_1 \oplus x_2)(x_1 \oplus x_3)(x_2 \oplus x_3)$, then I can always recover the original message even if one of these six bits gets flipped. In this simple example, we encode our message (an element of $\mathbb{F}_2^3$) into a 3D subspace of $\mathbb{F}_2^6$ called the codespace. If a single error occurs in such an encoded bit string, it leaves the codespace, so we can detect it just by checking if the last three bits are indeed the correct parities of the first three. Furthermore, there is at most 1 element in the codespace that is within 1 bit flip of any string of six bits, so we can actually correct a single error. Hence, a major aspect of designing good error correcting codes is finding particular subspaces of $\mathbb{F}_2^n$ that have nice properties like this, as we'll discuss in Section 11.1.

Quantum error correcting codes can be defined as subspaces of $(\mathbb{C}^2)^{\otimes n}$, the space of $n$ qubits, which have analogous nice properties. Unlike the classical case, these are going to be huge $\mathbb{C}$-linear subspaces, namely $2^k$-dimensional subspaces of the $2^n$-dimensional space $(\mathbb{C}^2)^{\otimes n}$. As a result, it is not practical to work with these spaces explicitly (e.g. by enumerating a spanning set of vectors). However, as we have already seen in Chapter 6, stabiliser theory gives us a powerful tool for efficiently representing complicated subspaces of $(\mathbb{C}^2)^{\otimes n}$. Thanks to the Fundamental Theorem of Stabiliser Theory, we can fix a $2^k$-dimensional stabiliser subspace of $(\mathbb{C}^2)^{\otimes n}$ by giving $n-k$ independent generators of the stabiliser group.
Let's look again at the codespace from the introduction:
$$\mathcal{C} := \{\, x_1 x_2 x_3 \,(x_1 \oplus x_2)(x_1 \oplus x_3)(x_2 \oplus x_3) \mid x_i \in \mathbb{F}_2 \,\} \subseteq \mathbb{F}_2^6 \tag{11.1}$$
We claimed before that $\mathcal{C}$ can correct a single error. That is, if I start with a vector in $\mathcal{C}$ and flip 1 bit, this vector will no longer be in $\mathcal{C}$, and furthermore, there is a unique vector in $\mathcal{C}$ that is one bit-flip away. For example, if I receive the string $v = 000110$, I can tell there is an error somewhere, since $v_4 \neq v_1 \oplus v_2$ and $v_5 \neq v_1 \oplus v_3$, but $v_6 = v_2 \oplus v_3$. It could be that either my first 3 data bits are wrong, or it could be that the parity bits are wrong. However, there is only one way to satisfy all the parities by flipping a single bit, namely setting $v_1 := 1$ to obtain the codeword $100110$. Similarly, if I look at a string like $100111$, the only possibility for a single-bit error is that the final parity bit is wrong. Since $\mathcal{C}$ is a linear subspace, it turns out we can succinctly capture the property of being able to detect any of some family of errors (e.g. single bit-flips) and to correct those errors in a single concept called code distance.
Definition 11.1.1. For a linear subspace $\mathcal{C} \subseteq \mathbb{F}_2^n$, the code distance $d$ of $\mathcal{C}$ is the smallest non-zero Hamming weight of a vector in $\mathcal{C}$.
Suppose we could flip a single bit of a vector $v \in \mathcal{C}$ and obtain another vector $w \in \mathcal{C}$. Then, their sum is also in $\mathcal{C}$, and it would just be a unit vector consisting of a single 1 in the location of the flipped bit, e.g. $v \oplus w = 010000$ if the second bit was the one flipped.
The same applies if we flip $k$ bits: the sum $v \oplus w$ will have 1's in precisely the locations of the flipped bits. So, saying that flipping up to $k$ bits always leaves $\mathcal{C}$ is the same thing as saying there is no non-zero vector in $\mathcal{C}$ with Hamming weight $\leq k$. In other words, a space with code distance $d$ can detect any number of errors strictly less than $d$. It is also the case that being able to correct a given number of errors can be stated in terms of the code distance. We noted before that the code in (11.1) has the property that, if we take any vector in $\mathcal{C}$ and flip a single bit, there is no other vector in $\mathcal{C}$ that is a single bit-flip away. In other words, there are no two vectors in $\mathcal{C}$ that are two bit-flips apart, or equivalently, $\mathcal{C}$ has a code distance of (at least) 3. In fact, the code distance is exactly 3, which we can see by enumerating all 8 vectors in this 3D subspace:
$$000000 \quad 100110 \quad 010101 \quad 001011 \quad 110011 \quad 101101 \quad 011110 \quad 111000$$
and noting that all the non-zero vectors have Hamming weight 3 or more. This is a general feature: any code with distance $d$ can always correct an error consisting of at most $\lfloor (d-1)/2 \rfloor$ bit flips. To move toward the quantum analogue of classical linear codes, we can proceed by thinking of stabiliser measurements as analogous to classical parity checks. As we already mentioned, for classical codes, we can very quickly check whether $v \in \mathcal{C}$ by taking XORs of certain bits. For the code in (11.1), $v$ is in $\mathcal{C}$ precisely when the following linear equations are satisfied:
$$v_1 \oplus v_2 \oplus v_4 = 0 \qquad v_1 \oplus v_3 \oplus v_5 = 0 \qquad v_2 \oplus v_3 \oplus v_6 = 0$$
As we saw in Chapter 4, giving a system of homogeneous $\mathbb{F}_2$-linear equations fixing a linear subspace $\mathcal{C}$ is essentially the same as giving a spanning set for the orthogonal subspace $\mathcal{C}^\perp$. In this case, this is:
$$\mathcal{C}^\perp = \mathrm{span}\{\, 110100, \ 101010, \ 011001 \,\}$$
To check $v \in \mathcal{C}$, it suffices to check $v \cdot w = 0$ for each of the basis vectors $w$ of $\mathcal{C}^\perp$. If this holds, then we can be sure no error of weight less than 3 has occurred. If $v \cdot w = 1$ for some check vector $w$, this is called a syndrome, because it indicates that there is an error somewhere in $v$.
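To make this concrete, here is a minimal Python sketch of the [6,3] code above. It assumes the particular parity layout we chose in (11.1); the function names are ours, not anything standard.

```python
import numpy as np

# Parity-check matrix for the [6,3] code of (11.1); its rows span C-perp.
H = np.array([[1, 1, 0, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [0, 1, 1, 0, 0, 1]])

def encode(msg):
    """Append the three parity bits to a 3-bit message."""
    x1, x2, x3 = msg
    return np.array([x1, x2, x3, x1 ^ x2, x1 ^ x3, x2 ^ x3])

def syndrome(v):
    """All-zero iff v is in the codespace C."""
    return H @ v % 2

def correct_single_error(v):
    """Try flipping each bit until the syndrome vanishes."""
    if not syndrome(v).any():
        return v
    for i in range(6):
        w = v.copy(); w[i] ^= 1
        if not syndrome(w).any():
            return w
    raise ValueError("more than one bit flip: uncorrectable")

v = encode([1, 0, 0])           # 100110
v[0] ^= 1                       # flip the first bit: 000110
print(syndrome(v))              # [1 1 0]: the first two checks fail
print(correct_single_error(v))  # recovers 100110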
In this section, we'll expand on the analogy between parity checks and stabiliser measurements, and show how to define the notion of code distance for a stabiliser subspace. Fix a stabiliser group $\mathcal{S} = \langle S_1, \ldots, S_{n-k} \rangle$ with associated stabiliser subspace $\mathcal{C} \subseteq (\mathbb{C}^2)^{\otimes n}$. Since we can use stabiliser subspaces to correct errors, we will henceforth use the terms stabiliser subspace and stabiliser code interchangeably. This is analogous to the classical case, where we referred to subspaces of $\mathbb{F}_2^n$ as codes. Thanks to the Fundamental Theorem of Stabiliser Theory from Chapter 6, we know that $\mathcal{C}$ is $2^k$-dimensional. Hence, we can think of the stabiliser subspace as encoding $k$ logical qubits in $n$ physical qubits. This gives us quantum analogues to the parameters $n$ and $k$ from classical codes. To build up to the notion of code distance, let's have a look at how we can use $\mathcal{S}$ to detect and correct errors. We can treat the generators of $\mathcal{S}$ as "quantum parity checks" in a very specific way. If we measure an arbitrary state $|\psi\rangle$ with one of the stabiliser generators $S_j$, we can compute the Born rule probability of outcome 0 as:
$$P(0) = \langle \psi | \tfrac{1}{2}(I + S_j) | \psi \rangle$$
If we restrict to the case where $|\psi\rangle \in \mathcal{C}$, then by Proposition 6.2.2 we know $S_j |\psi\rangle = |\psi\rangle$. Hence:
$$P(0) = \tfrac{1}{2}\big( \langle \psi | \psi \rangle + \langle \psi | S_j | \psi \rangle \big) = 1$$
So, we will always get outcome 0 if no error occurred on $|\psi\rangle$. From this, we can also conclude that we'll never get outcome 1:
$$P(1) = \langle \psi | \tfrac{1}{2}(I - S_j) | \psi \rangle = \tfrac{1}{2}\big( \langle \psi | \psi \rangle - \langle \psi | S_j | \psi \rangle \big) = 0$$
Now, suppose an error does occur on $|\psi\rangle$. For simplicity, we'll initially assume that this error takes the form of a self-adjoint Pauli string $E$. In Section 11.2.2, we'll show that arbitrary errors can be reduced to this case. Namely, if we can detect/correct Pauli errors, then we can detect/correct arbitrary errors. There are two possibilities: either the error commutes with $S_j$ or it anti-commutes with $S_j$. These two possibilities translate into the following commutation rule with respect to the Pauli box for $S_j$: $S_j E = \pm E S_j$, with $+$ when they commute and $-$ when they anti-commute.
As a consequence of Exercise 11.1, if a commuting error happens to $|\psi\rangle$, the outcome of measuring $S_j$ is unaffected:
$$\tfrac{1}{2}(I + S_j)\, E |\psi\rangle = E \, \tfrac{1}{2}(I + S_j) |\psi\rangle = E |\psi\rangle$$
So, $P(0) = 1$. Hence, we'll again get outcome 0 with certainty. However, if $E$ anti-commutes with $S_j$, it kicks a $-1$ up into the measurement outcome, so the probabilities flip:
$$\tfrac{1}{2}(I + S_j)\, E |\psi\rangle = E \, \tfrac{1}{2}(I - S_j) |\psi\rangle = 0 \qquad\qquad \tfrac{1}{2}(I - S_j)\, E |\psi\rangle = E \, \tfrac{1}{2}(I + S_j) |\psi\rangle = E |\psi\rangle$$
Hence, we'll get outcome 1 with certainty. We can measure all $n-k$ stabilisers generating $\mathcal{S}$. Since all of the stabilisers commute, we can either think of doing each of these measurements in sequence (where the order doesn't matter), or equivalently, doing one big measurement whose outcome is given by a bitstring $\vec{s} = (s_1, \ldots, s_{n-k})$, where $s_j$ is the outcome of measuring the generator $S_j$.
If the measured state is of the form $E|\psi\rangle$ for some state $|\psi\rangle$ in the stabiliser subspace and some (possibly trivial) Pauli error $E$, this will yield some particular bitstring of outcomes $\vec{s}$ with certainty. If $E = I$, the outcome will always be $\vec{s} = \vec{0}$. Hence, each time $s_j = 1$, this indicates that (1) some non-trivial error has occurred, and (2) that error anti-commutes with the $j$-th stabiliser generator $S_j$. By analogy to the classical case, this vector of measurement outcomes is called the syndrome. If we are lucky, this syndrome will provide us with enough information to send $E|\psi\rangle$ back to the error-free state $|\psi\rangle$. For example, if it uniquely fixes the Pauli string $E$, that is definitely good enough, because we can then just apply $E$ again to get $E E |\psi\rangle = |\psi\rangle$. However, we don't even need to hit $E$ on-the-nose, because if we correct with some $E'$ that is the product of $E$ and some stabiliser $S \in \mathcal{S}$, we get:
$$E' E |\psi\rangle = E S E |\psi\rangle = \pm E E S |\psi\rangle = \pm |\psi\rangle$$
So, we again recover the state $|\psi\rangle$, possibly up to an irrelevant global phase. Let's see how this works by means of a simple example. Consider the following stabiliser group on three qubits:
$$\mathcal{S} = \langle\, Z \otimes Z \otimes I, \ I \otimes Z \otimes Z \,\rangle$$
$\mathcal{S}$ has two independent stabilisers, so $k = 3 - 2 = 1$. Hence, this code, called the GHZ code, encodes one qubit into three. Fixing a basis for the stabiliser subspace $\mathcal{C}$, we have
$$\mathcal{C} = \mathrm{span}\{\, |000\rangle, \, |111\rangle \,\}$$
For the following discussion, it will be convenient to write $\mathcal{C}$ as the image of a map $E$ from one qubit into three qubits, called the encoder of $\mathcal{C}$. We will say more about encoders in the following sections, including how to build them from stabiliser codes. But for now, note that:
$$E := |000\rangle\langle 0| + |111\rangle\langle 1| \tag{11.2}$$
It thus follows that:
$$E(\alpha|0\rangle + \beta|1\rangle) = \alpha|000\rangle + \beta|111\rangle$$
Hence, we can write a generic state $|\psi\rangle \in \mathcal{C}$ as:
$$|\psi\rangle = \alpha|000\rangle + \beta|111\rangle$$
As before, we can look at the Born rule probabilities associated with measuring the stabiliser generators $Z \otimes Z \otimes I$ and $I \otimes Z \otimes Z$. As was shown in Exercise 6.7, all-$Z$ and all-$X$ Pauli projectors take a particularly simple form as ZX-diagrams. This generalises to Pauli strings formed just from $Z$ and $I$ (or just $X$ and $I$) by connecting the projector to just the subset of qubits where a $Z$ (or $X$) appears. For example, the 0-outcome projector for $Z \otimes Z \otimes I$ is $\tfrac{1}{2}(I + Z \otimes Z \otimes I)$, drawn as a single X-spider connected to Z-spiders on the first two qubits.
Applying this projector to an encoded state $|\psi\rangle = \alpha|000\rangle + \beta|111\rangle$:
$$\tfrac{1}{2}(I + Z \otimes Z \otimes I)\,|\psi\rangle = |\psi\rangle$$
we see that, as expected:
$$P(0) = 1 \qquad P(1) = 0$$
By symmetry, we see the same probabilities for measuring $I \otimes Z \otimes Z$. This is consistent with what we saw before: measuring states in $\mathcal{C}$ with the generators of $\mathcal{S}$ will leave the states unchanged, and yield outcome 0 with probability 1. However, suppose we introduce an error to our encoded state, such as a 'bit flip', i.e. a Pauli $X$ applied to the first qubit. Since $X \otimes I \otimes I$ anti-commutes with $Z \otimes Z \otimes I$, we get:
$$\tfrac{1}{2}(I + Z \otimes Z \otimes I)(X \otimes I \otimes I)|\psi\rangle = (X \otimes I \otimes I)\,\tfrac{1}{2}(I - Z \otimes Z \otimes I)|\psi\rangle = 0$$
Hence the Born rule probabilities are flipped:
$$P(0) = 0 \qquad P(1) = 1$$
so we obtain 1 for our first syndrome bit. On the other hand, if we measure $I \otimes Z \otimes Z$ instead, this commutes with $X \otimes I \otimes I$, so:
$$\tfrac{1}{2}(I + I \otimes Z \otimes Z)(X \otimes I \otimes I)|\psi\rangle = (X \otimes I \otimes I)|\psi\rangle$$
so we obtain outcome 0 for the second syndrome bit. By symmetry, it is easy to check that each of the single bit-flip errors can be associated with a unique syndrome:
$$X \otimes I \otimes I \;\mapsto\; (1, 0) \qquad I \otimes X \otimes I \;\mapsto\; (1, 1) \qquad I \otimes I \otimes X \;\mapsto\; (0, 1)$$
Hence, if any single bit-flip error occurs, we always know how to fix it. We just need to measure the stabiliser generators and apply a Pauli $X$ on the correct qubit to cancel out the error. Unfortunately, the GHZ code is not good enough to correct arbitrary single-qubit errors. Notably, if a Pauli $Z$ error occurs on one of the qubits (which is sometimes called a 'phase-flip' error), it commutes with all of the generators of $\mathcal{S}$. Hence, the syndrome will always be $(0, 0)$, and we'll have no idea that the error even happened. In this sense, the GHZ code is not very good at detecting or correcting errors. By analogy to the classical case, this is a distance-one code, meaning that there are even single-qubit errors that can go undetected. In order to understand what this means (and ultimately to find codes which can detect and correct errors), we will now formalise the notion of code distance for quantum stabiliser codes.
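Before formalising this, we can mechanically reproduce the syndrome table above. The following sketch (ours) represents a Pauli string by a pair of bit vectors, so that checking commutation reduces to a symplectic inner product over $\mathbb{F}_2$:

```python
import numpy as np

# Represent an n-qubit Pauli string by a pair of bit vectors (x, z):
# x marks X-components, z marks Z-components (Y has both bits set).
# Two Paulis commute iff the symplectic form x1.z2 + z1.x2 = 0 (mod 2).
def commutes(p, q):
    (x1, z1), (x2, z2) = p, q
    return (np.dot(x1, z2) + np.dot(z1, x2)) % 2 == 0

def pauli(s):
    """Build (x, z) bit vectors from a string like 'XIZ'."""
    x = np.array([c in 'XY' for c in s], dtype=int)
    z = np.array([c in 'ZY' for c in s], dtype=int)
    return x, z

# GHZ-code stabiliser generators Z(x)Z(x)I and I(x)Z(x)Z
gens = [pauli('ZZI'), pauli('IZZ')]

# Each single-qubit X error gets a distinct syndrome...
for err in ['XII', 'IXI', 'IIX']:
    print(err, [int(not commutes(pauli(err), g)) for g in gens])
# XII [1, 0] / IXI [1, 1] / IIX [0, 1]

# ...but Z errors are invisible: they commute with both generators.
print('ZII', [int(not commutes(pauli('ZII'), g)) for g in gens])  # [0, 0]
```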
We saw in Section 11.1 that we could quantify the number of detectable errors using the classical code distance. We can do a similar thing for stabiliser codes, where the following notion plays the role of the Hamming weight.
Definition 11.2.1. The weight of a Pauli string $P$ is the number of non-identity Pauli operators that appear in $P$.
For example, the Pauli string $X \otimes I \otimes Z$ has weight 2 and the identity string always has weight 0.
Using this notion, we can quantify the number of detectable errors. Note that an error produces a non-zero syndrome if and only if it anti-commutes with at least one of the generators of the stabiliser group. So, to be undetectable, it must commute with everything in $\mathcal{S}$. Furthermore, we only care about errors that actually mess up the states in our stabiliser subspace, so we shouldn't count elements of the stabiliser group itself as errors.
Definition 11.2.2. For a stabiliser group $\mathcal{S}$, we define the distance $d$ of the associated stabiliser code as the minimum weight of a Pauli string $P \notin \mathcal{S}$ that commutes with every element in $\mathcal{S}$.
Note that, since $\mathcal{S}$ is a subgroup, $I \in \mathcal{S}$, so such a $P$ must have non-zero weight. Also, note that $P$ commutes with everything in $\mathcal{S}$ if and only if it commutes with each of the generators, so we can efficiently decide whether $P$ commutes with everything in $\mathcal{S}$. However, like in the classical case, we cannot in general compute the code distance efficiently, since we need to show that all Pauli strings $P$ with weight less than $d$ are either in $\mathcal{S}$ or anti-commute with something in $\mathcal{S}$. If a stabiliser code has distance $d$, it can detect any error $E$ with $\mathrm{wt}(E) < d$. In other words, it can detect up to $d - 1$ single-qubit errors. Similarly to the classical case, it can also correct up to $\lfloor (d-1)/2 \rfloor$ errors. More precisely, errors with weight less than $d/2$ are uniquely fixed by their syndrome, up to stabilisers.
Theorem 11.2.3. Suppose $\mathcal{S}$ defines a stabiliser code with distance $d$. Then if two errors $E, E'$ with $\mathrm{wt}(E), \mathrm{wt}(E') < d/2$ yield the same syndrome $\vec{s}$, then $E' = \pm E S$ for some $S \in \mathcal{S}$.
Proof. If $s_j = 0$, then $E$ and $E'$ both commute with the $j$-th stabiliser generator $S_j$, whereas if $s_j = 1$, then they both anti-commute with $S_j$. In either case, the product $E E'$ commutes with all of the generators of $\mathcal{S}$. By Exercise 11.2, we know $\mathrm{wt}(E E') \leq \mathrm{wt}(E) + \mathrm{wt}(E') < d$. Because $\mathcal{S}$ has distance $d$, $E E'$ equals some $\pm S \in \mathcal{S}$. Hence $E' = \pm E S$. □
So, as in the classical case, a quantum code can correct $t$ errors if $2t < d$. Putting the three parameters together, we will write $[[n, k, d]]$ to indicate a quantum error correcting code that encodes $k$ logical qubits in $n$ physical qubits, with code distance $d$.
Example 11.2.4. The following 7-qubit code is called the Steane code:
$$\begin{array}{l}
X \otimes I \otimes X \otimes I \otimes X \otimes I \otimes X \\
I \otimes X \otimes X \otimes I \otimes I \otimes X \otimes X \\
I \otimes I \otimes I \otimes X \otimes X \otimes X \otimes X \\
Z \otimes I \otimes Z \otimes I \otimes Z \otimes I \otimes Z \\
I \otimes Z \otimes Z \otimes I \otimes I \otimes Z \otimes Z \\
I \otimes I \otimes I \otimes Z \otimes Z \otimes Z \otimes Z
\end{array}$$
Since it consists of 7 physical qubits and 6 stabiliser generators, it encodes $7 - 6 = 1$ logical qubit. Any Pauli consisting of a $Z$ on one or two qubits anti-commutes with at least one of the $X$-type generators, whereas any Pauli consisting of an $X$ on one or two qubits anti-commutes with at least one of the $Z$-type generators. Any Pauli string $P$ of weight $\leq 2$ can be written as a product of an all-$X$ string of weight $\leq 2$ and an all-$Z$ string of weight $\leq 2$. Hence $P$ must anti-commute with some generator. On the other hand, the weight-3 string $X \otimes X \otimes X \otimes I \otimes I \otimes I \otimes I$ commutes with all of the generators above and can't be obtained as a product of them. Hence, this code has a distance of 3, i.e. it is a $[[7, 1, 3]]$ code.
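This distance claim is small enough to verify by brute force. The sketch below (our own check, assuming the generator layout in Example 11.2.4) confirms that every Pauli of weight 1 or 2 anti-commutes with some generator, while the weight-3 operator above commutes with all of them:

```python
import itertools
import numpy as np

def pauli(s):
    x = np.array([c in 'XY' for c in s], dtype=int)
    z = np.array([c in 'ZY' for c in s], dtype=int)
    return x, z

def anticommutes(p, q):
    (x1, z1), (x2, z2) = p, q
    return (x1 @ z2 + z1 @ x2) % 2 == 1

# Steane generators, as bit patterns of their supports
rows = ['1010101', '0110011', '0001111']
gens = [pauli(''.join('X' if b == '1' else 'I' for b in r)) for r in rows] + \
       [pauli(''.join('Z' if b == '1' else 'I' for b in r)) for r in rows]

# Every Pauli of weight 1 or 2 must anti-commute with some generator.
detected = True
for qubits in itertools.chain(itertools.combinations(range(7), 1),
                              itertools.combinations(range(7), 2)):
    for letters in itertools.product('XYZ', repeat=len(qubits)):
        s = ['I'] * 7
        for q, l in zip(qubits, letters):
            s[q] = l
        e = pauli(''.join(s))
        detected &= any(anticommutes(e, g) for g in gens)
print(detected)  # True: all errors of weight < 3 are detected

# A weight-3 operator that evades every check:
print(any(anticommutes(pauli('XXXIIII'), g) for g in gens))  # False
```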
If we don’t care about error correction and just want to be able to detect an arbitrary weight-1 error, the following code will work.
Example 11.2.5. The following is a $[[4, 2, 2]]$ error detecting code:
$$\mathcal{S} = \langle\, X \otimes X \otimes X \otimes X, \ Z \otimes Z \otimes Z \otimes Z \,\rangle$$
Clearly any weight-1 Pauli will anti-commute with one of the two stabilisers above, hence it detects a single error. Furthermore, since it has only 2 stabilisers for 4 physical qubits, it encodes 2 logical qubits.
In the last two examples, the codes were split into all-X and all-Z generators, which as we’ll see in Section 11.3 has some nice consequences which make them easier to work with. However, if we drop this restriction, we can find even smaller distance 3 codes.
In the next section, we will see how the code distance enables us to detect and correct not just Pauli errors, but a large class of errors.
Exercise 11.3. Consider the following encoder map, which is a “doubled up” version of the GHZ encoder from the previous section:
$$E : \quad |0\rangle \mapsto \tfrac{1}{\sqrt{8}}\big(|000\rangle + |111\rangle\big)^{\otimes 3} \qquad |1\rangle \mapsto \tfrac{1}{\sqrt{8}}\big(|000\rangle - |111\rangle\big)^{\otimes 3} \tag{11.3}$$
It embeds 1 logical qubit into 9 physical qubits, so its image should have 8 stabilisers. What are they? What is the code distance?
A natural question arises when one first encounters quantum error correcting codes: why the fixation on Pauli errors? In the classical case, it is quite natural to focus on bit-flips (and sometimes bit-loss) as the only basic kinds of errors we care about. Intuitively, these are the only kinds of errors that can occur for classical data. However, there are many ways for a qubit to experience an error, so why is it we can get away with just Pauli X, Y, and Z? Indeed, quantum theory says that the physical processes that cause quantum errors could act in infinitely many ways on our qubits, and even couple our qubits to external systems outside our control, introducing unwanted entanglement. To capture all such possibilities, a generic error can be modelled as a unitary like this:
$$U : (\mathbb{C}^2)^{\otimes n} \otimes \mathcal{E} \to (\mathbb{C}^2)^{\otimes n} \otimes \mathcal{E}$$
where the first $n$ qubits represent the system we are actually doing the computation on, and the extra system $\mathcal{E}$ represents some environment that those qubits might interact with when the error occurs. The key trick at this point is to realise that the Pauli matrices $\{I, X, Y, Z\}$ span the whole 4D space of $2 \times 2$ matrices.
Consequently, Pauli strings of length $n$ span the whole space of $2^n \times 2^n$ matrices. Using this fact, we can decompose $U$ as follows:
$$U = \sum_i P_i \otimes E_i$$
where each $P_i$ is a Pauli string and $E_i$ is some linear map on the rest of the space that can vary with $i$. We don't expect to be able to correct all errors of this form. For example, if $U$ simply swapped all our qubits out into the environment and replaced them with garbage, there is no way to recover. So, we should put some kind of reasonable restriction on $U$. First, let's split $U$ into two parts based on the weight of the Pauli strings:
$$U = \sum_{\mathrm{wt}(P_i) \leq t} P_i \otimes E_i \; + \sum_{\mathrm{wt}(P_i) > t} P_i \otimes E_i$$
For reasonably well-behaved error processes, the second part of the sum will get exponentially small as we increase $t$. So, in true physicist style, we will just ignore it! This gives us the following pretty good approximation for $U$:
$$U \approx \sum_{\mathrm{wt}(P_i) \leq t} P_i \otimes E_i \tag{11.4}$$
This is a non-trivial assumption, which comes from the fact that we assume the interference from the environment is relatively localised (cf. Remark 11.2.7). An important consequence is that as $t$ gets larger, the form (11.4) gives us a better approximation of an arbitrary error process.
Remark 11.2.7. We can motivate the form of a "reasonably well-behaved" error process (11.4) by considering only those that come from local interactions with the environment. Recall from Section 7.5 that unitary evolutions in quantum theory are generated by Hamiltonians, so we can write $U = e^{iH}$ for some Hamiltonian $H$. Many physical interactions are well-approximated by so-called $k$-local Hamiltonians, which are those that can be written as linear combinations of operators with weight at most $k$. Usually $k$ is small (e.g. $k = 2$), so terms of high weight in $U$ all come from higher-order terms in the Taylor expansion of $e^{iH}$ (i.e. terms with high powers of $H$ in equation (7.57) from Section 7.5). When the interaction is weak, these terms get exponentially close to zero as we increase the weight cutoff $t$.
Let's use this form of an error to talk about a single round of error detection or correction. We'll start by assuming that we can perform stabiliser measurements perfectly (i.e. without introducing more errors). This is of course not the case, as we'll see when we discuss fault-tolerant measurements in Section 11.4.2. Fix a stabiliser code of distance $d$ given by the stabiliser group $\mathcal{S} = \langle S_1, \ldots, S_{n-k} \rangle$. Suppose we start with a state $|\psi\rangle \in \mathcal{C}$ at time $t_0$, then measure the generators of $\mathcal{S}$ at time $t_1$. If we assume an arbitrary error of the form of (11.4) could happen between $t_0$ and $t_1$, the resulting state, which depends on the measurement outcomes $\vec{s}$, looks like this, up to normalisation:
$$\Big( \prod_j \tfrac{1}{2}\big(I + (-1)^{s_j} S_j\big) \otimes I_{\mathcal{E}} \Big)\, U \,\big( |\psi\rangle \otimes |e\rangle \big) \tag{11.5}$$
where $|e\rangle$ is the initial state of the environment. If we did not detect an error, this means we get outcome $\vec{s} = \vec{0}$. In that case, the resulting state is the following:
$$\Big( \prod_j \tfrac{1}{2}(I + S_j) \otimes I_{\mathcal{E}} \Big)\, U \,\big( |\psi\rangle \otimes |e\rangle \big) \tag{11.6}$$
From (11.4), we can expand $U$ as a sum over Pauli strings $P_i$ with $\mathrm{wt}(P_i) \leq t$:
$$\Big( \prod_j \tfrac{1}{2}(I + S_j) \otimes I_{\mathcal{E}} \Big) \sum_i \big( P_i |\psi\rangle \big) \otimes \big( E_i |e\rangle \big) \tag{11.7}$$
Then, since $\mathcal{S}$ has distance $d > t$, all $P_i$ are either elements of $\mathcal{S}$ or anti-commute with at least one generator $S_j$. However, terms in (11.7) corresponding to anti-commuting Pauli strings vanish. To see this, note that stabiliser generators all commute, so we can always move the projector corresponding to $S_j$ to the front of the list of projectors in (11.7). Then:
$$\tfrac{1}{2}(I + S_j)\, P_i |\psi\rangle = P_i \, \tfrac{1}{2}(I - S_j) |\psi\rangle = P_i \, \tfrac{1}{2}\big( |\psi\rangle - |\psi\rangle \big) = 0$$
where the second step is using the fact that $S_j |\psi\rangle = |\psi\rangle$ since $|\psi\rangle \in \mathcal{C}$. The only remaining terms are those where $P_i \in \mathcal{S}$. But then, $P_i |\psi\rangle = |\psi\rangle$. Combining this with the fact that $|\psi\rangle$ is also invariant under the projection on to $\mathcal{C}$, (11.6) reduces to:
$$|\psi\rangle \otimes \big( E' |e\rangle \big)$$
where $E'$ is some (irrelevant) process acting independently on the environment, without disturbing $|\psi\rangle$. Hence, even though the physical error might not be a Pauli string, if we perform a round of stabiliser measurements and detect no errors, we will nevertheless project back on to the error-free state $|\psi\rangle$. But what happens if we do detect errors? Then, we can attempt to decode the error syndrome by finding some Pauli string $P$ that could have produced that syndrome and undoing it. In fact, for reasons that will soon become clear, we often want to find such a $P$ with weight as small as possible.
Definition 11.2.8. For a stabiliser code defined by $\mathcal{S} = \langle S_1, \ldots, S_{n-k} \rangle$ and syndrome $\vec{s}$, a minimum weight decoding is a Pauli string $P$ of minimal weight that anti-commutes with $S_j$ if and only if $s_j = 1$. Diagrammatically, we have for all $j$:
$$S_j P = (-1)^{s_j} P S_j$$
Clearly if $\vec{s} = \vec{0}$, the minimum weight decoding is $P = I$. Otherwise, $P$ will always be a non-trivial Pauli string. If we post-compose the result of a round of stabiliser measurements (11.5) with $P$, we can push $P$ inside, which cancels out the $(-1)^{s_j}$ factors:

Hence, we obtain the form of (11.6) for a new error process $U' := (P \otimes I_{\mathcal{E}})\, U$. If $U$ is a sum over Paulis with weight $\leq t$, and $\mathrm{wt}(P) = t'$, then $U'$ is a sum over Paulis with weight at most $t + t'$. Hence, if $\mathcal{S}$ has distance $d > t + t'$, we can use the same argument as before to reduce the expression above to $|\psi\rangle \otimes (E'|e\rangle)$, so we have successfully corrected the error. Note that the weight of the terms in $U'$ depends on the weight of $P$. This is why we try to find a $P$ with weight as small as possible, to give the greatest chance of correcting the error. Naïvely, we can find $P$ by enumerating all of the Pauli strings of weight $1, 2, 3, \ldots$ until we find one with the correct syndrome. Clearly this will take exponential time, so it becomes infeasible for large codes. Minimum weight decoding for stabiliser codes is (at least) as hard as for classical codes, which is already known to be NP-hard for some families of error correcting codes. Hence, for a stabiliser code to be useful, it needs to not only have good parameters $[[n, k, d]]$, but also an efficient way to find minimum weight (or at least low weight) decodings.
An $[[n, k, d]]$ stabiliser code is defined by a stabiliser group $\mathcal{S}$ with $n - k$ generators and fixes a $2^k$-dimensional subspace $\mathcal{C} \subseteq (\mathbb{C}^2)^{\otimes n}$. Hence, we can think of this as $k$ logical qubits embedded in a space of $n$ physical qubits. $\mathcal{S}$ tells us where those qubits lie within the big space, but it does not tell us exactly how those qubits are embedded. For example, if we are encoding 2 qubits, which part of $\mathcal{C}$ corresponds to the first qubit and which to the second qubit? How do unitaries applied to the physical qubits map on to transformations of the logical qubits? If we only intend to use quantum error correction for quantum memory, the answers to these questions are not too important. However, as we will see in Section 11.4, they are of central importance when we start wanting to perform fault-tolerant computations on encoded qubits. For that reason, it is useful to define an explicit isometry $E : (\mathbb{C}^2)^{\otimes k} \to (\mathbb{C}^2)^{\otimes n}$ such that the image of $E$ is $\mathcal{C}$. This isometry is called an encoder associated with the code $\mathcal{C}$.
We already saw some explicit examples of encoder maps, built as ZX diagrams in Equations (11.2) and (11.3). On some occasions, we may actually want to implement the encoder physically, e.g. as a unitary circuit with ancillae. However, this need not be the case, and in fact most of the time, it suffices to treat this as a purely mathematical object, tracking how our logical qubits are embedded into the space of physical qubits. Most importantly, it tracks the relationship between logical maps performed on our logical qubits and the associated physical maps performed on the physical qubits.
Intuitively, a physical map that implements a logical map in the sense of Definition 11.2.9 "acts like" that map on the codespace of $\mathcal{C}$, which is isomorphic to the space of $k$ qubits. To formalise exactly what "acts like" means, we need to choose an isomorphism between this subspace and $(\mathbb{C}^2)^{\otimes k}$, which is exactly what $E$ does for us. This is a very important concept, since in order for quantum error correction to work, our quantum data needs to be continuously protected from errors. Hence, we cannot decode and re-encode qubits every time we want to apply a gate. As we will see in Section 11.4, finding a physical map satisfying equation (11.8) will help us to implement fault tolerant operations on encoded qubits. Given a generating set of stabilisers for $\mathcal{S}$, we can always derive an encoder by following the procedure we used to prove the FTST in Section 6.2.1. Namely, we can split the projector on to $\mathcal{C}$ by finding an isometry $E$ such that:
$$E E^\dagger = \prod_j \tfrac{1}{2}(I + S_j) \tag{11.9}$$
Furthermore, an encoder derived this way will always be a Clifford isometry, so it is easy to reason about the propagation of Pauli errors. Note that the stabiliser group only fixes the image of $E$ (or equivalently, the projector $E E^\dagger$). There are many different ways to split a projector, so following the procedure from the proof of the FTST gives us just one possible choice. In particular, for any unitary $V$ on $k$ qubits, the encoders $E$ and $E \circ V$ have the same image in $(\mathbb{C}^2)^{\otimes n}$, hence they correspond to the same stabiliser code $\mathcal{C}$. However, whenever $E$ is Clifford, we can perform a nice trick to describe $E$ fully in terms of Pauli strings: we bend the wires! If we bend the input wires around using cups, we obtain an $(n + k)$-qubit state $|E\rangle$:
Now, we know from the Fundamental Theorem of Stabiliser Theory that $n + k$ independent stabilisers will uniquely fix $|E\rangle$, and hence will uniquely fix $E$. We already have $n - k$ stabilisers given by the generators of $\mathcal{S}$, acting on the first $n$ qubits:
$$\big( S_j \otimes I^{\otimes k} \big) |E\rangle = |E\rangle \tag{11.10}$$
So, the question is: where can we find $2k$ more stabilisers? The first thing to note is, assuming $E$ is a Clifford isometry, we can always push arbitrary Pauli strings from its inputs to its outputs, thanks to Proposition 6.1.6. In particular, we can always push single-qubit Pauli $X$ and $Z$ gates through $E$. As a consequence, we can always find a Pauli string $\bar{X}_i$ that implements (in the sense of Definition 11.2.9) a Pauli $X$ gate applied to the $i$-th logical qubit, i.e.
$$\bar{X}_i \circ E = E \circ X_i \tag{11.11}$$
Similarly, for a $Z$ gate on the $i$-th qubit, there exists a different Pauli string $\bar{Z}_i$ such that:
$$\bar{Z}_i \circ E = E \circ Z_i \tag{11.12}$$
This gives us $2k$ Pauli strings $\bar{X}_1, \bar{Z}_1, \ldots, \bar{X}_k, \bar{Z}_k$, called the logical operators associated with $E$.
Exercise 11.5. Show that the logical operators are self-adjoint and commute with every element of the stabiliser group $\mathcal{S}$. Furthermore, show that the pair of logical operators $\bar{X}_i$ and $\bar{Z}_i$ anti-commute for all $i$, and all other pairs of logical operators commute.
Exercise 11.6. Let $\mathcal{S}$ be a stabiliser group and let $\bar{X}_i, \bar{Z}_i$ be the logical operators. Show that the group of Pauli strings that commute with all elements of $\mathcal{S}$ is generated by the stabilisers, the logical operators, and the phase $i I$. Hint: If an element commutes with all of the $\bar{X}_i$ and $\bar{Z}_i$, then it is an element of the maximal stabiliser group containing $\mathcal{S}$ and the logicals, and hence a product of these elements. If instead it does not commute with some logical operator, show that we can still write it as a product of the logicals and stabilisers.
By moving the $X_i$ gate to the right-hand side of (11.11) and bending the wires, we see that each logical operator $\bar{X}_i$ gives us a stabiliser for the Choi state $|E\rangle$:
$$\big( \bar{X}_i \otimes X_i \big) |E\rangle = |E\rangle \tag{11.13}$$
Similarly, each $\bar{Z}_i$ gives us a stabiliser for $|E\rangle$:
$$\big( \bar{Z}_i \otimes Z_i \big) |E\rangle = |E\rangle \tag{11.14}$$
Note that the operators of the form (11.13) and (11.14) all pairwise commute. This is because the $-1$ coming from anti-commuting an $\bar{X}_i$ and $\bar{Z}_i$ on the first $n$ qubits is always cancelled out by the $-1$ from anti-commuting $X_i$ and $Z_i$ on the last $k$ qubits (cf. Exercise 11.5). If we combine the $n - k$ stabilisers coming from $\mathcal{S}$ in equation (11.10) with the $2k$ stabilisers coming from the logical operators, we obtain $n + k$ stabilisers for $|E\rangle$, which is by construction an $(n + k)$-qubit state. Hence, $|E\rangle$ is totally fixed up to a scalar factor. Hence, if we perform the method from Section 6.2.1 for the full set of stabilisers for $|E\rangle$, then bend the wires, we'll get the encoder associated with a stabiliser group and a set of logical operators. In general, this will be a relatively deep ZX-diagram, with approximately one layer for each stabiliser and logical operator. We could then try and simplify things to get a more compact form, e.g. by reducing to GSLC or AP form, but in general this picture of the encoder might be a bit awkward to work with. However, as we'll see in Section 11.3, for a certain family of codes, called CSS codes, things get a lot simpler!
The encoder has a dual operation, which we call the ideal decoder, or simply the decoder. This is slightly more difficult to work with than the encoder, so whenever possible we will state things in terms of the encoder rather than the decoder. However, having a way to at least in principle decode physical qubits back into logical qubits while detecting/correcting errors is still a useful thing to have in our back pocket. The encoder is an isometry, so we know how to implement it straight away as a unitary circuit with ancillae: we can write $E = U \circ (I^{\otimes k} \otimes |0\rangle^{\otimes (n-k)})$ for some unitary $U$. In fact, this is already clear from the definition of the encoder in equation (11.9) in terms of the splitting of the projectors onto the stabiliser subspace. We can't deterministically implement the adjoint $E^\dagger$, because it would require performing the unitary $U^\dagger$ then post-selecting each of the ancilla qubits on to $\langle 0|$. However, we can measure those ancillae instead:
But what are these measurement outcomes $\vec{s}$? Recall from Exercise* 6.10 that conjugating the computational-basis measurements on the ancillae by the unitary $U$ amounts to measuring the stabiliser generators. This implies that for an encoded state $E|\psi\rangle$, performing $U^\dagger$ and measuring the ancillae will give us the same outcomes as measuring all of the stabilisers. Hence, we can use the measurement outcomes from the ancillae to do error detection or correction. If we get the outcome $\vec{s} = \vec{0}$, we can conclude that no error of weight less than $d$ has occurred. If we get any other syndrome, we can do least-distance decoding to find the smallest-weight physical error $P_{\vec{s}}$ with error syndrome $\vec{s}$, and then figure out the appropriate correction to do on the logical qubits we have left. Since $U$ is Clifford, we can do this efficiently.
Exercise 11.7. Fix a stabiliser code with generators $S_1, \ldots, S_{n-k}$ and distance $d$. For a syndrome $\vec{s}$, let $P_{\vec{s}}$ be a Pauli of minimal weight with that syndrome, and let $C_{\vec{s}}$ be the restriction to the first $k$ qubits of the Pauli string $U^\dagger P_{\vec{s}}\, U$. Show that for any Pauli string $P'$ with syndrome $\vec{s}$ and weight $w$ where $w + \mathrm{wt}(P_{\vec{s}}) < d$, we have that applying the correction $C_{\vec{s}}$ after the syndrome-$\vec{s}$ branch of the decoder undoes the effect of $P'$, up to a global phase.
Given this, we therefore define the decoder as a non-deterministic process consisting of a measurement yielding syndrome $\vec{s}$, followed by the associated Pauli correction $C_{\vec{s}}$ defined as in Exercise 11.7:
| (11.15) |
When $\vec{s} = \vec{0}$, we have $P_{\vec{0}} = I$, so $C_{\vec{0}} = I$. Hence this process acts like $E^\dagger$ in the error-free case, but it additionally also "swallows" any correctable errors when $\vec{s} \neq \vec{0}$.
Remark* 11.2.10. If we don't care about the outcome of the syndrome measurement and just want the error-corrected state out, we can represent the decoder as a single quantum channel $\mathcal{D}$ (cf. Section* 2.7.1) whose Kraus operators are the branches of (11.15), one for each syndrome $\vec{s}$. One can check this is trace-preserving, and that for any correctable Pauli error $P$ and encoded state $E \rho E^\dagger$, we have $\mathcal{D}(P E \rho E^\dagger P) = \rho$.
In this section, we’ll define a family of stabiliser codes called Calderbank-Shor-Steane codes, or CSS codes for short. As you may have already noticed from Examples 11.2.4 and 11.2.5, some stabiliser codes have generators that split nicely into an all-X part and an all-Z part. In a sense, such codes behave like two classical error correcting codes mushed together, where the parity checks of one code become the Z stabilisers (which detect bit errors) and those of the other code become the X stabilisers (which detect phase errors). That is pretty much all there is to defining a CSS code, once you additionally account for the fact that all of the stabilisers need to commute. It turns out there is a very natural way to impose this in terms of bitstrings.
Exercise 11.8. Suppose we define two Pauli strings from the bitstrings $\vec{a}, \vec{b} \in \mathbb{F}_2^n$ as follows:
$$X^{\vec{a}} := X^{a_1} \otimes \cdots \otimes X^{a_n} \qquad Z^{\vec{b}} := Z^{b_1} \otimes \cdots \otimes Z^{b_n}$$
Show that $X^{\vec{a}}$ and $Z^{\vec{b}}$ commute if and only if $\vec{a}$ and $\vec{b}$ are orthogonal, i.e. $\vec{a} \cdot \vec{b} = 0$ (mod 2).
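This commutation criterion is easy to spot-check numerically (a sanity check, not a proof of the exercise). All names below are ours:

```python
import numpy as np
from functools import reduce

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])

def x_string(a):
    return reduce(np.kron, (X if bit else I for bit in a))

def z_string(b):
    return reduce(np.kron, (Z if bit else I for bit in b))

def check(a, b):
    """True iff commutation agrees with the mod-2 orthogonality test."""
    A, B = x_string(a), z_string(b)
    return np.allclose(A @ B, B @ A) == (np.dot(a, b) % 2 == 0)

print(check([1, 1, 0, 0], [0, 1, 1, 0]))  # True (overlap 1 -> anti-commute)
print(check([1, 1, 0, 0], [1, 1, 0, 0]))  # True (overlap 2 -> commute)
```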
Hence, if we fix two classical codespaces, one for the X-stabilisers and one for the Z-stabilisers, all the stabilisers will commute precisely when those subspaces are mutually orthogonal. We say two subspaces $V, W \subseteq \mathbb{F}_2^n$ are orthogonal if $\vec{v} \cdot \vec{w} = 0$ for all $\vec{v} \in V$ and $\vec{w} \in W$, or equivalently $W \subseteq V^\perp$.
Definition 11.3.1. The CSS code generated by orthogonal subspaces $V, W \subseteq \mathbb{F}_2^n$ is a stabiliser code whose generators are of the following form:
$$\{\, X^{\vec{v}} \mid \vec{v} \in B_V \,\} \cup \{\, Z^{\vec{w}} \mid \vec{w} \in B_W \,\}$$
where $B_V$ is a basis spanning $V$ and $B_W$ a basis spanning $W$. A CSS code is called maximal if $W = V^\perp$ (or equivalently $V = W^\perp$).
That is, we let the basis vectors of $V$ define the X generators and we let the basis vectors of $W$ define the Z generators. Since addition in $\mathbb{F}_2$ is taken modulo 2, orthogonality guarantees that each X generator overlaps with each Z generator in an even number of places, which makes the group commutative. Using this fact, it is easy to verify the resulting group is a stabiliser group.
Example 11.3.2. Let $V$ be a 3D subspace of $\mathbb{F}_2^7$ spanned by:
$$1010101 \qquad 0110011 \qquad 0001111$$
This particular subspace is the dual of the classical Hamming code, whose code distance is 3, and it has the nice property that $V$ is orthogonal to itself. Hence, we can use $V$ to derive both the X and Z generators of a CSS code. In fact, we already saw this code: it is the Steane code from Example 11.2.4.
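Assuming the basis layout above, self-orthogonality is a one-line check:

```python
import numpy as np

# Basis for the 3D subspace V used above.
V = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# V is orthogonal to itself: every pairwise inner product is 0 mod 2,
# so the X generators built from V commute with the Z generators built
# from V, and CSS(V, V) is a valid stabiliser group (the Steane code).
print((V @ V.T % 2 == 0).all())  # True
```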
Recall from Section 4.3 that phase-free ZX-diagrams can always be put into Z-X or X-Z normal form, which correspond respectively to representations of an $\mathbb{F}_2$-linear subspace $V$ using a basis for $V$ or a basis for $V^\perp$:
If we combine this knowledge with these equations, which follow from the $\pi$-copy rules:
| (11.16) |
we can see immediately how to derive a generating set of stabilisers for a phase-free ZX-diagram. As we’ll see in this section, those stabilisers always generate a CSS code, and conversely any CSS code can be presented using a phase-free ZX-diagram.
Eq. (11.16), plus the two normal forms from Section 4.3.1, will give us everything we need to prove our main theorem.
Theorem 11.3.3. For any $\mathbb{F}_2$-linear subspace $V \subseteq \mathbb{F}_2^n$, a state is stabilised by the maximal CSS code generated by $(V, V^\perp)$ if and only if it is equivalent, up to a scalar factor, to a phase-free ZX diagram with an X-Z normal form given by $V^\perp$ (or equivalently with a Z-X normal form given by $V$).
Proof. Suppose $|\psi\rangle$ is described by a phase-free ZX-diagram. Then it can be translated into Z-X normal form, for some basis $\{\vec{v}_1, \ldots, \vec{v}_m\}$ of an $\mathbb{F}_2$-linear space $V$. Then, for each $\vec{v}_i$, we can apply Eq. (11.16) to introduce an X phase of $\pi$ on every wire adjacent to the Z spider labelled by $\vec{v}_i$ and commute it to the output using (sp'):
This shows that $|\psi\rangle$ is invariant under the action of the Pauli operator $X^{\vec{v}_i}$. Hence, $|\psi\rangle$ is the eigenstate of all of the independent X stabilisers of the maximal CSS code generated by $(V, V^\perp)$. Similarly, we can compute the X-Z normal form of $|\psi\rangle$ and show that it is the eigenstate of all independent Z stabilisers $Z^{\vec{w}}$ for $\vec{w}$ in a basis of $V^\perp$. This gives $n$ independent stabilisers for $|\psi\rangle$, hence it is uniquely fixed by the FTST Theorem 6.2.11. Conversely, any maximal CSS code fixes a state whose stabilisers are given by $V$ and $V^\perp$ as before, so it will be equal to a phase-free ZX diagram with Z-X normal form given by $V$, or equivalently, with X-Z normal form given by $V^\perp$. □
This proof gives us an evident way of translating the stabiliser generators of a maximal CSS code into a ZX diagram. In fact, it gives us two equivalent ways: using the Z-X normal form, which gives us a generating set of X stabilisers, or the X-Z normal form, which gives us the Z stabilisers. Interestingly, we only ever need to represent one kind of stabiliser for a maximal CSS code diagrammatically, because $V$ is uniquely fixed by $V^\perp$ and vice-versa.
For a non-maximal CSS code, we should end up with an encoder map from $k$ logical qubits to $n$ physical qubits. Since we already know how to turn CSS stabilisers into phase-free diagrams, we can use the wire-bending trick from Section 11.2.3 to treat logical operators as stabilisers on a bigger $(n+k)$-qubit state and therefore add them to the diagram as well. As an example, let's return to the 7-qubit Steane code from Example 11.3.2. We will switch to a more compact notation for writing its stabilisers, where $X_i$ (resp. $Z_i$) corresponds to an $n$-qubit operator acting non-trivially on the $i$-th qubit with a Pauli $X$ (resp. $Z$):
$$X_1 X_3 X_5 X_7 \quad X_2 X_3 X_6 X_7 \quad X_4 X_5 X_6 X_7 \quad Z_1 Z_3 Z_5 Z_7 \quad Z_2 Z_3 Z_6 Z_7 \quad Z_4 Z_5 Z_6 Z_7$$
This CSS code is non-maximal, and encodes $k = 1$ logical qubit. Hence we should fix 2 additional logical operators, e.g. $\bar{X} = X_1 X_2 X_3$ and $\bar{Z} = Z_1 Z_2 Z_3$. As we noted before, we only need one kind of stabiliser to build the ZX diagram, so applying the recipe from the previous section to the X-stabilisers and X-logical operator, we obtain the following picture, where we label the logical qubit 0 and the physical qubits 1-7. We can then put the logical qubit on the left and rearrange some of the physical qubits to obtain the following:
Note that, if we follow this recipe, we will always get identity spiders on the inputs, which are redundant. If we simplify them away, we’ll always end up with something in generalised parity form (for the X form of the encoder) or the colour-reverse of generalised parity form (for the Z form of the encoder).
| (11.17) |
As a result, we can always write a CSS encoder map in generalised parity form using just its X-stabilisers and X-logical operators as follows:
Place an output X-spider on all outputs.
For each X-logical operator, add an input Z-spider connected to the output X-spiders of its support.
For each X-stabiliser generator, add an internal Z-spider connected to the output X-spiders of its support.
Equation (11.17) gives the X-form of the Steane code encoder. If we use the Z-stabilisers and logicals instead, we obtain the Z-form. In the case of the Steane code, this is the exact same picture, but with the colours reversed:
| (11.18) |
This will not always be the case, and it comes from the fact that the Steane code is self-dual. We’ll discuss self-dual codes more in Section 11.4.1.1.
Example 11.3.4. Recall the $[[4, 2, 2]]$ error detecting code from Example 11.2.5, with stabilisers $X \otimes X \otimes X \otimes X$ and $Z \otimes Z \otimes Z \otimes Z$. This has two stabilisers on 4 qubits, so it encodes 2 logical qubits. Hence, in order to fully specify the encoder map, we should fix two pairs of anti-commuting logical operators which commute with the stabilisers and each other. There are multiple solutions to this problem, so we will just choose one:
$$\bar{X}_1 = X_1 X_2 \qquad \bar{Z}_1 = Z_1 Z_3 \qquad \bar{X}_2 = X_1 X_3 \qquad \bar{Z}_2 = Z_1 Z_2$$
These all have weight 2, so they will have even overlap with both stabilisers (and hence commute with them). It is also easy to check that $\bar{X}_i$ anti-commutes with $\bar{Z}_i$ and all other pairs of logicals commute. We can now use this data to construct the X-form and Z-form of the encoder for the code:
| (11.19) |
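As a quick sanity check, the commutation relations of the logical operators chosen above can be verified mechanically (our own sketch):

```python
import numpy as np

def pauli(s):
    x = np.array([c in 'XY' for c in s], dtype=int)
    z = np.array([c in 'ZY' for c in s], dtype=int)
    return x, z

def commutes(p, q):
    (x1, z1), (x2, z2) = p, q
    return (x1 @ z2 + z1 @ x2) % 2 == 0

stabs = [pauli('XXXX'), pauli('ZZZZ')]
logicals = {'X1': pauli('XXII'), 'Z1': pauli('ZIZI'),
            'X2': pauli('XIXI'), 'Z2': pauli('ZZII')}

# All logicals commute with both stabilisers...
print(all(commutes(l, s) for l in logicals.values() for s in stabs))  # True

# ...X_i anti-commutes with Z_i, and all other pairs commute.
for a in logicals:
    for b in logicals:
        if a < b:
            anti = (a[1] == b[1]) and (a[0] != b[0])  # same index, X vs Z
            assert commutes(logicals[a], logicals[b]) != anti
print("commutation relations as required")
```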
So, we have seen how to turn any CSS code described by stabilisers and logical operators into a phase-free ZX-diagram of its encoder. Conversely, we can treat any isometry described by a phase-free ZX diagram as a CSS code just by computing its generalised parity form. Recall from Chapter 4 that the generalised parity form looks like this:
Furthermore, if the diagram is an isometry, we can conclude from Proposition 4.2.8 that it simplifies further, giving us:
From this picture, we can immediately read off the X-stabilisers and X-logical operators associated with the encoder. If we do everything colour-reversed, we'll get a different normal form for the encoder:
from which we can read off the Z-stabilisers and Z-logical operators. In summary, we have given an efficient procedure for writing a phase-free isometry from a CSS code with logical operators and for turning any phase-free isometry into its associated CSS code and logical operators.
One particular family of CSS codes has been so well-studied in recent years that it deserves some special attention. The surface code is a family of CSS codes that encode a single logical qubit in a square (or rectangular) lattice of physical qubits. Larger lattices define codes with a higher code distance. While they aren't the most memory-efficient codes we know about, surface codes have a number of nice properties coming from this regular geometric structure. For one thing, they can be implemented using only nearest-neighbour gates on a 2D architecture, so there is never any need to perform swap gates or otherwise break planarity. A second nice feature is that even for surface codes with very high code distances, we have efficient algorithms for decoding error syndromes, as we'll see in Section 11.3.4.1 below. Finally, they serve as a useful baseline for fault-tolerant computation, as we know (at least in principle) how to implement all the ingredients needed for universal quantum computation on surface-code-encoded qubits in a fault-tolerant manner. In fact, fault-tolerant computation in the surface code has one of the best thresholds we know about. That is, it can tolerate quite a bit of noise at the hardware level while still being able to suppress errors. More on that later.

In this section, we will use the slightly more compact, "rotated" version of the surface code. Stabilisers are defined as follows. We start with a rectangular lattice with $d \times d$ vertices, corresponding to $n = d^2$ qubits. We aim to encode a single logical qubit, so we need to fix $n - 1$ stabilisers. To do so, we first colour in the lattice in a chequerboard pattern, where each red (darker) area corresponds to an $X$ stabiliser on all of its adjacent qubits, whereas each green (lighter) area corresponds to a $Z$ stabiliser. Colouring the inside of the lattice in this way gives $(d-1)^2$ stabilisers. To get all $n - 1$ stabilisers, we still need to fix $2(d-1)$ additional stabilisers. To get these, we introduce weight-2 stabilisers along the boundaries at every other edge, which we depict as "blobs". We colour these blobs in the opposite colour to the nearest tile, obtaining the following picture:
| (11.20) |
There are $4(d-1)$ edges around the whole boundary, so adding a "blob" to every other edge gives us the $2(d-1)$ more stabilisers required. By design, all overlapping stabilisers of different types overlap on two qubits, so they commute. Since we alternate edges, one pair of opposite boundaries (in this case the left and right) will end up with X-blobs and one pair (top and bottom) with Z-blobs. In the literature, these are called X-boundaries and Z-boundaries, respectively. The logical $\bar{X}$ operator consists of a line of Pauli $X$ operators connecting the two X-boundaries, whereas the logical $\bar{Z}$ operator consists of a line of Pauli $Z$ operators connecting the two Z-boundaries:
Note that the specific choice of path between the boundaries is not important, and we are even allowed to cross tiles diagonally. However, it is important that the path touches each area of the opposite colour an even number of times. This ensures that the logical operator commutes with all of the stabilisers. In the example above, other paths between the same pair of boundaries work equally well, but not one which touches a green tile 3 times, since it would anti-commute with that tile's stabiliser. Using the stabilisers and the logical operators for the surface code, we can apply the recipe from the previous section to construct its encoder. In fact, we can represent the encoder using two equivalent ZX diagrams, one based on the X-stabilisers and one on the Z-stabilisers:
| (11.21) |
Note how the diagrams of the encoders have a direct visual relationship to the picture (11.20): to draw the X-stabilisers, we put an X spider on every vertex, place a Z spider in the centre of each red region in (11.20), and connect it to all of the adjacent vertices. Finally, we "embed" the logical X operator by placing a Z spider on the input and connecting it to each of the vertices where $\bar{X}$ has support. For the Z-stabilisers, we apply the same routine, reversing the roles of X and Z. In the surface code, we can topologically deform a logical operator by multiplying it by any stabiliser. We can perform the same calculation graphically using strong complementarity. As we saw in the proof of Lemma 4.3.4, we can treat spiders as bit-vectors, and by applying strong complementarity, we can "add" the neighbourhood of one spider to that of another, modulo 2. Applying this concept to the surface code, we obtain for example:
| (11.22) |
This just amounts to changing the basis for the linear space represented by this normal form, which has the same effect as changing the generators for the associated stabiliser group.
Aside from the fact that they can be implemented with nearest-neighbour operations in a 2D architecture, surface codes are also popular because error syndromes can be efficiently decoded by taking advantage of their geometric structure. Recall from Section 11.2.2 that the decoding problem for stabiliser codes refers to the problem of mapping the outcome of a syndrome measurement to a Pauli correction that is most likely to recover our error-free state. There are many proposals for fast decoders for the surface code. Here, we will sketch how one of the simplest ones, based on minimum weight perfect matching, works. First note that since surface codes are CSS codes, we can identify Z-errors using the outcomes of X stabiliser measurements, and vice-versa. Hence, we can treat these two types of errors independently. Suppose a certain configuration of Z errors occurs on our physical qubits. Much as we did in Section 11.2.2, we can figure out where the error syndrome will be by "pushing" the error through the Pauli projectors corresponding to X measurements, using the $\pi$-copy rule, and seeing which projectors end up with a $\pi$:
This is the same thing as figuring out which stabilisers anti-commute with the given error. If an even number of Z errors lie in the support of an X projector, the $\pi$ phases will cancel out in pairs, yielding a 0 measurement outcome. Hence, the syndrome only reveals the places where an odd number of errors overlap with a stabiliser. As a consequence, error syndromes will always appear as the endpoints of a "path of errors" through the lattice:
| (11.23) |
These paths will always either connect a pair of locations where the error syndrome is 1, as we see above, or they will connect a single syndrome-1 location to a boundary of the surface, as in this example:
Since Z boundaries don’t have any X stabiliser measurements, this also works. Hence, if we get just two syndrome-1 outcomes far from the boundary, the most likely place errors occurred were along the shortest path between those two spots (assuming as we have been that errors occur independently and with the same probability on every qubit, since a longer path must have involved more errors occurring). If we get just one syndrome-1 outcome, the most likely place errors occurred was along the shortest path between that syndrome and the boundary without non-commuting stabilisers. “Hang on!” you might say, “there can be multiple shortest paths between points!” And you’d be right. However, all of these paths are equivalent, up to stabilisers. For example, there are 3 shortest paths between the syndromes in (11.23). The 2 other ones can be obtained from the one shown by multiplying by Z stabilisers:
| (11.24) |
Here we started with the diagram (11.23), composed with one of the Z stabilisers that is not shown explicitly in this Z-X normal form. Since stabilisers don’t change the encoded state, correcting errors along any of these 3 paths (or actually any path between these endpoints) will have the same effect. The reason this works is, as we showed for the logical embedding at the end of the previous section, the stabilisers of the surface code relate topologically-equivalent paths through the surface, provided they have the same end points. Of course, this only tells us what to do if we see zero, one, or two syndrome-1 measurements. However, we can extend this to an algorithm that does a pretty good job decoding any syndrome. First, we make a graph whose nodes correspond to places where we observe a syndrome-1 outcome (and an extra dummy node for the boundary). We label the edges with a weight indicating the length of the shortest path between those locations, e.g.
| (11.25) |
A perfect matching is then a subset of the edges of this graph, where every node (except the "dummy" node for the boundary) is adjacent to exactly 1 edge. A minimum weight perfect matching (MWPM) is a perfect matching where the sum of the weights on the chosen edges is minimal. An example of a MWPM for the graph above is:
The simplest minimum weight perfect matching decoder therefore proceeds as follows. For the X stabiliser measurements, make a syndrome graph as in equation (11.25) and compute its MWPM. Then, correct Z errors along the paths indicated by the MWPM. After that, rinse and repeat for Z stabiliser measurements and X errors. It is worth noting that this algorithm does not always find the minimum weight error associated with a syndrome, but it often gets pretty close, especially in the case where there aren’t too many syndrome-1 outcomes. It is also very fast, particularly if we are allowed to just approximate the MWPM and/or compute in parallel. We will briefly discuss various implementations of this algorithm and their performance in the References of this chapter.
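Here is a small sketch of the matching step, using the third-party networkx library for the matching itself. The defect coordinates and boundary distances are made-up illustrative numbers, and handling the boundary by giving each defect its own zero-linked boundary copy is one common trick, not the only one:

```python
import networkx as nx

# Syndrome-1 locations on the lattice, as (row, col) coordinates, plus
# each node's distance to the nearest boundary (illustrative numbers).
defects = {'a': (1, 1), 'b': (1, 4), 'c': (4, 2)}
boundary_dist = {'a': 2, 'b': 2, 'c': 1}

G = nx.Graph()
for u in defects:
    for v in defects:
        if u < v:  # Manhattan distance approximates shortest error path
            d = sum(abs(x - y) for x, y in zip(defects[u], defects[v]))
            G.add_edge(u, v, weight=d)
    # pair each defect with its own copy of the boundary node
    G.add_edge(u, 'B_' + u, weight=boundary_dist[u])
# boundary copies are linked by weight-0 edges so unused ones pair up freely
for u in defects:
    for v in defects:
        if u < v:
            G.add_edge('B_' + u, 'B_' + v, weight=0)

# networkx finds a *maximum* weight matching, so negate the weights.
for _, _, e in G.edges(data=True):
    e['weight'] = -e['weight']
matching = nx.max_weight_matching(G, maxcardinality=True)
print(matching)  # pairs 'a' with 'b', and 'c' with the boundary
```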
As might be clear from diagrams like (11.17), writing CSS encoder maps explicitly can start to get unwieldy for large numbers of stabilisers or stabilisers with relatively high weight. Later on, we will also want to prove some generalities about all CSS codes, so it would be useful to come up with some notation for writing a generic CSS code as a ZX-diagram. Thankfully, the scalable ZX-calculus, which was introduced in Section 10.2, can solve both of these problems. To see how this works, we can start from the recipe given in the previous section for building an encoder for a CSS code from its X-logical operators and X-stabilisers. Suppose we represent this data as a pair of boolean matrices $L_X$ and $S_X$, whose columns correspond to each of the operators and whose rows correspond to qubits, where a 1 indicates the presence of an $X$. For example, the matrices associated with the Steane code are the following:
$$L_X = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{pmatrix} \qquad S_X = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}$$
Whereas in the previous section, we needed to say in words how to build the encoder out of X operators, we can now do it succinctly as a single diagram, which works for any CSS code described by the pair $(L_X, S_X)$:
| (11.26) |
Inspecting this picture, we can indeed see that it is saying to introduce input spiders for the X-logicals and connect them according to where 1's appear in $L_X$, then introduce internal spiders for the X-stabilisers and again connect them to outputs according to where 1's appear in $S_X$. This gives us the X form for the encoder, but of course we should also be able to construct a Z form by reversing the role of the two colours of spider. To get the colour reverse of a matrix arrow, we can just reverse the direction and take the transpose of the matrix:
| (11.27) |
Hence, if we equivalently present a CSS code as a pair of matrices $(L_Z, S_Z)$ describing the Z-logical operators and Z-stabilisers, respectively, we can write the associated encoder map as:
| (11.28) |
This not only gives us convenient notation for the encoder, but also for the stabiliser measurements themselves. It was shown in Exercise 6.7 that the Pauli projectors associated to all-X or all-Z Pauli stabiliser measurements take a particularly simple form:
If the outcome is 0, the projector on to a Z-stabiliser consists of an isolated X-spider connected to Z-spiders on the support of the stabiliser. If we compose all of the projectors on to Z-stabilisers together and fuse spiders, we'll obtain this picture:
| (11.29) |
Similarly, if we project on to the 0 outcome with all of the X stabilisers, we’ll obtain:
| (11.30) |
If we simply compose maps (11.29) and (11.30), we will get the projection onto the stabiliser subspace $\mathcal{C}$. This is what happens if we measure all of the stabilisers and get outcome 0. To represent other outcomes, we should place a $0$ or a $\pi$ on each of the internal spiders of the Pauli projections. To do this for generic outcomes, we need to extend the scalable notation a bit. Back when we introduced the scalable notation, we interpreted a scalable spider labelled by a phase $\alpha$ as many spiders, all with the same phase on them. However, we could just as well label a spider with a vector $\vec{\alpha}$ of different phases and define scalable spiders as:

Of particular interest for us are vectors of 0s and $\pi$s, i.e. vectors of the form $\pi \vec{s}$ for some boolean vector $\vec{s}$. Using this extended notation, a full stabiliser measurement for a CSS code with outcome syndrome $\vec{s}$ can be written compactly as:
| (11.31) |
For a CSS code, we can assume without loss of generality that all of the X-logical operators and X-stabilisers are linearly independent. In other words, we can assume the block matrix $(L_X \ S_X)$ is injective. Consequently, there exists some parity matrix giving a left inverse:
| (11.32) |
Since injective parity maps are always isometries, we have:
| (11.33) |
This implies that the encoder (11.28) is an isometry, and is in fact a strictly stronger condition.
Exercise* 11.10. In Exercise 11.5, you were asked to show that, for logical operators, $\bar{X}_i$ anti-commutes with $\bar{Z}_j$ if and only if $i = j$. We can express this condition in terms of boolean matrices by stating that $L_Z^T L_X = I$ (mod 2). In terms of scalable notation, this implies that:
| (11.34) |
We also know that stabilisers should commute with logical operators and with each other. In terms of matrices, we can express this as $S_Z^T L_X = 0$, $L_Z^T S_X = 0$, and $S_Z^T S_X = 0$ (mod 2). Pictorially:
| (11.35) |
Give a graphical derivation of (11.34) and (11.35) using just (11.28), the scalable ZX rules, and the fact that the encoder is an isometry. Hint: Start by composing the Z-form of the encoder with its adjoint and applying idempotence to remove one projector from the middle before applying (11.28).
This gives us some nice tools for working with CSS codes generically, which we’ll use several times when it comes to deriving fault-tolerant operations in Section 11.4.
If we want to perform quantum computations in a way that can withstand errors, coming up with a good quantum error correcting code is only half the story. We also need to figure out how to perform state preparations, gates, and measurements on encoded qubits. Suppose that every time we performed a gate we had to decode our physical qubits into logical qubits, perform the gate, then re-encode. This seems like a lot of work just to do one gate, but what is actually worse is that our logical state will be completely unprotected between when we decode and re-encode our qubits. We will run into similar problems with state preparations and measurements. We saw half of a solution to this problem already in Section 11.2.3 with Definition 11.2.9. If we want to perform a unitary gate $U$ on our logical qubits, we can find some other unitary $\tilde{U}$ acting on the physical qubits, satisfying:
$$\tilde{U} \circ E = E \circ U$$
Essentially the same idea applies to state preparations, except there are no input wires, so we don't need an encoder on the RHS. That is, we can implement the preparation of a logical state $|\psi\rangle$ by preparing some physical state $|\tilde{\psi}\rangle$ satisfying:
$$|\tilde{\psi}\rangle = E |\psi\rangle$$
Of course, one could prepare $|\tilde{\psi}\rangle$ by preparing $|\psi\rangle$ then performing the encoder isometry as an actual quantum operation, which we could implement using unitaries and ancillae. However, as we'll see later, there are various reasons we might want to prepare $|\tilde{\psi}\rangle$ directly in a different way. Finally, implementing measurements on encoded qubits is also a similar idea, but with a little twist accounting for the fact that our physical measurement might have more outcomes than our logical one. While we might in general want to implement arbitrary measurements on encoded qubits, let's focus just on ONB measurements to make things a bit simpler. Let $[m] := \{1, \ldots, m\}$ be a finite set of indices for any $m$. We say a physical ONB measurement $\{ |\tilde{\phi}_j\rangle \}_{j \in [m']}$ on $n$ qubits implements a logical ONB measurement $\{ |\phi_i\rangle \}_{i \in [m]}$ on $k$ qubits if there exists a function $f : [m'] \to [m]$ where, for all states $|\psi\rangle$, we have:
$$|\langle \phi_i | \psi \rangle|^2 = \sum_{j \in f^{-1}(i)} |\langle \tilde{\phi}_j | E | \psi \rangle|^2 \tag{11.36}$$
This condition says that, for any logical state, the Born rule probabilities obtained from measuring the logical ONB can be computed in terms of the probabilities of the physical measurements. Normally the function $f$ is very simple, e.g. if the logical measurement has just two outcomes, it could simply compute the parity of the boolean outcomes on the physical qubits. In Section 11.2.3, we mentioned that we usually don't actually perform the encoder map on our quantum computer, but use it purely as a mathematical object to relate the logical qubits to the physical ones. Now we can see why that is the case. If we can find encoded states, gates, and measurements, then we can simulate any logical computation on $k$ qubits using a physical one on $n$ qubits without ever explicitly performing $E$. Suppose we are interested in the results of a logical computation involving applying a sequence of gates $U_1, \ldots, U_N$ to a state $|\psi\rangle$ and measuring in the logical ONB $\{ |\phi_i\rangle \}$. Then we would like to sample from this distribution:
$$P(i) = |\langle \phi_i | U_N \cdots U_1 | \psi \rangle|^2$$
Then, we can translate the logical measurement into a physical measurement using (11.36):
$$P(i) = \sum_{j \in f^{-1}(i)} |\langle \tilde{\phi}_j | E \, U_N \cdots U_1 | \psi \rangle|^2$$
We can keep pushing logical gates through the encoder to get physical ones:
$$P(i) = \sum_{j \in f^{-1}(i)} |\langle \tilde{\phi}_j | \tilde{U}_N \cdots \tilde{U}_1 \, E | \psi \rangle|^2$$
Finally, when we push the logical state through, the encoder is gone, and we're left with a fully encoded computation that simulates our logical computation:
$$P(i) = \sum_{j \in f^{-1}(i)} |\langle \tilde{\phi}_j | \tilde{U}_N \cdots \tilde{U}_1 | \tilde{\psi} \rangle|^2$$
Great! We’re doing everything on physical qubits encoded in our error correcting code the whole time. We only forgot one thing: we need to do error correction! That was of course the point of this whole exercise. If we are being particularly conservative, we can do one or more rounds of error correction, i.e. measuring all of the stabilisers and applying a Pauli correction, at each step of the computation:
We should also try to engineer our encoded operations, and the error correction itself, in such a way that they avoid spreading errors uncontrollably. For example, if $\tilde{U}$ involves multi-qubit gates, these can turn a single error into multiple errors. In this section, we will see how to implement (almost) everything we need for universal computation in such a way that we can keep errors under control. This property of encoded gates is what is known as fault tolerance. There are several definitions of fault tolerance in the literature, with various levels of mathematical rigour. Here's the one we will use. Note that we refer to a single collection of physical qubits encoded in an error correcting code as a code block.
Definition 11.4.1 (Fault tolerance). An operation $\tilde{U}$ on a single code block is called fault tolerant if, whenever there are errors on at most $j$ qubits before applying $\tilde{U}$, and then $f$ faults occur while performing $\tilde{U}$, there are errors on at most $j + f$ qubits afterwards.
An example of an operation that doesn’t spread errors is a transversal gate. For a single code block, transversal gates are simply tensor products of single-qubit operations. An example of something that does spread errors is a multi-qubit gate involving physical qubits in the same code block. For example, a CNOT gate can spread Pauli X errors from its control to its target qubit:
In this case, there was originally 1 error on the physical qubits, but afterwards there are 2. Note that we were not too specific about what it means for a fault to occur during the performance of . What counts as a single fault depends on how we represent the operation and how we will choose to model potential errors that could happen while attempting to perform , i.e. the error model we are using. By their nature, error models are always a simplification of what happens in reality. One of the simplest error models is the phenomenological error model with Pauli errors. In this model, we decompose into basic gates and assume at each time step, a Pauli error could occur on each qubit with some fixed (hopefully small) probability . Consider implementing a swap gate on our physical qubits with three CNOT gates:
The swap gate never multiplies Pauli errors. So, if Pauli errors occur at , then only errors will come out at . However, if an error happens in the process of implementing , e.g. an X error on qubit 2 at , then it will propagate to X errors on qubits 2 and 5 at . In some circumstances, this model is somewhat overly optimistic. For example, one would expect that most physical implementations of 2-qubit gates can actually lead to correlated errors on the qubits involved, while we just assume that an error happens on each of the qubits independently with some fixed probability. However, this will be good enough for our purposes of illustrating the basic principles of constructing fault-tolerant implementations. For more elaborate and realistic error models, we point to the References of this chapter. We’ll conclude this section by noting that Definition 11.4.1 extends to a slightly more flexible notion of “not spreading errors” when multiple code blocks are involved.
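Before we move on to multiple code blocks, here is a small sketch of this propagation rule in code: CNOT gates copy X errors from control to target, and we track which qubits carry an X error as a set. The three-CNOT swap is the example from above; the function and qubit labels are illustrative.

```python
def propagate_x(errors: set[int], cnots: list[tuple[int, int]]) -> set[int]:
    """Push a set of X errors through a list of CNOT gates.
    An X on the control of a CNOT toggles an X on the target."""
    errors = set(errors)
    for control, target in cnots:
        if control in errors:
            errors ^= {target}  # matching X errors on the target cancel
    return errors

# a swap between qubits 0 and 1 implemented as three CNOTs
swap = [(0, 1), (1, 0), (0, 1)]

# an error entering the swap just moves to the other qubit...
assert propagate_x({0}, swap) == {1}

# ...but a fault occurring *during* the swap can spread: an X on qubit 0
# arising after the first two CNOTs ends up on both qubits.
assert propagate_x({0}, swap[2:]) == {0, 1}
```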
Definition 11.4.2 (Multi-block fault tolerance). An operation on code blocks is called fault tolerant if, whenever errors only occur within each code block, and faults occur while performing , then afterwards at most errors occur in each code block.
This definition allows errors to propagate between the code blocks, as long as the number of errors appearing in a single block after the operation doesn’t exceed the total number of errors from before the operation. Consider for example applying a CNOT gate between the -th qubit of one codeblock and the -th qubit of another one:
| (11.37) |
Then errors could indeed multiply:
Even though there are a total of 4 errors after performing this tower of CNOT gates, there are only 2 errors in each code block, which is the same as the number of errors we started with. This is still okay for fault tolerance, as we can then perform error correction independently on each code block. The map (11.37) is an example of a transversal, multi-block operation. We will look at these, and transversal gates in general, in the next section. After that, we will turn to the somewhat trickier issue of fault-tolerant stabiliser measurements. We call a recipe for constructing a universal set of fault-tolerant operations in a given code or code family a fault-tolerant scheme. It makes intuitive sense that having a fault-tolerant scheme is A Good Thing if we want to get to the end of a computation with a relatively low probability of suffering an uncorrectable error. But we can make precise why this is the case by discussing the threshold of a fault-tolerant scheme. While we can never push the probability of a logical error occurring all the way to zero, one could imagine having an infinite series of fault-tolerant simulations of a given circuit , each producing a better and better approximation of a logical circuit on physical hardware, at the cost of extra resources needed to do more and more error correction. The main example of such a series would be an infinite family of quantum error correcting codes with increasing distance, such as the surface code, and recipes for implementing a universal family of fault-tolerant operations on any code in that family.
Definition 11.4.3. The threshold of a fault-tolerant scheme is a probability such that for any circuit , if all physical operations produce a fault with probability , the family of fault-tolerant simulations has the property that for any , there exists some such that the output probabilities of are -close to those of the ideal circuit .
The notion of a threshold is an important one, and perhaps the most important in fault-tolerant quantum computation. Performing encoded operations and error correction involves more basic quantum operations, which of course will produce more errors. However, below some critical hardware error rate, a fault-tolerant scheme will start to suppress more errors than it generates. This means that, if we can demonstrate a threshold, it is possible (at least in principle) to perform arbitrarily large quantum computations on noisy hardware. The threshold theorem states that, under certain assumptions, any universal set of fault-tolerant operations can be turned into a family of fault-tolerant simulations with a threshold , and furthermore the overhead scales reasonably in the size of the circuit and error parameter . Proving this theorem is complicated, so we will leave it to other resources that have done this in detail (see the References), and we will proceed now to the practical construction of fault-tolerant operations in a given error-correcting code.
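To get a feel for why being below threshold helps, here is a back-of-the-envelope illustration using the standard heuristic that a distance-d code suppresses the logical error rate like (p/p_th) raised to the power ⌊(d+1)/2⌋. The formula and all the constants here are illustrative assumptions, not results derived in this chapter.

```python
def logical_error_rate(p: float, d: int, p_th: float = 0.01, A: float = 0.1) -> float:
    """Standard heuristic: p_L ~ A * (p / p_th) ** ((d + 1) // 2)."""
    return A * (p / p_th) ** ((d + 1) // 2)

# below threshold (p < p_th), increasing the code distance suppresses
# the logical error rate exponentially; above it, things only get worse
for d in (3, 5, 7, 9):
    print(d, logical_error_rate(p=0.001, d=d))
```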
Transversal unitary gates are the easiest kind of fault-tolerant operation to understand, although for a given error correcting code, it can be highly non-trivial to characterise the set of transversal gates that can be implemented in that code.
Definition 11.4.4. A transversal implementation of a logical unitary on a single code block is a gate of the form where the are all single-qubit unitaries. A transversal implementation of a logical unitary acting on multiple code blocks consists of unitaries each acting only on the -th qubit of each code block, for .
An example of a transversal unitary on multiple code blocks is the transversal CNOT we saw in (11.37). It is easy to check that transversal gates satisfy the fault-tolerance conditions laid out in Definitions 11.4.1 and 11.4.2. So, if we can find a universal set of transversal gates, life is good! Unfortunately, there is this little nugget:
Theorem 11.4.5 (Eastin-Knill). For any error-correcting code capable of detecting an arbitrary single-qubit error, the set of logical unitaries with a transversal implementation is finite. In particular, no non-trivial error-correcting code has a universal set of transversal gates.
The proof of this theorem uses some tricks from Lie theory which are a bit technical for our purposes, so we won’t reproduce it here (see the References). Very roughly, the proof shows that the error-detection property of the code forces the group of distinct unitaries with transversal implementations to be discrete: there is no way to “nudge” a small distance in any direction, while keeping transversal. But then the group of all logical unitaries must also be compact, and any discrete subgroup of a compact group is finite. If that explanation didn’t do much for you, don’t worry. The main thing you should get is that Theorem 11.4.5 is Bad News if we were hoping to do all of our fault-tolerant computation with transversal gates. However, many codes still have lots of useful transversal gates. Also, as we’ll see in Section 11.5, we can still achieve universal computation fault-tolerantly, but we’ll need to go beyond transversal unitaries to find a bit of extra magic.
Transversal Clifford gates are usually the easiest fault-tolerant operations to understand for stabiliser codes. In the general case, we can use stabiliser theory to find the transversal gates of a stabiliser code and compute how they act on logical qubits.
Exercise* 11.11. Let be the encoder of a stabiliser code and a Clifford unitary. Show that we have:
for some Clifford unitary if and only if we can push stabilisers through . That is, for all there exists such that .
In particular, as soon as we have a Clifford unitary that has the “stabiliser pushing” property, we can compute just by looking at how acts on the logical operators . We can perform all the computations involving stabiliser groups efficiently, e.g. using the symplectic matrix representation from Section 6.4.4. Hence, a naïve way to find all the transversal Clifford gates in a code is to just enumerate every local Clifford and check if it sends all the generators of the stabiliser group to stabilisers. Of course, there are exponentially many local Clifford operations, so it may not be practical to enumerate them all for large . However, there are many results that show that some families of codes always have certain transversal Clifford gates. To start, the definitions of the logical X and Z operators (11.11) and (11.12) essentially say that any stabiliser code has transversal implementations of all the logical X and Z operators:
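The core subroutine in this naïve search is easy to sketch: working with the symplectic (binary) representation from Section 6.4.4, checking whether a conjugated generator lands back in the stabiliser group is a GF(2) linear-algebra question. The sketch below ignores the ± signs for brevity (a full check would track them too), and the two-qubit example at the end is purely illustrative.

```python
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M = (M.copy() % 2).astype(int)
    rank = 0
    n_rows, n_cols = M.shape
    for col in range(n_cols):
        if rank == n_rows:
            break
        pivots = np.nonzero(M[rank:, col])[0]
        if len(pivots) == 0:
            continue
        piv = rank + pivots[0]
        M[[rank, piv]] = M[[piv, rank]]
        for r in range(n_rows):
            if r != rank and M[r, col]:
                M[r] = (M[r] + M[rank]) % 2
        rank += 1
    return rank

def in_stabiliser_span(stab: np.ndarray, pauli: np.ndarray) -> bool:
    """A Pauli (as a symplectic bit-vector) lies in the GF(2) span of
    the stabiliser rows iff appending it does not increase the rank."""
    return gf2_rank(np.vstack([stab, pauli])) == gf2_rank(stab)

# e.g. for the group generated by Z on both of two qubits (ZZ),
# written in (x|z) convention:
ZZ = np.array([[0, 0, 1, 1]])
assert in_stabiliser_span(ZZ, np.array([0, 0, 1, 1]))      # ZZ itself
assert not in_stabiliser_span(ZZ, np.array([1, 1, 0, 0]))  # XX is not
```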
A bit less trivially, one of the most well-known results about transversality says that CNOT gates are transversal for CSS codes. In its simplest version, this means that applying a CNOT gate transversally across all physical qubits of two identical codeblocks has the effect of applying a CNOT across all logical qubits. We can formalise this using the scalable notation as follows.
Proof. We begin with the right-hand side of (11.38) and rewrite the encoder of the second code block into Z-form:
We can then push the CNOT through the encoders using strong complementarity and the scalable ZX rules:
then apply equations (11.34) and (11.35):
Rewriting the second encoder back into X-form yields the left-hand side of (11.38). □
All CSS codes have transversal CNOT gates. However, it might not be so convenient to implement a CNOT between each qubit in the code block. For example, as the surface code is a CSS code, we know it has a transversal CNOT, which looks like this:
Note, however, that implementing it would ruin the nice 2D structure which makes the surface code so attractive for some hardware platforms. We’ll see in Section 11.4.3 that there is another way to implement 2-qubit gates in the surface code and friends without destroying planarity. Not all CSS codes admit transversal H gates, but quite a few do. To see what conditions we need, let’s start with a transversal H and see what happens when we push it into the encoder. First, we can introduce a scalable H gate to mean copies of the H gate:
Then the usual colour change rules work just like they would for the normal H gate, but we also get an additional rule. Using (11.27), we can push a scalable H gate through an arrow as follows:
| (11.39) |
Using this new rule, we can introduce H gates on all of the physical qubits of the encoder in Z-form and push it to the left:
| (11.40) |
The RHS almost looks like a layer of Hadamard gates followed by the X-form of the encoder, but not quite. The X-form should be labelled by and , not and . We could therefore consider codes where and , and we’d be done. However, it turns out that this is stricter than we need, so we’ll consider codes where just the X-stabilisers and Z-stabilisers are the same; such codes are called self-dual CSS codes.
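Concretely, checking self-duality amounts to comparing two GF(2) row spans. Here is a small brute-force sketch using the Steane code, whose X- and Z-stabilisers are both given by the parity-check matrix of the [7,4] Hamming code; the particular matrix H below is the usual binary-counting form of that matrix, used here as an illustrative assumption.

```python
from itertools import product

# parity-check matrix of the [7,4] Hamming code: column j spells out
# the number j+1 in binary
H = [
    (0, 0, 0, 1, 1, 1, 1),
    (0, 1, 1, 0, 0, 1, 1),
    (1, 0, 1, 0, 1, 0, 1),
]

def row_span(rows):
    """All GF(2) linear combinations of the given rows."""
    span = set()
    for coeffs in product((0, 1), repeat=len(rows)):
        v = tuple(sum(c * r[i] for c, r in zip(coeffs, rows)) % 2
                  for i in range(len(rows[0])))
        span.add(v)
    return span

# a different generating set for the same space: sums of rows of H
H_alt = [
    tuple((a + b) % 2 for a, b in zip(H[0], H[1])),
    H[1],
    tuple((a + b) % 2 for a, b in zip(H[1], H[2])),
]

# the spans agree, so a CSS code with these X- and Z-checks is self-dual
assert row_span(H) == row_span(H_alt)
```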
Examples of self-dual CSS codes are the code from Example 11.2.5 and the Steane code. Continuing from (11.40) with a self-dual code, we can see that we almost get to the form we need:
| (11.41) |
but the logical operators on the RHS are still the wrong type to get an encoder in X-form. The trick now is to realise that even though the X-logical and Z-logical operators might still be different in a self-dual CSS code, they always generate the same subspace together with the stabilisers. Hence, we can prove the following proposition.
Proof. Suppose consists of independent stabilisers, then the whole CSS code has stabilisers. It must then have X-logical operators. Hence, the columns of and together consist of independent vectors in . Since is -dimensional, that means they span the whole space. But then, , so each column of is in . Hence, we can write each column of as a linear combination of columns from and , i.e. we can find matrices such that . □
Exercise 11.12. Show using the scalable rules from Section 10.2 and Proposition 11.4.8 that, for a self-dual code, we have for some matrix :
Hint: Recall Proposition 10.2.2.
Theorem 11.4.9. H gates are transversal for self-dual CSS codes. That is, for any self-dual CSS code, there exists a CNOT circuit with parity matrix such that:
| (11.42) |
Proof. Starting from the RHS of (11.42), we can switch to the Z-form of the encoder and apply the derivation in (11.41) and Exercise 11.12:
Recall that a matrix is the parity matrix of a CNOT circuit if and only if it is invertible (Proposition 4.2.12). Invertibility follows immediately from the fact that the map above is an isometry, and hence in particular non-singular. □
This means applying a transversal Hadamard at the physical level is not necessarily the same thing as applying a transversal Hadamard at the logical level. However, it always does something non-trivial to our logical qubits, consisting of a layer of Hadamard gates followed by some (possibly trivial) CNOT circuit. In the case of the Steane code, the logical operators are identical, so applying H to every physical qubit is the same as applying H to the single logical qubit:
However, the code from Example 11.3.4 has X-logical operators and Z-logical operators that differ by a logical swap:
In this case, the parity matrix from Theorem 11.4.9 is
corresponding to a swap gate (or equivalently, 3 CNOTs). Note that the surface code almost has a transversal H gate. When we apply H everywhere on the X-form of the encoder, we get the following:
We almost get back to where we started, except the encoder has “rotated 90°”, which follows from applying the colour-reverse of equation (11.21). Technically this is a different error-correcting code. But of course this new code has all the same properties as before, since it is just a permutation of the physical qubits. As long as we keep track of where the qubits are, this is “close enough” to a transversal Hadamard for many purposes. We could at this point complete the Clifford story and give the conditions for a CSS code to have transversal S gates. Rather than working this out explicitly, we will go first to the harder case of transversal T gates, which, as we will explain in Remark 11.4.13, generalises readily to a characterisation of transversal gates for all .
We now turn to the somewhat thornier question of when stabiliser codes admit transversal non-Clifford gates. For the sake of simplicity, we will start with gates, but in fact the story will be much the same in the next section when we talk about all diagonal gates on the third level of the Clifford hierarchy. Once we pass to non-Clifford gates, we can’t expect to be able to just conjugate each of the generators of the stabiliser group and get Paulis again. Hence, we no longer have such an easy way to check if a given operation on physical qubits preserves the codespace. Anyway, we still have the ZX-calculus at our disposal, so let’s be brave. Just like we did with transversal Hadamards, we can start with a transversal application of gates on the physical qubits and try to push it through the encoder of a CSS code. If it comes out as some non-trivial unitary on the logical qubits, we’re golden. Since we’ve left Clifford-land, we don’t expect life to be that easy, but we will just start and see where we get stuck. We begin by applying strong complementarity and the scalable rules:
| (11.43) |
This gets us pretty far, but we still have that pesky connecting our phase gadgets to some wires that don’t correspond to logical qubits. In order to preserve the codespace, we want to end up with some phase gadgets just supported on the logical qubits. Just like in the case of Hadamards, this could act differently on the logical space from the physical space, so we don’t know a priori what this action is. We will start with the simplest case, where on the physical qubits acts as on the logical qubits. That is, we want to satisfy this equation:
| (11.44) |
The reason it is easier to use and not is that we can now move everything to one side and we’ll only have phase gadgets:
Now, we can “zip up” this equation into a single matrix over a register of qubits:
| (11.45) |
Thanks to Section 10.3.1, we know that a configuration of phase gadgets equals the identity if and only if its associated matrix is strongly 3-even. Hence, we are ready to characterise when CSS codes have transversal gates.
Definition 11.4.10. A CSS code is called triorthogonal if the following matrix is 3-even:
| (11.46) |
and a code is called strongly triorthogonal if the matrix above is strongly 3-even.
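Here is a small checker (sketch) for the strong 3-evenness condition, under the convention suggested by the T-gate phases: single columns have weight divisible by 8, pointwise products of pairs of columns have weight divisible by 4, and products of triples have weight divisible by 2. This reading of the conditions, and the toy matrices at the end, are assumptions of the sketch.

```python
import numpy as np
from itertools import combinations

def is_strongly_3_even(G: np.ndarray) -> bool:
    """Check strong 3-evenness of a binary matrix, column-wise:
    |g_a| = 0 mod 8, |g_a * g_b| = 0 mod 4, |g_a * g_b * g_c| = 0 mod 2."""
    cols = [G[:, j] for j in range(G.shape[1])]
    if any(c.sum() % 8 for c in cols):
        return False
    if any((a * b).sum() % 4 for a, b in combinations(cols, 2)):
        return False
    return not any((a * b * c).sum() % 2
                   for a, b, c in combinations(cols, 3))

# a single all-ones column of length 8 passes; one of length 4 does not
assert is_strongly_3_even(np.ones((8, 1), dtype=int))
assert not is_strongly_3_even(np.ones((4, 1), dtype=int))
```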
Theorem 11.4.11. A CSS code admits a transversal in the sense that:
| (11.47) |
if and only if it is strongly triorthogonal.
Proof. First assume the CSS code is strongly triorthogonal. From the calculations above, this is equivalent to equation (11.44). Hence, following calculation (11.43), we have:
Conversely, if satisfies Eq. (11.47), then:
We can then apply Eq. (11.33) to cancel most of the encoder from both sides:
Note that the right-hand side consists of a diagonal unitary applied to , but diagonal unitaries send the state to itself if and only if they are the identity. Hence, we can conclude from Definition 11.4.10 that the matrix is strongly 3-even, and hence that the code is strongly triorthogonal. □
Exercise 11.13. Show that triorthogonal codes implement a transversal gate, possibly up to a diagonal Clifford map on the outputs. That is, for some :
Hint: Use equation (11.32) to move a diagonal Clifford map on the logical side to the physical side.
Example 11.4.12. The degree-1 monomial gives the following strongly 3-even matrix:
The matrices on the right define a quantum Reed-Muller code. This is, by definition, a strongly triorthogonal code, so applying on the physical qubits results in a logical . This property will be used to perform a protocol called magic state distillation in Section 11.5.
Remark 11.4.13. Note that almost the exact same argument as above can be given for transversal gates for any . We can define strongly -even matrices as those whose columns sum to , pairs of columns sum to , and so on up to groups of columns summing to . Following essentially the same reasoning as Section 10.3.1, we can conclude that
for any strongly -even matrix . Hence, any CSS code whose matrix
is strongly -even has a transversal gate. Furthermore, we can generalise Exercise 11.13 to show that if is only -even rather than strongly -even, it is transversal up to a phase gadget. In particular, if the matrix is 2-even, the code has transversal gates up to Paulis, and hence it can implement a logical transversally.
We can generalise from characterising transversal or gates to transversal implementations of more general unitaries in the third level of the Clifford hierarchy. Recall from Section 6.5 that the third level of the Clifford hierarchy consists of all the unitary maps with the property that, for all Pauli strings , is Clifford. While it seems to be a difficult problem to characterise all transversal gates in , it is much easier if we just focus on the diagonal unitaries . As noted also in Section 6.5, forms a group which is generated by the phase gadgets. Consequently, an arbitrary unitary in can be written as
| (11.48) |
for some boolean matrix . We can also capture what it means to be a transversal unitary on qubits in . These consist precisely of some power of T on each qubit. Hence, a transversal gate in is of the form for some matrix where each row contains exactly one 1. Such a matrix represents a gate on the -th qubit by containing the -th unit vector as a row. We can then represent higher powers as repeated rows.
Example 11.4.14. The following is a transversal application of powers of and its associated -matrix:
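As a quick illustration, here is a sketch of how such an E-matrix can be built programmatically from a list of T-powers, one per physical qubit. The function name and example are hypothetical.

```python
import numpy as np

def transversal_powers_to_E(powers: list[int]) -> np.ndarray:
    """E-matrix for applying T**powers[i] to qubit i: each row is a
    unit vector selecting a qubit, and T^m shows up as m repeated rows."""
    n = len(powers)
    rows = []
    for qubit, m in enumerate(powers):
        unit = [0] * n
        unit[qubit] = 1
        rows.extend([tuple(unit)] * (m % 8))  # T^8 = identity
    return np.array(rows, dtype=int)

# e.g. T on qubit 0, T^2 (= S) on qubit 1, nothing on qubit 2
print(transversal_powers_to_E([1, 2, 0]))
# [[1 0 0]
#  [0 1 0]
#  [0 1 0]]
```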
From here, the story goes much like in the previous section. Rather than focusing just on , we can start with any transversal unitary and try to push it past the encoder:
| (11.49) |
Now, in order to preserve the codespace, the resulting diagonal unitary should be supported only on the logical qubits. That is, for some , we should have:
As in the previous section, it is convenient to use phase gadgets on the right-hand side, but note that, since can now be arbitrary, this generates the same set of unitaries as phase gadgets. Moving everything to one side, we get:
...or equivalently, in “zipped up” form:
| (11.50) |
Hence, we can now state a generalised version of Theorem 11.4.11.
Theorem 11.4.15. A CSS code with X-logical operators and X-stabilisers and admits a transversal implementation of a logical gate
if and only if there exists a matrix whose rows are unit vectors, such that the matrix
| (11.51) |
is strongly 3-even.
Example 11.4.16. The constant polynomial gives us a strongly 3-even matrix whose columns are all the 4-bitstrings. If we partition the matrix as follows:
| (11.52) |
then this gives a matrix of the form (11.51) with , with defining the X-logical operators and X-stabilisers of an CSS code, which is sometimes called the “smallest interesting colour code” (see the References for why). It has three X-logical operators and one X-stabiliser:
We can draw this encoder as a cube, with the X-logical operators connecting to 3 adjacent faces and the X-stabiliser connecting to everything:
In the upper left corner of the matrix (11.52), we see the phase gadgets of a CCZ gate as in equation (9.6), hence this code admits a logical , implemented via physical gates:
Remark 11.4.17. While Theorem 11.4.15 gives a complete characterisation for the transversal gates in a CSS code, it is not obvious from the statement whether it can be used to efficiently find such gates. However, it turns out that this is possible. In fact, there are three related problems: (1) for fixed logical find transversal gates , (2) for fixed find , and (3) compute a generating set of all logical gates and their associated transversal implementations. All three of these problems can be posed as a system of linear equations over the ring , and can be solved in polynomial time using a generalisation of Gaussian elimination that works over more general kinds of rings. We will say a bit more about this in the References.
In order to do error correction, we should be able to do stabiliser measurements on our physical qubits. Since any non-trivial stabiliser generator will have support on multiple qubits, these will in general be entangling operations involving multiple physical qubits, so it is not immediately obvious we can perform such measurements fault-tolerantly. Before we think about a fault-tolerant implementation of a Pauli measurement, we should think about how we would even implement a generic Pauli measurement in the first place. For the first part of this section, we will focus on measuring , but everything we say below will apply to measuring any Pauli string just by conjugating by the appropriate local Cliffords:
It could be that our hardware natively implements multi-qubit measurements; however, at the time of writing, this is not possible (or at least quite challenging) on most quantum hardware platforms. Even if it were possible to implement high-weight Pauli measurements on physical qubits, we might not want to, as it could introduce errors on many qubits at once. Hence, the first thing we should do is decompose this measurement into more basic operations like basic gates and single-qubit demolition measurements. This is particularly nice in ZX, because it just amounts to coming up with a good way to unfuse the big spider in the projector. One such unfusion is:
| (11.53) |
We can now interpret the right-hand side as preparing an ancilla in the state, applying a series of CNOT gates, then measuring the ancilla in the Z basis. In terms of not spreading errors on the physical qubits, this is actually pretty good. Z-errors will just commute through, whereas X-errors will copy on to the ancilla and flip the measurement outcome:
...which is what they are supposed to do! The problem comes from errors that might occur on the ancilla qubit while implementing the measurement. If an X error occurs on the ancilla, the whole measurement will report the wrong outcome:
This is bad of course, because doing error correction depends on being able to report the results of syndrome measurements reliably. However, this kind of error is not fatal. As long as the error rate is not too high, we can increase the reliability of our stabiliser measurements just by repeating the measurement multiple times and reporting whatever outcome we got the majority of the time. The worst thing that can happen is Z-errors, i.e. “phase flips” occurring on the ancilla, because these can propagate out to errors on data qubits:
Worse still, it doesn’t change the measurement outcome, so this measurement has introduced a new, undetected multi-qubit error. We hence see that in the naïve implementation of a multi-qubit Z measurement, a single fault causes multiple errors on the data qubits. Hence, it doesn’t satisfy the fault-tolerance criterion from Definition 11.4.1. To solve this problem, we need to come up with a “better unfusion” of this Pauli measurement. One pretty good solution is, rather than unfusing the X-spider sequentially, to unfuse it in parallel:
| (11.54) |
where . Now, rather than having a single ancilla prepared in the state, we have ancillae for a Pauli measurement of weight prepared in a generalised GHZ state, with respect to the X basis: . Especially in the context of fault-tolerance, this is often called a cat state. Then, we perform a CNOT between the -th data qubit and the -th qubit of the cat state and measure each of the ancilla qubits in the Z basis. The XOR of all the measurement outcomes then gives the result of the syndrome measurement. This implementation of a multi-qubit measurement is called a Shor fault-tolerant measurement. For the sake of concreteness, we have shown the 4-qubit Shor measurement, but this generalises in the obvious way to Pauli measurements of higher weight. There are a few things to say about Shor measurements. First, notice how in the process of unfusing spiders, we can turn one measurement into multiple measurements. Second, we have now fixed the problem that our naïve implementation had: single Z errors in the RHS of (11.54) now result only in single errors on the data qubits, e.g.
However, this doesn’t fully solve the problem yet: it has shifted the difficulty of fault-tolerantly measuring to fault-tolerantly preparing cat states. If we assume that our basic operations are single-qubit preparations and measurements, as well as basic 2-qubit gates like CNOT, then we can prepare cat states using some ladder of CNOTs like this:
| (11.55) |
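As a quick sanity check, here is a toy statevector simulation (a sketch; all names are illustrative) showing that such a ladder of CNOTs applied to |+⟩|000⟩ prepares the computational-basis cat state (|0000⟩ + |1111⟩)/√2; the X-basis variant used above is obtained by conjugating every qubit with a Hadamard.

```python
import numpy as np

def apply_cnot(state: np.ndarray, control: int, target: int, n: int) -> np.ndarray:
    """Apply CNOT(control -> target) to an n-qubit statevector
    (qubit 0 is the most significant bit of the basis-state index)."""
    out = np.zeros_like(state)
    for idx in range(2 ** n):
        if (idx >> (n - 1 - control)) & 1:
            out[idx ^ (1 << (n - 1 - target))] += state[idx]
        else:
            out[idx] += state[idx]
    return out

n = 4
state = np.zeros(2 ** n)
state[0b0000] = state[0b1000] = 1 / np.sqrt(2)   # qubit 0 in |+>, rest |0>
for c, t in [(0, 1), (1, 2), (2, 3)]:            # the CNOT ladder
    state = apply_cnot(state, c, t, n)

cat = np.zeros(2 ** n)
cat[0b0000] = cat[0b1111] = 1 / np.sqrt(2)       # (|0000> + |1111>)/sqrt(2)
assert np.allclose(state, cat)
```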
There are many basic circuits that can prepare a cat state, but they all have one thing in common: there is always some location where a single error in the preparation can propagate out to multiple errors on the cat state, and hence the measurement (11.54). For example:
To solve this issue, we can perform some rudimentary error detection or correction on the cat state. The first thing to note is that, because of the orientation of CNOT gates, X errors will only possibly mess up the measurement outcome, but never otherwise affect the data qubits. As mentioned before, we can mitigate this by simply repeating the measurement and taking the outcome we get the majority of the time. Hence, we can focus just on Z errors. For a cat state prepared as in (11.55), the only place a single Z error during preparation can result in 2 errors on the cat state is the one shown above. We can therefore detect this error by measuring any stabiliser of the cat state that anti-commutes with this error. will work:
If we get outcome , we can just throw out the cat state and try again. This is called a repeat-until-success strategy for state preparation. But then, what if this nested error-detection measurement propagates errors? Do we need to do error-detection on it? Is it Turtles All the Way Down? Thankfully no. Due to the direction of the CNOT gates, there is no way for this nested measurement to propagate additional Z errors into the cat state. This nested measurement could indeed introduce more bit errors into the cat state, but as in the case of our naïve measurement, bit errors on the ancillae are not a big deal. They can lead to an erroneous syndrome, which can be mitigated by repeated measurements, but crucially they won’t cause any additional errors on the data qubits. There exist several refinements to the idea behind Shor measurements, which consist of preparing (possibly elaborate) ancilla states, performing some transversal CNOTs and doing single-qubit measurements. One that is particularly nice to analyse in the scalable ZX-calculus is the Steane fault-tolerant measurement protocol. This protocol works for any CSS code, and allows one to extract syndrome information for all the stabilisers of a single kind (X or Z) at once. First, it relies on being able to reliably prepare the encoded and states associated with a CSS code. We can see what these look like by plugging the all-0 and all-plus logical states into the CSS encoder:
| (11.56) |
| (11.57) |
Now, suppose we start with an ancilla system prepared in the encoded state, perform a transversal CNOT between the ancilla and a block of our CSS code, then measure all the qubits of the ancilla system. Here is what we’ll get:
| (11.58) |
We can reduce this further once we know how the arrow acts on the effect corresponding to our measurement outcome arising from doing a Z-measurement on all the physical qubits. First, note that an X-spider labelled by corresponds to the computational basis state . We already know that the map described by parity matrix sends a state to the state . Hence:
Applying the colour-change rule for the arrow (11.39), we get:
Continuing from (11.58), we have:
We now have a protocol for measuring all of the X-stabilisers at once. We apply the fault-tolerant circuit on the left-hand side above, obtain an outcome , then compute the X part of the error syndrome as . Similarly, we can measure the Z part of the syndrome, by reversing all the colours:
Thus we obtain the Z part of the error syndrome as . Performing these two protocols in sequence therefore gives us a full round of syndrome extraction:
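The classical post-processing in both halves of this protocol is just a parity-check computation. Here is a sketch, reusing the Steane code’s Hamming check matrix from before purely as an illustration; in general one would plug in the relevant stabiliser matrix for the code at hand.

```python
import numpy as np

# parity-check matrix of the relevant stabilisers (here: the Steane
# code's Hamming matrix, as an illustrative stand-in)
S = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
])

def syndrome(S: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Syndrome of the transversal measurement outcome m: S m mod 2."""
    return (S @ m) % 2

m = np.array([0, 0, 0, 0, 1, 0, 0])  # e.g. a single flip on qubit 5
print(syndrome(S, m))                # [1 0 1], i.e. 5 in binary
```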
Assuming we have fault-tolerant protocols to prepare the two ancilla states, everything else in sight is fault-tolerant. There are a handful of ways to do this. For example, to prepare the encoded all-0 state, one can start with and do several rounds of measurements using some other fault-tolerant protocol, such as the Shor method. One could also attempt to detect errors that occurred during preparation with an extra layer of error detection or do some sort of “distillation” procedure akin to the magic state distillation we will discuss in Section 11.5. One limitation of the Steane method is that it only works for CSS codes. The following exercise describes a procedure that works for all stabiliser codes.
Exercise* 11.15. Consider the following “logical Bell state”, prepared on two blocks of an stabiliser code with encoder :
The Knill fault-tolerant measurement protocol measures the full error syndrome by performing a multi-qubit Bell measurement between our data and one code block of this state:
Show that this acts the same on the quantum state as measuring all of the stabilisers, and show how the syndrome can be computed from the measurement outcome .
As we mentioned in Section 11.4.1.1, the transversal CNOT gate might not be the most convenient way to implement multi-qubit operations in the surface code, because it breaks the 2D planar structure. It would be nice if there were a way to implement multi-qubit operations between neighbouring patches of surface code that only touch the boundaries of the code patches, hence not requiring lots of non-local gates, which might be hard to implement on platforms where qubits are embedded in the plane. This is where lattice surgery comes in: a technique for implementing multi-qubit operations in the surface code (or CSS codes with similar structure) fault-tolerantly. Unlike the transversal gates we considered before, these operations are not unitaries. The first class of operation splits a logical qubit into two, and comes in two varieties:
As these are both isometries, they can be performed deterministically. However, the dual operation that merges two logical qubits into one is non-deterministic:
In other words, each of these operations is a degenerate measurement that projects the 4D space of two logical qubits onto a 2D space, which we can then treat as a single logical qubit. These operations can be combined to produce multi-qubit operations such as a CNOT on the logical level:
up to a possible Pauli error, which can be corrected for in subsequent operations. If one is additionally able to prepare single logical qubits in a handful of fixed states, one can use merge operations to obtain arbitrary single-qubit gates, and hence a universal model of computation. This explains how lattice surgery works on the logical level. To explain what is happening at the physical level, we can push these operations through the encoder. However, unlike previous examples, these operations can actually change the code we are using to encode our logical qubits. The surface code works for any grid of qubits, and has distance . To explain the split and merge operations, we will use not one encoder but a whole family of encoders of the form which embed a single logical qubit into a grid of the appropriate size. Then, the physical operations and should commute with this family of encoders as follows:
We’ll demonstrate these operations concretely on surface code patches, but in fact the same derivation will work for surface code patches of any size. Let’s start with the Z-split, which is performed on a patch of surface code by performing measurements down the -th column as if this were the rightmost X boundary of a surface code patch. This will have the overall effect of splitting the patch in twain. For this derivation, it will be most convenient to use the X-form of the encoder. To perform the split itself, we do measurements down the 3rd column as if this were the right boundary of a surface code patch. In this case, there is only one XX measurement to do. We can then use the -copy rule to push the phase coming from the measurement outcome onto the outputs:
Note we write to mean the two diagrams are equal up to “error correction”, i.e. Pauli operators applied just on the outputs. These can be treated as errors on the physical qubits and corrected later, so we will disregard them in our calculation. Using the complementarity rule (c), we see that the existing stabiliser becomes a pair of stabilisers. This can then be translated into a Z-copy followed by two encoders by unfusing the bottom spider:
Next we do an X-merge by performing stabiliser measurements along the boundary between two vertically-stacked surface code patches, as if these were stabilisers of one big patch. We can eliminate the phases from the encoder by error correction, but this time we pick up a phase of where on the input (or more generally for bigger patches). We can then use the “deformation” trick from equation (11.22) to move the two logical operators on top of each other and apply strong complementarity:
Note that the applications of the spider law, complementarity, strong complementarity, and -copy rules in these two derivations extend naturally to larger surface code sizes. Also note that reversing the colours of these two derivations and rotating gives recipes for the remaining two operations, X-split and Z-merge, using the Z-representation of the surface code embedding rather than the X-representation. This gives us a nice 2D way to build CNOT gates, as well as more general multi-qubit operations described by phase-free ZX diagrams, using just local measurements in the surface code. We already noted in Section 11.4.1.1 that we nearly have transversal Hadamards in the surface code, as long as we are willing to account for the fact that our surface might get rotated 90°. If we also had T gates, we would have a universal set of gates which could be conveniently performed on surface-code-encoded qubits. We can check that the surface code is not a triorthogonal code, so it definitely doesn’t have a transversal T gate. More generally, due to the Eastin-Knill theorem 11.4.5, we know it doesn’t have any universal set of transversal gates. So, we need a different idea. This is where magic state distillation comes in.
We saw all the way back in Section 3.3.1 that if we have access to states of the form:
then we can do magic state injection using just CNOT, S, and a Pauli measurement:
In fact, with lattice surgery in the surface code, we can do this even more directly, since we can just MERGE the magic state right in:
In the case of the surface code, we don’t have a transversal S gate to perform the classical correction. However, if we do state injection for S we only need to do a Pauli correction:
Paulis are just logical operators, so they are always transversal in any error correcting code. Even better, if we have an state and access to H and CNOT, we can use it to catalyse as many S gates as we want, following Section 10.5:
Thus, all we need to boost the surface code (and actually quite a wide family of stabiliser codes) to computational universality is access to one state and lots of states. In fact, as soon as we have states, we can build states, at least probabilistically, so all we really need is enough states. That is what makes these states so magic. We found a code with a transversal gate (Example 11.4.12) and one with a transversal CCZ (Example 11.4.16), but these codes are actually not that good. They are not high distance, and they can’t perform some other useful gates transversally. However, even though these codes are not any good for doing all of our quantum computation, they can help us prepare magic states (and later ) using a technique called magic state distillation. Magic state distillation works by taking many noisy copies of a state and turning them into one less noisy one. To do this, we first form the encoded state, then apply a noisy gate to each of our qubits using a bunch of noisy states and magic state injection, then decode back to a 1-qubit state to produce a less noisy magic state. To convince ourselves this could work, let’s start with an idealised case where we want to produce a magic state on a single qubit and we have access to two things:
perfect, noiseless Clifford operations, and
a procedure that produces a “good” magic state with probability and an erroneous magic state with probability .
We’ll see how to get something approximating these assumptions on our actual hardware later, but this simple case will be enough to get the main idea. Now, let’s start with an state, which we prepare using our “perfect” Clifford operations, and encode it with some error correcting code with a transversal T gate, such as the code from Example 11.4.12. We can then do magic state injections on all 15 of the physical qubits using 15 (possibly erroneous) magic states. We can then decode back to a single qubit using the decoder described in Section 11.2.4, post-selected onto syndrome :
| (11.59) |
If no errors occurred in this process, we will indeed always get a trivial syndrome, which is what we want. If instead we get a non-zero syndrome , then we throw the resulting state away and try again. Note that since we are assuming that all Clifford operations are perfect and noiseless, the only way we could get a non-zero syndrome is because of some fault in the states. Hence if we indeed got a syndrome of , then the resulting ‘distilled’ state only contains an error if the noisy introduced some undetected error. Since the code has distance three, at least three magic states need to be faulty for an error to go undetected. If we assume the errors occur independently, this probability is at most (ignoring the much less likely scenario of getting more than 3 errors). In fact, even distance-3 codes can detect some 3-qubit errors, so the probability can actually be a bit lower than that. For the case of the , there are just 35 undetectable triplets of errors, and so the probability is . Hence, while we started with 15 states that have probability of getting an error, we ended up with 1 state with probability of having an error. So, as long as , we have made progress. In fact, for reasonably small , we have made a LOT of progress. Let’s do some calculations to see how much we have accomplished. First, note that when . So even if we start with states that have an error rate of up to 17%, we can use this protocol to improve them. But let’s suppose we start with , a 1% error rate. There are already many quantum devices that are below this error rate. We see that after a single successful round of distillation, we have . If this isn’t good enough, we just take 15 of these better states and use them in another round of distillation. After a second round we have . A third round gives . This is already a much better error rate than we would ever need to do any realistic computation. So, doing successive rounds of magic state distillation exponentially suppresses the probability of an error. However, it also has an exponential cost in noisy states. For example, doing three rounds of this 15-to-1 protocol requires at least noisy states. However, we haven’t yet accounted for the states we have to throw away when the protocol fails. Our protocol aborts when we get a non-trivial syndrome. By far the most likely reason to get a non-trivial syndrome is a single error. The probability of this happening is . So in the first layer of distillation, when we have , the probability the protocol fails is . To get the actual cost estimate, we hence need to multiply the cost of running the protocol once by the estimated number of runs we need to get a successful run. For the first layer this is . Hence, the expected cost to produce one state of error is instead of . By the time we get to the second and third layers, the probability of getting an error is already so small that the expected cost is very close to 15. Multiplying , we see we can distill a very, very good state for an expected cost of around 4000 noisy ones. Inserting some magic into our computation sure seems to come at a cost. But while this is expensive, it is not totally crazy. This was one of the first protocols for doing magic state distillation, and it’s possible to bring this cost down quite a bit by finding better codes and protocols.
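The numbers in this paragraph are easy to reproduce. Here is a short sketch of the cost calculation, using the per-round map p → 35p³ and approximating the abort probability by the single-fault term 15p(1−p)¹⁴; the constants match the text, the code itself is illustrative.

```python
p = 0.01            # starting error rate of the noisy magic states
total_cost = 1.0    # expected number of noisy states per output state

for round_no in (1, 2, 3):
    p_fail = 15 * p * (1 - p) ** 14      # abort-and-retry probability
    total_cost = total_cost * 15 / (1 - p_fail)
    p = 35 * p ** 3                      # error rate after this round
    print(round_no, p, total_cost)

# round 1: p ~ 3.5e-05, expected cost ~ 17 noisy states
# round 3: p ~ 1.2e-34, expected cost ~ 4000 noisy states
```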
In the next section, we will show a state-of-the-art distillation protocol using almost all of the ingredients we’ve covered in this book so far, but before we get there, let’s revisit the two simplifying assumptions we made at the start. The first was that we can do perfect, noiseless Clifford operations. Of course, nothing in quantum-computing land is noiseless, but if we have a family of error correcting codes of increasing distance with fault-tolerant implementations of all Clifford operations, we can perform Cliffords with arbitrarily low levels of noise. To benefit from this, we can do magic state distillation inside another code that has high distance and fault-tolerant implementations of Clifford gates. Rather than starting with 15 physical states and performing a round of distillation using physical gates and measurements, we prepare 15 noisy logical states:
and then perform the whole protocol (11.59) encoded within . Since doesn’t have transversal gates, we can’t prepare fault-tolerantly, but we can still prepare possibly faulty states, which we then distill to get better ones. Generally to make this procedure as efficient as possible, we want to pick an ‘ambient’ code whose probability of producing a logical error during the distillation protocol is just a bit lower than the logical error of the produced distilled states. This makes the Clifford operations ‘just good enough’ so that we can treat them as perfect, while not being overkill by using an overly expensive code.
Remark 11.5.1. The surface code doesn’t have transversal gates, so we don’t know yet that it implements the full Clifford group. Nevertheless, we can still perform the analogous protocol to (11.59) using only CNOT, H, and Paulis in order to distill an state. As we already remarked at the end of Section 11.4.3, once we have one good state, we can use it to catalyse as many fault-tolerant gates as we need, and hence the full Clifford group.
The second assumption was that the only kind of error we get is getting instead of with some probability . Of course, many kinds of errors could happen when trying to prepare , not just those that make a logical error of . However, there is a nice trick, called twirling, which we can use to project all the other kinds of errors that might happen down to either “no error” or a error. Start with a state subject to arbitrary noise, and then with 50% probability, apply a logical unitary . After we do this, we can treat the resulting state as either staying the same ( is an eigenstate of after all) or flipping to with some small probability. It probably seems rather counter-intuitive that this works. The best way to understand this is in the language of quantum mixed states and channels from Section* 2.7.1. We leave the details as a starred exercise:
Exercise* 11.16. In this exercise we will find out why we may assume that the error in preparing a is only a , and nothing else.
Show that and are eigenstates of the operator .
Let be the quantum channel acting on a qubit density matrix via . Show that and are fixed points of and that it sends and to zero.
Conclude that is a projector to the space spanned by and .
Argue that we hence have for some probability .
We see then that the resulting mixed state is a convex combination of the pure states and , which behaves identically to a perfect state that gets a -flip with probability .
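For readers who want to verify the twirling argument numerically, here is a small sketch. It assumes the standard magic state |T⟩ = (|0⟩ + e^{iπ/4}|1⟩)/√2 and the commonly used twirling Clifford U = e^{−iπ/4}·S·X, which fixes |T⟩ and negates the orthogonal state; these concrete choices are assumptions of the sketch, not taken from the text.

```python
import numpy as np

w = np.exp(1j * np.pi / 4)
T = np.array([1, w]) / np.sqrt(2)          # the magic state |T>
T_perp = np.array([1, -w]) / np.sqrt(2)    # its orthogonal complement

S = np.diag([1, 1j])
X = np.array([[0, 1], [1, 0]])
U = np.conj(w) * S @ X                     # the twirling Clifford

assert np.allclose(U @ T, T)               # U fixes |T>
assert np.allclose(U @ T_perp, -T_perp)    # U negates |T_perp>

def twirl(rho: np.ndarray) -> np.ndarray:
    """The channel rho -> (rho + U rho U^dagger) / 2."""
    return (rho + U @ rho @ U.conj().T) / 2

# off-diagonal terms in the {|T>, |T_perp>} basis are projected away,
# leaving a convex combination of |T><T| and |T_perp><T_perp|
rho_offdiag = np.outer(T, np.conj(T_perp))
assert np.allclose(twirl(rho_offdiag), 0)
```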
In summary, we have managed to convince ourselves that magic state distillation works, even starting from fairly realistic assumptions about what we can do on a real quantum computer. However, 4000-to-1 is still pretty expensive, so let’s see if we can bring that cost down.
We can use magic state distillation to build other kinds of non-Clifford states as well. A particularly handy one is the CCZ magic state:
In this section, we’ll build a distillation protocol for it based on the code given in Example 11.4.16. In this code we can use 8 gates to implement a CCZ gate:
Using this property, we can start with an encoded state and apply transversal T gates to end up with a magic state. This in turn can be used to inject a CCZ gate at will via the procedure described in Exercise 9.5, using just Clifford operations. We can use this 3-to-8 code to distil a with a lower error probability than the input states using the repeat-until-success approach we described above. Because the code is distance , it can detect up to one error. As there is just one stabiliser with support on all 8 qubits, we know that any combination of two errors leads to a trivial syndrome and hence will not be detectable, so any pair of errors goes undetected and spoils the output. An input error probability of for the gates hence gets boosted to . This procedure then has a threshold of , or for improving the error in the CCZ. Note that this protocol is not directly stackable: it takes in magic states, but outputs a magic state. For this reason (and another one we will see soon), this distillation protocol is mostly used as a final stage, where we already have relatively high-quality states, and we wish to convert them into a still higher quality . Suppose for instance that we start with states with an error rate of (current machines get pretty close to being able to prepare states with this precision). Then one round of the 15-to-1 distillation described above gives us . Using these states in our CCZ distillation protocol gives us a CCZ gate which has an error probability of . That is: we can expect to do about 30 trillion CCZ gates before we encounter one that is faulty. The cost for this is only about noisy states per distilled CCZ. Instead of using the magic states to implement a CCZ gate, we can also use them to implement gates. As we saw in Section 10.5, a CCZ gate can be converted back into two separate states using catalysis. If we do that with the CCZs produced by this protocol, this then gives us a pipeline to convert 8 gates into one better CCZ, which is then converted into gates. This is then effectively an 8-to-2 protocol with an error improvement of . But there is an important detail here we shouldn’t forget: when the protocol fails, i.e. when more than one error happens so that we get a CCZ with an error, then when we convert this CCZ into two states, both of these states will potentially inherit this error. Hence, when we do this we will get correlated errors. If these gates are directly used in the final computation we wish to execute, this is not a problem: there we would consider any single error already catastrophic, so the probability of a double error being higher doesn’t affect the overall success probability of the computation. However, if we were to reuse these potentially “damaged” states in another round of distillation, we might get a much higher chance of the protocol failing, and hence not get as good a quality of states out of it as we would expect. Despite these problems, an 8-to-2 rate is clearly a lot better than 15-to-1, and as such this catalysis-based protocol, or variations of it, forms part of some of the best-performing proposals for magic state distillation out there. In particular, as it takes 120 for one CCZ, catalysing this back into 2 ’s gives an effective rate of 60 states with error getting converted into 1 state with error .
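Here is a quick sketch reproducing these estimates, under the assumption (consistent with the counting above) that an undetected fault needs at least two of the 8 T states to be faulty, giving an error rate of roughly C(8,2)p² = 28p² for the distilled CCZ; the input numbers are from the text, the code is illustrative.

```python
from math import comb

p_raw = 1e-3                    # raw T-state error rate
p_t = 35 * p_raw ** 3           # after one 15-to-1 round: ~3.5e-8
p_ccz = comb(8, 2) * p_t ** 2   # ~3.4e-14: roughly 30 trillion good CCZs
cost = 15 * 8                   # ~120 noisy T states per distilled CCZ
print(p_t, p_ccz, cost)
```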
A quantum error correcting code is a subspace of -qubit space. It lets us detect and correct errors by performing measurements to see if we have left the subspace.
Stabiliser codes are a family of quantum error correcting codes we can efficiently represent and manipulate using stabiliser theory and/or the Clifford ZX-calculus. They are subspaces of the form:
for a stabiliser group .
An stabiliser code encodes logical qubits in physical qubits and can detect any Pauli error of weight and correct any error of weight where .
We can relate logical qubits to physical qubits via an isometry called the encoder map:
For CSS codes, the encoder can always be written in one of two simple equivalent forms: the X-form and the Z-form. Using the scalable ZX calculus, these are:
where the columns of and represent X-logical operators and X-stabilisers, respectively, and similarly the columns of and represent Z-logical operators and Z-stabilisers.
The surface code is a well-studied family of codes, with many nice properties such as high levels of noise resistance, efficient decoding, and the ability to implement it using just 2D nearest-neighbour gates.
In order to achieve fault-tolerance, we must find ways to implement error correction, as well as the building blocks of universal quantum computation (e.g. state preparation, unitaries, and measurements) on encoded qubits without spreading errors.
Transversal unitary operations are fault tolerant because they only involve tensor products of operations on one qubit from each code block. However, no single code admits a universal set of transversal gates, due to the Eastin-Knill theorem.
CNOT gates are transversal for CSS codes and H gates are transversal for self-dual CSS codes:
Strongly triorthogonal codes have transversal T gates:
and triorthogonal codes have transversal T gates up to a (possibly non-transversal) Clifford unitary on the physical qubits.
Fault-tolerant syndrome-extraction protocols perform stabiliser measurements without spreading errors on the data qubits. Many of these protocols, like Shor’s, can be seen as “unfusing” the Pauli projector:
Magic state distillation lets us turn many noisy copies of a magic state into fewer, less-noisy copies:
This can be very costly, requiring thousands of noisy states to distil a very clean one.
Using a high-distance code with transversal Clifford operations, such as the surface code, along with magic state distillation, is a promising method for implementing universal fault-tolerant quantum computation.
Origins of quantum error correction All the core ideas of quantum error correction—quantum codes, fault-tolerant computations, the threshold theorem, stabiliser theory—originated in a flurry of activity between 1995 and 1999, with especially significant contributions by Shor, Steane, Gottesman and Kitaev. Many ideas were discovered independently by several people at the same time. The original idea of quantum error correction was introduced independently by [211] and [216]. General conditions that any (not necessarily stabiliser) code must satisfy to correct errors were independently found in [27] and [156] and are now sometimes called the Knill-LaFlamme conditions. The “History and further reading” section of Chapter 10 of [188] provides many more references to the early developments in quantum error correction.
Origin of some quantum codes The 9-qubit Shor code was discovered by, well, Shor [211] and the 7-qubit Steane code by, you guessed it, Steane [216]. The 5-qubit code was discovered independently (again) by both [27] and [161]. Stabiliser theory and the idea of stabiliser codes was introduced by [112]. The idea to combine a pair of orthogonal classical codes into a single quantum code was discovered independently by [45] and [217] and hence these are named Calderbank-Shor-Steane (CSS) codes in their honour. The ‘smallest interesting colour code’ was described as such by [46], where Campbell shows that this code has a transversal CCZ gate (note that the 7-qubit Steane code is the smallest non-trivial colour code, but since it has no non-Clifford transversal gates it was not interesting according to that post).
Fault tolerance An early statement and proof of a fault-tolerant threshold theorem goes back to [212]. This was improved upon by many other groups, including [5], [157], [153], and [8] under various assumptions and error models. Shor’s protocol for fault-tolerant stabiliser measurements is from [212], Steane’s from [218], and Knill’s from [155].
Transversal gates A characterisation of when a stabiliser code has a transversal Hadamard or CNOT gate was given, for instance, in the thesis of [112], where he in fact shows that if a stabiliser code has a transversal CNOT, then it must be a CSS code. The characterisation of transversal diagonal gates from the Clifford hierarchy is based on [235], though the ZX version we present here is from [149]. In [235] they also give an efficient algorithm for finding codes with transversal diagonal gates.
Surface codes The surface code was introduced by [38], based on the slightly older toric code of Kitaev [152, 153] which considers a 2D lattice defined on a torus, i.e. a donut. So whereas the surface code has an actual boundary where the lattice ends, in the toric code the surface loops around to create the surface of a donut. An in-depth study of correcting errors on the surface code by identifying connecting lines, and doing universal computation using transversal CNOT gates and magic state injection was done in [81]. There they also found a first estimate of a threshold for the surface code. Transversal CNOTs are of course not practical for surface codes. In [200, 197] they use the method of introducing punctures, i.e. holes, into a surface code in order to encode multiple qubits into a single surface. The distance of the code is then the distance between two holes and the boundary. We can then perform two-qubit gates by deforming the code and ‘rotating the holes around each other’. They find that this way of performing computations gives a threshold of 0.75% (later improved to in [231]). A more accessible description of these results is given in [100], and an extensive review of this topic in [99] where they also give an estimate that running Shor’s algorithm to factor a 2000-bit number would take about 200 million qubits and a full day of computation. The more compact, ‘rotated’ version of the surface code we use was first introduced in [130, Section 7.1].
Decoding and perfect matching Decoding classical linear codes is in general NP-hard [28], and even approximating the minimal-weight decoding remains NP-hard [18]. This remains the case for quantum (stabiliser) codes [132] (note that this does not follow immediately, since for stabiliser codes we only care about decoding up to stabilisers, and this kind of degeneracy is not present in the classical case, so that a priori the problem might have become easier). However, this hardness only holds for arbitrary codes with non-local stabiliser generators. The minimum-weight perfect matching problem can be solved efficiently using the blossom algorithm [93]. However, even though it is efficient in the asymptotic sense, a practical implementation must be really fast, and hence people have spent a lot of effort on refined algorithms that sacrifice optimality but run very fast or use only local data [230, 128, 80, 134, 214]. PyMatching is an open-source Python package that implements several methods for decoding topological codes [127].
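To give a flavour of what matching-based decoding looks like in practice, here is a minimal sketch using PyMatching. This is our own illustration, not an example from [127]: for simplicity the check matrix below is that of the 3-bit classical repetition code rather than a genuine topological code, but the call pattern for, say, a surface code is the same, just with a larger check matrix.

```python
import numpy as np
from pymatching import Matching

# Parity-check matrix of the 3-bit repetition code:
# row i checks that bits i and i+1 agree.
H = np.array([[1, 1, 0],
              [0, 1, 1]])

# PyMatching turns the check matrix into a matching graph:
# one node per check, one edge per possible bit-flip.
matching = Matching(H)

# A violated first check (with the second check satisfied) is
# most cheaply explained by a single flip of bit 0.
syndrome = np.array([1, 0])
correction = matching.decode(syndrome)
print(correction)  # [1 0 0]
```

The decoder returns a minimum-weight error consistent with the syndrome; applying this correction returns the received word to the codespace, though (as always with minimum-weight decoding) it may differ from the actual error when more errors occurred than the distance allows for.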
Lattice surgery As might be clear from those latter numbers, performing CNOTs by braiding holes around each other tends to be expensive. The idea of merging and splitting rectangular patches of surface code by lattice surgery was introduced by [130]. That this indeed seems to be much more efficient than braiding was argued in [98]. An experimental demonstration of lattice surgery on real hardware was presented in [95].
Error correcting codes and ZX The surface code was first presented in the ZX-calculus by [129], who also found that the logical function of the merging and splitting operations is actually just Z- and X-spiders [77]. [89] was the first paper to use the ZX-calculus to verify the correctness of an error correcting code (the Steane code), which was followed up by a verification of ‘the smallest interesting colour code’ [101], which we gave in Example 11.4.16. In [51] they used a proto-version of the scalable ZX notation to find a new class of quantum codes. The correspondence between phase-free ZX-diagrams and CSS codes was established by [142], where he also proved the correctness of lattice surgery. The ZX description of transversal non-Clifford gates in triorthogonal codes is from [149].
Magic state distillation The concept of distilling a noisy non-Clifford state by encoding it in a code with a transversal non-Clifford gate was introduced by [36], where they found the 15-to-1 protocol. This was generalised to an entire family of protocols based on triorthogonal codes in [35]. A protocol based instead on a code with a transversal Hadamard was given in [172]. This works a bit differently: we use the transversal Hadamard to perform a logical Hadamard-eigenbasis measurement, which distills the Hadamard eigenstate, a state that is Clifford equivalent to the T magic state. The CCZ distillation used to implement T gates via catalysis was introduced in [105]. A comprehensive analysis of running Shor’s algorithm to factor a 2048-bit number using surface code lattice surgery with this improved catalysed CCZ distillation scheme was given in [104], where they find you require 20 million qubits and 8 hours of computation time, a large improvement over the older scheme using braiding and iterated 15-to-1 distillation.
Even further reading An excellent overview of quantum error correction and fault-tolerance can be found in the book of [114], which at the time of this writing was available as a preprint freely online. A standard, comprehensive, and approachable text on classical error correction is [167]. An accessible and quite comprehensive fault-tolerant quantum computing scheme based on surface code lattice surgery and magic state distillation is A Game of Surface Codes by [164].