Partial Syndrome Measurement for Hypergraph Product Codes

This article is meant to be a less technical summary of my paper, Partial Syndrome Measurement for Hypergraph Product Codes, as well as a cool visualization of decoders. Skip to the end if you just want to play with the small-set flip decoder.

Before we introduce quantum error correction, let's start with some classical error correction. What exactly is error correction, and why would you need it? When sending information over a noisy channel, it's possible that the message could be corrupted in some way before it reaches the receiver. Error correction lets us redundantly encode the message in such a way that the receiver can recover the original message, even in the presence of errors.

For much the same reason, we need quantum error correction to be able to do any sizable computations on noisy quantum computers.

Formally, an error correcting code is a transformation that maps messages of length $k$ to encoded messages of length $n$. If the code has distance $d$, it can correct any $\lfloor \frac{d-1}{2} \rfloor$ errors, and we call it a classical $[n,k,d]$ code.

Here we have a $[7,4,3]$ classical error correcting code as represented by a bipartite Tanner graph. It redundantly encodes messages of length 4 into ones of length 7 and can tolerate $\lfloor \frac{3-1}{2} \rfloor = 1$ error on any of the bits.

Valid messages have the property that all of the checks are satisfied. If we receive a message where some checks are unsatisfied (a nonzero syndrome), then we know that an error has occurred that we need to correct.
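To make this concrete, here is a minimal sketch in Python of how a syndrome is computed, using one standard parity-check matrix for the $[7,4,3]$ Hamming code (the Tanner graph in the illustration may use an equivalent but differently labeled matrix):

```python
import numpy as np

# One standard parity-check matrix for the [7,4,3] Hamming code.
# (The Tanner graph in the illustration may use an equivalent labeling.)
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def syndrome(H, word):
    """Each entry is 1 if the corresponding check is unsatisfied."""
    return (H @ word) % 2

codeword = np.array([1, 1, 0, 0, 1, 1, 0])    # satisfies all three checks
print(syndrome(H, codeword))                  # [0 0 0] -> valid message

error = np.zeros(7, dtype=int)
error[4] = 1                                  # flip a single bit
print(syndrome(H, (codeword + error) % 2))    # nonzero -> error detected
```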

Throughout the article, the interactive illustration will update to go along with the text. Try exploring by hovering and clicking, or learn how to use it by hovering over the question mark in the upper right corner.

Flip Decoding

Classical linear codes like the $[7,4,3]$ code can be decoded with the flip decoder, a surprisingly simple procedure. This decoder is the basis for the quantum decoder we look at later. The algorithm is as follows:

  • Flip a bit if it will result in a greater total number of satisfied checks.
  • Repeat until a codeword is reached or no flip decreases the syndrome weight.
That's it. It's kind of amazing that this algorithm works at all, and yet it is guaranteed to succeed for codes made from sufficiently large expander graphs.
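Here is a minimal sketch of the procedure in code. It greedily applies the single best flip each round, whereas the decoder in the visualization may flip any improving bit; both follow the same idea:

```python
import numpy as np

def flip_decode(H, word, max_iters=100):
    """Bit-flip decoding sketch: greedily flip the bit that most lowers
    the number of unsatisfied checks; stop at a codeword or a local minimum."""
    word = word.copy()
    for _ in range(max_iters):
        syn = (H @ word) % 2
        if not syn.any():
            return word                       # all checks satisfied
        best_bit, best_weight = None, syn.sum()
        for bit in range(H.shape[1]):
            trial = word.copy()
            trial[bit] ^= 1
            weight = ((H @ trial) % 2).sum()
            if weight < best_weight:
                best_bit, best_weight = bit, weight
        if best_bit is None:
            break                             # no flip helps: decoding stops
        word[best_bit] ^= 1
    return word
```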

The way to read this chart: for each bit, the bar indicates whether flipping that bit would make the resulting syndrome weight larger (more unsatisfied checks) or smaller (fewer unsatisfied checks).

Masking

The main concept the paper introduces is that of masking. It comes from the question: what if we didn't use the information from every check node when doing error correction?

For classical error correction, there isn't really a practical need to do this; however, as we will briefly argue in the quantum case, masking could potentially be useful for certain code types and quantum computing architectures.

By masking check nodes, we are changing what information is available to the flip decoder. In some cases, a mask has no effect on the decisions made by the decoder, but in extreme cases a mask could completely hide errors. This is reflected in the visualization by the bars changing size, even without adding any errors.
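In code, one simple way to picture a mask is to hide the corresponding rows of the parity-check matrix from the decoder. This sketch reuses `flip_decode`, `H`, and `syndrome` from above; the boolean `mask` marking hidden checks is just an illustrative convention:

```python
import numpy as np

def masked_flip_decode(H, word, mask, **kwargs):
    """Run the flip decoder using only the checks the mask lets us see.
    `mask` is a boolean array, True for checks hidden from the decoder."""
    H_visible = H[~np.asarray(mask)]          # drop the masked rows of H
    return flip_decode(H_visible, word, **kwargs)

# Example: hide the third check and decode a corrupted codeword.
mask = np.array([False, False, True])
received = np.array([1, 1, 0, 0, 0, 1, 0])    # the codeword above with one bit flipped
decoded = masked_flip_decode(H, received, mask)
print(decoded, syndrome(H, decoded))          # every visible check passes, but the
                                              # full syndrome can still be nonzero
```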

Whether or not quantum error correction is possible with masking is the main question of the paper.

Quantum Error Correction

Without going into the quantum mechanics, there are now two types of error that can affect the quantum bits (qubits), X- and Z-type errors. As such, we need quantum codes that can diagnose and correct both types.

The codes we mainly focus on in the paper are hypergraph product codes. This construction takes two classical codes and creates a quantum code by essentially taking the graph product of them. Intuitively, this means that, when the quantum code is laid out as shown, each row and each column is a copy of the underlying classical code.

Taking the hypergraph product of two copies of the $[7,4,3]$ classical code above yields the $[[58,16,3]]$ quantum code here.
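Here is a sketch of the construction, using one common convention for the two check matrices, $H_X = [H_1 \otimes I \mid I \otimes H_2^T]$ and $H_Z = [I \otimes H_2 \mid H_1^T \otimes I]$ (other papers order the blocks differently); `H` is the $[7,4,3]$ matrix from the earlier sketch:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """Build X- and Z-type check matrices of the hypergraph product
    of two classical codes with parity-check matrices H1 and H2."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))])
    return HX, HZ

HX, HZ = hypergraph_product(H, H)        # two copies of the [7,4,3] code
print(HX.shape, HZ.shape)                # (21, 58) and (21, 58) -> 58 qubits,
                                         # the [[58,16,3]] code from the article
assert not ((HX @ HZ.T) % 2).any()       # X- and Z-type checks commute
```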

For hypergraph product codes, the X- and Z-type errors can be corrected independently. When we use two copies of the same classical code, as we do here and in the paper, the two correction problems have the same performance, so we only look at correcting X-type errors.

Small-Set Flip Decoding

To decode hypergraph product codes, we can use the small-set flip (SSF) decoder, which is inspired by the flip decoder we used for classical expander codes.

The idea for the flip decoder was to look for bit flips that decrease the syndrome weight. Expanding this idea slightly gives us the following algorithm:
  • In the support of each X-type check, find the smallest set of qubits which, when flipped, decreases the syndrome weight by the most.
  • Rank all X-type checks and perform the best small-set flip.
  • Repeat until zero syndrome is reached or no flip decreases the syndrome weight.
The SSF decoder has a number of attractive properties, such as being able to tolerate large random errors, as well as noise in the syndrome.

The X-type checks are colored according to the relative syndrome weight difference of the best small-set flip for that check. This is calculated by taking the resulting syndrome weight difference and dividing by the number of flips in the correction.
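Here is a brute-force sketch of the SSF decoder that ranks candidate flips by exactly this relative syndrome weight difference. It enumerates every subset of each check's support, which is only feasible because the supports are small here; real implementations are cleverer about the search:

```python
import numpy as np
from itertools import combinations

def ssf_decode(HZ, HX, syndrome, max_iters=100):
    """Small-set flip sketch for X errors: the syndrome comes from the
    Z-type checks (rows of HZ); candidate flips are subsets of the
    support of each X-type check (rows of HX)."""
    n = HZ.shape[1]
    correction = np.zeros(n, dtype=int)
    syndrome = syndrome.copy()
    for _ in range(max_iters):
        best_flip, best_score = None, 0.0
        for row in HX:
            support = np.flatnonzero(row)
            # Try every non-empty subset of this check's support.
            for size in range(1, len(support) + 1):
                for subset in combinations(support, size):
                    flip = np.zeros(n, dtype=int)
                    flip[list(subset)] = 1
                    new_weight = ((syndrome + HZ @ flip) % 2).sum()
                    score = (syndrome.sum() - new_weight) / size
                    if score > best_score:
                        best_flip, best_score = flip, score
        if best_flip is None:
            break                              # no flip reduces the syndrome
        correction = (correction + best_flip) % 2
        syndrome = (syndrome + HZ @ best_flip) % 2
        if not syndrome.any():
            break                              # syndrome cleared
    return correction
```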

Just as we did with the classical code, we can mask some (Z-type) checks and see how it affects the small-set flip decoder.

For quantum codes, there is actually a practical reason you might want to mask checks. To determine whether a check is satisfied or unsatisfied, we have to measure it by interacting with the qubits in its support. For some quantum computing architectures, like nuclear magnetic resonance or superconducting qubits, qubits can only interact with their 2D nearest neighbors, which poses a problem if the qubits we want to interact with are far apart.

The basic idea, and the underlying motivation for this paper, is that during error correction we use the checks whose qubits are far apart less frequently than the checks whose qubits are close together. That is, some checks have a mask on them during decoding. Doing this has the possibility of making quantum computation on 2D-local architectures easier.

The first step is seeing if error correction is even possible when we don't use every check while decoding.
In the paper, we investigate exactly that question. We looked at a family of hypergraph product codes of up to 8,784 qubits and ran simulations of a multi-round decoding protocol using the SSF decoder. The protocol consisted of doing several rounds of masked error correction, followed by rounds with a smaller mask or no mask at all.
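To give a flavor of what such a protocol looks like, here is a schematic sketch with made-up round counts and a fresh random mask each round; it ignores new errors and syndrome noise between rounds, so it illustrates the structure rather than reproducing the paper's actual simulation (it reuses `ssf_decode` from above):

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_protocol(HZ, HX, error, mask_fraction=0.5,
                    masked_rounds=4, unmasked_rounds=2):
    """Schematic multi-round protocol: decode with a random mask on the
    Z-type checks for several rounds, then finish with the full syndrome.
    Round counts and mask choice are illustrative, not the paper's values."""
    correction = np.zeros_like(error)
    for round_idx in range(masked_rounds + unmasked_rounds):
        if round_idx < masked_rounds:
            keep = rng.random(HZ.shape[0]) > mask_fraction   # unmasked checks
            HZ_visible = HZ[keep]
        else:
            HZ_visible = HZ                                  # full syndrome
        syn = (HZ_visible @ ((error + correction) % 2)) % 2
        correction = (correction + ssf_decode(HZ_visible, HX, syn)) % 2
    return (error + correction) % 2      # residual error after the protocol
```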

We found evidence to suggest that error correction is possible (although with poor performance) even if you randomly mask 50% of the checks.

This result is a promising step toward using masks to do quantum computation in 2D, but there is lots of work to do to figure out if the procedure as a whole is possible:
  • How do you lay out the qubits of the code in 2D in an optimal way?
  • How long does it take to physically measure the masked/unmasked checks?

The End

Thanks for reading! Check out the paper here if you're interested in learning more.

Much of the scrollytelling boilerplate and styles came from the great tutorial and example by Cuthbert Chow.


Try out decoding a larger code.