IBM researchers say they’ve found a better way to detect and correct errors that could compromise quantum computing. Specifically, they can detect two types of error at the same time, something previously not possible.

Quantum computing is based around a simple, if somewhat mind-numbing, concept. 'Traditional' computing breaks information down into bits, which can each be a 0 or a 1. That's represented in physical form by an electronic switch in an integrated circuit, with the switch being either on or off.

However, quantum computing is based around physical components that take advantage of quantum mechanics: in very simplified terms, that's where a particle can exist in two states at the same time. Instead of bits, quantum computing uses qubits, which can represent a 0, a 1, or a combination of both at once (known as a superposition).
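The "0, 1, or both" idea can be made concrete with a little linear algebra (this sketch is illustrative, not part of IBM's announcement): a qubit's state is a two-component vector of amplitudes, one for 0 and one for 1.

```python
import numpy as np

# A qubit's state is a 2-component vector of amplitudes:
# one for the value 0, one for the value 1.
ket0 = np.array([1, 0], dtype=complex)   # definitely 0
ket1 = np.array([0, 1], dtype=complex)   # definitely 1

# An equal mix of both -- "0 and 1 at the same time" until measured.
plus = (ket0 + ket1) / np.sqrt(2)

# The chance of reading each value is the squared size of its amplitude.
probs = np.abs(plus) ** 2
print(probs)  # [0.5 0.5]
```

When the qubit is measured, it gives 0 or 1 with those probabilities; the superposition only exists up to that point.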

That in turn means a computer could be carrying out two calculations (and in a wider sense, processing two sets of data) at the same time, with a particular qubit representing a 0 for one calculation and a 1 for the other calculation. Apply that to all the available qubits and in theory you get a spectacular increase in computing power.
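The scaling behind that "spectacular increase" can be sketched in a few lines (again an illustration, not IBM's code): n qubits give a joint state of 2^n amplitudes, and a single operation acts on all of them at once.

```python
import numpy as np

# Two qubits give a joint state of 2**2 = 4 amplitudes; n qubits give
# 2**n. One operation on the joint state touches every amplitude at
# once, which is the source of the "two calculations at once" picture.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # puts a qubit into superposition

state = np.array([1, 0, 0, 0], dtype=float)   # both qubits start at 0
state = np.kron(H, H) @ state                 # one step acts on both qubits

print(np.abs(state) ** 2)  # [0.25 0.25 0.25 0.25]: all four inputs at once
```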

The problem is that for both physical and logistical reasons, quantum computing also offers more scope for errors. One type of error is a straightforward "bit flip," as seen in traditional computing: in effect, the computer incorrectly reads a 0 as a 1 or vice versa. The other type is a "phase flip" (also called a sign flip). Again in simplified terms, when a qubit is in a superposition of 0 and 1, a phase flip reverses the sign of the relationship between the two parts: a state of "0 plus 1" becomes "0 minus 1." The probabilities of reading a 0 or a 1 don't change at all, which is part of what makes this kind of error so hard to spot.
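In the standard physics notation (not used in IBM's announcement, but conventional in the field), the two error types are the Pauli X and Z operations. A minimal numpy sketch shows why the phase flip is the sneakier of the two:

```python
import numpy as np

# Bit flip (X) and phase flip (Z) as 2x2 matrices acting on a
# qubit's (amplitude-for-0, amplitude-for-1) vector.
X = np.array([[0, 1], [1, 0]])    # swaps the 0 and 1 amplitudes
Z = np.array([[1, 0], [0, -1]])   # negates the sign of the 1 amplitude

plus  = np.array([1, 1]) / np.sqrt(2)   # "0 plus 1"
minus = np.array([1, -1]) / np.sqrt(2)  # "0 minus 1"

# A bit flip turns a definite 0 into a definite 1.
assert np.allclose(X @ [1, 0], [0, 1])

# A phase flip turns "0 plus 1" into "0 minus 1" while leaving the
# 0/1 probabilities untouched -- which is why it needs its own check.
assert np.allclose(Z @ plus, minus)
assert np.allclose(np.abs(Z @ plus) ** 2, np.abs(plus) ** 2)
```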

To date it has proven challenging to detect both of these possible errors at the same time, something that's necessary if quantum computing is to reach its potential.

The solution found by IBM is, to say the least, complicated: the company describes it as code that "detects arbitrary single-qubit errors in a non-demolition manner via syndrome measurements". In simpler terms, it not only spots an error and identifies its nature, but does so without destroying the state of the qubit concerned.
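The idea behind a syndrome measurement has a classical cousin that's easy to demonstrate. The toy below is a classical analogue only (IBM's actual circuit is quantum, and handles phase flips too): encode one bit as three copies, then read only the parities of neighbouring pairs. The parities locate a flip without ever reading out the encoded value itself, which is the "non-demolition" flavour.

```python
# Classical toy analogue of a syndrome measurement: a 3-bit
# repetition code. We never read the bits directly -- only the
# PARITIES of neighbouring pairs -- yet that tells us exactly
# which bit flipped.
def syndrome(bits):
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

def correct(bits):
    # Each syndrome pattern points at the flipped position.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(bits))
    if flip is not None:
        bits[flip] ^= 1
    return bits

# Encode 1 as [1, 1, 1], corrupt the middle copy, then recover.
assert correct([1, 0, 1]) == [1, 1, 1]
# The same syndrome logic works whatever the encoded value is.
assert correct([0, 1, 0]) == [0, 0, 0]
```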

However, the implementation involves a surprisingly simple technique. IBM found that the error correction works if the quantum circuit arranges the qubits in a lattice of 2×2 squares, rather than the more common linear arrangement.

In the image above, which shows IBM's test circuit with qubits marked Q1 through Q4, at any given moment Q1 could check Q3 and Q4 for bit flips while Q2 checks the same pair for phase flips. Repeatedly swapping the roles each qubit plays means that every individual qubit is frequently checked for both error types.
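The role-swapping idea can be sketched as a simple schedule. The assignment below is an illustration built from the article's description, not IBM's published circuit: in each round two qubits act as checkers, and over successive rounds every qubit gets covered for both error types.

```python
# Illustrative checking schedule for the 2x2 lattice (the exact role
# assignment is an assumption based on the article, not IBM's
# published circuit). Each round, two qubits act as checkers.
rounds = [
    {"bit_checker": "Q1", "phase_checker": "Q2", "checked": {"Q3", "Q4"}},
    {"bit_checker": "Q3", "phase_checker": "Q4", "checked": {"Q1", "Q2"}},
]

# After both rounds, every qubit has been checked for both error types.
covered = set()
for r in rounds:
    covered |= r["checked"]
assert covered == {"Q1", "Q2", "Q3", "Q4"}
```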

IBM says that making the error correction work on a larger lattice — necessary to make this a practical answer — will be tricky, but that the testing has proven the principle works.