Error-correcting codes for possible future use on quantum computers


At IBM Quantum, we are well aware that fault-tolerant universal quantum computers will not be rolling off a production line any time soon. Rather, we anticipate that quantum computers will evolve in stages, just as conventional computers did, as the underlying hardware and error-handling techniques improve. Over that time, we expect their usefulness to grow steadily, allowing them to solve increasingly difficult problems.

Error correction is often seen as a technology that lies some way in the future, since it won't be practical until the right methods have been discovered and the hardware has matured sufficiently. Indeed, many in the field believe that fault-tolerant quantum computing (FTQC) will require millions of physical quantum bits (qubits), a number we consider too high to be practical at this stage of research.

Error correction with codes

Error correction is not a novel idea, but quantum error correction (QEC) demands considerably more care. Bit-flip errors, in which bits inadvertently switch from 0 to 1 or vice versa, are the only kind of error that standard classical error correction needs to fix. Quantum computers must also correct other types of errors, such as phase errors, which can corrupt the additional quantum information that qubits carry. On top of this, QEC schemes must fix faults without copying unknown quantum states, which the no-cloning theorem forbids, and without destroying the underlying quantum state.
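
To see why phase errors have no classical counterpart, here is a minimal numpy sketch (an illustration of the general point, not anything from IBM's hardware): a bit flip (Pauli X) and a phase flip (Pauli Z) act very differently on a superposition, yet the phase flip leaves measurement probabilities untouched.

```python
import numpy as np

# Pauli X models a bit flip; Pauli Z models a phase flip.
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# The superposition |+> = (|0> + |1>) / sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)

print(X @ plus)   # bit flip: |+> is unchanged
print(Z @ plus)   # phase flip: |+> becomes |->

# Both states give identical measurement probabilities |amplitude|^2,
# so a phase error is invisible to a purely classical parity check.
print(np.abs(plus) ** 2, np.abs(Z @ plus) ** 2)
```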

These codes work by encoding the information of logical qubits across a larger number of physical qubits. A practical error-correcting code must satisfy certain constraints. A QEC code should demand only qubit interactions that are feasible to implement in practice and at scale. For example, it is generally agreed that all-to-all connectivity, in which every qubit is linked to every other qubit, would be impractical for large systems.
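
As a toy illustration of spreading one logical qubit over several physical qubits, consider the three-qubit repetition code (a textbook example, not one of the codes discussed here). Its pairwise parity checks locate a bit flip without revealing, or disturbing, the encoded amplitudes:

```python
import numpy as np

# Three-qubit repetition code: alpha|0> + beta|1> -> alpha|000> + beta|111>
alpha, beta = 0.6, 0.8                      # alpha^2 + beta^2 = 1
encoded = np.zeros(8)
encoded[0b000], encoded[0b111] = alpha, beta

# A bit flip on the middle physical qubit: |000>->|010>, |111>->|101>
corrupted = np.zeros(8)
corrupted[0b010], corrupted[0b101] = alpha, beta

def syndrome(b):
    """Pairwise parity checks on 3-bit basis state b."""
    b0, b1, b2 = (b >> 0) & 1, (b >> 1) & 1, (b >> 2) & 1
    return (b0 ^ b1, b1 ^ b2)

# Both amplitudes of the corrupted state yield the same syndrome: the
# checks pinpoint the flipped qubit but say nothing about alpha and beta.
print(syndrome(0b010), syndrome(0b101))   # (1, 1) (1, 1)
```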

We also need a code that can function on qubits with a realistic amount of noise; many researchers think it is very improbable that physical error rates will be pushed much below 0.0001, i.e., roughly one error in ten thousand operations. In addition, the code must deliver logical qubit error rates low enough for us to carry out complicated computations.

A good rule of thumb is that the target logical error rate should be lower than the inverse of the total number of logical operations. This means that if an algorithm needs one million logical operations, one should aim for a logical qubit error rate of one part in a million or better. The final requirement is that the code must have an acceptable overhead. Extra physical qubits raise both the cost and complexity of the computer, and if too many are needed, the QEC code ceases to be usable.
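
The arithmetic behind that rule of thumb is worth making explicit (a quick sketch using the one-million-operation example from the text):

```python
# Rule of thumb: target logical error rate below the inverse of the
# total number of logical operations.
ops = 1_000_000                 # logical operations in the algorithm
target = 1 / ops                # 1e-6: one part in a million

# Even at exactly that rate, if every operation fails independently,
# the whole computation survives only about 37% of the time -- which
# is why one wants the logical error rate "or better".
p_all_succeed = (1 - target) ** ops
print(target, round(p_all_succeed, 3))   # 1e-06 0.368
```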

Several QEC codes are now being developed to protect information against corruption of any kind; they reflect the different ways one can encode quantum information across physical qubits. Quantum low-density parity-check codes, or quantum LDPC codes for short, are among the most intriguing and potentially useful codes available today. They satisfy many of the practical constraints above, most notably that each qubit is coupled to only a small number of other qubits, and that faults can be identified with very simple circuits without errors propagating to an excessive number of other qubits.
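
The "low-density" idea is easiest to see in the classical setting. Below is a toy sparse parity-check matrix (purely illustrative; real quantum LDPC codes pair two such matrices under additional commutation constraints): each check touches only a few bits, and a single flipped bit fires only the checks that contain it.

```python
import numpy as np

# A sparse (low-density) parity-check matrix: each row is a check on
# a few bits; each column (bit) appears in only a few checks.
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [0, 0, 1, 1, 0, 1],
])

codeword = np.zeros(6, dtype=int)        # the all-zeros codeword
error = np.array([0, 1, 0, 0, 0, 0])     # a single flipped bit

syndrome = H @ ((codeword + error) % 2) % 2
print(syndrome)   # [1 1 0]: only the two checks containing bit 1 fire
```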

The surface code is an example of an LDPC code that encodes information into a two-dimensional grid of qubits. Surface codes have been intensively investigated, and they have been at the forefront of many proposed QEC schemes. Research teams have already demonstrated aspects of surface codes experimentally, including work done at IBM.
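
Much of the surface code's appeal comes from its locality: laid out on a grid, no qubit ever needs more than four connections. A minimal model (an illustrative layout, not IBM's actual device topology):

```python
# Qubits on a 2D grid, each coupled only to its nearest neighbours.
rows, cols = 5, 5

def neighbours(r, c):
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    return [(r + dr, c + dc) for dr, dc in steps
            if 0 <= r + dr < rows and 0 <= c + dc < cols]

degrees = [len(neighbours(r, c)) for r in range(rows) for c in range(cols)]
print(max(degrees))   # 4 -- the bounded, local connectivity that makes
                      # surface codes friendly to planar hardware
```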

But for now, surface codes have limitations that make it doubtful they could ever be practical for building a useful quantum computer: they need too many physical qubits, perhaps 20 million, to solve problems of real interest. As a result, the search for QEC codes continues at IBM in an effort to reduce this overhead. Beyond the surface code, we have found that LDPC codes hold a great deal of promise.

New codes for more effective error correction

Scientists at IBM have recently discovered LDPC codes that achieve a more than tenfold reduction in the number of physical qubits compared to the surface code. These codes contain a few hundred physical qubits each. This is possible because the code packs more information into the same number of physical qubits while still performing well at error rates below 0.001. In addition, each qubit is connected to six other qubits, a reasonable requirement for hardware implementing the error-correction protocol.
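
A quick back-of-envelope reading of those figures (the exact counts here are illustrative assumptions, not numbers from the post):

```python
# "A few hundred" physical qubits per code block, at a "more than
# tenfold" saving over the surface code.
ldpc_physical_per_block = 300      # assumed: "a few hundred"
saving = 10                        # "more than tenfold decrease"

surface_equivalent = ldpc_physical_per_block * saving
print(surface_equivalent)          # ~3,000 surface-code qubits would be
                                   # needed for the same logical content
```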

The surface code uses a 2D lattice of qubits, coupled to one another along the edges of checkerboard squares. To create more efficient codes such as this one, researchers need to break out of the plane, adding edges that arc over the checkerboard and link qubits that are farther apart. At IBM, we are now developing non-local "c" couplers capable of producing these additional edges.
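
A sketch of what "breaking the plane" buys (an illustrative grid; the real coupler graph is not given here): on a planar layout every coupler spans one lattice step, while an edge folded over the board spans the whole chip and therefore calls for long-range hardware.

```python
# Planar nearest-neighbour edges versus edges folded over the board.
rows = cols = 5

planar = [((r, c), (r, c + 1)) for r in range(rows) for c in range(cols - 1)]
planar += [((r, c), (r + 1, c)) for r in range(rows - 1) for c in range(cols)]

folded = [((r, 0), (r, cols - 1)) for r in range(rows)]   # long-range edges

span = lambda e: abs(e[0][0] - e[1][0]) + abs(e[0][1] - e[1][1])
print(max(span(e) for e in planar))   # 1 -> short on-chip couplers suffice
print(max(span(e) for e in folded))   # 4 -> calls for a non-local coupler
```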

The current generation of superconducting quantum processors does not yet meet these physical requirements, however. The six-way connectivity and the need for long-range links pose a greater challenge than anything IBM has built so far. This is the most significant technological hurdle, but it is not insurmountable. We are working on higher degrees of connectivity, as well as on non-local "c" couplers that act as long cables linking distant qubits.
