Jun 1 2018
A theoretical approach to quantum computing that is 10 billion times more tolerant of errors than previous models has been proposed by researchers from Hokkaido University and Kyoto University.
The technology brings us closer to building quantum computers that harness the distinctive characteristics of subatomic particles to transfer, process, and store vast amounts of complex information.
Quantum computing could overcome challenges that involve processing huge amounts of information, for instance modeling complex chemical processes, far more quickly and effectively than modern computers.
Existing computers store data by coding it into “bits.” A bit can be in one of two states: 0 and 1. Researchers have been looking for ways to use subatomic particles as “quantum bits,” which can exist not only in the states 0 and 1 but in superpositions of the two, to store and process far larger amounts of information. Quantum bits are the building blocks of quantum computers.
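As a rough illustration of what this means operationally (a toy sketch, not a description of any particular hardware): a qubit can be written as a pair of amplitudes whose squared magnitudes give the probabilities of reading out 0 or 1. The short Python sketch below, with all names purely illustrative, simulates measuring such a superposition.

    import random

    def measure(alpha: complex, beta: complex) -> int:
        """Simulate measuring a qubit in the state alpha|0> + beta|1>.

        Returns 0 with probability |alpha|^2 and 1 with probability |beta|^2.
        """
        return 0 if random.random() < abs(alpha) ** 2 else 1

    # An equal superposition: |alpha|^2 = |beta|^2 = 0.5, so repeated
    # measurements yield 0 and 1 about equally often.
    alpha = beta = 0.5 ** 0.5
    counts = {0: 0, 1: 0}
    for _ in range(10_000):
        counts[measure(alpha, beta)] += 1
    print(counts)  # roughly {0: 5000, 1: 5000}

Until it is measured, the state holds both amplitudes at once, which is what lets collections of qubits represent far more configurations than the same number of classical bits.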
One such approach harnesses the intrinsic properties of photons of light, for instance by encoding quantum bits into a light beam by digitizing patterns of the electromagnetic field. During quantum computation, however, the encoded information can be lost from the light waves, causing errors to pile up.
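As a toy picture of why this loss matters (a deliberately simplified model, not the encoding used in the paper): suppose a bit is carried by the amplitude of a light field and quantum noise blurs that amplitude before readout. The Python sketch below shows decoding errors growing with the noise level; all parameters are illustrative.

    import random

    def encode(bit: int) -> float:
        # Toy continuous-variable encoding: bit 0 -> amplitude -1, bit 1 -> +1.
        return 1.0 if bit else -1.0

    def transmit(amplitude: float, noise_std: float) -> float:
        # Quantum-level fluctuations blur the amplitude (modeled as Gaussian noise).
        return amplitude + random.gauss(0.0, noise_std)

    def error_rate(noise_std: float, trials: int = 100_000) -> float:
        errors = 0
        for _ in range(trials):
            bit = random.randint(0, 1)
            received = transmit(encode(bit), noise_std)
            if (1 if received > 0 else 0) != bit:  # decode by sign
                errors += 1
        return errors / trials

    # More noise means more decoding errors; reducing ("squeezing") the
    # noise lowers the rate.
    for std in (0.3, 0.5, 1.0):
        print(f"noise std {std}: error rate {error_rate(std):.4f}")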
Researchers have been trying to “squeeze” light to minimize this information loss. Squeezing is a process that suppresses tiny quantum-level fluctuations, termed noise, in an electromagnetic field.
This noise introduces a degree of uncertainty into the phase and amplitude of the electromagnetic field. Squeezing is therefore an effective tool for the optical implementation of quantum computers, although, as currently used, it is not sufficient on its own.
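In standard quantum-optics notation (a textbook relation, not a formula quoted from the paper), the amplitude and phase quadratures X and P of the field obey an uncertainty relation, and squeezing redistributes the noise between them rather than destroying it:

$$\Delta X\,\Delta P \ge \frac{\hbar}{2}, \qquad \Delta X \to e^{-r}\,\Delta X, \quad \Delta P \to e^{+r}\,\Delta P,$$

where $r > 0$ is the squeezing parameter. The product of the two uncertainties cannot shrink, but the noise in the quadrature that carries the information can be made smaller, which is why squeezing reduces, without completely eliminating, errors of the kind in the toy model above.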
Akihisa Tomita, an applied physicist at Hokkaido University, and his team have proposed a method that drastically reduces errors by building on this approach; they report their findings in a paper published in the journal Physical Review X.
They created a theoretical model that uses both the properties of quantum bits and the modes of the electromagnetic field in which they exist. As part of this approach, when quantum bits cluster together, the light is squeezed by removing error-prone quantum bits.
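One way to build intuition for removing error-prone quantum bits (a toy sketch of the general idea only; the protocol in the paper is more involved): in the continuous-variable picture above, a readout that lands near the decision boundary is far more likely to be wrong, so flagging and discarding such borderline readouts sharply lowers the error rate among the bits that are kept. The names and thresholds below are illustrative.

    import random

    def error_rate_with_discard(noise_std: float, cutoff: float,
                                trials: int = 100_000) -> tuple:
        """Decode by sign, but discard readouts within `cutoff` of the boundary.

        Returns (error rate among kept bits, fraction of bits kept).
        """
        errors = kept = 0
        for _ in range(trials):
            bit = random.randint(0, 1)
            value = (1.0 if bit else -1.0) + random.gauss(0.0, noise_std)
            if abs(value) < cutoff:   # borderline readout: likely error-prone
                continue              # drop it instead of trusting it
            kept += 1
            if (1 if value > 0 else 0) != bit:
                errors += 1
        return errors / kept, kept / trials

    err, frac = error_rate_with_discard(noise_std=0.5, cutoff=0.5)
    print(f"kept {frac:.2%} of bits; error rate among kept: {err:.4f}")

In this toy, discarding roughly the least reliable sixth of the readouts cuts the residual error rate by an order of magnitude compared with trusting every readout.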
This model is 10 billion times more tolerant of errors than current experimental methods: it can tolerate up to one error in every 10,000 calculations.
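The arithmetic behind that comparison (a back-of-the-envelope reading of the quoted figures, not a number taken from the paper):

$$p_{\text{tolerable}} = \frac{1}{10{,}000} = 10^{-4}, \qquad \frac{10^{-4}}{10^{10}} = 10^{-14},$$

so a scheme 10 billion times less tolerant could afford only about one error in every hundred trillion operations.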
“The approach is achievable using currently available technologies, and could further advance developments in quantum computing research.”

Akihisa Tomita, Hokkaido University