Velocity by Booz Allen

Quantum Computing Meets Machine Learning

From computer vision to cybersecurity and everything in between, this past decade has shown us the versatility and power of machine learning technologies. Given this, we can expect that expanding the capabilities of machine learning will continue to drive progress across the board. QML, the integration of quantum computing into machine learning systems, offers one promising way to do just that. QML is anticipated to provide improvements in speed and performance to diverse AI application areas spanning medicine, finance, data analysis, and more. Evidence has been mounting that QML is indeed capable of delivering on these promises. For example, a 2021 study in the Journal of Chemical Information and Modeling demonstrated how using QML could accelerate the process of drug discovery to combat diseases such as COVID-19 and tuberculosis, compared to using analogous machine learning methods (see "Quantum Machine Learning Algorithms for Drug Discovery Applications"). This was demonstrated by using the quantum versions of common classical machine learning methods, such as deep neural networks and support vector machines, to classify which molecules were potential inhibitors for a target disease. Despite the quantum computer’s imperfect, error-prone nature, the QML models achieved accuracy similar to the classical models while demonstrating a speed advantage that grew with the size of the dataset. The timing data suggests that as molecule databases continue to expand, QML’s advantage will only become more pronounced, and QML will remain viable even as classical machine learning methods begin to struggle. This result is not limited merely to drug discovery; it stands as a strong affirmation that the theoretical speedups for QML will translate into wide-reaching, real-world impacts.

Both quantum and classical computers compute by executing a series of instructions called an algorithm, which manipulates the states of their underlying (qu)bits. Unlike classical algorithms, which can only flip bits between 0 and 1, quantum algorithms use a richer variety of operations that take advantage of the qubit’s complexity.

Figure 1: Illustrative steps for inference and training of a hybrid quantum-classical model: (1) convert classical data into qubits; (2) apply an adjustable quantum circuit composed of operations Û(φ1), Û(φ2), Û(φ3); (3) measure the qubits to get back classical data; (4) use standard ML training algorithms to improve the quantum circuit.
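The four steps in Figure 1 can be sketched end to end with plain NumPy. This is an illustrative toy rather than code from any real QML framework: a single qubit is simulated as a length-2 state vector, one rotation gate plays the role of the adjustable operations Û(φ), and an ordinary classical loop trains the single parameter.

```python
import numpy as np

def ry(theta):
    """Rotation about the Y axis: one simple 'adjustable' qubit operation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def circuit(x, phi):
    """Steps 1-3 of Figure 1: encode input x into a qubit, apply an
    adjustable gate with parameter phi, then measure P(qubit = 1)."""
    state = ry(x) @ np.array([1.0, 0.0])   # step 1: classical data -> qubit
    state = ry(phi) @ state                # step 2: adjustable circuit
    return state[1] ** 2                   # step 3: measurement probability

# Toy task: inputs near 0 belong to class 0, inputs near pi to class 1.
xs = np.array([0.1, 0.2, 2.9, 3.0])
ys = np.array([0.0, 0.0, 1.0, 1.0])

def loss(p):
    preds = np.array([circuit(x, p) for x in xs])
    return np.mean((preds - ys) ** 2)

# Step 4: a standard classical training loop (gradient descent with a
# finite-difference gradient) improves the circuit parameter.
phi, lr, eps = 1.0, 0.5, 1e-4
for _ in range(200):
    grad = (loss(phi + eps) - loss(phi - eps)) / (2 * eps)
    phi -= lr * grad

preds = [circuit(x, phi) for x in xs]
```

Real QML frameworks replace the finite-difference gradient with hardware-friendly techniques such as the parameter-shift rule and run the circuit on actual quantum devices, but the division of labor is the same: the quantum circuit handles inference while a classical optimizer handles training.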


Are quantum computers better at everything?

No. Quantum computers excel at a limited number of tasks. Examples include, but are not limited to, simulating quantum physics for materials science or drug development, optimization for logistics or finance, and mathematics for machine learning or cryptography. They offer no benefit for many things we do with computers. That said, the advantage of using a quantum computer for the right kinds of problems can be enormous.

Will quantum computers replace classical computers?

No. Quantum and classical computers will work together. Because quantum computers offer no advantage for most computing tasks, we will still want classical computers to handle most of our computations. In the same way many current high-performance computing setups call a GPU (graphics processing unit) to accelerate certain tasks, future setups will likely be classical computers that can call a QPU (quantum processing unit) as needed.

DID YOU KNOW: A quantum simulator is classical software that mimics a quantum computer. Researchers often use simulators because 1) they mimic a perfect quantum computer without errors, and 2) it can be difficult and expensive to get time on a real quantum computer. Of course, because quantum computers are more powerful than classical ones, a quantum simulator is very inefficient and can only simulate a small number of qubits.
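The simulator's inefficiency follows directly from how the quantum state must be stored classically: n qubits require 2^n complex amplitudes, doubling with every added qubit. The short sketch below (a bare-bones illustrative simulator in NumPy, not any real simulator package) makes that cost concrete.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_gate(state, gate, qubit, n):
    """Apply a single-qubit gate to one qubit of an n-qubit state vector."""
    # View the 2**n amplitudes as n axes of size 2 and act on one axis.
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

n = 20                                        # a modest register of 20 qubits
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                # start in |00...0>
for q in range(n):
    state = apply_gate(state, H, q, n)        # build a uniform superposition

print(state.size)  # 1048576 amplitudes for just 20 qubits
```

At 50 qubits the state vector would need roughly 2^50 amplitudes (petabytes of memory), which is why classical simulation runs out of road so quickly.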

In addition to speed advantages, QML is predicted to perform well using significantly smaller models than classical machine learning, making it possible to tackle previously infeasible problems. The number of features a classical model can represent is directly related to its size. Qubits, however, contain more information than bits, so significantly smaller quantum models can represent the same number of features. This may allow reasonably sized quantum models to perform well in situations where an infeasibly large classical model would be needed. Evidence of this was provided by a 2022 computer vision study in Quantum Science and Technology that focused on learning with unlabeled data (see "Quantum Self-Supervised Learning"). It is quickly becoming infeasible to label many datasets of interest due to their sheer size, such as photos on social media or images taken by self-driving cars, so the ability to learn efficiently from unlabeled data is increasingly critical. Techniques for learning on complex, unlabeled datasets often require impractically large models since they must discern and represent complicated patterns in the data. In this study, the researchers trained a model to classify simple images of planes, cars, birds, cats, and deer. They then trained the same model again but replaced part of the model with a quantum equivalent. Using a quantum simulator, the authors showed that the quantum model outperformed a classical model of the same size and that a smaller quantum model could achieve the same performance as the classical model. The quantum model was then run on a real, imperfect quantum computer and matched the classical performance, despite the quantum computer's high error rate. The study shows that this QML technique is capable of overcoming the classical bottlenecks and is robust against the errors in our current quantum computers, suggesting that QML for computer vision may become a practical application soon.
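The claim that qubits pack in more features than bits can be made concrete with amplitude encoding, one common QML data-loading scheme (used here purely for illustration; the cited study's exact encoding may differ): a normalized vector of 2^n features fits into the amplitudes of just n qubits.

```python
import numpy as np

def amplitude_encode(features):
    """Pack a classical feature vector into a valid n-qubit state vector.

    Returns the state (2**n amplitudes, unit norm) and the qubit count n.
    """
    v = np.asarray(features, dtype=float)
    n = int(np.ceil(np.log2(v.size)))      # qubits needed for v.size features
    padded = np.zeros(2 ** n)
    padded[: v.size] = v                   # zero-pad up to the next power of 2
    return padded / np.linalg.norm(padded), n  # quantum states have norm 1

features = np.arange(1, 9, dtype=float)    # 8 features
state, n_qubits = amplitude_encode(features)
print(n_qubits)  # 3 qubits suffice for 8 features
```

The count grows only logarithmically: 8 features need 3 qubits, and a million-dimensional feature vector would need just 20, which is the exponential compression behind the smaller-model argument above.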

So, quantum computers actually exist?

Yes. Many companies, governments, and academic institutions currently have quantum computers that vary greatly in size, power, architecture, and more.

If quantum computers already exist and are supposed to be so powerful, then why aren’t we doing more with them?

Quantum computers are still not sufficiently powerful to be commercially impactful. Although the largest general-purpose quantum computers theoretically have enough qubits (hundreds) to completely outclass our best supercomputers for certain problems, in practice these quantum computers are unlikely to outclass a smartphone. This gap between theory and practice exists because current quantum computers are prone to errors that severely limit their power. It is known, in theory, that these errors can be overcome and that “perfect” quantum computers can be built. The timeline for achieving this is still unclear, but most experts estimate it in terms of decades, not years (see The Quantum Threat Timeline Report 2022). Still, we expect that imperfect quantum computers will be useful for solving real-world problems soon, with many estimates for achieving a quantum advantage falling between this year and 2028. Exactly when, and for what, quantum computers will first become impactful remains an open question. One promising candidate application for near-term quantum computers is machine learning.

If “perfect” quantum computers won’t be available soon, why should we care about them now?



