Which Problems Are Quantum Computers Good For?
Introduction
In the previous lesson, we tackled a single problem in-depth – solving the Max-Cut optimization problem using the QUBO formulation. Today we are going to take a different approach and discuss near-term applications more broadly. We'll start by giving you a sense for how we decide on the types of problems we think might benefit from a quantum solution. Then, we'll look at some recent examples of work done in our community. This will help you start to develop an intuition for different types of quantum computing problems and how we approach solving them.
Classical vs. quantum difficulty
Before jumping into the examples, let's first discuss how we study and categorize the difficulty of various problems. Some problems can easily be solved on a classical computer, and we have no need for a quantum computer to solve them. On the other hand, there are problems so hard that only a quantum computer could realistically solve them. One famous example is finding the prime factors of enormous integers. RSA encryption relies on the difficulty of this problem, and Shor's algorithm was designed to solve it on a quantum computer. Another example is searching for a particular entry in an unsorted data set – this can theoretically be sped up by the quantum algorithm known as Grover's algorithm. However, most experts agree that these types of algorithms will require fault-tolerant error correction, and the technology just isn't there yet.
So, we are looking for problems that we can tackle somewhere in a sweet spot between the very easy and very hard – ones that today's quantum computers can tackle, but classical computers have trouble with.
Complexity classes
The difficulty of these problems is categorized and analyzed in a branch of computer science called computational complexity theory. There are a ton of different complexity classes in classical computing, but some of the most fundamental are:
- P: Problems that can be solved in polynomial time as the scale of the problem increases. They're easy to solve.
- NP: This stands for nondeterministic polynomial time. These problems cannot necessarily be solved in polynomial time, but their answers can be verified in polynomial time.
- NP-complete: The most difficult problems in NP, with no known polynomial-time solution. This is where famous problems like the traveling salesman problem and the game Sudoku live.
- BPP: This stands for bounded-error probabilistic polynomial time. These problems can be solved within some error threshold by a probabilistic classical computer in polynomial time.
When the concept of quantum computing was invented, people spent considerable effort trying to figure out what class of problems these new types of computers would be able to solve efficiently. A new class of problems was defined:
- BQP, or bounded-error quantum polynomial problems. This is the quantum equivalent of BPP: It is the class of decision problems solvable by a quantum computer in polynomial time with a small chance of error.
All these classes live in a larger class we call PSPACE. Above is a diagram of the suspected relationships among some of the complexity classes; these relationships are very hard to prove definitively. You'll notice that BQP does not necessarily overlap with NP-complete. But you might still have seen quantum computing approaches that aim to solve problems in NP-complete.
One common misconception is that there is no point in exploring quantum solutions to a problem unless a mathematical proof of a quantum speed-up has been found. But a mathematical proof that a quantum algorithm is faster than its classical counterpart is hard to find. Shor's and Grover's algorithms are two of only a handful of examples where this has been done so far. In fact, rigorously proving that P and NP are different is one of the most notorious open questions in all of mathematics, even though all intuition tells us they must be.
But the way an algorithm scales with increasing problem size – which is what is reflected in the complexity class – is not always the most relevant feature of an algorithm. This scaling is often the worst-case scenario. It is quite possible that, in practice, the worst-case scenario is not what we most commonly encounter.
Just because hardness proofs are tricky doesn't mean we can't make progress. This is where heuristic solutions come in. If you're an experimentalist, you likely know and love these types of solutions. A heuristic is any approach to solving a problem that is pragmatic but not necessarily optimal, and solutions don't need to be optimal to be useful. For instance, think about financial applications. We haven't yet found an exponential speedup for most of the financial problems quantum computers could be applied to, but we don't need an optimal solution. In finance, even a solution that is just 0.1% more efficient could equate to billions of dollars of profit.
Today's quantum computers and their limits
So, how do we know what use cases and problems could be suitable for quantum computing right now? Is there good reason to believe quantum utility, or even advantage, can be found either now or in the near future?
Maybe it's easier to first name the things a suitable problem should certainly not require. It can't require a huge number of qubits. We don't have processors with thousands to millions of qubits available yet; that's one of the main reasons that Shor's algorithm and the like are so far from being realized. The circuits also can't be too deep. The limit on circuit depth depends on many factors, but in general, if your experiment requires a depth you haven't yet seen achieved in the literature, it's probably not going to work. And lastly, any algorithm that we know will require error correction cannot be run yet.
All of these limitations are addressed in the IBM Quantum roadmap, and we expect to achieve error correction in the early 2030s. But for now, we need to look for experiments that make use of most of the qubits currently available on a given QPU. We also emphasize the importance of error mitigation and suppression. And lastly, there should be an obvious extension to future applications that would be important for society and that we could see eventually leading to quantum advantage.
Application Areas and Use Cases
Now let's talk about some examples of use cases, which fall into three main categories that we have identified as most likely to see favorable outcomes in the near to middle term:
Nature simulations. Current classical methods of atomic and molecular simulations are limited by inefficient mathematical descriptions of the atomic structure. Storing and manipulating a quantum state takes exponentially many resources on a classical computer but can be done efficiently on a quantum computer. This could lead to developments in carbon dioxide sequestration, alternative batteries, or the invention of new drugs. Some algorithms that are especially relevant in this area are: the Variational Quantum Eigensolver (VQE), which is used to estimate certain properties of a material, such as equilibrium or minimum energy states; the Time Dynamics Simulation (TDS) algorithm, which is used to estimate response functions or spectral properties of materials; and a newcomer, Sample-based Quantum Diagonalization (SQD), which we think we will be hearing much more about in the near future.
Optimization. This area is ubiquitous in computing, so the use cases are numerous and varied. Some examples that we hear about a lot are portfolio optimization in finance, industrial design, and distribution and supply chains. The algorithm you will most commonly hear about in this context is one we have already covered in some depth: the quantum approximate optimization algorithm, or QAOA.
Quantum machine learning. This area has generated a lot of excitement in the past few years, but it's likely that QML won't be useful as soon as simulation will. There are nevertheless some impressive algorithms being developed to address some very important use cases, such as natural language processing, network traffic analysis, and even fraud detection in financial transactions. Relevant algorithms in this area are the quantum support vector machine (QSVM), quantum neural networks (QNN), and quantum generative adversarial networks.
Within these broad application areas, the community sees the benefit in groups working together who are focused on a more specific topic. IBM spearheaded an initiative called Working Groups to try to help collaborators meet each other and create productive synergy in four specific areas: healthcare and life science, materials and high-performance computing (HPC), high energy physics, and optimization. And recently, a fifth working group on sustainability was created.
We're now going to zoom in on a few problems that were recently tackled by some of these working groups. The main goal here is not to understand every detail of an experiment – this can be intimidating for even experts if the paper is slightly out of your area of expertise. The goal is simply to help develop an intuition for the types of problems quantum computers are good for and how to handle them. And if you're interested, we encourage you to read the full papers.
Use Case 1: Simulating hadron dynamics
First, we're going to delve into a paper by Martin Savage's group at the University of Washington called Quantum Simulations of Hadron Dynamics in the Schwinger Model Using 112 Qubits.
If you're not a high energy physicist, you still might be familiar with the term “hadron,” as in the Large Hadron Collider (LHC), the giant particle accelerator, 27 km in circumference, that made it possible to finally observe the Higgs boson. A hadron is a subatomic composite particle made up of smaller particles called quarks. Some examples of hadrons are neutrons and protons.
For a little context, the LHC was built to enable the study of fundamental physics by colliding particles at super high energies. With the LHC, scientists hope to learn more about the early universe and the fundamental laws of nature. In principle, the interactions of these particles could be simulated from start to finish with a sufficiently powerful quantum computer. We're not quite there, but we're making progress.
The Schwinger model is a popular, simple model used to simulate some of these dynamics. It is a model that describes the behavior of electrons and positrons interacting via photons in 1+1D, meaning time and one spatial dimension. The model has a lot of similarities to quantum chromodynamics (QCD), which describes how quarks and hadrons interact, but QCD is extremely hard to simulate. So, the Schwinger model is often used as a toy model to investigate some phenomena that are common to both.
To understand why they tackled this problem, let's ask ourselves a series of questions.
First, why did they have reason to believe simulating this on a quantum computer would work at all? In this case, the electrons and positrons in the Schwinger model have a screening effect, causing correlations between distant fermions to decay exponentially with separation. This means fewer long-range interactions are needed between qubits on opposite sides of the chip, and we know such long-range operations are very error-prone. So, this is great for the hardware we have available today.
Next, why is this topic of interest? High-energy physics in general is of great interest. People were willing to spend billions of dollars to build the LHC, and many thousands of scientists and technicians across the globe have dedicated their careers to this field. Although the Schwinger model is simplistic and isn't designed to cover three spatial dimensions, it is still a useful simplification of the full theory.
Last, how was this work done, or how would we approach the problem if we were looking to continue this work? In simulation-type experiments, VQE is one of the most common approaches, and the first step is almost always the same: prepare the ground state. In this case, it is a vacuum state. In this experiment, the authors use a new version of VQE called SC-ADAPT-VQE (which stands for Scalable Circuits - Adaptive Derivative-Assembled Pseudo-Trotter ansatz-VQE) to prepare both the ground state and the hadron wavepacket on top of this vacuum. The next step is to let the hadrons evolve in time. The last step is to identify the observables of interest and measure them.
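To make that prepare-evolve-measure pattern concrete, here is a minimal sketch in Qiskit. It is not the paper's setup: the Hamiltonian below is a toy spin chain standing in for the lattice Schwinger model, and the trivial initial state stands in for the SC-ADAPT-VQE vacuum and wavepacket preparation.

```python
# Minimal prepare-evolve-measure sketch (toy Hamiltonian, NOT the Schwinger model).
from qiskit.circuit import QuantumCircuit
from qiskit.circuit.library import PauliEvolutionGate
from qiskit.quantum_info import SparsePauliOp
from qiskit.synthesis import SuzukiTrotter
from qiskit.primitives import StatevectorEstimator

n = 6  # number of qubits (lattice sites)

# Toy Hamiltonian: nearest-neighbor ZZ couplings plus a transverse X field
zz_terms = [("ZZ", [i, i + 1], 1.0) for i in range(n - 1)]
x_terms = [("X", [i], 0.5) for i in range(n)]
hamiltonian = SparsePauliOp.from_sparse_list(zz_terms + x_terms, num_qubits=n)

# Step 1: prepare an initial state (here just |00...0>; the paper instead
# prepares the interacting vacuum plus a hadron wavepacket with SC-ADAPT-VQE)
circuit = QuantumCircuit(n)

# Step 2: Trotterized time evolution exp(-iHt)
evolution = PauliEvolutionGate(hamiltonian, time=1.0, synthesis=SuzukiTrotter(reps=4))
circuit.append(evolution, range(n))

# Step 3: measure local observables -- here <Z> on each site, analogous to
# tracking a local density as a wavepacket spreads
observables = [SparsePauliOp.from_sparse_list([("Z", [i], 1.0)], num_qubits=n) for i in range(n)]
estimator = StatevectorEstimator()
result = estimator.run([(circuit, observables)]).result()
print(result[0].data.evs)  # one expectation value per site
```

The same skeleton, with the state preparation and the Hamiltonian swapped out, underlies many quantum simulation experiments.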
If those steps sound a bit familiar, minus the hadron wavepacket part, that's because these steps are very similar to what we covered in the QAOA example in the previous lesson. We start in a familiar state (here the vacuum state), and then we let it evolve in time with a series of exponentiated Hamiltonians. Many variational algorithms follow this general approach. A big difference here, though, is that we create the wave packet of hadrons centered within our circuit, before we start letting it evolve.
So, how do we create the wavepacket? On the vacuum, a hadron can be excited by creating a fermion-antifermion pair on adjacent sites. By preparing a superposition of such hadrons at different locations, an arbitrary wavepacket can be prepared. The authors centered their wavepacket in the middle of the circuit so they could observe the evolution without hitting a boundary.
But remember: the name of the game when working with noisy QPUs is to keep the circuit depth manageable. To do this, the SC-ADAPT-VQE protocol uses symmetries and hierarchies in length scales to determine low-depth quantum circuits for state preparation. This will create an ansatz with a smaller number of parameters, and therefore, a shallower depth.
The experiment was run on an IBM Quantum Heron device and included a few different types of error mitigation and suppression: dynamical decoupling, zero noise extrapolation, Pauli twirling, and a recently developed technique called operator decoherence renormalization.
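To give a rough sense of what switching on this kind of error handling looks like in practice, here is a hedged sketch using Qiskit Runtime Estimator options. The option names reflect recent qiskit-ibm-runtime releases and are not necessarily the exact configuration used in the paper; operator decoherence renormalization is a custom post-processing step with no built-in option.

```python
# Turning on error suppression/mitigation via Qiskit Runtime Estimator options
# (a sketch; the paper's exact settings may differ).
from qiskit_ibm_runtime import QiskitRuntimeService, EstimatorV2 as Estimator

service = QiskitRuntimeService()
backend = service.least_busy(operational=True, simulator=False)

estimator = Estimator(mode=backend)

# Error suppression: dynamical decoupling on idle qubits
estimator.options.dynamical_decoupling.enable = True
estimator.options.dynamical_decoupling.sequence_type = "XY4"

# Pauli twirling of two-qubit gates
estimator.options.twirling.enable_gates = True

# Zero-noise extrapolation
estimator.options.resilience.zne_mitigation = True
estimator.options.resilience.zne.noise_factors = (1, 3, 5)

# You would then call estimator.run(...) with transpiled circuits and observables.
# (Operator decoherence renormalization, used in the paper, is applied in
# post-processing and is not a built-in primitive option.)
```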
Above is a figure from the paper showing the observable of interest, the chiral condensate, which is basically a superfluid phase of the hadrons. We can see the wavepacket centered among the sites designated for this experiment. The black lines are the error-free results from the (computationally expensive) classical simulation, while the points with error bars are the results from the 133-qubit IBM quantum computer, Torino.
We see two different time steps in the wavepacket evolution. At the earlier time, you can see that the chiral condensate is narrow and localized, and it also matches the classical simulation well. At the later time, it is much more spread out. The comparison to the simulator isn't quite as perfect now, but you can still clearly see very good agreement between theory and data, which is encouraging.
In conclusion, this is a very cool example of the type of simulation work you might not initially think of applying quantum computing to, but which shows real promise. It's not perfect, but you don't have to be a particle physics expert to see that the quantum computer accurately predicts the outward propagation of the wavepacket, which is exactly what we would expect to find. Hopefully work in this area will continue, and high energy physicists will keep finding ways to incorporate quantum computing into their workstreams. The aim is to solve difficult theoretical problems more precisely and use experiments to accept or reject theories, in hopes of discovering new physics, building improved detectors, and reaching a better understanding of nature at its most fundamental level.
Use Case 2: Optimization of an Ising spin-glass
Our next example focuses on optimization and will be a deep dive into a paper called Bias-Field Digitized Counterdiabatic Quantum Optimization, which was done by members of the Kipu Quantum team and the University of the Basque Country in Spain.
In the paper, the authors developed a new optimization method and applied it to find the ground state of an Ising spin-glass. As we discussed previously, many combinatorial optimization problems can be reformulated as finding low-energy states of Ising Hamiltonians. The Ising model describes the interaction of an array of microscopic spins. In some regimes, the model predicts that the spins behave as a glass, in which the magnetic moments freeze into a disordered configuration below a so-called "freezing temperature."
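For reference, this kind of cost Hamiltonian can be written generically in the standard Ising form (the specific couplings and fields in the paper are randomly generated and matched to the hardware connectivity):

$$
H = \sum_{i<j} J_{ij}\, Z_i Z_j + \sum_{i} h_i\, Z_i ,
$$

where the $J_{ij}$ are spin-spin couplings and the $h_i$ are local fields. Finding the lowest-energy spin configuration of $H$ is the optimization problem being solved.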
We'll begin like we did before with a series of definitions. The first is counterdiabatic, which refers to a type of evolution that suppresses the non-adiabatic effects experienced by a system, regardless of how fast the process is driven. Recall the adiabatic theorem from the last lesson: you usually need to evolve a system very slowly if you want it to remain in the ground state. This is a big problem because the slower we must evolve things, the more time we leave for errors to occur. Counterdiabatic driving (CD) aims to combat this by adding terms that counteract these unwanted excitations. The main idea is to speed up the whole experiment and reduce the quantum circuit depth by suppressing excitations that would cause spurious transitions.
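Schematically, counterdiabatic driving augments the adiabatic Hamiltonian with a correction term. This is the textbook form from the shortcuts-to-adiabaticity literature; in practice the correction is only approximated so that it compiles into shallow circuits:

$$
H_{\text{CD}}(t) = H_{\text{ad}}\big(\lambda(t)\big) + \dot{\lambda}(t)\, A_{\lambda},
$$

where $H_{\text{ad}}$ interpolates between an easy initial Hamiltonian and the problem Hamiltonian, $\lambda(t)$ is the schedule, and $A_{\lambda}$ is the adiabatic gauge potential whose job is to cancel transitions out of the instantaneous ground state.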
Now for the other piece of jargon in the title: the bias field. Other iterative algorithms, like VQE, encode classical parameters into the states and use classical optimizers to search the many-dimensional parameter space for the set of parameters that yields a minimum expectation value for a fixed Hamiltonian. In this case, the authors instead vary the Hamiltonian each iteration, moving adiabatically from a known case to the case of interest. To change the Hamiltonian, they simply apply the Pauli-Z expectation values measured in one iteration as bias fields in the Hamiltonian for the next iteration. In this way, they steer the dynamics toward the actual solution without the need for classical optimizers.
So, why is this experiment of interest? Ising spin-glasses are of fundamental interest in physics, but this new approach is even more general than that. It could be applied to many optimization problems, so the paper is of wide interest.
And why was there reason to think this would work? The algorithm the authors propose speeds up the evolution to reduce circuit depth, while also suppressing non-adiabatic transitions. Furthermore, it does not rely on any classical optimization subroutines, which can suffer from barren plateaus and from getting stuck in local minima. Lastly, the authors make sure to align the interactions in the problem Hamiltonian with the hardware connectivity of the real QPUs, which is always very important.
So, how does this method work? Again, it does not use any classical optimizers, unlike most other iterative quantum algorithms. Instead, by feeding the solution from each iteration into the input of the next, the bias-field digitized counterdiabatic quantum optimization (BF-DCQO) algorithm incrementally refines the prepared state, bringing it closer and closer to the true ground state. And combined with the counterdiabatic protocol, this can be done with short-depth quantum circuits that should run smoothly on noisy hardware.
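Roughly, the feedback loop looks something like the sketch below. This is a schematic outline only, not the authors' implementation: the circuit-execution routine, the sign conventions, the exact update rule, and all names are placeholders.

```python
import numpy as np

def bias_field_loop(J, h, run_dcqo_circuit, num_iterations=10):
    """Schematic (hypothetical) bias-field feedback loop.

    J: coupling matrix of the spin-glass instance
    h: local fields of the instance
    run_dcqo_circuit: placeholder callable that runs the digitized
        counterdiabatic circuit for the given Hamiltonian on the QPU and
        returns the measured <Z_i> for every qubit.
    """
    bias = np.zeros_like(np.asarray(h, dtype=float))
    for _ in range(num_iterations):
        # Build this iteration's Hamiltonian, shifted by the current bias field
        effective_h = np.asarray(h, dtype=float) + bias
        z_expectations = run_dcqo_circuit(J, effective_h)  # QPU call (placeholder)
        # The measured <Z_i> become the bias fields for the next iteration,
        # steering the dynamics toward lower-energy configurations
        bias = np.asarray(z_expectations, dtype=float)
    # Read out a candidate spin configuration by rounding <Z_i> to +/-1
    return np.sign(bias)
```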
When the experiment was performed, the authors chose to run the algorithm on the 127-qubit IBM Quantum computer Brisbane. Below is a figure showing the 8th iteration of the optimization algorithm for a nearest-neighbor, randomly generated spin-glass instance on 100 qubits. They compare idealized classical simulation results from DCQO and BF-DCQO, as well as the experimental result run on the quantum computer. They also show the result from a classical solver called Gurobi as a reference. With just 10 iterations, BF-DCQO provides a drastic enhancement compared to DCQO. Although the experimental result differs a bit from the ideal one due to noise, its performance is still better than that of ideal DCQO. This shows that excellent progress is still being made in quantum optimization, with good results now being reported on over 100 qubits for one of the first times.
Use Case 3: mRNA secondary structure prediction
Finally, we'll discuss a paper from Moderna Pharmaceuticals called mRNA Secondary Structure Prediction Using Utility-Scale Quantum Computers.
First, a brief refresher on mRNA. Messenger RNA is a type of RNA involved in protein synthesis: it carries instructions encoded in DNA. The secondary structure of mRNA is how the chain folds, as shown in the diagram below. The RNA secondary structure prediction problem is the problem of finding the most stable folding of the sequence of bases, or nucleotides, that make up RNA: adenine (A), cytosine (C), uracil (U), and guanine (G). The image below shows some common folding structures found in mRNA; each color represents a different type of secondary structure. What makes one structure more favorable than another isn't well understood; all we can do is calculate which structure yields the lowest free energy compared to the unfolded state. And that's where quantum computers come in.
So, why are mRNA secondary structures important? Accurate prediction of them is crucial not only for understanding DNA and our genes, but also for designing RNA-based therapeutics, like the COVID-19 vaccine.
This has long been known to be a formidable optimization problem for classical computers due to the vast number of possible configurations. For some classes of structures, it is known to be NP-complete. On a quantum computer, however, we can formulate secondary structure prediction as a binary optimization problem – something we know how to handle. Furthermore, there was already evidence in the literature of accurate RNA structure predictions on small-scale quantum devices and quantum simulators. But would this work on larger hardware?
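As a toy illustration of what such a binary formulation can look like (this is not the paper's actual model): introduce a binary variable $x_{ij}$ for every allowed complementary base pairing $(i,j)$, reward stable pairings, and penalize any two pairings that try to use the same base:

$$
\max_{x \in \{0,1\}^{|P|}} \;\; \sum_{(i,j)\in P} w_{ij}\, x_{ij} \;-\; \lambda \sum_{\text{pairings sharing a base}} x_{ij}\, x_{kl}
$$

Here $P$ is the set of candidate pairings, $w_{ij}$ scores how stabilizing a pairing is, and $\lambda$ is a penalty weight. An objective of this quadratic, binary form maps directly onto an Ising Hamiltonian whose low-energy states a quantum computer can sample.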
This experiment was performed using something called the conditional value at risk variational quantum eigensolver (CVaR-VQE), which is a modification of the traditional VQE algorithm that is expected to achieve better convergence.
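The key twist is the cost function: instead of averaging the energy over all sampled bitstrings, CVaR-VQE averages only over the best (lowest-energy) fraction $\alpha$ of the shots, which pushes the optimizer to focus on the promising tail of the distribution. Below is a minimal sketch of that aggregation step, with hypothetical inputs.

```python
import numpy as np

def cvar_cost(energies, counts, alpha=0.1):
    """Conditional value at risk of sampled energies: the average energy of
    the best (lowest-energy) alpha fraction of shots.

    energies: energy of each sampled bitstring
    counts:   how many shots returned each bitstring
    """
    order = np.argsort(energies)                       # sort bitstrings by energy
    sorted_counts = np.asarray(counts)[order]
    sorted_energies = np.asarray(energies, dtype=float)[order]
    total_shots = sorted_counts.sum()
    keep = max(1, int(np.ceil(alpha * total_shots)))   # shots in the best tail
    taken, acc = 0, 0.0
    for energy, count in zip(sorted_energies, sorted_counts):
        take = min(count, keep - taken)
        acc += take * energy
        taken += take
        if taken >= keep:
            break
    return acc / keep

# Example: three bitstrings with energies and shot counts (100 shots total)
print(cvar_cost(energies=[-3.0, -1.0, 2.0], counts=[5, 60, 35], alpha=0.1))
```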
The plot above shows the distribution of measurement probabilities of the sampled bitstrings, with their corresponding energies, for a 42-nucleotide, 80-qubit instance. Here, the bitstrings encode pairings of nucleotides. It illustrates that the lowest-energy bitstring found by the quantum computer matches that of the comparative classical solver, which is great. Also shown is the optimal folded structure of that nucleotide chain, based on the lowest-energy bitstring the quantum computer found.
Conclusion
Hopefully, these three use-cases have given you enough context to understand what cutting-edge work in the field looks like right now, and the confidence to attempt new quantum experiments you might not have before.
Remember: quantum computing isn't good for every problem. And really this is just a testament to how good we've gotten at classical computing. Just because you think you could apply quantum computing to a problem doesn't mean it is going to yield interesting results; you must consider the scaling.
Circuit depth is a double-edged sword. We need it to be considerably large to do interesting work that classical computers can't, but right now, we can't increase the depth too much because the hardware noise will cause the fidelity to diminish. It's all about finding that sweet spot and knowing that it's a moving target. So, take some time between now and the next lesson to think about a problem you have come across in your research, and how you might approach it with what we have learned so far. And hey, your solution may not pan out, and that's fine. That's why this is research.