Instances and Extensions
This chapter will cover several quantum variational algorithms, including:
- Variational Quantum Eigensolver (VQE)
- Subspace Search VQE (SSVQE)
- Variational Quantum Deflation (VQD)
- Quantum Sampling Regression (QSR)
By using these algorithms, we will learn about several design ideas that can be incorporated into custom variational algorithms, such as weights, penalties, oversampling, and undersampling. We encourage you to experiment with these concepts and share your findings with the community.
The Qiskit patterns framework applies to all these algorithms - but we will explicitly call out the steps only in the first example.
Variational Quantum Eigensolver (VQE)
VQE is one of the most widely used variational quantum algorithms, setting up a template for other algorithms to build upon.
Step 1: Map classical inputs to a quantum problem
VQE's Theoretical Layout
VQE's layout is simple:
- Prepare reference operators $U_R$
- We start from the state $|0\rangle$ and go to the reference state $|\rho\rangle = U_R|0\rangle$
- Apply the variational form $U_V(\vec\theta)$ to create an ansatz $U_A(\vec\theta) := U_V(\vec\theta)U_R$
- We go from the state $|\rho\rangle$ to $|\psi(\vec\theta)\rangle = U_V(\vec\theta)|\rho\rangle$
- Bootstrap at $\vec\theta_0$ if we have a similar problem (typically found via classical simulation or sampling)
- Each optimizer will be bootstrapped differently, resulting in an initial set of parameter vectors (e.g., from an initial point $\vec\theta_0$).
- Evaluate the cost function $C(\vec\theta) := \langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle$ for all prepared states on a quantum computer.
- Use a classical optimizer to select the next set of parameters $\vec\theta$.
- Repeat the process until convergence is reached.
This is a simple classical optimization loop where we evaluate the cost function. Some optimizers may require multiple evaluations to calculate a gradient, determine the next iteration, or assess convergence.
Here's an example implementation for a simple two-qubit observable:
VQE Implementation
Output:
No output produced
No output produced
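As a minimal sketch of such an implementation -- assuming a two-qubit `SparsePauliOp` observable with lowest eigenvalue $-6$ and an eight-parameter `EfficientSU2` ansatz, both chosen only to be consistent with the outputs shown below -- the cost function could look like this:

```python
import numpy as np
from qiskit.circuit.library import EfficientSU2
from qiskit.primitives import StatevectorEstimator
from qiskit.quantum_info import SparsePauliOp

# Assumed illustrative observable (lowest eigenvalue -6); the original notebook's
# operator may differ
observable = SparsePauliOp.from_list([("II", 2), ("XX", -2), ("YY", 3), ("ZZ", -3)])

# Assumed ansatz: EfficientSU2 on two qubits with reps=1 has eight parameters,
# matching the length of the parameter vectors in the results below
ansatz = EfficientSU2(2, reps=1)
estimator = StatevectorEstimator()


def cost_func(params, ansatz, observable, estimator):
    """Return <psi(params)|O|psi(params)> estimated with the Estimator primitive."""
    result = estimator.run([(ansatz, observable, params)]).result()[0]
    return float(result.data.evs)
```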
We can use this cost function to calculate the optimal parameters:
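A minimal optimization run with SciPy's COBYLA, reusing the `cost_func`, `ansatz`, `observable`, and `estimator` sketched above, might look like this:

```python
from scipy.optimize import minimize

# Initial guess: a vector of ones (see the discussion in Step 4)
x0 = np.ones(ansatz.num_parameters)

result = minimize(cost_func, x0, args=(ansatz, observable, estimator), method="COBYLA")
print(result)
```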
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -5.999999958375564
x: [ 1.743e+00 9.452e-01 1.571e+00 -3.501e-05 1.923e+00
1.219e+00 6.221e-01 6.220e-01]
nfev: 126
maxcv: 0.0
Step 2: Optimize problem for quantum execution
We will select the least-busy backend and import the necessary components from qiskit_ibm_runtime.
No output produced
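A sketch of this step, assuming an IBM Quantum account has already been saved locally:

```python
from qiskit_ibm_runtime import QiskitRuntimeService

service = QiskitRuntimeService()
# Pick the least-busy real (non-simulator) backend that is currently operational
backend = service.least_busy(operational=True, simulator=False)
```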
We will transpile the circuit using the preset pass manager with optimization level 3, and we will apply the corresponding layout to the observable.
No output produced
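A sketch of the transpilation step, reusing the `ansatz` and `observable` defined earlier:

```python
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager

# Optimization level 3 applies the heaviest circuit optimizations
pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
isa_ansatz = pm.run(ansatz)

# Map the observable onto the physical qubits chosen by the transpiler
isa_observable = observable.apply_layout(isa_ansatz.layout)
```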
Step 3: Execute using Qiskit Runtime Primitives
We are now ready to run our calculation on IBM Quantum™ hardware. Because the cost function minimization is highly iterative, we will start a Runtime session. This way, we will only have to wait in a queue once. Once the job begins running, each iteration with updates to parameters will run immediately.
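A sketch of the session-based execution, reusing the earlier `cost_func` together with the transpiled `isa_ansatz` and `isa_observable`:

```python
from qiskit_ibm_runtime import Session, EstimatorV2 as Estimator
from scipy.optimize import minimize

with Session(backend=backend) as session:
    estimator = Estimator(mode=session)
    result = minimize(
        cost_func,
        x0,
        args=(isa_ansatz, isa_observable, estimator),
        method="COBYLA",
    )

print(result)
```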
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -5.633934465186281
x: [ 1.135e+00 1.712e+00 1.340e+00 9.429e-02 2.175e+00
7.089e-01 4.387e-01 5.427e-01]
nfev: 90
maxcv: 0.0
Step 4: Post-process, return result in classical format
We can see that the minimization routine successfully terminated, meaning we reached the default tolerance of the COBYLA classical optimizer. If we require a more precise result, we can specify a smaller tolerance. This may indeed be the case, since the result was several percent off compared to the result obtained by the simulator above.
The value of x obtained is the current best guess for the parameters that minimize the cost function. If iterating to obtain a higher precision, those values should be used in place of the x0 initially used (a vector of ones).
Finally, we note that the function was evaluated 90 times in the process of optimization. That might be different from the number of optimization steps, since some optimizers require multiple function evaluations in a single step, such as when estimating a gradient.
Subspace Search VQE (SSVQE)
SSVQE is a variant of VQE that allows obtaining the first $k$ eigenvalues $\{\lambda_0, \cdots, \lambda_{k-1}\}$ of an observable $\hat{O}$ with eigenvalues $\{\lambda_0, \cdots, \lambda_{N-1}\}$, where $k \le N$. Without loss of generality, we assume that $\lambda_0 \le \lambda_1 \le \cdots \le \lambda_{N-1}$. SSVQE introduces a new idea by adding weights to help prioritize optimizing for the term with the largest weight.
To implement this algorithm, we need $k$ mutually orthogonal reference states $|\rho_j\rangle$, meaning $\langle\rho_i|\rho_j\rangle = 0$ for $i \ne j$. These states can be constructed using Pauli operators. The cost function of this algorithm is then:
$$C(\vec\theta) = \sum_{j=0}^{k-1} w_j \langle\rho_j|U_A^\dagger(\vec\theta)\,\hat{O}\,U_A(\vec\theta)|\rho_j\rangle$$
where $w_j$ is an arbitrary positive number such that if $i < j$ then $w_i > w_j$, and $U_A(\vec\theta)$ is the user-defined variational form.
The SSVQE algorithm relies on the fact that eigenstates corresponding to different eigenvalues are mutually orthogonal. Specifically, the inner product of $U_A(\vec\theta)|\rho_i\rangle$ and $U_A(\vec\theta)|\rho_j\rangle$ can be expressed as:
$$\langle\rho_i|U_A^\dagger(\vec\theta)\,U_A(\vec\theta)|\rho_j\rangle = \langle\rho_i|\rho_j\rangle = \delta_{ij}$$
The first equality holds because $U_A(\vec\theta)$ is a quantum operator and is therefore unitary, so $U_A^\dagger(\vec\theta)U_A(\vec\theta) = I$. The last equality holds because of the orthogonality of the reference states $|\rho_j\rangle$. The fact that orthogonality is preserved through unitary transformations is deeply related to the principle of conservation of information, as expressed in quantum information science. Under this view, non-unitary transformations represent processes where information is either lost or injected.
Weights help ensure that all the states $U_A(\vec\theta)|\rho_j\rangle$ are eigenstates. If the weights are sufficiently different, the term with the largest weight (i.e., $w_0$) will be given priority during optimization over the others. As a result, the resulting state $U_A(\vec\theta)|\rho_0\rangle$ will become the eigenstate corresponding to $\lambda_0$. Because the states $U_A(\vec\theta)|\rho_j\rangle$ are mutually orthogonal, the remaining states will be orthogonal to it and, therefore, contained in the subspace corresponding to the eigenvalues $\{\lambda_1, \cdots, \lambda_{N-1}\}$.
Applying the same argument to the rest of the terms, the next priority would then be the term with weight $w_1$, so $U_A(\vec\theta)|\rho_1\rangle$ would be the eigenstate corresponding to $\lambda_1$, and the other terms would be contained in the eigenspace of $\{\lambda_2, \cdots, \lambda_{N-1}\}$.
By reasoning inductively, we deduce that $U_A(\vec\theta)|\rho_j\rangle$ will be an approximate eigenstate of $\hat{O}$ corresponding to $\lambda_j$ for $0 \le j \le k-1$.
SSVQE's Theoretical Layout
SSVQE's layout can be summarized as follows:
- Prepare several reference states $|\rho_j\rangle$ by applying a unitary $U_R$ to $k$ different computational basis states
- This algorithm requires the usage of mutually orthogonal reference states $|\rho_j\rangle$, such that $\langle\rho_i|\rho_j\rangle = 0$ for $i \ne j$.
- Apply the variational form $U_V(\vec\theta)$ to each reference state, resulting in the ansatz states $|\psi_j(\vec\theta)\rangle = U_V(\vec\theta)|\rho_j\rangle$.
- Bootstrap at $\vec\theta_0$ if a similar problem is available (usually found via classical simulation or sampling).
- Evaluate the cost function $C(\vec\theta)$ for all prepared states on a quantum computer.
- This can be separated into calculating the expectation value $\langle\psi_j(\vec\theta)|\hat{O}|\psi_j(\vec\theta)\rangle$ of the observable for each state and multiplying that result by its weight $w_j$.
- Afterward, the cost function returns the sum of all weighted expectation values.
- Use a classical optimizer to determine the next set of parameters $\vec\theta$.
- Repeat the above steps until convergence is achieved.
You will be reconstructing SSVQE's cost function in the assessment, but we have the following snippet to motivate your solution:
No output produced
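The snippet might look like the following sketch, where the reference circuits, weights, and observable are placeholders that you would define for your own problem:

```python
import numpy as np


def cost_func_ssvqe(params, reference_circuits, variational_form, observable, weights, estimator):
    """Weighted sum of expectation values over mutually orthogonal reference states."""
    # |psi_j(theta)> = U_V(theta) |rho_j>; one pub per reference state
    pubs = [(ref.compose(variational_form), observable, params) for ref in reference_circuits]
    results = estimator.run(pubs).result()
    energies = [float(res.data.evs) for res in results]
    # The largest weight prioritizes the state that should converge to the ground state
    return float(np.dot(weights, energies))
```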
Variational Quantum Deflation (VQD)
VQD is an iterative method that extends VQE to obtain the first $k$ eigenvalues of an observable $\hat{O}$ with eigenvalues $\{\lambda_0, \cdots, \lambda_{N-1}\}$, where $k \le N$, instead of only the lowest one. For the rest of this section, we will assume, without loss of generality, that $\lambda_0 \le \lambda_1 \le \cdots \le \lambda_{N-1}$. VQD introduces the notion of a penalty cost to guide the optimization process.
VQD introduces a penalty term, denoted as $\beta_j$, to balance the contribution of each overlap term to the cost. This penalty term serves to penalize the optimization process if orthogonality is not achieved. We impose this constraint because the eigenstates of an observable, or a Hermitian operator, corresponding to different eigenvalues are always mutually orthogonal, or can be made to be so in the case of degenerate or repeated eigenvalues. Thus, by enforcing orthogonality with the eigenstate corresponding to $\lambda_0$, we are effectively optimizing over the subspace that corresponds to the rest of the eigenvalues $\{\lambda_1, \cdots, \lambda_{N-1}\}$. Here, $\lambda_1$ is the lowest of the remaining eigenvalues and, therefore, the optimal solution of the new problem can be obtained using the variational theorem.
The general idea behind VQD is to use VQE as usual to obtain the lowest eigenvalue $\lambda_0$ along with the corresponding (approximate) eigenstate $|\psi(\vec\theta^0)\rangle$ for some optimal parameter vector $\vec\theta^0$. Then, to obtain the next eigenvalue $\lambda_1$, instead of minimizing the cost function $C_0(\vec\theta) := \langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle$, we optimize:
$$C_1(\vec\theta) = \langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle + \beta_0\,|\langle\psi(\vec\theta)|\psi(\vec\theta^0)\rangle|^2$$
The positive value $\beta_0$ should ideally be greater than the gap $\lambda_1 - \lambda_0$.
This introduces a new cost function that can be viewed as a constrained problem, where we minimize $\langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle$ subject to the constraint that the state $|\psi(\vec\theta)\rangle$ must be orthogonal to the previously obtained $|\psi(\vec\theta^0)\rangle$, with $\beta_0$ acting as a penalty term if the constraint is not satisfied.
Alternatively, this new problem can be interpreted as running VQE on the new observable:
$$\hat{O}_1 = \hat{O} + \beta_0\,|\psi(\vec\theta^0)\rangle\langle\psi(\vec\theta^0)|$$
Assuming that the solution to the new problem is $|\psi(\vec\theta^1)\rangle$, the expected value of $\hat{O}$ (not $\hat{O}_1$) should be $\lambda_1$.
To obtain the third eigenvalue $\lambda_2$, the cost function to optimize is:
$$C_2(\vec\theta) = \langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle + \beta_0\,|\langle\psi(\vec\theta)|\psi(\vec\theta^0)\rangle|^2 + \beta_1\,|\langle\psi(\vec\theta)|\psi(\vec\theta^1)\rangle|^2$$
where $\beta_1$ is a positive constant large enough to enforce orthogonality of the solution state to both $|\psi(\vec\theta^0)\rangle$ and $|\psi(\vec\theta^1)\rangle$. This penalizes states in the search space that do not meet this requirement, effectively restricting the search space. Thus, the optimal solution of the new problem should be the eigenstate corresponding to $\lambda_2$.
Like the previous case, this new problem can also be interpreted as VQE with the observable:
$$\hat{O}_2 = \hat{O} + \beta_0\,|\psi(\vec\theta^0)\rangle\langle\psi(\vec\theta^0)| + \beta_1\,|\psi(\vec\theta^1)\rangle\langle\psi(\vec\theta^1)|$$
If the solution to this new problem is $|\psi(\vec\theta^2)\rangle$, the expected value of $\hat{O}$ (not $\hat{O}_2$) should be $\lambda_2$.
Analogously, to obtain the $k$-th eigenvalue $\lambda_{k-1}$, you would minimize the cost function:
$$C_{k-1}(\vec\theta) = \langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle + \sum_{j=0}^{k-2}\beta_j\,|\langle\psi(\vec\theta)|\psi(\vec\theta^j)\rangle|^2$$
Remember that we ordered the eigenvalues such that $\lambda_0 \le \lambda_1 \le \cdots \le \lambda_{N-1}$. This problem is equivalent to minimizing $\langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle$ but with the constraint that the state must be orthogonal to $|\psi(\vec\theta^0)\rangle, \cdots, |\psi(\vec\theta^{k-2})\rangle$, thereby restricting the search space to the subspace corresponding to the eigenvalues $\{\lambda_{k-1}, \cdots, \lambda_{N-1}\}$.
This problem is equivalent to a VQE with the observable:
$$\hat{O}_{k-1} = \hat{O} + \sum_{j=0}^{k-2}\beta_j\,|\psi(\vec\theta^j)\rangle\langle\psi(\vec\theta^j)|$$
As you can see from the process, to obtain the $k$-th eigenvalue, you need the (approximate) eigenstates of the previous $k-1$ eigenvalues, so you would need to run VQE a total of $k$ times. Therefore, VQD's cost function is as follows:
$$C_l(\vec\theta) = \langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle + \sum_{j=0}^{l-1}\beta_j\,|\langle\psi(\vec\theta)|\psi(\vec\theta^j)\rangle|^2, \qquad l = 0, \cdots, k-1$$
VQD's Theoretical Layout
VQD's layout can be summarized as follows:
- Prepare a reference operator $U_R$
- Apply the variational form $U_V(\vec\theta)$ to the reference state, creating the ansatz $|\psi(\vec\theta)\rangle = U_V(\vec\theta)U_R|0\rangle$
- Bootstrap at $\vec\theta_0$ if we have a similar problem (typically found via classical simulation or sampling).
- Evaluate the cost function $C_l(\vec\theta)$, which involves computing the excited states and an array of $\beta_j$'s defining the overlap penalty for each overlap term.
- Calculate the expectation value $\langle\psi(\vec\theta)|\hat{O}|\psi(\vec\theta)\rangle$ of the observable for each prepared state
- Calculate the penalty $\beta_j\,|\langle\psi(\vec\theta)|\psi(\vec\theta^j)\rangle|^2$ for each previously found state.
- The cost function should then return the sum of these two terms.
- Use a classical optimizer to choose the next set of parameters $\vec\theta$.
- Repeat this process until convergence is reached.
VQD's Implementation
For this implementation, we'll create a function for an overlap penalty. This penalty will be used in the cost function at each iteration, and the process will be repeated for each excited state.
First, we'll set up a function that calculates the state fidelity -- a measure of the overlap between two states -- that we'll use as a penalty for VQD:
No output produced
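A sketch of such a helper, computing the fidelity with a compute-uncompute circuit and the Sampler primitive (the implementation in the original notebook may differ in detail):

```python
import numpy as np


def calculate_overlaps(ansatz, prev_circuits, parameters, sampler, shots=10_000):
    """Fidelity |<psi(prev)|psi(parameters)>|^2 with each previously found state."""
    overlaps = []
    for prev in prev_circuits:
        # Compute-uncompute: apply U(parameters), then the inverse of the previous state
        qc = ansatz.assign_parameters(parameters).compose(prev.inverse())
        qc.measure_all()
        counts = sampler.run([qc], shots=shots).result()[0].data.meas.get_counts()
        # The probability of measuring all zeros equals the squared overlap
        overlaps.append(counts.get("0" * qc.num_qubits, 0) / shots)
    return np.array(overlaps)
```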
It's time to write VQD's cost function. As before when we calculated only the ground state, we will determine the lowest energy state using the Estimator primitive. However, as described above, we will now add a penalty term to ensure orthogonality of higher-energy states. That is, for each new excited state, a penalty is added for any overlap between the current variational state and the lower-energy eigenstates already found.
No output produced
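A sketch of the VQD cost function, combining the Estimator energy with the beta-weighted overlap penalties computed by `calculate_overlaps`:

```python
import numpy as np


def cost_func_vqd(parameters, ansatz, prev_states, step, betas, estimator, sampler, observable):
    """Energy of |psi(parameters)> plus beta-weighted overlaps with lower states."""
    estimator_result = estimator.run([(ansatz, observable, parameters)]).result()[0]
    total_cost = float(estimator_result.data.evs)

    # Penalize any overlap with the states found in previous steps
    if step > 0:
        overlaps = calculate_overlaps(ansatz, prev_states, parameters, sampler)
        total_cost += float(np.dot(betas[:step], overlaps))

    return total_cost
```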
Note especially that the cost function above refers to the calculate_overlaps function, which actually creates a new quantum circuit. If we want to run on real hardware, that new circuit must also be transpiled, ideally in an optimal way, to run on the backend we select. Note that transpilation has not been built into the calculate_overlaps or cost_func_vqd functions. Feel free to try modifying the code yourself to build in this additional (and conditional) transpilation - but this will also be done for you in the next lesson.
In this lesson, we will run the VQD algorithm using the Statevector Sampler and Statevector Estimator:
No output produced
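For example:

```python
from qiskit.primitives import StatevectorEstimator, StatevectorSampler

# Exact, noiseless reference primitives for this lesson
estimator = StatevectorEstimator()
sampler = StatevectorSampler()
```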
We will introduce an observable to be estimated. In the next lesson we will add some physical context to this, like the excited state of a molecule. It may be helpful to think of this observable as the Hamiltonian of a system that can have excited states, even though this observable has not been chosen to match any particular molecule or atom.
No output produced
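As an illustration, the following operator is an assumption chosen so that its spectrum is broadly consistent with the results reported below (the original lesson's observable may differ):

```python
from qiskit.quantum_info import SparsePauliOp

# Illustrative two-qubit observable, not tied to any particular molecule or atom
observable = SparsePauliOp.from_list([("II", 2), ("XX", -2), ("YY", 3), ("ZZ", -3)])
```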
Here, we set the total number of states we wish to calculate (ground state and excited states, k), and the penalties (betas) for overlap between statevectors that should be orthogonal. The consequences of choosing betas to be too high or too low will be explored a bit in the next lesson. For now, we will simply use those provided below. We will start by using all zeros as our parameters. In your own calculations, you may want to use more clever starting parameters based on your knowledge of the system or on previous calculations.
No output produced
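A sketch of these settings, reusing the `ansatz` from the VQE example (the specific `betas` values here are illustrative assumptions):

```python
import numpy as np

k = 3  # total number of states: the ground state plus two excited states
betas = [33, 33, 33]  # illustrative overlap penalties; tuning is discussed in the next lesson
x0 = np.zeros(ansatz.num_parameters)  # start from all-zero parameters
```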
We can now run the calculation:
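A sketch of the VQD loop, running one COBYLA minimization per state and deflating against the states already found:

```python
from scipy.optimize import minimize

prev_states = []   # circuits bound to previously found optimal parameters
eigenvalues = []   # estimated eigenvalues, lowest first

for step in range(k):
    result = minimize(
        cost_func_vqd,
        x0,
        args=(ansatz, prev_states, step, betas, estimator, sampler, observable),
        method="COBYLA",
    )
    print(result)
    eigenvalues.append(result.fun)
    prev_states.append(ansatz.assign_parameters(result.x))
```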
Output:
message: Optimization terminated successfully.
success: True
status: 1
fun: -5.999999979545955
x: [-5.150e-01 -5.452e-02 -1.571e+00 -2.853e-05 2.671e-01
-2.672e-01 -8.509e-01 -8.510e-01]
nfev: 131
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: 4.024550284767612
x: [-3.745e-01 1.041e+00 8.637e-01 1.202e+00 -8.847e-02
1.181e-02 7.611e-01 -3.006e-01]
nfev: 110
maxcv: 0.0
message: Optimization terminated successfully.
success: True
status: 1
fun: 5.608925562838559
x: [-2.670e-01 1.280e+00 1.070e+00 -8.031e-01 -1.524e-01
-6.956e-02 7.018e-01 1.514e+00]
nfev: 90
maxcv: 0.0
The values we obtained from the cost function are approximately -6.00, 4.02, and 5.61. The important thing about these results is that the function values are increasing. If we had obtained a first excited state that is lower in energy than our initial, unconstrained calculation of the ground state, that would have indicated an error somewhere in our code.
The values of x are the parameters that yielded a statevector corresponding to each of these costs/energies.
Finally, we note that all three minimizations converged to within the default tolerance of the classical optimizer (here COBYLA). They required 131, 110, and 90 function evaluations, respectively.
Quantum Sampling Regression (QSR)
One of the main issues with VQE is the multiple calls to a quantum computer that are required to obtain the parameters at each step ($\vec\theta_1$, $\vec\theta_2$, and so on). This is especially problematic when access to quantum devices is queued. While a Runtime session reduces the queuing overhead, each iteration still requires new executions on the quantum computer; QSR instead trades the iterative loop for a fixed number of cost-function samples followed by classical regression.
The idea behind QSR is that the cost function can be expressed as a Fourier series in the following manner:
$$C(\theta) = a_0 + \sum_{k=1}^{S}\big[a_k\cos(k\theta) + b_k\sin(k\theta)\big]$$
Depending on the periodicity and bandwidth of the original function, the set of nonzero Fourier coefficients may be finite or infinite. For the purposes of this discussion, we will assume it is finite, with bandwidth $S$. The next step is to sample the cost function multiple times in order to obtain the Fourier coefficients $a_k$ and $b_k$. Specifically, since we have $2S+1$ unknowns, we will need to sample the cost function $2S+1$ times.
If we then sample the cost function for parameter values $\{\theta_1, \cdots, \theta_{2S+1}\}$, we can obtain the following system:
$$\begin{pmatrix} 1 & \cos\theta_1 & \sin\theta_1 & \cdots & \sin(S\theta_1) \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \cos\theta_{2S+1} & \sin\theta_{2S+1} & \cdots & \sin(S\theta_{2S+1}) \end{pmatrix} \begin{pmatrix} a_0 \\ a_1 \\ b_1 \\ \vdots \\ b_S \end{pmatrix} = \begin{pmatrix} C(\theta_1) \\ \vdots \\ C(\theta_{2S+1}) \end{pmatrix}$$
that we'll rewrite as
$$F\vec{a} = \vec{c}$$
In practice, this system is generally not consistent because the cost function values are not exact. Therefore, it is usually a good idea to normalize them by multiplying them by $F^T$ on the left, which results in:
$$F^T F\,\vec{a} = F^T\vec{c}$$
This new system is always consistent, and its solution is a least-squares solution to the original problem. If we have $n$ parameters instead of just one, and each parameter $\theta_j$ has its own bandwidth $S_j$ for $j = 1, \cdots, n$, then the total number of samples required is:
$$\prod_{j=1}^{n}(2S_j + 1) \le (2S+1)^n$$
where $S := \max_j S_j$. Furthermore, adjusting $S$ as a tunable parameter (instead of inferring it) opens up new possibilities, such as:
- Oversampling: to improve accuracy.
- Undersampling: to boost performance by reducing runtime overhead or eliminating local minima.
QSR's Theoretical Layout
QSR's layout can be summarized as follows:
- Prepare reference operators $U_R$
- We'll go from the state $|0\rangle$ to the reference state $|\rho\rangle = U_R|0\rangle$
- Apply the variational form $U_V(\vec\theta)$ to create an ansatz $|\psi(\vec\theta)\rangle = U_V(\vec\theta)|\rho\rangle$
- Determine the bandwidth $S_j$ associated with each parameter $\theta_j$ in the ansatz. An upper bound is sufficient.
- Bootstrap at $\vec\theta_0$ if we have a similar problem (typically found via classical simulation or sampling)
- Sample the cost function at least $\prod_{j=1}^{n}(2S_j + 1)$ times
- Decide whether to oversample or undersample to balance speed versus accuracy by adjusting $S$.
- Compute the Fourier coefficients $a_k$ and $b_k$ from the samples (i.e., solve the normalized linear system of equations).
- Solve for the global minimum of the resulting regression function on a classical machine.
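To make the regression step concrete, here is a toy single-parameter sketch using a hypothetical classical cost function with bandwidth $S = 2$ (in practice the samples would come from the quantum cost function above):

```python
import numpy as np

rng = np.random.default_rng(0)


def noisy_cost(theta):
    """Hypothetical single-parameter cost function with bandwidth S = 2."""
    return 2 * np.cos(theta) - 1.5 * np.sin(2 * theta) + rng.normal(scale=0.05)


S = 2  # assumed bandwidth (an upper bound is sufficient)
thetas = np.linspace(0, 2 * np.pi, 2 * S + 1, endpoint=False)  # 2S + 1 sample points
samples = np.array([noisy_cost(t) for t in thetas])

# Design matrix F with columns [1, cos(k*theta), sin(k*theta)] for k = 1..S
F = np.column_stack(
    [np.ones_like(thetas)]
    + [f(k * thetas) for k in range(1, S + 1) for f in (np.cos, np.sin)]
)

# Least-squares solution of F a = c (equivalent to solving F^T F a = F^T c)
coeffs, *_ = np.linalg.lstsq(F, samples, rcond=None)

# Minimize the reconstructed regression function on a classical machine
grid = np.linspace(0, 2 * np.pi, 2001)
basis = np.column_stack(
    [np.ones_like(grid)]
    + [f(k * grid) for k in range(1, S + 1) for f in (np.cos, np.sin)]
)
theta_star = grid[np.argmin(basis @ coeffs)]
print(theta_star)
```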
Summary
In this lesson, you learned about several variational algorithm instances and the design ideas behind them:
- General layout
- Introducing weights and penalties to adjust a cost function
- Exploring undersampling vs. oversampling to trade off speed and accuracy
These ideas can be adapted to form a custom variational algorithm that fits your problem. We encourage you to share your results with the community. The next lesson will explore how to use a variational algorithm to solve an application.