Cost Functions
During this lesson, we'll learn how to evaluate a cost function.
All physical systems, whether classical or quantum, can exist in different states. For example, a car on a road can have a certain mass, position, speed, or acceleration that characterize its state. Similarly, quantum systems can also have different configurations or states, but they differ from classical systems in how we deal with measurements and state evolution. This leads to properties, such as superposition and entanglement, that are exclusive to quantum mechanics. Just as we can describe a car's state using physical properties like speed or acceleration, we can describe the state of a quantum system using observables, which are mathematical objects.
In quantum mechanics, states are represented by normalized complex column vectors, or kets (∣ψ⟩), and observables are hermitian linear operators (H^=H^†) that act on the kets. An eigenvector (∣λ⟩) of an observable is known as an eigenstate. Measuring an observable for one of its eigenstates (∣λ⟩) will give us the corresponding eigenvalue (λ) as readout.
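To make this concrete, here is a minimal numerical sketch (the matrix below is a hypothetical example, not one used later in this lesson) showing that a hermitian observable has real eigenvalues attached to its eigenstates:

```python
import numpy as np

# A hypothetical single-qubit observable: hermitian, so H == H†
H = np.array([[1, -1j],
              [1j, -1]])
assert np.allclose(H, H.conj().T)

# Eigenvalues of a hermitian operator are real. Measuring H on the
# eigenstate eigenstates[:, k] yields the eigenvalue eigenvalues[k].
eigenvalues, eigenstates = np.linalg.eigh(H)
print(eigenvalues)  # [-1.414..., 1.414...], i.e. ±sqrt(2)
```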
If you're wondering how to measure a quantum system and what you can measure, Qiskit offers two primitives that can help:
Sampler: Given a quantum state ∣ψ⟩, this primitive obtains the probability of each possible computational basis state.
Estimator: Given a quantum observable H^ and a state ∣ψ⟩, this primitive computes the expected value of H^.
The Sampler primitive
The Sampler primitive calculates the probability of obtaining each possible state ∣k⟩ from the computational basis, given a quantum circuit that prepares the state ∣ψ⟩. It calculates
$$
p_k = |\langle k | \psi \rangle|^2 \quad \forall k \in \mathbb{Z}_{2^n} \equiv \{0, 1, \cdots, 2^n - 1\},
$$
where $n$ is the number of qubits, and $k$ is the integer representation of any possible output binary string in $\{0,1\}^n$ (i.e., the string read as a base-2 integer).
Qiskit Runtime's Sampler runs the circuit multiple times on a quantum device, performs measurements on each run, and reconstructs the probability distribution from the recovered bit strings. The more runs (or shots) it performs, the more accurate the results will be, but this requires more time and quantum resources.
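As an illustration, here is a minimal sketch using the course-era local reference Sampler from qiskit.primitives (the Bell-state circuit is an assumption chosen for this example, not part of the lesson):

```python
from qiskit import QuantumCircuit
from qiskit.primitives import Sampler

# Prepare a Bell state: only |00> and |11> have non-zero probability
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# The reference Sampler returns a quasi-probability distribution
# keyed by the integer value k of each bit string
quasi_dist = Sampler().run(qc).result().quasi_dists[0]
print(quasi_dist.binary_probabilities())  # {'00': 0.5, '11': 0.5}
```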
However, since the number of possible outputs grows exponentially with the number of qubits $n$ (i.e., as $2^n$), the number of shots would also need to grow exponentially in order to capture a dense probability distribution. Therefore, Sampler is only efficient for sparse probability distributions: the target state $|\psi\rangle$ must be expressible as a linear combination of computational basis states whose number of terms grows at most polynomially with the number of qubits:
$$
|\psi\rangle = \sum_k^{\mathrm{Poly}(n)} w_k |k\rangle.
$$
The Sampler can also be configured to retrieve probabilities from a subsection of the circuit, representing a subset of the total possible states.
The Estimator primitive
The Estimator primitive calculates the expectation value of an observable $\hat{H}$ for a quantum state $|\psi\rangle$, where the outcome probabilities can be expressed as $p_\lambda = |\langle \lambda | \psi \rangle|^2$ and $|\lambda\rangle$ are the eigenstates of $\hat{H}$. The expectation value is then defined as the average of all possible outcomes $\lambda$ (i.e., the eigenvalues of the observable) of a measurement of the state $|\psi\rangle$, weighted by the corresponding probabilities:
$$
\langle \hat{H} \rangle_\psi := \sum_\lambda p_\lambda \lambda = \langle \psi | \hat{H} | \psi \rangle.
$$
However, calculating the expectation value of an observable is not always straightforward, as we often don't know its eigenbasis. Qiskit Runtime's Estimator estimates the expectation value on a real quantum device by breaking the observable down into a combination of other observables whose eigenbases we do know.
In simpler terms, Estimator breaks down any observable that it doesn't know how to measure into simpler, measurable observables: the Pauli operators.
Any operator can be expressed as a combination of the $4^n$ Pauli operators

$$
\hat{P}_k := \sigma_{k_{n-1}} \otimes \cdots \otimes \sigma_{k_0} \quad \forall k \in \mathbb{Z}_{4^n} \equiv \{0, 1, \cdots, 4^n - 1\},
$$
such that
$$
\hat{H} = \sum_{k=0}^{4^n - 1} w_k \hat{P}_k,
$$
where $n$ is the number of qubits, $k \equiv k_{n-1} \cdots k_0$ for $k_l \in \mathbb{Z}_4 \equiv \{0,1,2,3\}$ (i.e., the digits of $k$ in base 4), and $(\sigma_0, \sigma_1, \sigma_2, \sigma_3) := (I, X, Y, Z)$.
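We can verify such a decomposition numerically. As a sketch, SparsePauliOp.from_operator recovers the coefficients $w_k$ of any hermitian matrix; the matrix below is the one used in the guided example later in this lesson:

```python
import numpy as np
from qiskit.quantum_info import Operator, SparsePauliOp

# Decompose a hermitian matrix into weighted Pauli terms
matrix = np.array([[-1, 2],
                   [2, -1]])
decomposition = SparsePauliOp.from_operator(Operator(matrix))
print(decomposition)  # Pauli terms X and Z with coefficients 2 and -1
```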
After performing this decomposition, Estimator derives a new circuit $V_k|\psi\rangle$ for each observable $\hat{P}_k$ (i.e., by appending $V_k$ to the original circuit) to effectively diagonalize the Pauli observable in the computational basis and measure it. We can easily measure Pauli observables because we know each $V_k$ ahead of time, which is generally not the case for other observables.
For each $\hat{P}_k$, the Estimator runs the corresponding circuit on a quantum device multiple times, measures the output state in the computational basis, and calculates the probability $p_{kj}$ of obtaining each possible output $j$. It then looks up the eigenvalue $\lambda_{kj}$ of $\hat{P}_k$ corresponding to each output $j$, multiplies by $w_k$, and adds all the results together to obtain the expected value of the observable $\hat{H}$ for the given state $|\psi\rangle$:
$$
\langle \hat{H} \rangle_\psi = \sum_{k=0}^{4^n - 1} w_k \sum_{j=0}^{2^n - 1} p_{kj} \lambda_{kj},
$$
Since calculating the expectation value of all $4^n$ Paulis is impractical (the number of terms grows exponentially), Estimator can only be efficient when a large number of the $w_k$ are zero (i.e., a sparse Pauli decomposition instead of a dense one). Formally, for this computation to be efficiently solvable, the number of non-zero terms has to grow at most polynomially with the number of qubits $n$: $\hat{H} = \sum_k^{\mathrm{Poly}(n)} w_k \hat{P}_k$.
Note the implicit assumption that each probability distribution must also be sparse, as explained for Sampler, which means:
$$
\langle \hat{H} \rangle_\psi = \sum_k^{\mathrm{Poly}(n)} w_k \sum_j^{\mathrm{Poly}(n)} p_{kj} \lambda_{kj}.
$$
Guided example to calculate expectation values
Let's assume the single-qubit state $|+\rangle := H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$, and the observable

$$
\hat{H} = \begin{pmatrix} -1 & 2 \\ 2 & -1 \end{pmatrix} = 2X - Z,
$$

with the theoretical expectation value $\langle \hat{H} \rangle_+ = \langle + | \hat{H} | + \rangle = 2$.
Since we do not know how to measure this observable directly, we need to re-express its expectation value as $\langle \hat{H} \rangle_+ = 2\langle X \rangle_+ - \langle Z \rangle_+$, which can be shown to evaluate to the same result by noting that $\langle + | X | + \rangle = 1$ and $\langle + | Z | + \rangle = 0$.
Let's see how to compute $\langle X \rangle_+$ and $\langle Z \rangle_+$ directly. Since $X$ and $Z$ do not commute (i.e., they don't share an eigenbasis), they cannot be measured simultaneously, so we need two auxiliary circuits:
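The lesson's interactive cells are not reproduced here; a minimal sketch of the state preparation and the two auxiliary measurement circuits could look as follows:

```python
from qiskit import QuantumCircuit

# Prepare |+> = H|0>
state = QuantumCircuit(1)
state.h(0)

# Auxiliary circuit for <X>: an extra H rotates the X eigenbasis onto
# the computational basis before measuring
meas_x = state.copy()
meas_x.h(0)
meas_x.measure_all()

# Auxiliary circuit for <Z>: the computational basis is already the
# Z eigenbasis, so measure directly
meas_z = state.copy()
meas_z.measure_all()
```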
We can now carry out the computation manually using Sampler and check the results on Estimator:
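A sketch of that computation, reusing the circuits above with the local reference primitives (assumed here in place of the lesson's Runtime setup):

```python
from qiskit.primitives import Estimator, Sampler
from qiskit.quantum_info import SparsePauliOp

# Sampler route: in both measurement bases, outcome 0 carries
# eigenvalue +1 and outcome 1 carries eigenvalue -1
dist_x = Sampler().run(meas_x).result().quasi_dists[0]
dist_z = Sampler().run(meas_z).result().quasi_dists[0]
exp_x = dist_x.get(0, 0) - dist_x.get(1, 0)
exp_z = dist_z.get(0, 0) - dist_z.get(1, 0)
print(f"<X> = {exp_x:.5f}, <Z> = {exp_z:.5f}, "
      f"<H> = {2 * exp_x - exp_z:.5f}")

# Estimator route: pass H = 2X - Z directly; the Pauli decomposition
# and the basis changes happen internally
observable = SparsePauliOp.from_list([("X", 2), ("Z", -1)])
value = Estimator().run(state, observable).result().values[0]
print(f"Estimator: <H> = {value:.5f}")
```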
Output:
Sampler results:
>> Expected value of X: 1.00000
>> Expected value of Z: 0.00000
>> Total expected value: 2.00000
Estimator results:
>> Expected value of X: 1.00000
>> Expected value of Z: 0.00000
>> Total expected value: 2.00000
Mathematical rigor (optional)
Expressing $|\psi\rangle$ in the basis of eigenstates of $\hat{H}$, $|\psi\rangle = \sum_\lambda a_\lambda |\lambda\rangle$, it follows that:

$$
\langle \psi | \hat{H} | \psi \rangle = \sum_\lambda |a_\lambda|^2 \lambda = \sum_\lambda p_\lambda \lambda.
$$
Since we do not know the eigenvalues or eigenstates of the target observable $\hat{H}$, we first need to consider its diagonalization. Given that $\hat{H}$ is hermitian, there exists a unitary transformation $V$ such that $\hat{H} = V^\dagger \Lambda V$, where $\Lambda$ is the diagonal matrix of eigenvalues, so $\langle j | \Lambda | k \rangle = 0$ if $j \neq k$, and $\langle j | \Lambda | j \rangle = \lambda_j$.
This implies that the expected value can be rewritten as:

$$
\langle \psi | \hat{H} | \psi \rangle = \langle \psi | V^\dagger \Lambda V | \psi \rangle = \sum_{j=0}^{2^n - 1} \lambda_j |\langle j | V | \psi \rangle|^2.
$$
Given that if a system is in the state ∣ϕ⟩=V∣ψ⟩ the probability of measuring ∣j⟩ is pj=∣⟨j∣ϕ⟩∣2, the above expected value can be expressed as:
$$
\langle \psi | \hat{H} | \psi \rangle = \sum_{j=0}^{2^n - 1} p_j \lambda_j.
$$
It is very important to note that the probabilities are taken from the state V∣ψ⟩ instead of ∣ψ⟩. This is why the matrix V is absolutely necessary.
You might be wondering how to obtain the matrix V and the eigenvalues Λ. If you already had the eigenvalues, then there would be no need to use a quantum computer since the goal of variational algorithms is to find these eigenvalues of H^.
Fortunately, there is a way around that: any $2^n \times 2^n$ matrix can be written as a linear combination of $4^n$ tensor products of $n$ Pauli matrices and identities, all of which are both hermitian and unitary, with known $V$ and $\Lambda$. This is what Runtime's Estimator does internally by decomposing any Operator object into a SparsePauliOp.
Each Pauli term is diagonalized as $\hat{P}_k = V_k^\dagger \Lambda_k V_k$, where $V_k := V_{k_{n-1}} \otimes \cdots \otimes V_{k_0}$ and $\Lambda_k := \Lambda_{k_{n-1}} \otimes \cdots \otimes \Lambda_{k_0}$.
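For instance, the Pauli $X$ is diagonalized by the Hadamard, so its $V$ and $\Lambda$ are known ahead of time. A quick numerical check (a sketch in plain numpy):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Lambda = np.diag([1, -1])                      # eigenvalues of X
V = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # the Hadamard matrix

# X = V† Λ V: measuring X amounts to an H gate followed by a
# standard (Z-basis) measurement
assert np.allclose(X, V.conj().T @ Lambda @ V)
```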
Cost functions
In general, cost functions are used to describe the goal of a problem and how well a trial state is performing with respect to that goal. This definition can be applied to various examples in chemistry, machine learning, finance, optimization, and so on.
Let's consider a simple example of finding the ground state of a system. Our objective is to minimize the expectation value of the observable representing energy (Hamiltonian H^):
$$
\min_{\theta} \langle \psi(\theta) | \hat{H} | \psi(\theta) \rangle
$$
We can use the Estimator to evaluate the expectation value and pass this value to an optimizer to minimize. If the optimization is successful, it will return a set of optimal parameter values θ∗, from which we will be able to construct the proposed solution state ∣ψ(θ∗)⟩ and compute the observed expectation value as C(θ∗).
Notice how we will only be able to minimize the cost function for the limited set of states that we are considering. This leads us to two separate possibilities:
Our ansatz cannot express the solution state: if this is the case, our optimizer will never find the solution, and we need to experiment with other ansatzes that might represent the search space more accurately.
Our optimizer is unable to find the valid solution: optimization can be global or local in scope. We'll explore what this means in a later section.
All in all, we will be performing a classical optimization loop, but delegating the evaluation of the cost function to a quantum computer. From this perspective, one could think of the optimization as a purely classical endeavor where we call some black-box oracle each time the optimizer needs to evaluate the cost function.
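The lesson's notebook cells are not reproduced here. As a sketch, assuming a simple two-qubit Hamiltonian and a TwoLocal ansatz chosen for illustration (only the cost-function pattern itself reflects the text above), the loop's core could look like this:

```python
import numpy as np
from qiskit.circuit.library import TwoLocal
from qiskit.primitives import Estimator
from qiskit.quantum_info import SparsePauliOp

# Hypothetical problem Hamiltonian and ansatz, for illustration only
hamiltonian = SparsePauliOp.from_list([("ZZ", 1.0), ("XI", 0.5), ("IX", 0.5)])
ansatz = TwoLocal(2, rotation_blocks="ry", entanglement_blocks="cz", reps=1)
estimator = Estimator()

def cost_func(params, ansatz, hamiltonian, estimator):
    """Estimate <psi(params)|H|psi(params)> with one Estimator call."""
    job = estimator.run(ansatz, hamiltonian, parameter_values=params)
    return job.result().values[0]

# One evaluation at an arbitrary starting point; an optimizer would
# call cost_func repeatedly with updated parameters
x0 = np.random.uniform(0, np.pi, ansatz.num_parameters)
print(cost_func(x0, ansatz, hamiltonian, estimator))
```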
Example mapping to non-physical systems
The maximum cut (Max-Cut) problem is a combinatorial optimization problem that involves dividing the vertices of a graph into two disjoint sets such that the number of edges between the two sets is maximized. More formally, given an undirected graph G=(V,E), where V is the set of vertices and E is the set of edges, the Max-Cut problem asks to partition the vertices into two disjoint subsets, S and T, such that the number of edges with one endpoint in S and the other in T is maximized.
We can apply Max-Cut to various problems, including clustering, network design, and phase transitions. We'll start by creating a problem graph:
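A sketch of that step using rustworkx (the edge list below, with all weights equal to 1, is inferred from the operator constructed later in this section):

```python
import rustworkx as rx
from rustworkx.visualization import mpl_draw

n = 4
graph = rx.PyGraph()
graph.add_nodes_from(range(n))
# (node_i, node_j, weight) triples for the five edges of the graph
edge_list = [(0, 1, 1.0), (0, 2, 1.0), (0, 3, 1.0), (1, 2, 1.0), (2, 3, 1.0)]
graph.add_edges_from(edge_list)
mpl_draw(graph, with_labels=True)
```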
This problem can be expressed as a binary optimization problem. For each node $0 \leq i < n$, where $n$ is the number of nodes of the graph (in this case $n = 4$), we consider the binary variable $x_i$. This variable will have the value 1 if node $i$ is in one of the groups, which we'll label 1, and 0 if it's in the other group, which we'll label 0. We will also denote by $w_{ij}$ (element $(i,j)$ of the adjacency matrix $w$) the weight of the edge that goes from node $i$ to node $j$. Because the graph is undirected, $w_{ij} = w_{ji}$. We can then formulate our problem as maximizing the following cost function, which counts (with weights) the edges cut by the partition:

$$
C(\vec{x}) = \sum_{i=0}^{n-1} \sum_{j=0}^{i} w_{ij} \left( x_i (1 - x_j) + x_j (1 - x_i) \right).
$$
To solve this problem with a quantum computer, we are going to express the cost function as the expectation value of an observable. However, the observables that Qiskit admits natively consist of Pauli operators, which have eigenvalues 1 and $-1$ instead of 0 and 1. That's why we are going to make the following change of variables:

$$
x_i = \frac{1 - z_i}{2}, \quad z_i \in \{-1, 1\},
$$

where $\vec{x} = (x_0, x_1, \cdots, x_{n-1})$. We can use the adjacency matrix $w$ to comfortably access the weights of all the edges. Substituting, we obtain the cost function:

$$
C(\vec{z}) = \sum_{i=0}^{n-1} \sum_{j=0}^{i} \frac{w_{ij}}{2} (1 - z_i z_j).
$$
Moreover, the natural tendency of a quantum computer is to find minima (usually the lowest energy) instead of maxima, so instead of maximizing $C(\vec{z})$ we are going to minimize:
$$
-C(\vec{z}) = \sum_{i=0}^{n-1} \sum_{j=0}^{i} \frac{w_{ij}}{2} z_i z_j - \sum_{i=0}^{n-1} \sum_{j=0}^{i} \frac{w_{ij}}{2}.
$$
Now that we have a cost function to minimize whose variables can take the values $-1$ and 1, we can make the following analogy with the Pauli $Z$ operator, whose eigenvalues are $\pm 1$:
$$
z_i \equiv Z_i = I_{n-1} \otimes \cdots \otimes Z_i \otimes \cdots \otimes I_0.
$$
In other words, the variable $z_i$ will be equivalent to a $Z$ gate acting on qubit $i$. Moreover, the variable part of $-C(\vec{z})$ becomes the observable

$$
\hat{H} = \sum_{i=0}^{n-1} \sum_{j=0}^{i} \frac{w_{ij}}{2} Z_i Z_j,
$$

to which we will have to add the independent term afterwards:
$$
\mathrm{offset} = -\sum_{i=0}^{n-1} \sum_{j=0}^{i} \frac{w_{ij}}{2}.
$$
The operator is a linear combination of terms with Z operators on nodes connected by an edge (recall that the 0th qubit is farthest right): IIZZ+IZIZ+IZZI+ZIIZ+ZZII. Once the operator is constructed, the ansatz for the QAOA algorithm can easily be built by using the QAOAAnsatz circuit from the Qiskit circuit library.
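A sketch of that construction, assuming the rustworkx graph from earlier (the choice of reps=2 for the ansatz is an assumption):

```python
import rustworkx as rx
from qiskit.circuit.library import QAOAAnsatz
from qiskit.quantum_info import SparsePauliOp

# Adjacency matrix of the graph built earlier
w = rx.adjacency_matrix(graph, weight_fn=float)

pauli_list, offset = [], 0.0
for i in range(n):
    for j in range(i):
        if w[i, j] != 0:
            # Z on qubits i and j; qubit 0 is the rightmost character
            label = "".join("Z" if q in (i, j) else "I"
                            for q in range(n - 1, -1, -1))
            pauli_list.append((label, w[i, j] / 2))
            offset -= w[i, j] / 2

hamiltonian = SparsePauliOp.from_list(pauli_list)
print(hamiltonian.paulis)  # ['IIZZ', 'IZIZ', 'IZZI', 'ZIIZ', 'ZZII']
print("Offset:", offset)

# Ansatz for QAOA built directly from the cost operator
ansatz = QAOAAnsatz(hamiltonian, reps=2)
```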
Output:
Offset: -2.5
Since the Runtime Estimator directly takes a Hamiltonian and a parametrized ansatz and returns the necessary energy, the cost function for a QAOA instance is quite simple:
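A sketch of such a cost function, following the same pattern as the ground-state example and again using the local Estimator; the random starting point is an assumption, so a fresh run will not reproduce the lesson's output shown below:

```python
import numpy as np
from qiskit.primitives import Estimator

estimator = Estimator()

def cost_func(params, ansatz, hamiltonian, estimator):
    """Return the estimated QAOA energy for one set of parameters."""
    job = estimator.run(ansatz, hamiltonian, parameter_values=params)
    return job.result().values[0]

# One evaluation at a random starting point; QAOAAnsatz with reps=2
# has four parameters (two betas and two gammas)
x0 = 2 * np.pi * np.random.rand(ansatz.num_parameters)
print(cost_func(x0, ansatz, hamiltonian, estimator))
```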
Output:
-0.4425
We will revisit this example in Applications to explore how to leverage an optimizer to iterate through the search space. Generally speaking, this includes:
Leveraging an optimizer to find optimal parameters
Binding optimal parameters to the ansatz to find the eigenvalues
Translating the eigenvalues to our problem definition
Measurement Strategy: Speed vs Accuracy
As mentioned, we are using a noisy quantum computer as a black-box oracle, where noise can make the retrieved values non-deterministic, leading to random fluctuations which, in turn, can harm or even completely prevent convergence of certain optimizers to a proposed solution. This is a general problem that we must address as we incrementally progress towards quantum advantage.
We can use the Qiskit Runtime primitives' error suppression and error mitigation options to address noise and maximize the utility of today's quantum computers.
Error Suppression
Error suppression refers to techniques used to optimize and transform a circuit during compilation in order to minimize errors. This is a basic error handling technique that usually adds some classical pre-processing overhead to the overall runtime. The overhead comes from transpiling circuits to run on quantum hardware by:
Expressing the circuit using the native gates available on a quantum system
Mapping the virtual qubits to physical qubits
Adding SWAPs based on connectivity requirements
Optimizing 1Q and 2Q gates
Adding dynamical decoupling to idle qubits to prevent the effects of decoherence.
Primitives allow for the use of error suppression techniques by setting the optimization_level option and selecting advanced transpilation options. In a later course, we will delve into different circuit construction methods to improve results, but for most cases, we would recommend setting optimization_level=3.
Error Mitigation
Error mitigation refers to techniques that allow users to reduce circuit errors by modeling the device noise at the time of execution. Typically, this results in quantum pre-processing overhead related to model training and classical post-processing overhead to mitigate errors in the raw results by using the generated model.
The Qiskit Runtime primitive's resilience_level option specifies the amount of resilience to build against errors. Higher levels generate more accurate results at the expense of longer processing times due to quantum sampling overhead. Resilience levels can be used to configure the trade-off between cost and accuracy when applying error mitigation to your primitive query.
When implementing any error mitigation technique, we expect the bias in our results to be reduced with respect to the previous, unmitigated bias. In some cases, the bias may even disappear. However, this comes at a cost: as we reduce the bias in our estimated quantities, the statistical variability (that is, the variance) will increase, which we can account for by further increasing the number of shots per circuit in our sampling process. This introduces overhead beyond that needed to reduce the bias, so it is not done by default. We can easily opt in to this behavior by adjusting the number of shots per circuit in options.execution.shots, as shown in the example below.
For this course, we will explore these error mitigation models at a high level to illustrate the error mitigation that Qiskit Runtime primitives can perform without requiring full implementation details.
Twirled readout error extinction (T-REx)
Twirled readout error extinction (T-REx) uses a technique known as Pauli twirling to reduce the noise introduced during the process of quantum measurement. This technique assumes no specific form of noise, which makes it very general and effective.
Overall workflow:
Acquire data for the zero state with randomized bit flips (Pauli X before measurement)
Acquire data for the desired (noisy) state with randomized bit flips (Pauli X before measurement)
Compute the special function for each data set, and divide.
We can set this with options.resilience_level = 1, demonstrated in the example below.
Zero noise extrapolation
Zero noise extrapolation (ZNE) works by first amplifying the noise in the circuit that is preparing the desired quantum state, obtaining measurements for several different levels of noise, and using those measurements to infer the noiseless result.
Overall workflow:
Amplify circuit noise for several noise factors
Run every noise amplified circuit
Extrapolate back to the zero noise limit
We can set this with options.resilience_level = 2. We can optimize this further by exploring a variety of noise_factors, noise_amplifiers, and extrapolators, but this is outside the scope of this course; we encourage you to experiment with these options.
Probabilistic error cancellation
Probabilistic error cancellation (PEC) samples from a collection of circuits that, on average, mimics a noise-inverting channel to cancel out the noise in the desired computation. This process is a bit like how noise-canceling headphones work, and it produces great results. However, it is not as general as other methods, and the sampling overhead is exponential.
Overall workflow:
Step 1: Pauli Twirling
Step 2: Repeat layer and learn the noise
Step 3: Derive a fidelity (an error rate for each noise channel)
Each method comes with its own associated overhead: a trade-off between the number of quantum computations needed (time) and the accuracy of our results:
| Methods | Assumptions | Qubit overhead | Sampling overhead | Bias |
| --- | --- | --- | --- | --- |
| R=1, T-REx | None | 1 | 2 | 0 |
| R=2, ZNE | Ability to scale noise | 1 | $N_{\text{noise-factors}}$ | $O(\lambda^{N_{\text{noise-factors}}})$ |
| R=3, PEC | Full knowledge of noise | 1 | $O(e^{\lambda N_{\text{layers}}})$ | 0 |
Using Qiskit Runtime's mitigation and suppression options
Here's how to calculate an expectation value while using error mitigation and suppression in Qiskit Runtime. This process occurs multiple times throughout an optimization loop, but we've kept the example simple to demonstrate how to configure error mitigation and suppression.
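A sketch using the course-era (V1) Qiskit Runtime interface; the backend selection and shot count are assumptions, and ansatz, hamiltonian, and x0 are reused from the QAOA example above:

```python
from qiskit_ibm_runtime import Estimator, Options, QiskitRuntimeService, Session

service = QiskitRuntimeService()
backend = service.least_busy(operational=True, simulator=False)

# Error suppression and mitigation are configured through Options
options = Options()
options.optimization_level = 3  # heaviest transpilation-based suppression
options.resilience_level = 1    # T-REx readout-error mitigation
options.execution.shots = 2048  # extra shots to offset mitigation variance

with Session(service=service, backend=backend) as session:
    estimator = Estimator(session=session, options=options)
    job = estimator.run(ansatz, hamiltonian, parameter_values=x0)
    print(job.result().values[0])
```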
Summary
With this lesson, you learned how to create a cost function:
How to define a measurement strategy to optimize speed vs accuracy
Here's our high-level variational workload:
Our cost function runs during every iteration of the optimization loop. The next lesson will explore how the classical optimizer uses our cost function evaluation to select new parameters.