
Variational Algorithms

This course covers variational algorithms: near-term, hybrid quantum-classical algorithms based on the variational theorem of quantum mechanics. These algorithms can leverage the utility provided by today's non-fault-tolerant quantum computers, making them ideal candidates for near-term applications.

Throughout this course, we will explore:

  • Each step in the variational algorithm design workflow
  • Trade-offs associated with each step
  • How to use Qiskit Runtime primitives to optimize for speed and accuracy

While this course is meant to be a starting place for researchers and developers to explore the utility of quantum computers, you are welcome to first explore the theoretical and foundational knowledge of quantum computing in general in Basics of Quantum Information and Computation (also available as a series of YouTube videos).

Pre-course Survey

Before we begin, please take a moment to complete our pre-course survey, which is important to help improve our content offerings and user experience.

Simplified hybrid workflow

[Figure: simplified variational algorithm workflow]

Variational algorithms comprise several modular components that can be combined and optimized based on algorithm, software, and hardware advancements: a cost function that describes a specific problem with a set of parameters, an ansatz that expresses the search space with these parameters, and an optimizer that iteratively explores the search space. During each iteration, the optimizer evaluates the cost function with the current parameters and selects the next iteration's parameters until it converges on an optimal solution. The hybrid nature of this family of algorithms comes from the fact that the cost functions are evaluated using quantum resources and optimized through classical ones.

  1. Initialize problem: Variational algorithms start by initializing the quantum computer in a default state $|0\rangle$, then transforming it to some desired (non-parametrized) state $|\rho\rangle$, which we will call the reference state.

    This transformation is represented by the application of a unitary reference operator $U_R$ on the default state, such that $U_R|0\rangle = |\rho\rangle$.

  2. Prepare ansatz: To begin iteratively optimizing from the default state $|0\rangle$ to the target state $|\psi(\vec\theta)\rangle$, we must define a variational form $U_V(\vec\theta)$ to represent a collection of parametrized states for our variational algorithm to explore.

    We refer to any particular combination of reference state and variational form as an ansatz, such that $U_A(\vec\theta) := U_V(\vec\theta)\, U_R$. Ansatze will ultimately take the form of parametrized quantum circuits capable of taking the default state $|0\rangle$ to the target state $|\psi(\vec\theta)\rangle$.

    All in all, we will have:

    $$|0\rangle \xrightarrow{U_R} U_R|0\rangle = |\rho\rangle \xrightarrow{U_V(\vec\theta)} U_A(\vec\theta)|0\rangle = U_V(\vec\theta)U_R|0\rangle = U_V(\vec\theta)|\rho\rangle = |\psi(\vec\theta)\rangle$$
  3. Evaluate cost function: We can encode our problem into a cost function $C(\vec\theta)$ expressed as a linear combination of Pauli operators and evaluated on a quantum system. While the cost function often describes properties of a physical system, such as energy or spin, we can encode non-physical problems as well. We can leverage Qiskit Runtime primitives to address noise with error suppression and mitigation while evaluating our cost function.

  4. Optimize parameters: Evaluations are passed to a classical computer, where a classical optimizer analyzes them and chooses the next set of values for the variational parameters. If we have a pre-existing optimal solution, we can set it as an initial point $\vec\theta_0$ to bootstrap our optimization. Using this initial state $|\psi(\vec\theta_0)\rangle$ could help our optimizer find a valid solution faster.

  5. Adjust ansatz parameters with results, and re-run: the entire process is repeated until the classical optimizer's finalization criteria are met and an optimal set of parameter values $\vec\theta^*$ is returned. The proposed solution state for our problem will then be $|\psi(\vec\theta^*)\rangle = U_A(\vec\theta^*)|0\rangle$.
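The five steps above can be sketched end to end as a classical optimization loop wrapped around a quantum cost evaluation. The following is a minimal illustration with a hypothetical one-qubit Hamiltonian and a single-parameter ansatz, simulated with plain NumPy statevectors rather than the Qiskit Runtime primitives themselves:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical one-qubit Hamiltonian: H = Z, whose ground state |1> has energy -1.
H = np.array([[1, 0], [0, -1]], dtype=complex)

def ansatz_state(theta):
    # Reference state |0>, variational form RY(theta):
    # |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)], dtype=complex)

def cost(theta):
    # C(theta) = <psi(theta)| H |psi(theta)>, the quantity a quantum
    # computer would estimate for us at each iteration.
    psi = ansatz_state(theta)
    return np.real(psi.conj() @ H @ psi)

# The classical optimizer drives repeated cost evaluations (step 4/5).
result = minimize(cost, x0=[0.1], method="COBYLA")
print(result.fun)  # approaches the ground-state energy -1
```

On real hardware, `cost` would be replaced by an Estimator primitive call, but the loop structure is the same.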

Variational theorem

A common goal of variational algorithms is to find the quantum state with the lowest or highest eigenvalue of a certain observable. A key insight we'll use is the variational theorem of quantum mechanics. Before going into its full statement, let us explore some of the mathematical intuition behind it.

Mathematical intuition for energy and ground states

In quantum mechanics, energy comes in the form of a quantum observable usually referred to as the Hamiltonian, which we'll denote by $\hat{\mathcal{H}}$. Let us consider its spectral decomposition:

$$\hat{\mathcal{H}} = \sum_{k=0}^{N-1} \lambda_k |\phi_k\rangle \langle \phi_k|$$

where $N$ is the dimensionality of the space of states, $\lambda_k$ is the $k$-th eigenvalue or, physically, the $k$-th energy level, and $|\phi_k\rangle$ is the corresponding eigenstate: $\hat{\mathcal{H}}|\phi_k\rangle = \lambda_k |\phi_k\rangle$. The expected energy of a system in the (normalized) state $|\psi\rangle$ will then be:

$$\begin{aligned} \langle \psi | \hat{\mathcal{H}} | \psi \rangle & = \langle \psi |\bigg(\sum_{k=0}^{N-1} \lambda_k |\phi_k\rangle \langle \phi_k|\bigg) | \psi \rangle \\ & = \sum_{k=0}^{N-1} \lambda_k \langle \psi |\phi_k\rangle \langle \phi_k| \psi \rangle \\ & = \sum_{k=0}^{N-1} \lambda_k |\langle \psi |\phi_k\rangle|^2 \end{aligned}$$

If we take into account that $\lambda_0 \leq \lambda_k,\ \forall k$, we have:

$$\begin{aligned} \langle \psi | \hat{\mathcal{H}} | \psi \rangle & = \sum_{k=0}^{N-1} \lambda_k |\langle \psi |\phi_k\rangle|^2 \\ & \geq \sum_{k=0}^{N-1} \lambda_0 |\langle \psi |\phi_k\rangle|^2 \\ & = \lambda_0 \sum_{k=0}^{N-1} |\langle \psi |\phi_k\rangle|^2 \\ & = \lambda_0 \end{aligned}$$

Since $\{|\phi_k\rangle\}_{k=0}^{N-1}$ is an orthonormal basis, the probability of measuring $|\phi_k\rangle$ is $p_k = |\langle \psi |\phi_k\rangle|^2$, and the probabilities sum to one: $\sum_{k=0}^{N-1} |\langle \psi |\phi_k\rangle|^2 = \sum_{k=0}^{N-1} p_k = 1$. In short, the expected energy of any system is at least the lowest energy, or ground-state energy:

$$\langle \psi | \hat{\mathcal{H}} | \psi \rangle \geq \lambda_0.$$
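This bound is easy to check numerically. The sketch below draws a random Hermitian matrix as a stand-in Hamiltonian and verifies that every normalized state's expected energy is at least the smallest eigenvalue (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Random Hermitian "Hamiltonian" on a 4-dimensional state space
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

# Smallest eigenvalue = ground-state energy lambda_0
lam0 = np.linalg.eigvalsh(H)[0]

# Every normalized state has expected energy >= lambda_0
for _ in range(100):
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    energy = np.real(psi.conj() @ H @ psi)
    assert energy >= lam0 - 1e-12
```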

The above argument applies to any valid (normalized) quantum state $|\psi\rangle$, so it is perfectly possible to consider parametrized states $|\psi(\vec\theta)\rangle$ that depend on a parameter vector $\vec\theta$. This is where the "variational" part comes into play. If we consider a cost function given by $C(\vec\theta) := \langle \psi(\vec\theta)|\hat{\mathcal{H}}|\psi(\vec\theta)\rangle$ and want to minimize it, the minimum will always satisfy:

$$\min_{\vec\theta} C(\vec\theta) = \min_{\vec\theta} \langle \psi(\vec\theta)|\hat{\mathcal{H}}|\psi(\vec\theta)\rangle \geq \lambda_0.$$

The minimum value of $C(\vec\theta)$ will be the closest one can get to $\lambda_0$ using the parametrized states $|\psi(\vec\theta)\rangle$, and equality will only be reached if there exists a parameter vector $\vec\theta^*$ such that $|\psi(\vec\theta^*)\rangle = |\phi_0\rangle$.
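To see how the inequality can be strict, consider an ansatz whose search space does not contain the ground state. A hypothetical example: the Pauli-Y observable has $\lambda_0 = -1$, but its ground state has complex amplitudes, so a real-amplitude ansatz can never reach it:

```python
import numpy as np
from scipy.optimize import minimize

# Pauli-Y observable: eigenvalues +1 and -1; its ground state
# (|0> - i|1>)/sqrt(2) has complex amplitudes.
Y = np.array([[0, -1j], [1j, 0]])
lam0 = -1.0

def cost(theta):
    # Real-amplitude ansatz: cos(theta/2)|0> + sin(theta/2)|1>
    psi = np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)], dtype=complex)
    return np.real(psi.conj() @ Y @ psi)

result = minimize(cost, x0=[0.3], method="COBYLA")
# min C(theta) = 0 > lambda_0 = -1: the ground state |phi_0>
# lies outside this ansatz's search space, so equality is never reached.
```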

Variational theorem of Quantum Mechanics

If the (normalized) state $|\psi\rangle$ of a quantum system depends on a parameter vector $\vec\theta$, then the optimal approximation of the ground state (i.e., the eigenstate $|\phi_0\rangle$ with the minimum eigenvalue $\lambda_0$) is the one that minimizes the expectation value of the Hamiltonian $\hat{\mathcal{H}}$:

$$\langle \hat{\mathcal{H}} \rangle(\vec\theta) := \langle \psi(\vec\theta) |\hat{\mathcal{H}}| \psi(\vec\theta) \rangle \geq \lambda_0$$

The variational theorem is stated in terms of energy minima because it relies on a number of mathematical assumptions:

  • For physical reasons, a finite lower bound to the energy, $E \geq \lambda_0 > -\infty$, needs to exist, even for $N \rightarrow \infty$.
  • Upper bounds do not generally exist.

However, mathematically speaking, there is nothing special about the Hamiltonian H^\hat{\mathcal{H}} beyond these assumptions, so the theorem can be generalized to other quantum observables and their eigenstates provided they follow the same constraints. Also, note that if finite upper bounds exist, the same mathematical arguments could be made for maximizing eigenvalues by swapping lower bounds for upper bounds.
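That last remark can be verified directly: for an observable with a finite upper bound, maximizing the expectation value of $\hat{\mathcal{H}}$ is the same as minimizing that of $-\hat{\mathcal{H}}$. A small numerical check with an arbitrary symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
H = (A + A.T) / 2  # real symmetric observable; finite spectrum on both ends

# The largest eigenvalue of H equals minus the smallest eigenvalue of -H,
# so a maximizing variational search over H is a minimizing search over -H.
max_H = np.linalg.eigvalsh(H)[-1]
min_negH = np.linalg.eigvalsh(-H)[0]
assert np.isclose(max_H, -min_negH)
```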

Summary

With this lesson, you learned the high-level view of variational algorithms. In the following lessons, we'll explore each step in greater detail, along with the associated trade-offs.
