# Variational quantum eigensolver


## Background

Variational quantum algorithms are promising candidate hybrid algorithms for observing the utility of quantum computation on noisy near-term devices. Variational algorithms are characterized by the use of a classical optimization algorithm to iteratively update a parameterized trial solution, or "ansatz". Chief among these methods is the Variational Quantum Eigensolver (VQE), which aims to solve for the ground state of a given Hamiltonian represented as a linear combination of Pauli terms, using an ansatz circuit whose number of parameters is polynomial in the number of qubits. Because the size of the full solution vector is exponential in the number of qubits, successful minimization with VQE generally requires additional problem-specific information to define the structure of the ansatz circuit.

Executing a VQE algorithm requires the following three components:

- Hamiltonian and ansatz (problem specification)
- Qiskit Runtime estimator
- Classical optimizer

Although the Hamiltonian and ansatz require domain-specific knowledge to construct, these details are immaterial to the Runtime, and we can execute a wide class of VQE problems in the same manner.

## Requirements

Before starting this tutorial, ensure that you have the following installed:

- Qiskit SDK 1.0 or later, with visualization support (`pip install 'qiskit[visualization]'`)
- Qiskit Runtime 0.22 or later (`pip install qiskit-ibm-runtime`)
- SciPy (`python -m pip install scipy`)

## Setup

Here we import the tools needed for a VQE experiment.


Output:

```
'ibmq_mumbai'
```

## Step 1: Map classical inputs to a quantum problem

Here we define the problem instance for our VQE algorithm. Although the problem in question can come from a variety of domains, the form for execution through Qiskit Runtime is the same. Qiskit provides a convenience class for expressing Hamiltonians in Pauli form (`SparsePauliOp`), and a collection of widely used ansatz circuits in the `qiskit.circuit.library` module.

Here, our example Hamiltonian is derived from a quantum chemistry problem.


Our choice of ansatz is a hardware-efficient parameterized circuit from the Qiskit circuit library.

Output: (diagram of the ansatz circuit)

From the previous figure we see that our ansatz circuit is defined by a vector of parameters, $\theta_{i}$, with the total number given by:

Output:

```
16
```

## Step 2: Optimize problem for quantum execution

To reduce the total job execution time, Qiskit Runtime V2 primitives only accept circuits (the ansatz) and observables (the Hamiltonian) that conform to the instructions and connectivity supported by the target system (referred to as instruction set architecture (ISA) circuits and observables, respectively).

### ISA Circuit

We can schedule a series of `qiskit.transpiler` passes to transform our circuit into an ISA circuit. The simplest approach is a preset staged pass manager, whose most important option is `optimization_level`:

`optimization_level`: The lowest optimization level does only the bare minimum needed to get the circuit running on the device: it maps the circuit qubits to the device qubits and inserts swap gates so that all two-qubit operations are compatible with the device connectivity. The highest optimization level is much smarter and uses many tricks to reduce the overall gate count. Since multi-qubit gates have high error rates and qubits decohere over time, shorter circuits should give better results.


Output: (diagram of the transpiled ISA circuit)

### ISA Observable

Similarly, we need to transform the Hamiltonian to make it backend-compatible before running jobs with the Runtime estimator.


## Step 3: Execute using Qiskit primitives

Like many classical optimization problems, the solution to a VQE problem can be formulated as minimization of a scalar cost function. By definition, VQE looks to find the ground state solution to a Hamiltonian by optimizing the ansatz circuit parameters to minimize the expectation value (energy) of the Hamiltonian. With the Qiskit Runtime estimator, this cost function can be evaluated directly on quantum hardware.

Note that the `run()` method of the Qiskit Runtime `EstimatorV2` takes an iterable of primitive unified blocs (PUBs), where each PUB is an iterable in the format `(circuit, observables, parameter_values)`.


Note that, in addition to the array of optimization parameters, which must be the first argument, we use additional arguments to pass the remaining terms needed in the cost function.

### Creating a callback function

Callback functions are a standard way to obtain additional information about the status of an iterative algorithm. The standard SciPy callback passes only the interim parameter vector at each iteration. However, it is possible to do much more than this. Here we show how to use a mutable object, such as a dictionary, to store the current vector at each iteration (for example, in case we need to restart the routine after a failure) and to track the current iteration number and average time per iteration.
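A sketch of such a callback factory (the cost-function callable is passed in explicitly here so the snippet stands alone; in the tutorial it would close over the estimator as in the previous step):

```python
import time


def build_callback(ansatz, hamiltonian, estimator, callback_dict, cost_func):
    """Return a SciPy-compatible callback that records progress in callback_dict."""

    def callback(current_vector):
        callback_dict["iters"] += 1                    # iteration counter
        callback_dict["prev_vector"] = current_vector  # latest vector, for restarts
        # Re-evaluate the cost at the current vector so progress can be reported.
        current_cost = cost_func(current_vector, ansatz, hamiltonian, estimator)
        callback_dict["cost_history"].append(current_cost)
        # Track the average time per iteration (measured after the first one).
        now = time.perf_counter()
        if callback_dict["_prev_time"] is not None:
            callback_dict["_total_time"] += now - callback_dict["_prev_time"]
        callback_dict["_prev_time"] = now
        avg_time = callback_dict["_total_time"] / max(callback_dict["iters"] - 1, 1)
        print(
            f"Iters. done: {callback_dict['iters']} "
            f"[Current cost: {current_cost}] "
            f"[Avg. time per iter: {avg_time:.2f} s]",
            end="\r",
            flush=True,
        )

    return callback


callback_dict = {
    "prev_vector": None,
    "iters": 0,
    "cost_history": [],
    "_total_time": 0.0,
    "_prev_time": None,
}
```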


We can now use a classical optimizer of our choice to minimize the cost function. Here, we use the COBYLA routine from SciPy through the `minimize` function.

To begin the routine, we specify a random initial set of parameters:
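For instance, with the 16 ansatz parameters counted earlier:

```python
import numpy as np

num_params = 16  # ansatz.num_parameters from Step 1
x0 = 2 * np.pi * np.random.random(num_params)  # uniform in [0, 2*pi)
x0
```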


Output:

```
array([5.07056716, 1.86434912, 1.27835939, 3.41939336, 5.05479277,
1.863352 , 2.71667884, 5.03560174, 1.95941096, 3.16362623,
5.92007134, 5.27294266, 1.72488001, 1.66385271, 4.23805393,
5.34258604])
```

Because we are sending a large number of jobs that we would like to execute together, we use a Session to run them as a single block.

Output:

```
Iters. done: 169 [Current cost: -0.6057352426069124]
```

At the terminus of this routine we have a result in the standard SciPy `OptimizeResult` format:

Output:

```
message: Optimization terminated successfully.
success: True
status: 1
fun: -0.6111644347854737
x: [ 6.916e+00 1.971e+00 ... 4.950e+00 5.211e+00]
nfev: 169
maxcv: 0.0
```

## Step 4: Post-process and return result in classical format

If the procedure terminates correctly, then the final parameter vector and iteration count stored by our callback should match the solution vector and number of function evaluations returned by the optimizer:
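These checks can be expressed as follows (a sketch with stand-in values; in the tutorial, `res` is the SciPy result and `callback_dict` is the dictionary populated by the callback):

```python
import numpy as np
from scipy.optimize import OptimizeResult

# Stand-ins for the optimizer result and the callback record (illustrative).
res = OptimizeResult(x=np.array([6.916, 1.971]), nfev=169)
callback_dict = {"prev_vector": np.array([6.916, 1.971]), "iters": 169}

# The last vector stored by the callback should equal the solution vector...
vectors_match = all(res.x == callback_dict["prev_vector"])
# ...and the iteration count should equal the number of cost evaluations.
counts_match = res.nfev == callback_dict["iters"]
```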

Output:

```
True
```

Output:

```
True
```

We can also now view the progress towards convergence as monitored by the cost history at each iteration:
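A sketch of this plot, using an illustrative cost history (the tutorial would plot the `cost_history` list recorded by the callback):

```python
import matplotlib

matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt

# Illustrative values; in practice use callback_dict["cost_history"].
cost_history = [-0.10, -0.35, -0.50, -0.58, -0.605, -0.611]

fig, ax = plt.subplots()
ax.plot(range(len(cost_history)), cost_history)
ax.set_xlabel("Iterations")
ax.set_ylabel("Cost")
fig.tight_layout()
```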

Output: (plot of the cost history versus iteration)

Output:

```
'0.21.1'
```

Output:

```
'1.0.1'
```
