# Cost Functions

During this lesson, we'll learn how to evaluate a *cost function*:

- First, we'll learn about Qiskit Runtime primitives
- Then, we'll define a *cost function* $C(\vec\theta)$: a problem-specific function that defines the problem's goal for the optimizer to minimize (or maximize)
- Finally, we'll define a measurement strategy with the Qiskit Runtime primitives to balance speed against accuracy

## Primitives

All physical systems, whether classical or quantum, can exist in different states. For example, a car on a road can have a certain mass, position, speed, or acceleration that characterize its state. Similarly, quantum systems can also have different configurations or states, but they differ from classical systems in how we deal with measurements and state evolution. This leads to unique properties such as *superposition* and *entanglement* that are exclusive to quantum mechanics. Just like we can describe a car's state using physical properties like speed or acceleration, we can also describe the state of a quantum system using *observables*, which are mathematical objects.

In quantum mechanics, states are represented by normalized complex column vectors, or *kets* ($|\psi\rangle$), and observables are hermitian linear operators ($\hat{H}=\hat{H}^{\dagger}$) that act on the kets. An eigenvector ($|\lambda\rangle$) of an observable is known as an *eigenstate*. Measuring an observable for one of its eigenstates ($|\lambda\rangle$) will give us the corresponding eigenvalue ($\lambda$) as readout.

If you're wondering how to measure a quantum system and what you can measure, Qiskit offers two primitives that can help:

- **Sampler**: Given a quantum state $|\psi\rangle$, this primitive obtains the probability of each possible computational basis state.
- **Estimator**: Given a quantum observable $\hat{H}$ and a state $|\psi\rangle$, this primitive computes the expected value of $\hat{H}$.

### The Sampler primitive

The `Sampler` primitive calculates the probability $p_k$ of obtaining each possible computational basis state $|k\rangle$ when measuring the state $|\psi\rangle$ prepared by a given circuit:

$p_k := |\langle k|\psi\rangle|^2,$

where $n$ is the number of qubits, and $k$ the integer representation of any possible output binary string $\{0,1\}^n$ (i.e. integers base $2$).
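As an illustration of what the `Sampler` estimates, here is a minimal NumPy sketch (not the Qiskit API) that computes these probabilities exactly for an assumed two-qubit Bell state:

```python
import numpy as np

# Statevector of the Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)

# p_k = |<k|psi>|^2 for every computational basis state |k>
probabilities = np.abs(psi) ** 2

n = int(np.log2(len(psi)))  # number of qubits
for k, p_k in enumerate(probabilities):
    # |00> and |11> each have probability 0.5, the rest 0
    print(f"|{k:0{n}b}> -> {p_k:.3f}")
```

On hardware, the `Sampler` approximates this distribution from a finite number of shots rather than computing it from the statevector.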

Qiskit Runtime's `Sampler` does not calculate these probabilities analytically but estimates them by sampling: the more samples (*shots*) it performs, the more accurate the results will be, but this requires more time and quantum resources.

However, since the number of possible outputs grows exponentially with the number of qubits $n$ (i.e. $2^n$), the number of shots would need to grow exponentially as well in order to capture a *dense* probability distribution. Therefore, sampling is only efficient for *sparse* probability distributions, where the target state $|\psi\rangle$ must be expressible as a linear combination of the computational basis states, with the number of terms growing at most polynomially with the number of qubits:

$|\psi\rangle = \sum_k^{\text{Poly}(n)} a_k |k\rangle.$

The `Sampler` returns these estimates as a quasi-probability distribution: the measured counts normalized by the total number of shots, possibly adjusted by error mitigation.

### The Estimator primitive

The `Estimator` primitive calculates the expectation value $\langle\hat{H}\rangle_\psi := \langle\psi|\hat{H}|\psi\rangle$ of an observable $\hat{H}$ with respect to a state $|\psi\rangle$ prepared by a given circuit.

However, calculating the expectation value of an observable is not always possible, as we often don't know its eigenbasis. Qiskit Runtime's `Estimator` circumvents this by decomposing the observable into Pauli operators, whose eigenbases are known in advance.

In simpler terms, the `Estimator` exploits the fact that any operator can be expressed as a linear combination of $4^n$ Pauli operators:

$\hat{P}_k := \sigma_{k_{n-1}}\otimes \cdots \otimes \sigma_{k_0} \quad \forall k \in \mathbb{Z}_4^n \equiv \{0,1,\cdots,4^n-1\},$

such that

$\hat{H} = \sum^{4^n-1}_{k=0} w_k \hat{P}_k,$

where $n$ is the number of qubits, $k \equiv k_{n-1} \cdots k_0$ for $k_l \in \mathbb{Z}_4 \equiv \{0, 1, 2, 3\}$ (i.e. integers base $4$), and $(\sigma_0, \sigma_1, \sigma_2, \sigma_3) := (I, X, Y, Z)$.
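Because the Pauli products are orthogonal under the trace inner product, the weights can be computed as $w_k = \mathrm{Tr}(\hat{P}_k \hat{H})/2^n$. A minimal single-qubit NumPy sketch, using the observable $2X - Z$ that appears later in this lesson:

```python
import numpy as np

# Single-qubit Pauli basis
paulis = {
    "I": np.eye(2),
    "X": np.array([[0, 1], [1, 0]]),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]]),
}

# Hermitian observable H = 2X - Z
H = np.array([[-1, 2], [2, 1]])

# w_k = Tr(P_k @ H) / 2^n (here n = 1)
weights = {name: float(np.trace(P @ H).real / 2) for name, P in paulis.items()}
print(weights)  # {'I': 0.0, 'X': 2.0, 'Y': 0.0, 'Z': -1.0}
```

For $n$ qubits the same formula applies with tensor products of Paulis, though computing all $4^n$ traces classically is only feasible for small systems.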

After performing this decomposition, we can apply a known unitary $V_k$ to *diagonalize* each Pauli observable in the computational basis and measure it. We can easily measure Pauli observables because we know $V_k$ ahead of time, which is generally not the case for other observables.

For each $\hat{P}_{k}$, the `Estimator` measures the expectation value $\langle \hat{P}_k \rangle$ in its diagonal basis and classically combines the results with the weights $w_k$ to obtain $\langle\hat{H}\rangle = \sum_k w_k \langle\hat{P}_k\rangle$.

Since calculating the expectation value of $4^n$ Paulis is impractical (i.e. grows exponentially), the `Estimator` is only efficient for observables with a *sparse* Pauli decomposition (instead of a *dense* one). Formally we say that, for this computation to be *efficiently solvable*, the number of non-zero terms has to grow at most polynomially with the number of qubits $n$: $\hat{H} = \sum^{\text{Poly}(n)}_k w_k \hat{P}_k.$

The reader may notice the implicit assumption that sampling the resulting probability distributions also needs to be efficient, as explained for the `Sampler`.

### Guided example to calculate expectation values

Let's assume the single-qubit state $|+\rangle := H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle)$, and observable

$\begin{aligned} \hat{H} & = \begin{pmatrix} -1 & 2 \\ 2 & 1 \\ \end{pmatrix}\\[1mm] & = 2X - Z \end{aligned}$

with the following theoretical expectation value:

$\langle\hat{H}\rangle_+ = \langle+|\hat{H}|+\rangle = 2.$

Since we do not know how to measure this observable, we cannot compute its expectation value directly, and we need to re-express it as $\langle\hat{H}\rangle_+ = 2\langle X \rangle_+ - \langle Z \rangle_+$, which can be shown to evaluate to the same result by noting that $\langle+|X|+\rangle = 1$ and $\langle+|Z|+\rangle = 0$.

Let's see how to compute $\langle X \rangle_+$ and $\langle Z \rangle_+$ directly. Since $X$ and $Z$ do not commute (i.e. they don't share the same eigenbasis), they cannot be measured simultaneously, so we need a separate auxiliary measurement circuit for each.
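Since the lesson's circuit cells are not shown here, the idea can be sketched with plain NumPy statevectors (not the Qiskit API): to measure $X$ we apply a Hadamard to rotate into its eigenbasis before a computational-basis measurement, while $Z$ needs no rotation.

```python
import numpy as np

H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# State |+> = H|0>
plus = H_gate @ np.array([1, 0])

def z_basis_probabilities(state):
    """Probabilities of measuring |0> and |1> in the computational basis."""
    return np.abs(state) ** 2

# <Z>: measure directly; eigenvalues are +1 for |0>, -1 for |1>
p0, p1 = z_basis_probabilities(plus)
expval_z = p0 - p1

# <X>: apply H first to diagonalize X in the Z basis, then measure
p0, p1 = z_basis_probabilities(H_gate @ plus)
expval_x = p0 - p1

print(f"<X> = {expval_x:.5f}, <Z> = {expval_z:.5f}")  # <X> = 1.00000, <Z> = 0.00000
```

On hardware these probabilities would come from sampled counts, so the values fluctuate with the number of shots.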


We can now carry out the computation manually using the `Sampler` and `Estimator` primitives:

Output:

```
Sampler results:
>> Expected value of X: 1.00000
>> Expected value of Z: 0.00000
>> Total expected value: 2.00000
Estimator results:
>> Expected value of X: 1.00000
>> Expected value of Z: 0.00000
>> Total expected value: 2.00000
```

### Mathematical rigor (optional)

Expressing $|\psi\rangle$ with respect to the basis of eigenstates of $\hat{H}$, $|\psi\rangle = \sum_\lambda a_\lambda |\lambda\rangle$, it follows:

$\begin{aligned} \langle \psi | \hat{H} | \psi \rangle & = \bigg(\sum_{\lambda'}a^*_{\lambda'} \langle \lambda'|\bigg) \hat{H} \bigg(\sum_{\lambda} a_\lambda | \lambda\rangle\bigg)\\[1mm] & = \sum_{\lambda}\sum_{\lambda'} a^*_{\lambda'}a_{\lambda} \langle \lambda'|\hat{H}| \lambda\rangle\\[1mm] & = \sum_{\lambda}\sum_{\lambda'} a^*_{\lambda'}a_{\lambda} \lambda \langle \lambda'| \lambda\rangle\\[1mm] & = \sum_{\lambda}\sum_{\lambda'} a^*_{\lambda'}a_{\lambda} \lambda \cdot \delta_{\lambda, \lambda'}\\[1mm] & = \sum_\lambda |a_\lambda|^2 \lambda\\[1mm] & = \sum_\lambda p_\lambda \lambda\\[1mm] \end{aligned}$

Since we do not know the eigenvalues or eigenstates of the target observable $\hat{H}$, we first need to consider its diagonalization. Given that $\hat{H}$ is hermitian, there exists a unitary transformation $V$ such that $\hat{H}=V^\dagger \Lambda V,$ where $\Lambda$ is the diagonal eigenvalue matrix, so $\langle j | \Lambda | k \rangle = 0$ if $j\neq k$, and $\langle j | \Lambda | j \rangle = \lambda_j$.

This implies that the expected value can be rewritten as:

$\begin{aligned} \langle\psi|\hat{H}|\psi\rangle & = \langle\psi|V^\dagger \Lambda V|\psi\rangle\\[1mm] & = \langle\psi|V^\dagger \bigg(\sum_{j=0}^{2^n-1} |j\rangle \langle j|\bigg) \Lambda \bigg(\sum_{k=0}^{2^n-1} |k\rangle \langle k|\bigg) V|\psi\rangle\\[1mm] & = \sum_{j=0}^{2^n-1} \sum_{k=0}^{2^n-1}\langle\psi|V^\dagger |j\rangle \langle j| \Lambda |k\rangle \langle k| V|\psi\rangle\\[1mm] & = \sum_{j=0}^{2^n-1}\langle\psi|V^\dagger |j\rangle \langle j| \Lambda |j\rangle \langle j| V|\psi\rangle\\[1mm] & = \sum_{j=0}^{2^n-1}|\langle j| V|\psi\rangle|^2 \lambda_j\\[1mm] \end{aligned}$

Given that, if a system is in the state $|\phi\rangle = V |\psi\rangle$, the probability of measuring $| j\rangle$ is $p_j = |\langle j|\phi \rangle|^2$, the above expected value can be expressed as:

$\langle\psi|\hat{H}|\psi\rangle = \sum_{j=0}^{2^n-1} p_j \lambda_j.$

It is very important to note that the probabilities are taken from the state $V |\psi\rangle$ instead of $|\psi\rangle$. This is why the matrix $V$ is absolutely necessary.
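This identity is easy to check numerically. A small NumPy sketch with a randomly generated Hermitian observable and state (both assumptions for the check):

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Random Hermitian observable and normalized state on two qubits
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H_obs = (A + A.conj().T) / 2
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)

# Direct expectation value <psi|H|psi>
direct = (psi.conj() @ H_obs @ psi).real

# Via diagonalization H = V^dagger Lambda V, with probabilities from V|psi>
eigenvalues, eigenvectors = np.linalg.eigh(H_obs)  # columns are eigenvectors
V = eigenvectors.conj().T                          # so that H = V^dagger diag(lam) V
p = np.abs(V @ psi) ** 2                           # p_j = |<j|V|psi>|^2
via_probs = np.sum(p * eigenvalues)                # sum_j p_j * lambda_j

assert np.isclose(direct, via_probs)
```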

You might be wondering how to obtain the matrix $V$ and the eigenvalues $\Lambda$. If we already knew them, there would be no need to use a quantum computer, since finding the eigenvalues of $\hat{H}$ is precisely the goal of variational algorithms.

Fortunately, there is a way around that: any $2^n \times 2^n$ matrix can be written as a linear combination of $4^n$ tensor products of $n$ Pauli matrices and identities, all of which are both hermitian and unitary with known $V$ and $\Lambda$. This is exactly what Qiskit Runtime's `Estimator` exploits.

Here are the operators that can be used:

$\begin{array}{c|c|c|c} \text{Operator} & \sigma & V & \Lambda \\[1mm] \hline I & \sigma_0 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} & V_0 = I & \Lambda_0 = I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \\[4mm] X & \sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} & V_1 = H =\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix} & \Lambda_1 = \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\[4mm] Y & \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} & V_2 = HS^\dagger =\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}\cdot \begin{pmatrix} 1 & 0 \\ 0 & -i \end{pmatrix} = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & -i \\ 1 & i \end{pmatrix}\quad & \Lambda_2 = \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \\[4mm] Z & \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} & V_3 = I & \Lambda_3 = \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \end{array}$

So let's rewrite $\hat{H}$ with respect to the Paulis and identities:

$\hat{H} = \sum_{k_{n-1}=0}^3... \sum_{k_0=0}^3 w_{k_{n-1}...k_0} \sigma_{k_{n-1}}\otimes ... \otimes \sigma_{k_0} = \sum_{k=0}^{4^n-1} w_k \hat{P}_k,$where $k = \sum_{l=0}^{n-1} 4^l k_l \equiv k_{n-1}...k_0$ for $k_{n-1},...,k_0\in \{0,1,2,3\}$ (i.e. base $4$), and $\hat{P}_{k} := \sigma_{k_{n-1}}\otimes ... \otimes \sigma_{k_0}$:

$\begin{aligned} \langle\psi|\hat{H}|\psi\rangle & = \sum_{k=0}^{4^n-1} w_k \sum_{j=0}^{2^n-1}|\langle j| V_k|\psi\rangle|^2 \langle j| \Lambda_k |j\rangle \\[1mm] & = \sum_{k=0}^{4^n-1} w_k \sum_{j=0}^{2^n-1}p_{kj} \lambda_{kj}, \\[1mm] \end{aligned}$

where $V_k := V_{k_{n-1}}\otimes ... \otimes V_{k_0}$ and $\Lambda_k := \Lambda_{k_{n-1}}\otimes ... \otimes \Lambda_{k_0}$, such that $\hat{P_k}=V_k^\dagger \Lambda_k V_k.$
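The diagonalizations from the table above can be verified directly, e.g. $X = V_1^\dagger \Lambda_1 V_1$ and $Y = V_2^\dagger \Lambda_2 V_2$ with $V_2 = HS^\dagger$. A minimal NumPy check:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])  # the diagonal matrix Lambda for X, Y, and Z

H_gate = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S_dag = np.array([[1, 0], [0, -1j]])

# V matrices from the table: V_1 = H diagonalizes X, V_2 = H S^dagger diagonalizes Y
V1 = H_gate
V2 = H_gate @ S_dag

assert np.allclose(V1.conj().T @ Z @ V1, X)  # X = V_1^dagger Lambda_1 V_1
assert np.allclose(V2.conj().T @ Z @ V2, Y)  # Y = V_2^dagger Lambda_2 V_2
```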

## Cost functions

In general, cost functions are used to describe the goal of a problem and how well a trial state is performing with respect to that goal. This definition can be applied to various examples in chemistry, machine learning, finance, optimization, and so on.

Let's consider a simple example of finding the ground state of a system. Our objective is to minimize the expectation value of the observable representing energy (Hamiltonian $\hat{\mathcal{H}}$):

$\min_{\vec\theta} \langle\psi(\vec\theta)|\hat{\mathcal{H}}|\psi(\vec\theta)\rangle$

We can use the `Estimator` primitive to evaluate this expectation value as our cost function.

Notice how we will only be able to minimize the cost function for the limited set of states that we are considering. This leads us to two separate possibilities:

- **Our ansatz cannot represent the solution state within the search space**: if this is the case, our optimizer will never find the solution, and we need to experiment with other ansatzes that might represent our search space more accurately.
- **Our optimizer is unable to find the valid solution**: optimization can be defined globally or locally; we'll explore what this means in a later section.

All in all, we will be performing a classical optimization loop, but delegating the evaluation of the cost function to a quantum computer. From this perspective, one could think of the optimization as a purely classical endeavor in which the optimizer calls a quantum subroutine each time it needs to evaluate the cost function.

Output:

```
1.478
```
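Although the original code cells are not shown, the loop can be sketched in plain NumPy (not the Runtime API). Here a one-parameter ansatz $|\psi(\theta)\rangle = R_y(\theta)|0\rangle$ minimizes $\langle Z\rangle_{\psi(\theta)} = \cos\theta$, with a coarse grid search standing in for the classical optimizer; all names are assumptions for the example:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]])

def ansatz(theta):
    """R_y(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(theta):
    """Expectation value <psi(theta)|Z|psi(theta)> — evaluated on hardware in practice."""
    psi = ansatz(theta)
    return float(psi.conj() @ Z @ psi)

# Stand-in for the classical optimizer: coarse grid search over theta
thetas = np.linspace(0, 2 * np.pi, 201)
best = min(thetas, key=cost)
print(f"theta* = {best:.3f}, cost = {cost(best):.3f}")  # minimum near theta = pi, cost = -1
```

In practice the grid search would be replaced by a proper optimizer, and the cost evaluation by an `Estimator` call.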

### Example mapping to non-physical systems

The maximum cut (Max-Cut) problem is a combinatorial optimization problem that involves dividing the vertices of a graph into two disjoint sets such that the number of edges between the two sets is maximized. More formally, given an undirected graph $G=(V,E)$, where $V$ is the set of vertices and $E$ is the set of edges, the Max-Cut problem asks to partition the vertices into two disjoint subsets, $S$ and $T$, such that the number of edges with one endpoint in $S$ and the other in $T$ is maximized.

We can apply Max-Cut to various problems, including clustering, network design, and phase transitions. We'll start by creating a problem graph.
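Since the original cell is not shown, here is a minimal sketch that encodes a hypothetical 4-node cycle graph as an adjacency matrix (the edges and weights are assumptions; the lesson's actual graph may differ):

```python
import numpy as np

# Adjacency matrix of a 4-node cycle graph: edges (0,1), (1,2), (2,3), (3,0)
w = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])

assert (w == w.T).all()  # undirected graph: w_ij = w_ji
```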


This problem can be expressed as a binary optimization problem. For each node $0 \leq i < n$, where $n$ is the number of nodes of the graph (in this case $n=4$), we will consider the binary variable $x_i$. This variable will have the value $1$ if node $i$ is in one of the groups, which we'll label $1$, and $0$ if it's in the other group, which we'll label $0$. We will also denote as $w_{ij}$ (element $(i,j)$ of the adjacency matrix $w$) the weight of the edge that goes from node $i$ to node $j$. Because the graph is undirected, $w_{ij}=w_{ji}$. Then we can formulate our problem as maximizing the following cost function:

$\begin{aligned} C(\vec{x}) & =\sum_{i,j=0}^{n-1} w_{ij} x_i(1-x_j)\\[1mm] & = \sum_{i,j=0}^{n-1} w_{ij} x_i - \sum_{i,j=0}^{n-1} w_{ij} x_ix_j\\[1mm] & = \sum_{i,j=0}^{n-1} w_{ij} x_i - \sum_{i=0}^{n-1} \sum_{j=0}^i 2w_{ij} x_ix_j \end{aligned}$
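Putting it together, here is a brute-force sketch that evaluates $C(\vec{x})$ for every assignment and picks the best cut, again using a hypothetical 4-node cycle graph as the weight matrix (an assumption; the lesson's actual graph may differ):

```python
from itertools import product

import numpy as np

# Hypothetical weight matrix: a 4-node cycle graph
w = np.array([
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
])
n = len(w)

def cut_value(x):
    """C(x) = sum_ij w_ij * x_i * (1 - x_j): total weight of edges crossing the partition."""
    x = np.array(x)
    return float(x @ w @ (1 - x))

# Brute force over all 2^n assignments (only feasible for small n)
best = max(product([0, 1], repeat=n), key=cut_value)
print(best, cut_value(best))  # an alternating partition cuts all 4 edges
```

For larger graphs this exhaustive search becomes intractable, which is exactly where the variational approach of encoding $C(\vec{x})$ as a quantum observable comes in.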