Grover's algorithm
Introduction
In this lesson we'll discuss Grover's algorithm, which is a quantum algorithm for so-called unstructured search problems that offers a quadratic improvement over classical algorithms. What this means is that Grover's algorithm requires a number of operations on the order of the square root of the number of operations required to solve unstructured search classically, which is equivalent to saying that classical algorithms for unstructured search must have a cost at least on the order of the square of the cost of Grover's algorithm. Grover's algorithm, together with its extensions and underlying methodology, turns out to be broadly applicable, leading to a quadratic advantage for many interesting computational tasks that may not look like unstructured search problems on the surface.
While the broad applicability of Grover's searching technique is compelling, it should be acknowledged here at the start of the lesson that the quadratic advantage it offers seems unlikely to lead to a practical advantage of quantum over classical computing any time soon. Classical computing hardware is currently so much more advanced than quantum computing hardware that the quadratic quantum-over-classical advantage offered by Grover's algorithm is certain to be washed away by the staggering clock speeds of modern classical computers for any unstructured search problem that could feasibly be run on the quantum computers of today.
As quantum computing technology advances, however, Grover's algorithm does have potential. Indeed, some of the most important and impactful classical algorithms ever discovered, including the fast Fourier transform and fast sorting (e.g., quicksort and mergesort), offer slightly less than a quadratic advantage over naive approaches to the problems they solve. The key difference here, of course, is that an entirely new technology (meaning quantum computing) is required to run Grover's algorithm. While this technology is still very much in its infancy in comparison to classical computing, we should not be so quick to underestimate the potential of technological advances that could allow a quadratic advantage of quantum over classical computing to one day offer tangible practical benefits.
Unstructured search
Summary
We'll begin with a description of the problem that Grover's algorithm solves. As usual, we'll let $\Sigma = \{0, 1\}$ denote the binary alphabet throughout this discussion.
Suppose that

$$f : \Sigma^n \rightarrow \Sigma$$

is a function from binary strings of length $n$ to bits. We'll assume that we can compute this function efficiently, but otherwise it's arbitrary and we can't rely on it having a special structure or specific implementation that suits our needs.
What Grover's algorithm does is to search for a string $x \in \Sigma^n$ for which $f(x) = 1$. We'll refer to strings like this as solutions to the searching problem. If there are multiple solutions, then any one of them is considered to be a correct output, and if there are no solutions, then a correct answer requires that we report that there are no solutions.
We describe this task as an unstructured search problem because we can't rely on $f$ having any particular structure to make it easy. We're not searching an ordered list or within some data structure specifically designed to facilitate searching; we're essentially looking for a needle in a haystack. From an intuitive point of view, we might imagine that we have an extremely complicated Boolean circuit that computes $f$, and we can easily run this circuit on a selected input string if we choose. But because it's so convoluted, we have no hope of making sense of the circuit by examining it (beyond having the ability to evaluate it on selected input strings).
One way to perform this searching task classically is to simply iterate through all of the strings $x \in \Sigma^n$, evaluating $f$ on each one to check whether or not it is a solution. Hereafter, let's write

$$N = 2^n,$$

just for the sake of convenience, so we can say that there are $N$ strings in $\Sigma^n$. Iterating through all of them requires $N$ evaluations of $f$. Operating under the assumption that we're limited to evaluating $f$ on chosen inputs, this is the best we can do with a deterministic algorithm if we want to guarantee success. With a probabilistic algorithm, we might hope to save time by randomly choosing input strings on which to evaluate $f$, but we'll still require on the order of $N$ evaluations of $f$ if we want this method to succeed with high probability.
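To make this classical baseline concrete, here is a small Python sketch (not part of the original lesson) of the exhaustive search; the function `f` used here is a hypothetical placeholder that marks a single string.

```python
# Classical exhaustive search: iterate over all N = 2^n strings and evaluate f
# on each one. The function f below is a hypothetical placeholder.
from itertools import product

def f(x: str) -> int:
    return 1 if x == "10110" else 0       # hypothetical "needle"

def exhaustive_search(n: int):
    for bits in product("01", repeat=n):  # up to N = 2^n evaluations of f
        x = "".join(bits)
        if f(x) == 1:
            return x
    return None                           # report "no solution"

print(exhaustive_search(5))               # prints 10110
```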
Grover's algorithm solves the unstructured search problem described above with high probability, and requires just $O(\sqrt{N})$ evaluations of $f$. To be clear, these function evaluations must happen in superposition, similar to the query algorithms discussed in Lesson 5 (including Deutsch's algorithm, the Deutsch-Jozsa algorithm, and Simon's algorithm). Grover's algorithm takes an iterative approach: it evaluates $f$ on superpositions of input strings and intersperses these evaluations with other operations that have the effect of creating interference patterns, leading to a solution with high probability (if one exists) after $O(\sqrt{N})$ iterations.
Formal problem statement
We'll formalize the problem that Grover's algorithm solves using the query model of computation. That is, we will assume that we have access to the function $f : \Sigma^n \rightarrow \Sigma$ through a query gate $U_f$, defined in the usual way, which is as

$$U_f \bigl( |a\rangle |x\rangle \bigr) = |a \oplus f(x)\rangle |x\rangle$$

for every $x \in \Sigma^n$ and $a \in \Sigma$. This is the action of $U_f$ on standard basis states, and its action in general is determined by linearity.
As discussed in Lesson 6, if we have a Boolean circuit for computing $f$, we can transform that Boolean circuit description into a quantum circuit implementing $U_f$ (using some number of workspace qubits that start and end the computation in the $|0\rangle$ state). So, while we're using the query model to formalize the problem that Grover's algorithm solves, it is not limited to this model: we can run Grover's algorithm on any function $f$ for which we have a Boolean circuit.
Here's a precise statement of the problem, which is called Search because we're searching for a solution, meaning a string $x$ that causes $f$ to evaluate to $1$.
Search
Input: a function $f : \Sigma^n \rightarrow \Sigma$
Output: a string $x \in \Sigma^n$ satisfying $f(x) = 1$, or "no solution" if no such string exists
Notice that this is not a promise problem: the function $f$ is arbitrary. It will, however, be helpful to consider the following promise variant of the problem, where we're guaranteed that there's exactly one solution. This problem appeared as an example of a promise problem in Lesson 5.
Unique search
Input: a function of the form $f : \Sigma^n \rightarrow \Sigma$
Promise: there is exactly one string $z \in \Sigma^n$ for which $f(z) = 1$, with $f(x) = 0$ for all strings $x \neq z$
Output: the string $z$
Also notice that the Or problem mentioned in Lesson 5 is closely related to Search. For this problem, the goal is simply to determine whether or not a solution exists, as opposed to actually finding a solution.
Grover's algorithm
Next we will describe Grover's algorithm itself.
Phase query gates
Grover's algorithm makes use of operations known as phase query gates. In contrast to an ordinary query gate $U_f$, defined for a given function $f$ in the usual way described above, a phase query gate for the function $f$ is defined as

$$Z_f |x\rangle = (-1)^{f(x)} |x\rangle$$

for every string $x \in \Sigma^n$.
The operation $Z_f$ can be implemented using one query gate $U_f$ as this diagram suggests:

This implementation makes use of the phase kickback phenomenon, and requires that one workspace qubit, initialized to a $|-\rangle$ state, is made available. This qubit remains in the $|-\rangle$ state after the implementation has completed, and can be reused (to implement subsequent $Z_f$ gates, for instance) or simply discarded.
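As an illustration, here is a minimal Qiskit sketch of this construction. It is not from the lesson: the function $f$ here is a hypothetical example that is $1$ on a single marked string, so the query gate $U_f$ reduces to a multi-controlled NOT onto the workspace qubit.

```python
# A minimal sketch (assuming Qiskit is available) of Z_f via phase kickback,
# for a hypothetical f that marks exactly one string.
from qiskit import QuantumCircuit

def u_f(n, marked):
    """Standard query gate U_f for a function that is 1 only on `marked`.
    Qubits 0..n-1 hold x; qubit n is the answer/workspace qubit."""
    qc = QuantumCircuit(n + 1, name="U_f")
    zeros = [i for i, bit in enumerate(reversed(marked)) if bit == "0"]
    if zeros:
        qc.x(zeros)                      # so the controls fire exactly on |marked>
    qc.mcx(list(range(n)), n)
    if zeros:
        qc.x(zeros)
    return qc

def z_f(n, marked):
    """Phase query gate Z_f obtained from U_f by phase kickback."""
    qc = QuantumCircuit(n + 1, name="Z_f")
    qc.x(n)
    qc.h(n)                              # workspace qubit prepared in |->
    qc.compose(u_f(n, marked), inplace=True)
    qc.h(n)
    qc.x(n)                              # workspace qubit returned to |0>
    return qc
```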
In addition to the operation $Z_f$, we will also make use of a phase query gate for the $n$-bit OR function, which is defined as follows for each string $x \in \Sigma^n$:

$$\mathrm{OR}(x) = \begin{cases} 0 & x = 0^n \\ 1 & x \neq 0^n. \end{cases}$$

Explicitly, the phase query gate for the $n$-bit OR function operates like this:

$$Z_{\mathrm{OR}} |x\rangle = \begin{cases} |x\rangle & x = 0^n \\ -|x\rangle & x \neq 0^n. \end{cases}$$

To be clear, this is how $Z_{\mathrm{OR}}$ operates on standard basis states; its behavior on arbitrary states is determined from this expression by linearity.

The operation $Z_{\mathrm{OR}}$ can be implemented as a quantum circuit by beginning with a Boolean circuit for the OR function, then constructing a $U_{\mathrm{OR}}$ operation (i.e., a standard query gate for the $n$-bit OR function) using the procedure described in Lesson 6, and finally a $Z_{\mathrm{OR}}$ operation using the phase kickback phenomenon as described above.

Notice that the $Z_{\mathrm{OR}}$ operation has no dependence on the function $f$, and can therefore be implemented by a quantum circuit having no query gates.
Description of the algorithm
Now that we have the two operations $Z_f$ and $Z_{\mathrm{OR}}$, we can describe Grover's algorithm.

The algorithm refers to a number $t$, which is the number of iterations it performs, as well as the number of queries to the function $f$ it requires. This number isn't specified by Grover's algorithm (as we're describing it), and we'll discuss in the section following this one how it can be chosen.
Grover's algorithm
- Initialize an $n$-qubit register $\mathsf{Q}$ to the all-zero state $|0^n\rangle$ and then apply a Hadamard operation to each qubit of $\mathsf{Q}$.
- Apply $t$ times the unitary operation $G = H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} Z_f$ to the register $\mathsf{Q}$.
- Measure the qubits of $\mathsf{Q}$ with respect to standard basis measurements and output the resulting string.
The operation $G = H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} Z_f$ iterated in step 2 will be called the Grover operation throughout the remainder of this lesson. Here is a quantum circuit representation of the Grover operation:

Here the $Z_f$ operation is depicted as being larger than $Z_{\mathrm{OR}}$ as a way to suggest that it is likely to be the more costly operation (but this is only meant as a visual clue and not something with a formal meaning). In particular, when we're working within the query model, $Z_f$ requires one query while $Z_{\mathrm{OR}}$ requires no queries. And if instead we have a Boolean circuit for the function $f$ and convert it to a quantum circuit for $Z_f$, we can reasonably expect that the resulting quantum circuit will be larger and more complicated than one for $Z_{\mathrm{OR}}$.

Here's a description of a quantum circuit for the entire algorithm when $t = 2$. For larger values of $t$, we may simply insert additional instances of the Grover operation immediately before the measurements.
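For concreteness, here is a hedged Qiskit sketch of the full circuit, continuing the $Z_f$ sketch above (and reusing its `z_f` helper). The helper `z_or` implements $Z_{\mathrm{OR}}$ up to a global phase, which has no effect on the algorithm.

```python
# Continuing the sketch above: Z_OR (up to a global phase) and the full
# Grover circuit, applying G^t to H^{⊗n}|0...0> and then measuring.
def z_or(n):
    """Flips the sign of every nonzero basis state (up to a global phase).
    Assumes n >= 2."""
    qc = QuantumCircuit(n, name="Z_OR")
    qc.x(range(n))
    qc.h(n - 1)
    qc.mcx(list(range(n - 1)), n - 1)    # multi-controlled Z on |1...1>
    qc.h(n - 1)
    qc.x(range(n))
    return qc

def grover(n, marked, t):
    qc = QuantumCircuit(n + 1, n)
    qc.h(range(n))                       # step 1: uniform superposition
    for _ in range(t):                   # step 2: t Grover iterations
        qc.compose(z_f(n, marked), inplace=True)
        qc.h(range(n))
        qc.compose(z_or(n), qubits=list(range(n)), inplace=True)
        qc.h(range(n))
    qc.measure(range(n), range(n))       # step 3: measure the register
    return qc

# Example: n = 3, hypothetical marked string '011', t = 2 iterations.
circuit = grover(3, "011", 2)
```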
Application to Search
Grover's algorithm can be applied to the Search problem described in the previous section as follows:
- Choose the number $t$ in step 2. The section following this one discusses how we can choose $t$.
- Run Grover's algorithm on the function $f$, using whatever choice we made for $t$, to obtain a string $x \in \Sigma^n$.
- Query the function $f$ on the string $x$ to see if it's a valid solution:
- If $f(x) = 1$, then we have found a solution, so we can stop and output $x$.
- Otherwise, if $f(x) = 0$, then we can either run the procedure again, possibly with a different choice for $t$, or we can decide to give up and output "no solution."
A bit later, once we've analyzed how Grover's algorithm works, we'll see that by taking $t$ to be on the order of $\sqrt{N}$, we'll obtain a solution to our search problem (if one exists) with high probability.
Analysis
Now we'll analyze Grover's algorithm to understand how it works. We'll start with what could be described as a symbolic analysis, where we calculate how the Grover operation $G$ acts on certain states, and then we'll tie this symbolic analysis to a geometric picture that's helpful for visualizing how the algorithm works.
Solutions and non-solutions
Let's start by defining two sets of strings:

$$A_1 = \{ x \in \Sigma^n : f(x) = 1 \} \qquad\text{and}\qquad A_0 = \{ x \in \Sigma^n : f(x) = 0 \}.$$

The set $A_1$ contains all of the solutions to our search problem, and $A_0$ contains the strings that aren't solutions (which we can refer to as non-solutions when it's convenient). These two sets satisfy $A_0 \cap A_1 = \varnothing$ and $A_0 \cup A_1 = \Sigma^n$, which is to say that this is a bipartition of $\Sigma^n$.

Next we'll define two unit vectors representing uniform superpositions over the sets of solutions and non-solutions:

$$|A_1\rangle = \frac{1}{\sqrt{|A_1|}} \sum_{x \in A_1} |x\rangle \qquad\text{and}\qquad |A_0\rangle = \frac{1}{\sqrt{|A_0|}} \sum_{x \in A_0} |x\rangle.$$

Formally speaking, each of these vectors is only defined when its corresponding set is nonempty, but hereafter we're going to focus on the case that neither $A_0$ nor $A_1$ is empty. The cases that $A_1 = \varnothing$ and $A_0 = \varnothing$ are easily handled separately, and we'll do that later.

As an aside, this notation is pretty common: any time we have a nonempty finite set $S$ of strings, we can write $|S\rangle$ to denote the quantum state vector that's uniform over the elements of $S$,

$$|S\rangle = \frac{1}{\sqrt{|S|}} \sum_{x \in S} |x\rangle.$$
Let us also define $|u\rangle$ to be a uniform quantum state over all $n$-bit strings:

$$|u\rangle = \frac{1}{\sqrt{N}} \sum_{x \in \Sigma^n} |x\rangle.$$

Notice that

$$|u\rangle = \sqrt{\frac{|A_0|}{N}}\, |A_0\rangle + \sqrt{\frac{|A_1|}{N}}\, |A_1\rangle.$$

We also have that $|u\rangle = H^{\otimes n} |0^n\rangle$, so $|u\rangle$ represents the state of $\mathsf{Q}$ after the initialization in step 1 of Grover's algorithm. This implies that just before the iterations of $G$ happen in step 2, the state of $\mathsf{Q}$ is contained in the two-dimensional vector space spanned by $|A_0\rangle$ and $|A_1\rangle$, and moreover the coefficients of these vectors are real numbers.

As we will see, the state of $\mathsf{Q}$ will always have these properties, meaning that the state is a real linear combination of $|A_0\rangle$ and $|A_1\rangle$, after any number of iterations of the operation $G$ in step 2.
An observation about the Grover operation
We'll now turn our attention to the Grover operation

$$G = H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} Z_f,$$

beginning with an interesting observation about it.

Imagine for a moment that we replaced the function $f$ by the composition of $f$ with the NOT function, or in other words the function we get by flipping the output bit of $f$. We'll call this new function $g$, and we can express it using symbols in a few alternative ways:

$$g(x) = \lnot f(x) = 1 \oplus f(x) = 1 - f(x).$$

Now, notice that

$$Z_g = -Z_f.$$

Recalling that $Z_f |x\rangle = (-1)^{f(x)} |x\rangle$ for every string $x \in \Sigma^n$, we can verify this by observing that

$$Z_g |x\rangle = (-1)^{g(x)} |x\rangle = (-1)^{1 \oplus f(x)} |x\rangle = -(-1)^{f(x)} |x\rangle = -Z_f |x\rangle$$

for every string $x \in \Sigma^n$.
The Grover operation for $g$ is therefore $-G$, and because a global phase factor has no observable effect, Grover's algorithm behaves in exactly the same way for $g$ as it does for $f$. Intuitively speaking, the algorithm doesn't really care which strings are solutions; it only needs to be able to distinguish solutions and non-solutions to operate as it does.
Action of the Grover operation
Now let's consider the action of $G$ on the vectors $|A_0\rangle$ and $|A_1\rangle$.

First, let's observe that the operation $Z_f$ has a very simple action on the vectors $|A_0\rangle$ and $|A_1\rangle$:

$$Z_f |A_0\rangle = |A_0\rangle \qquad\text{and}\qquad Z_f |A_1\rangle = -|A_1\rangle.$$
Second, we have the operation $H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n}$. The operation $Z_{\mathrm{OR}}$ is defined as

$$Z_{\mathrm{OR}} |x\rangle = \begin{cases} |x\rangle & x = 0^n \\ -|x\rangle & x \neq 0^n, \end{cases}$$

again for every string $x \in \Sigma^n$, and a convenient alternative way to express this operation is like this:

$$Z_{\mathrm{OR}} = 2 |0^n\rangle \langle 0^n| - \mathbb{1}.$$

(A simple way to verify that this expression agrees with the definition of $Z_{\mathrm{OR}}$ is to evaluate its action on standard basis states.) The operation $H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n}$ can therefore be written like this:

$$H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} = 2 H^{\otimes n} |0^n\rangle \langle 0^n| H^{\otimes n} - \mathbb{1}.$$

Using the same notation $|u\rangle$ that we used above for the uniform superposition over all $n$-bit strings, we can alternatively express this operation like this:

$$H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} = 2 |u\rangle \langle u| - \mathbb{1}.$$
And now we have what we need to compute the action of $G$ on $|A_0\rangle$ and $|A_1\rangle$. First we compute the action of $G$ on $|A_0\rangle$:

$$G |A_0\rangle = \bigl(2 |u\rangle\langle u| - \mathbb{1}\bigr) Z_f |A_0\rangle = \bigl(2 |u\rangle\langle u| - \mathbb{1}\bigr) |A_0\rangle = 2\sqrt{\frac{|A_0|}{N}}\, |u\rangle - |A_0\rangle = \frac{|A_0| - |A_1|}{N} |A_0\rangle + \frac{2\sqrt{|A_0|\,|A_1|}}{N} |A_1\rangle.$$

And second, the action of $G$ on $|A_1\rangle$:

$$G |A_1\rangle = \bigl(2 |u\rangle\langle u| - \mathbb{1}\bigr) Z_f |A_1\rangle = -\bigl(2 |u\rangle\langle u| - \mathbb{1}\bigr) |A_1\rangle = -2\sqrt{\frac{|A_1|}{N}}\, |u\rangle + |A_1\rangle = -\frac{2\sqrt{|A_0|\,|A_1|}}{N} |A_0\rangle + \frac{|A_0| - |A_1|}{N} |A_1\rangle.$$

In both cases we're using the equation

$$H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} = 2 |u\rangle\langle u| - \mathbb{1},$$

along with the expressions

$$\langle u | A_0 \rangle = \sqrt{\frac{|A_0|}{N}} \qquad\text{and}\qquad \langle u | A_1 \rangle = \sqrt{\frac{|A_1|}{N}},$$

which follow from the definitions of these vectors.

In summary, we have

$$G |A_0\rangle = \frac{|A_0| - |A_1|}{N} |A_0\rangle + \frac{2\sqrt{|A_0|\,|A_1|}}{N} |A_1\rangle \qquad\text{and}\qquad G |A_1\rangle = -\frac{2\sqrt{|A_0|\,|A_1|}}{N} |A_0\rangle + \frac{|A_0| - |A_1|}{N} |A_1\rangle.$$
As we already noted, the state of $\mathsf{Q}$ just prior to step 2 is contained in the two-dimensional space spanned by $|A_0\rangle$ and $|A_1\rangle$, and we have just established that $G$ maps any vector in this space to another vector in the same space. This means that, for the sake of the analysis, we can focus our attention exclusively on this subspace.
To better understand what's happening within this two-dimensional space, let's express the action of $G$ on this space as a matrix,

$$M = \begin{pmatrix} \frac{|A_0| - |A_1|}{N} & -\frac{2\sqrt{|A_0|\,|A_1|}}{N} \\[1mm] \frac{2\sqrt{|A_0|\,|A_1|}}{N} & \frac{|A_0| - |A_1|}{N} \end{pmatrix},$$

whose first and second rows/columns correspond to $|A_0\rangle$ and $|A_1\rangle$, respectively. (So far in this series we've always connected the rows and columns of matrices with the classical states of a system, but matrices can also be used to describe the actions of linear mappings on different bases like we have here.)
While it isn't at all obvious at first glance, the matrix $M$ is what we obtain by squaring a simpler-looking matrix.

The matrix

$$\begin{pmatrix} \sqrt{\frac{|A_0|}{N}} & -\sqrt{\frac{|A_1|}{N}} \\[1mm] \sqrt{\frac{|A_1|}{N}} & \sqrt{\frac{|A_0|}{N}} \end{pmatrix}$$

is a rotation matrix, which we can alternatively express as

$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$

for

$$\theta = \sin^{-1}\!\left(\sqrt{\frac{|A_1|}{N}}\right).$$
This angle $\theta$ is going to play a very important role in the analysis that follows, so it's worth stressing its importance here as we see it for the first time.
In light of this expression of this matrix, we observe that

$$M = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}^2 = \begin{pmatrix} \cos(2\theta) & -\sin(2\theta) \\ \sin(2\theta) & \cos(2\theta) \end{pmatrix}.$$

This is because rotating by the angle $\theta$ two times is equivalent to rotating by the angle $2\theta$. Another way to see this is to make use of the alternative expression

$$M = \begin{pmatrix} 1 - \frac{2|A_1|}{N} & -\frac{2\sqrt{|A_0|\,|A_1|}}{N} \\[1mm] \frac{2\sqrt{|A_0|\,|A_1|}}{N} & 1 - \frac{2|A_1|}{N} \end{pmatrix}$$

together with the double angle formulas from trigonometry:

$$\cos(2\theta) = 1 - 2\sin^2(\theta) \qquad\text{and}\qquad \sin(2\theta) = 2\sin(\theta)\cos(\theta).$$
In summary, the state of the register $\mathsf{Q}$ at the start of step 2 is

$$|u\rangle = \cos(\theta)\,|A_0\rangle + \sin(\theta)\,|A_1\rangle,$$

and the effect of applying $G$ to this state is to rotate it by an angle $2\theta$ within the space spanned by $|A_0\rangle$ and $|A_1\rangle$.

So, for example, we have

$$G |u\rangle = \cos(3\theta)\,|A_0\rangle + \sin(3\theta)\,|A_1\rangle \qquad\text{and}\qquad G^2 |u\rangle = \cos(5\theta)\,|A_0\rangle + \sin(5\theta)\,|A_1\rangle,$$

and in general

$$G^t |u\rangle = \cos\bigl((2t+1)\theta\bigr)\,|A_0\rangle + \sin\bigl((2t+1)\theta\bigr)\,|A_1\rangle.$$
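Here is a short numerical check (not from the lesson) of this rotation formula, using an explicit matrix representation of $G$ for a small, hypothetical instance; the set of solutions chosen below is arbitrary.

```python
# Build G = H^{⊗n} Z_OR H^{⊗n} Z_f explicitly for a small example and confirm
# that G^t |u> matches cos((2t+1)θ)|A_0> + sin((2t+1)θ)|A_1>.
import numpy as np

n = 4
N = 2**n
solutions = {3, 9, 14}                         # hypothetical set A_1
f = np.array([1 if x in solutions else 0 for x in range(N)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H
for _ in range(n - 1):
    Hn = np.kron(Hn, H)

Zf = np.diag((-1.0) ** f)                      # phase query gate for f
Zor = np.diag([1.0] + [-1.0] * (N - 1))        # phase query gate for OR
G = Hn @ Zor @ Hn @ Zf                         # the Grover operation

u = np.ones(N) / np.sqrt(N)
A1 = np.array([1.0 if f[x] else 0.0 for x in range(N)]) / np.sqrt(len(solutions))
A0 = np.array([0.0 if f[x] else 1.0 for x in range(N)]) / np.sqrt(N - len(solutions))
theta = np.arcsin(np.sqrt(len(solutions) / N))

t = 2
state = np.linalg.matrix_power(G, t) @ u
predicted = np.cos((2 * t + 1) * theta) * A0 + np.sin((2 * t + 1) * theta) * A1
print(np.allclose(state, predicted))           # expected output: True
```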
Geometric picture
Now let's connect the analysis we just went through to a geometric picture. The idea is that the operation $G$ is the product of two reflections, $Z_f$ and $H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n}$. And the net effect of performing two reflections is to perform a rotation.

Let's start with $Z_f$. As we already observed previously, we have

$$Z_f |A_0\rangle = |A_0\rangle \qquad\text{and}\qquad Z_f |A_1\rangle = -|A_1\rangle.$$

Within the two-dimensional vector space spanned by $|A_0\rangle$ and $|A_1\rangle$, this is a reflection about the line parallel to $|A_0\rangle$, which we'll call $L_1$. Here's a figure illustrating the action of this reflection on a hypothetical unit vector $|\psi\rangle$, which we're assuming is a real linear combination of $|A_0\rangle$ and $|A_1\rangle$.

Second we have the operation $H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n}$, which we've already seen can be written as

$$H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} = 2 |u\rangle\langle u| - \mathbb{1}.$$

This is also a reflection, this time about the line $L_2$ parallel to the vector $|u\rangle$. Here's a figure depicting the action of this reflection on a unit vector $|\psi\rangle$.
When we compose these two reflections, we obtain a rotation — by twice the angle between the lines of reflection — as this figure illustrates.
Choosing the number of iterations
As we have established in the previous section, the state vector of the register $\mathsf{Q}$ in Grover's algorithm remains in the two-dimensional subspace spanned by $|A_0\rangle$ and $|A_1\rangle$ once the initialization step has been performed. The goal is to find an element $x \in A_1$, and this goal will be accomplished if we can obtain the state $|A_1\rangle$, for if we measure this state, we're guaranteed to get a measurement outcome $x \in A_1$ (under the assumption that $A_1$ is nonempty, of course).

Given that the state of $\mathsf{Q}$ after $t$ iterations in step 2 is

$$G^t |u\rangle = \cos\bigl((2t+1)\theta\bigr)\,|A_0\rangle + \sin\bigl((2t+1)\theta\bigr)\,|A_1\rangle,$$

this means that we should choose $t$ so that

$$\langle A_1 | G^t | u \rangle = \sin\bigl((2t+1)\theta\bigr)$$

is as close to $1$ as possible in absolute value, to maximize the probability to obtain an element of $A_1$ from the measurement.

Notice that, for any fixed angle $\theta$, the value $\sin\bigl((2t+1)\theta\bigr)$ oscillates as $t$ increases, though it is not necessarily periodic in $t$; there's no guarantee that we'll ever get the same value twice.
We can plot the values $\sin^2\bigl((2t+1)\theta\bigr)$ we obtain for varying values of $t$ as follows. First we'll import the required libraries, then plot the value for varying $t$ and a fixed choice of $\theta$ (which can be changed as desired).
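The original code cell isn't reproduced here; the following is a minimal sketch of how such a plot can be generated, with an illustrative choice of $\theta$.

```python
# A minimal plotting sketch: success probability sin^2((2t+1)θ) for varying t
# and a fixed θ. The particular θ below (one solution out of N = 128) is
# illustrative and can be changed as desired.
import matplotlib.pyplot as plt
import numpy as np

theta = np.arcsin(np.sqrt(1 / 128))
t = np.arange(60)
p = np.sin((2 * t + 1) * theta) ** 2

plt.scatter(t, p, label="Scatter plot")
plt.plot(t, p, linewidth=1, label="Linear interpolation")
plt.xlabel("Number of iterations t")
plt.ylabel(r"$\sin^2((2t+1)\theta)$")
plt.legend()
plt.show()
```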
Output: a plot of $\sin^2\bigl((2t+1)\theta\bigr)$ against $t$, shown as scatter points together with a linear interpolation between them.
Naturally, in addition to making the probability of obtaining an element of $A_1$ from the measurement large, we would also like to choose $t$ to be as small as possible, because $t$ applications of the operation $G$ require $t$ queries to the function $f$. Because we're aiming to make $\sin\bigl((2t+1)\theta\bigr)$ close to $1$ in absolute value, a natural way to do this is to choose $t$ so that

$$(2t + 1)\theta \approx \frac{\pi}{2}.$$

Solving for $t$ yields

$$t \approx \frac{\pi}{4\theta} - \frac{1}{2}.$$

Of course, $t$ must be an integer, so we won't necessarily be able to hit this value exactly, but what we can do is to take the closest integer to this value, which is

$$t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor.$$

As we proceed with the analysis, we'll see that the closeness of this integer to the target value naturally affects the performance of the algorithm.

(As an aside, if the target value happens to be exactly half-way between two integers, this expression of $t$ is what we get by rounding up. We could alternatively round down, which makes sense to do because it means one fewer query, but this is secondary and unimportant for the sake of the lesson.)

Recalling that the value of the angle $\theta$ is given by the formula

$$\theta = \sin^{-1}\!\left(\sqrt{\frac{|A_1|}{N}}\right),$$

we also see that our estimate for $t$ depends on the number of strings in $A_1$, meaning the number of solutions. This presents a challenge if we don't know how many solutions we have, as we'll discuss later.
Unique search
First, let's focus on the situation in which there's a single string $z$ such that $f(z) = 1$. Another way to say this is that we're considering an instance of the Unique search problem.

In this case we have

$$\theta = \sin^{-1}\!\left(\frac{1}{\sqrt{N}}\right),$$

which can conveniently be approximated as

$$\theta \approx \frac{1}{\sqrt{N}}$$

when $N$ gets large. If we substitute $\theta \approx 1/\sqrt{N}$ into the expression

$$t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor,$$

we obtain

$$t \approx \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor.$$

Recalling that $t$ is not only the number of times the operation $G$ is performed, but also the number of queries to the function $f$ required by the algorithm, we see that we're on track to obtaining an algorithm that requires $O(\sqrt{N})$ queries.
Now we'll investigate how well this choice of $t$ works. The probability that the final measurement results in the unique solution $z$ can be expressed explicitly as

$$p(N, 1) = \sin^2\bigl((2t+1)\theta\bigr), \qquad \theta = \sin^{-1}\!\left(\frac{1}{\sqrt{N}}\right), \qquad t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor.$$

(The first argument, $N$, refers to the number of possible solutions, and the second argument, which is $1$ in this case, refers to the actual number of solutions. A bit later we'll use the same notation more generally, where there are multiple solutions.)

Here's a code cell that calculates the probability of success $p(N, 1)$ for increasing values of $N$.
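The lesson's original cell isn't reproduced here verbatim; the following sketch computes $p(N, 1)$ using the formulas above and is consistent with the output shown below.

```python
# Success probability p(N, 1) for N = 2, 4, 8, ..., using the recommended
# number of iterations t = floor(pi / (4*theta)).
from math import asin, floor, pi, sin, sqrt

def p(N, M):
    theta = asin(sqrt(M / N))
    t = floor(pi / (4 * theta))
    return sin((2 * t + 1) * theta) ** 2

for n in range(1, 20):
    N = 2**n
    print("%d %.10f" % (N, p(N, 1)))
```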
Output:
2 0.5000000000
4 1.0000000000
8 0.9453125000
16 0.9613189697
32 0.9991823155
64 0.9965856808
128 0.9956198657
256 0.9999470421
512 0.9994480262
1024 0.9994612447
2048 0.9999968478
4096 0.9999453461
8192 0.9999157752
16384 0.9999997811
32768 0.9999868295
65536 0.9999882596
131072 0.9999992587
262144 0.9999978382
524288 0.9999997279
Notice that these probabilities are not strictly increasing. In particular, we have an interesting anomaly when $N = 4$, where we get a solution with certainty. It can, however, be proved in general that

$$p(N, 1) \geq 1 - \frac{1}{N}$$

for all $N$, so the probability of success goes to $1$ in the limit as $N$ becomes large, as the values produced by the code cell suggest.

This is good! But notice that even a weak bound such as $p(N, 1) \geq 1/2$ establishes the utility of Grover's algorithm. For whatever measurement outcome $x$ we obtain from running the procedure, we can always check to see if $f(x) = 1$ using a single query to $f$. And if we fail to obtain the unique string $z$ for which $f(z) = 1$ with probability at most $1/2$ by running the procedure once, then after $m$ independent runs of the procedure we will have failed to obtain this unique string with probability at most $2^{-m}$. That is, using $O(m\sqrt{N})$ queries to $f$, we'll obtain the unique solution $z$ with probability at least $1 - 2^{-m}$. Using the better bound $p(N, 1) \geq 1 - 1/N$ reveals that the probability to find $z$ using this method is actually at least $1 - N^{-m}$.
Multiple solutions
As the number of elements in $A_1$ varies, so too does the angle $\theta$, which can have a significant effect on the algorithm's probability of success. For the sake of brevity, let's write $M = |A_1|$ to denote the number of solutions, and as before we'll assume that $M \geq 1$.
As a motivating example, let's imagine that we have $M = 4$ solutions rather than a single solution, as we considered above. This means that

$$\theta = \sin^{-1}\!\left(\sqrt{\frac{4}{N}}\right),$$

which is approximately double the angle we had in the case of a single solution when $N$ is large.

Suppose that we didn't know any better, and selected the same value of $t$ as in the unique solution setting:

$$t = \left\lfloor \frac{\pi}{4 \sin^{-1}\!\bigl(1/\sqrt{N}\bigr)} \right\rfloor.$$

The effect will be catastrophic, as the next code cell demonstrates.
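Again, the following is a sketch rather than the lesson's original cell: it uses $M = 4$ solutions together with the unique-solution choice of $t$, which is consistent with the output shown below.

```python
# Success probability with M = 4 solutions, but with t chosen as though there
# were only a single solution.
from math import asin, floor, pi, sin, sqrt

def prob(N, M, t):
    theta = asin(sqrt(M / N))
    return sin((2 * t + 1) * theta) ** 2

for n in range(2, 20):
    N = 2**n
    t = floor(pi / (4 * asin(1 / sqrt(N))))    # unique-solution choice of t
    print("%d %.10f" % (N, prob(N, 4, t)))
```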
Output:
4 1.0000000000
8 0.5000000000
16 0.2500000000
32 0.0122070313
64 0.0203807689
128 0.0144530758
256 0.0000705058
512 0.0019310741
1024 0.0023009083
2048 0.0000077506
4096 0.0002301502
8192 0.0003439882
16384 0.0000007053
32768 0.0000533810
65536 0.0000472907
131072 0.0000030066
262144 0.0000086824
524288 0.0000010820
The probability of success goes to $0$ as $N$ goes to infinity. This happens because we're effectively rotating twice as fast as we did when there was a unique solution, so we end up zooming past the target $|A_1\rangle$ and landing near $-|A_0\rangle$.
However, if instead we use the recommended choice of $t$, which is

$$t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor$$

for

$$\theta = \sin^{-1}\!\left(\sqrt{\frac{M}{N}}\right),$$

then the performance will be better. To be more precise, using this choice of $t$ leads to success with high probability, as the following code cell suggests. Here we are using the notation suggested earlier: $p(N, M)$ denotes the probability that Grover's algorithm, run for this recommended number of iterations, reveals a solution when there are $M$ solutions in total out of $N$ possibilities.
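The following sketch is consistent with the output shown below; the number of solutions is taken to be $M = 7$, which is an inference from the values in the table rather than something stated in the surrounding text.

```python
# Success probability p(N, M) with the recommended choice of t, for M = 7
# (a choice consistent with the output shown below).
from math import asin, floor, pi, sin, sqrt

def p(N, M):
    theta = asin(sqrt(M / N))
    t = floor(pi / (4 * theta))
    return sin((2 * t + 1) * theta) ** 2

M = 7
for n in range(3, 23):
    N = 2**n
    print("%d %.10f" % (N, p(N, M)))
```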
Output:
8 0.8750000000
16 0.6835937500
32 0.9877929688
64 0.9869401455
128 0.9933758959
256 0.9942813445
512 0.9977678832
1024 0.9999963373
2048 0.9999257666
4096 0.9983374778
8192 0.9995465664
16384 0.9995822234
32768 0.9999531497
65536 0.9998961946
131072 0.9999998224
262144 0.9999745784
524288 0.9999894829
1048576 0.9999939313
2097152 0.9999979874
4194304 0.9999986243
Generalizing what was claimed earlier, it can be proved that

$$p(N, M) \geq 1 - \frac{M}{N}.$$

This lower bound of $1 - M/N$ on the probability of success is slightly peculiar in that more solutions implies a worse lower bound, but under the assumption that $M$ is significantly smaller than $N$, we nevertheless conclude that the probability of success is reasonably high. As before, the mere fact that $p(N, M)$ is reasonably large implies the algorithm's usefulness.

It is also the case that

$$p(N, M) \geq \frac{M}{N}.$$

This lower bound describes the probability that a string selected uniformly at random is a solution, so Grover's algorithm always does at least as well as random guessing. In fact, Grover's algorithm is random guessing when $t = 0$.
Now let's take a look at the number of iterations (and hence the number of queries)

$$t = \left\lfloor \frac{\pi}{4\theta} \right\rfloor,$$

for

$$\theta = \sin^{-1}\!\left(\sqrt{\frac{M}{N}}\right).$$

For every $\alpha \in [0, 1]$, it is the case that $\sin^{-1}(\alpha) \geq \alpha$, and so

$$\theta = \sin^{-1}\!\left(\sqrt{\frac{M}{N}}\right) \geq \sqrt{\frac{M}{N}}.$$

This implies that

$$t \leq \frac{\pi}{4\theta} \leq \frac{\pi}{4}\sqrt{\frac{N}{M}}.$$

This translates to a savings in the number of queries as $M$ grows. In particular, the number of queries required is $O\bigl(\sqrt{N/M}\bigr)$.
Unknown number of solutions
If the number of solutions $M$ is unknown, then a different approach is required, for in this situation we have no knowledge of $M$ to inform our choice of $t$. There are different approaches.

One simple approach is to choose

$$t \in \left\{ 1, \ldots, \left\lfloor \frac{\pi}{4}\sqrt{N} \right\rfloor \right\}$$

uniformly at random.

As it turns out, selecting $t$ in this way finds a solution (assuming one exists) with probability greater than 40%. This is not at all obvious, and requires an analysis that will not be included here. Intuitively speaking it makes sense, particularly when we think about the geometric picture. The state of $\mathsf{Q}$ is being rotated a random number of times, which will likely give us a vector for which the coefficient of $|A_1\rangle$ is reasonably large.

By repeating this procedure and checking the outcome in the same way as described before, the probability to find a solution can be made very close to $1$.
There is a refined method that finds a solution when one exists using $O(\sqrt{N/M})$ queries, even when the number of solutions $M$ is not known. It requires $O(\sqrt{N})$ queries to determine that there are no solutions when $M = 0$.

The basic idea is to choose $t$ uniformly at random from the set $\{1, \ldots, T\}$ iteratively, for increasing values of $T$. In particular, we can start with $T = 1$ and increase it exponentially, always terminating the process as soon as a solution is found and capping $T$ at $O(\sqrt{N})$ so as not to waste queries when there isn't a solution. The process takes advantage of the fact that fewer queries are required when more solutions exist.

Some care is required, however, to balance the rate of growth of $T$ with the probability of success for each iteration. Increasing $T$ by a modest constant factor in each iteration (for instance, taking $T \leftarrow \lceil 5T/4 \rceil$) works, as an analysis reveals. Doubling $T$ does not; this turns out to be too fast of an increase.
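As a rough illustration (not from the lesson), here is a classical simulation of this strategy, in which each run's success is simulated using the probability $\sin^2\bigl((2t+1)\theta\bigr)$; the growth factor $5/4$ and the instance below are illustrative choices.

```python
# Rough classical simulation of the unknown-M strategy: t is drawn uniformly
# from {1, ..., T}, and T grows geometrically up to a cap of order sqrt(N).
import random
from math import asin, ceil, floor, pi, sin, sqrt

def search_unknown_m(N, M, growth=1.25):
    theta = asin(sqrt(M / N))           # used only to simulate measurement outcomes
    T, queries = 1.0, 0
    cap = ceil((pi / 4) * sqrt(N))      # cap keeps the total cost O(sqrt(N))
    while T <= cap:
        t = random.randint(1, floor(T))
        queries += t
        if random.random() < sin((2 * t + 1) * theta) ** 2:
            return queries              # queries used before a solution was found
        T *= growth
    return None                         # give up and report "no solution"

random.seed(2024)
print(search_unknown_m(N=2**20, M=3))
```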
The trivial cases
Throughout the analysis we've just gone through, we've assumed that the number of solutions is non-zero. Indeed, by referring to the vectors

$$|A_0\rangle = \frac{1}{\sqrt{|A_0|}} \sum_{x \in A_0} |x\rangle \qquad\text{and}\qquad |A_1\rangle = \frac{1}{\sqrt{|A_1|}} \sum_{x \in A_1} |x\rangle,$$

we have implicitly assumed that $A_0$ and $A_1$ are both nonempty. Here we will briefly consider what happens when one of these sets is empty.
Before we bother with an analysis, let's observe the obvious: if every string is a solution, then we'll see a solution when we measure; and when there aren't any solutions, we won't get one. In some sense there's no need to go deeper than this.
We can, however, quickly verify the mathematics for these trivial cases. The situation where one of $A_0$ and $A_1$ is empty happens when $f$ is constant; $A_1$ is empty when $f(x) = 0$ for every $x \in \Sigma^n$, and $A_0$ is empty when $f(x) = 1$ for every $x \in \Sigma^n$. This means that

$$Z_f = \pm \mathbb{1},$$

and therefore

$$G = \pm H^{\otimes n} Z_{\mathrm{OR}} H^{\otimes n} = \pm\bigl(2 |u\rangle\langle u| - \mathbb{1}\bigr), \qquad\text{so}\qquad G^t |u\rangle = \pm |u\rangle.$$

So, irrespective of the number of iterations $t$ we perform in these cases, the state of $\mathsf{Q}$ prior to the measurements is $|u\rangle$ (up to a global phase), and the measurements will always reveal a uniformly random string $x \in \Sigma^n$.
Qiskit implementation
An implementation of Grover's algorithm in Qiskit can be found in the Grover's algorithm tutorial.
Concluding remarks
Within the query model, Grover's algorithm is asymptotically optimal. What that means is that it's not possible to come up with a query algorithm for solving the Search problem, or even the Unique search problem specifically, that uses asymptotically less than $\sqrt{N}$ queries in the worst case. This is something that has been proved rigorously in multiple ways. Interestingly, this was known even before Grover's algorithm was discovered; Grover's algorithm matched an already-known lower bound.
Grover's algorithm is also broadly applicable, in the sense that the square-root speed-up that it offers can be obtained in a variety of different settings. For example, sometimes it's possible to use Grover's algorithm in conjunction with another algorithm to get an improvement. Grover's algorithm is also quite commonly used as a subroutine inside of other quantum algorithms.
Finally, the technique used in Grover's algorithm, where two reflections are composed and iterated to rotate a quantum state vector, can be generalized. An example is a technique known as amplitude amplification, where a process similar to Grover's algorithm can be applied to another quantum algorithm to boost its success probability quadratically faster than what is possible classically. Amplitude amplification has broad applications in quantum algorithms.
So, although Grover's algorithm may not lead to a practical quantum advantage for searching any time soon, it is a fundamentally important quantum algorithm, and it is representative of a more general technique that finds many applications in quantum algorithms.
Post-Course Survey
Congratulations on completing the "Fundamentals of Quantum Algorithms" course by IBM! Please take a moment to help us improve our course by filling out the post-course survey. Your feedback will be used to enhance our content offering and user experience. Thank you!