Phase-estimation and factoring
Introduction
In this lesson we'll discuss the phase estimation problem and a quantum algorithm to solve it. We'll then apply the solution to obtain an efficient quantum algorithm for the integer factorization problem.
Along the way, we'll encounter the quantum Fourier transform, and we'll see how it can be implemented efficiently by a quantum circuit.
Phase estimation problem
This section explains the phase estimation problem. We'll begin with a short discussion of the spectral theorem from linear algebra, and then move on to a statement of the phase estimation problem itself.
Spectral theorem
The spectral theorem is an important fact in linear algebra that states that matrices of a certain type (called normal matrices) can be expressed in a simple and useful way. We're only going to use this theorem for unitary matrices in this lesson, but later on we'll need it for Hermitian matrices as well.
Normal matrices
Suppose that $M$ is an $N\times N$ matrix with complex number entries. We say that $M$ is a normal matrix if it commutes with its conjugate transpose:
$$M M^\dagger = M^\dagger M.$$
Every unitary matrix $U$ is normal because
$$U U^\dagger = \mathbb{1} = U^\dagger U.$$
Hermitian matrices, which are matrices that equal their own conjugate transpose, are another important class of normal matrices. If $M$ is a Hermitian matrix, then
$$M M^\dagger = M^2 = M^\dagger M,$$
so $M$ is normal. We'll see the spectral theorem being applied to Hermitian matrices later in the series.
Not every square matrix is normal. For instance, this matrix isn't normal:
$$M = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$
(This is a simple but great example of a matrix that's always good to keep in mind.) This matrix isn't normal because
$$M M^\dagger = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$$
while
$$M^\dagger M = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$
Theorem statement
Now here's a statement of the spectral theorem.
Theorem (spectral theorem). Let $M$ be a normal $N\times N$ complex matrix. There exists an orthonormal basis of $N$-dimensional complex vectors $\{\vert\psi_1\rangle,\ldots,\vert\psi_N\rangle\}$ along with complex numbers $\lambda_1,\ldots,\lambda_N$ such that
$$M = \lambda_1 \vert\psi_1\rangle\langle\psi_1\vert + \cdots + \lambda_N \vert\psi_N\rangle\langle\psi_N\vert.$$
The expression of a matrix in the form
$$M = \sum_{k=1}^{N} \lambda_k \vert\psi_k\rangle\langle\psi_k\vert$$
is commonly called a spectral decomposition. Notice that if $M$ is a normal matrix expressed in this form, then the equation
$$M\vert\psi_j\rangle = \lambda_j\vert\psi_j\rangle$$
must be true for every $j = 1,\ldots,N.$ This is a consequence of the fact that $\{\vert\psi_1\rangle,\ldots,\vert\psi_N\rangle\}$ is orthonormal:
$$M\vert\psi_j\rangle = \sum_{k=1}^{N}\lambda_k\vert\psi_k\rangle\langle\psi_k\vert\psi_j\rangle = \lambda_j\vert\psi_j\rangle.$$
That is to say, each number $\lambda_j$ is an eigenvalue of $M,$ and $\vert\psi_j\rangle$ is an eigenvector corresponding to that eigenvalue.
Example 1. Let
$$M = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
which is normal. The theorem implies that $M$ can be written in the form $\lambda_1\vert\psi_1\rangle\langle\psi_1\vert + \lambda_2\vert\psi_2\rangle\langle\psi_2\vert$ for some choice of $\lambda_1, \lambda_2, \vert\psi_1\rangle, \vert\psi_2\rangle$; in particular, the equation is true for
$$\lambda_1 = \lambda_2 = 1,\qquad \vert\psi_1\rangle = \vert 0\rangle,\qquad \vert\psi_2\rangle = \vert 1\rangle.$$
(Notice that the theorem does not say that the complex numbers $\lambda_1,\ldots,\lambda_N$ are distinct; we can have the same complex number repeated.)
These choices work because
$$\mathbb{1} = \vert 0\rangle\langle 0\vert + \vert 1\rangle\langle 1\vert.$$
Indeed, we could choose $\{\vert\psi_1\rangle, \vert\psi_2\rangle\}$ to be any orthonormal basis and the equation will be true. For instance,
$$\mathbb{1} = \vert + \rangle\langle + \vert + \vert - \rangle\langle - \vert.$$
Example 2. Consider a Hadamard operation.
This is a unitary matrix, so it is normal. The spectral theorem implies that $H$ can be written in the form $\lambda_1\vert\psi_1\rangle\langle\psi_1\vert + \lambda_2\vert\psi_2\rangle\langle\psi_2\vert,$ and in particular we have
$$H = \vert\psi_+\rangle\langle\psi_+\vert - \vert\psi_-\rangle\langle\psi_-\vert,$$
where
$$\vert\psi_+\rangle = \cos(\pi/8)\vert 0\rangle + \sin(\pi/8)\vert 1\rangle \qquad\text{and}\qquad \vert\psi_-\rangle = \sin(\pi/8)\vert 0\rangle - \cos(\pi/8)\vert 1\rangle.$$
More explicitly, the eigenvalues are $+1$ and $-1,$ and by the half-angle formulas we have
$$\cos(\pi/8) = \frac{\sqrt{2+\sqrt{2}}}{2} \qquad\text{and}\qquad \sin(\pi/8) = \frac{\sqrt{2-\sqrt{2}}}{2}.$$
We can check that this decomposition is correct by performing the required calculations:
We can also use Qiskit to check that this decomposition is correct, as the following code cell demonstrates.
Output:
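Here's a lightweight NumPy version of the same check. The eigenvector components used below are one valid choice for the Hadamard operation (an assumption of this sketch, since any orthonormal eigenbasis with the same eigenvalues works):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# One valid choice of eigenvectors (eigenvalues +1 and -1, an assumed choice):
psi_plus = np.array([np.cos(np.pi / 8), np.sin(np.pi / 8)])
psi_minus = np.array([np.sin(np.pi / 8), -np.cos(np.pi / 8)])

# Spectral decomposition: H = (+1)|psi+><psi+| + (-1)|psi-><psi-|
reconstructed = np.outer(psi_plus, psi_plus) - np.outer(psi_minus, psi_minus)
```

Comparing `reconstructed` against `H` entry by entry confirms the decomposition.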
As the first example reveals, there can be some freedom in how eigenvectors are selected. There is, however, no freedom at all in how the eigenvalues are chosen (except for their ordering). For any given matrix $M,$ the same complex numbers $\lambda_1,\ldots,\lambda_N$ (which can include repetitions of the same complex number) will always occur in the equation $M = \sum_{k=1}^{N}\lambda_k\vert\psi_k\rangle\langle\psi_k\vert.$
Now let's focus in on unitary matrices. Using the same terminology that was just mentioned, if we have a complex number $\lambda$ and a non-zero vector $\vert\psi\rangle$ that satisfy the equation
$$U\vert\psi\rangle = \lambda\vert\psi\rangle,$$
then we say that $\lambda$ is an eigenvalue of $U$ and $\vert\psi\rangle$ is an eigenvector corresponding to the eigenvalue $\lambda.$
Unitary matrices preserve Euclidean norm. So, if the equation above is true, then because $U$ is unitary we conclude
$$\bigl\|\vert\psi\rangle\bigr\| = \bigl\|U\vert\psi\rangle\bigr\| = \bigl\|\lambda\vert\psi\rangle\bigr\| = |\lambda|\,\bigl\|\vert\psi\rangle\bigr\|.$$
The condition that $\vert\psi\rangle$ is non-zero implies that $\|\vert\psi\rangle\| \neq 0,$ so we can cancel it from both sides to obtain
$$|\lambda| = 1.$$
So, eigenvalues of unitary matrices must always have absolute value equal to one. That is to say, they lie on the unit circle
$$\mathbb{T} = \{\alpha\in\mathbb{C} : |\alpha| = 1\}.$$
The symbol $\mathbb{T}$ is a common name for the complex unit circle. The name $U(1)$ is also common.
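This fact is easy to confirm numerically. The sketch below builds a random unitary matrix by taking the QR decomposition of a random complex matrix (a standard trick, not part of the lesson's own code) and checks that every eigenvalue has absolute value one:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random unitary matrix via QR decomposition of a random complex matrix.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q, _ = np.linalg.qr(A)

# Every eigenvalue of the unitary Q lies on the complex unit circle.
eigvals = np.linalg.eigvals(Q)
```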
Problem statement
In the phase estimation problem, we're given a quantum state $\vert\psi\rangle$ of $n$ qubits, along with a unitary quantum circuit that acts on $n$ qubits. We're promised that $\vert\psi\rangle$ is an eigenvector of the unitary matrix $U$ that describes the action of the circuit, and our goal is to either identify or approximate the eigenvalue to which $\vert\psi\rangle$ corresponds.
More precisely, because this eigenvalue lies on the complex unit circle, we can write it as
$$\lambda = e^{2\pi i\theta}$$
for a unique real number $\theta$ satisfying $0 \le \theta < 1.$ The goal of the problem is to compute or approximate this real number $\theta.$
Phase estimation problem
Input: An $n$-qubit quantum state $\vert\psi\rangle$ and a unitary quantum circuit for an $n$-qubit operation $U$.
Promise: $\vert\psi\rangle$ is an eigenvector of $U$.
Output: an approximation to the number $\theta\in[0,1)$ satisfying $U\vert\psi\rangle = e^{2\pi i\theta}\vert\psi\rangle$.
Remarks
The phase estimation problem is different from other problems we've seen so far in the series in that the input includes a quantum state. Typically we're focused on problems having classical inputs and outputs, but nothing prevents us from considering quantum state inputs like this. In terms of its practical relevance, the phase estimation problem tends to appear as a subproblem inside of larger computations. We'll see this happening in the context of integer factorization later in the lesson.
The statement of the phase estimation problem above isn't specific about what constitutes an approximation of $\theta,$ but we can formulate more precise problem statements depending on our needs and interests. In the context of integer factorization, we'll demand a very precise approximation to $\theta,$ but in other cases we might be satisfied with a very rough approximation. We'll discuss shortly how the precision we require affects the computational cost of a solution.
Notice that as $\theta$ goes from $0$ toward $1$ in the phase estimation problem, we're going all the way around the unit circle, starting from $e^{2\pi i \cdot 0} = 1$ and moving counter-clockwise. As we get closer to $\theta = 1,$ we're moving toward $e^{2\pi i} = 1,$ which is back where we started at $\theta = 0.$ So, as we consider the accuracy of approximations, choices of $\theta$ near $1$ should be considered as being near $0.$ For example, an approximation $\theta = 0.999$ should be considered as being within $1/1000$ of $\theta = 0.$
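This wrap-around notion of distance is simple to express in code. Here's a small helper (a sketch; the function name is our own) that measures the distance between two phases modulo one:

```python
def phase_distance(a, b):
    """Distance between phases a and b in [0, 1), wrapping around the unit circle."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# 0.999 is within 1/1000 of 0 once wrap-around is accounted for.
```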
Phase estimation procedure
Next we'll discuss the phase-estimation procedure, which is a quantum algorithm for solving the phase estimation problem. We'll begin with a low-precision warm-up, which explains some of the basic intuition behind the method. We'll then talk about the quantum Fourier transform, which is an important quantum operation used in the phase-estimation procedure, as well as its quantum circuit implementation. And finally, we'll describe the phase-estimation procedure in general and analyze its performance.
Warm-up: approximating phases with low precision
We'll begin with a warm-up: a couple of simple versions of the phase-estimation procedure that provide low-precision solutions to the phase-estimation problem. This is helpful for explaining the intuition behind the general procedure that we'll see a bit later in the lesson.
Using the phase kickback
A simple approach to the phase-estimation problem, which allows us to learn something about the value we seek, is based on the phase kick-back phenomenon. As we will see, this is essentially a single-qubit version of the general phase-estimation procedure to be discussed later in the lesson.
As part of the input to the phase estimation problem, we have a unitary quantum circuit for the operation $U.$ We can use the description of this circuit to create a circuit for a controlled-$U$ operation, which can be depicted as this figure suggests (with the operation $U,$ viewed as a quantum gate, on the left and a controlled-$U$ operation on the right).
We can create a quantum circuit for a controlled-$U$ operation by first adding a control qubit to the circuit for $U,$ and then replacing every gate in the circuit for $U$ with a controlled version of that gate, so our one new control qubit effectively controls every single gate in the circuit for $U.$ This requires that we have a controlled version of every gate in our circuit, but if we want to restrict ourselves to a standard gate set, we can always build circuits for these controlled operations rather than insisting that they're single gates.
Now let's consider the following circuit, where the input state of all the qubits except the top one is the quantum state $\vert\psi\rangle,$ an eigenvector of $U$:
The eigenvalue $e^{2\pi i\theta}$ of $U$ corresponding to the eigenvector $\vert\psi\rangle$ determines the measurement outcome probabilities. To see exactly how, let's analyze the circuit.
The initial state of the circuit is
$$\vert\psi\rangle\vert 0\rangle,$$
and the first Hadamard gate transforms this state to
$$\frac{1}{\sqrt{2}}\vert\psi\rangle\vert 0\rangle + \frac{1}{\sqrt{2}}\vert\psi\rangle\vert 1\rangle.$$
Next, the controlled-$U$ operation is performed, which results in the state
$$\frac{1}{\sqrt{2}}\vert\psi\rangle\vert 0\rangle + \frac{1}{\sqrt{2}}\bigl(U\vert\psi\rangle\bigr)\vert 1\rangle.$$
Using the assumption that $\vert\psi\rangle$ is an eigenvector of $U$ having eigenvalue $e^{2\pi i\theta},$ we can alternatively express this state as follows:
$$\frac{1}{\sqrt{2}}\vert\psi\rangle\vert 0\rangle + \frac{e^{2\pi i\theta}}{\sqrt{2}}\vert\psi\rangle\vert 1\rangle = \vert\psi\rangle\otimes\left(\frac{1}{\sqrt{2}}\vert 0\rangle + \frac{e^{2\pi i\theta}}{\sqrt{2}}\vert 1\rangle\right).$$
Here we see the phase kickback phenomenon taking place. It is slightly different this time than it was for Deutsch's algorithm and the Deutsch-Jozsa algorithm, because we're not working with a query gate, but the idea is similar.
Finally, the second Hadamard gate is performed, which results in the state
$$\vert\psi\rangle\otimes\left(\frac{1 + e^{2\pi i\theta}}{2}\vert 0\rangle + \frac{1 - e^{2\pi i\theta}}{2}\vert 1\rangle\right).$$
The measurement therefore yields the outcomes $0$ and $1$ with these probabilities:
$$p_0 = \left|\frac{1 + e^{2\pi i\theta}}{2}\right|^2 = \cos^2(\pi\theta) \qquad\text{and}\qquad p_1 = \left|\frac{1 - e^{2\pi i\theta}}{2}\right|^2 = \sin^2(\pi\theta).$$
Here's a table that lists the probabilities for the two possible measurement outcomes for various choices of the number $\theta$:

| $\theta$ | $p_0$ | $p_1$ |
|---|---|---|
| 0.0000 | 1.0000 | 0.0000 |
| 0.1250 | 0.8536 | 0.1464 |
| 0.2500 | 0.5000 | 0.5000 |
| 0.3750 | 0.1464 | 0.8536 |
| 0.5000 | 0.0000 | 1.0000 |
| 0.6250 | 0.1464 | 0.8536 |
| 0.7500 | 0.5000 | 0.5000 |
| 0.8750 | 0.8536 | 0.1464 |
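The probabilities in the table follow directly from the amplitudes computed above, and can be reproduced with a few lines of NumPy (a sketch, separate from the lesson's Qiskit code):

```python
import numpy as np

def kickback_probs(theta):
    """Outcome probabilities for the one-qubit phase-kickback circuit.

    p0 = |(1 + e^{2 pi i theta}) / 2|^2 = cos^2(pi theta),  p1 = 1 - p0.
    """
    p0 = abs((1 + np.exp(2j * np.pi * theta)) / 2) ** 2
    return p0, 1 - p0

for theta in [0.0, 0.125, 0.25, 0.375, 0.5]:
    p0, p1 = kickback_probs(theta)
    print(f"{theta:.4f} | {p0:.4f} | {p1:.4f}")
```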
We can also plot the probabilities for the two possible outcomes, $p_0$ and $p_1,$ as functions of $\theta$:
Naturally, the two probabilities always sum to $1.$ Notice that when $\theta = 0,$ the measurement outcome is always $0,$ and when $\theta = 1/2,$ the measurement outcome is always $1.$ So, although the measurement result doesn't reveal exactly what $\theta$ is, it does provide us with some information about it; in particular, if we were promised that either $\theta = 0$ or $\theta = 1/2,$ we could learn from the circuit which one is correct without error.
Intuitively speaking, we can think of the circuit's measurement outcome as being a guess for $\theta$ to "one bit of accuracy." In other words, if we were to write $\theta$ in binary notation and round it off to one bit after the binary point, we'd have a number like this:
$$0.a = \frac{a}{2}.$$
The measurement outcome can be viewed as a guess for the bit $a.$ When $\theta$ is neither $0$ nor $1/2,$ there's a nonzero probability that the guess will be wrong, but the probability of making an error becomes smaller and smaller as $\theta$ gets closer to $0$ or $1/2.$
It's natural to ask what role the two Hadamard gates play in this procedure:
The first Hadamard gate sets the control qubit to a uniform superposition of $\vert 0\rangle$ and $\vert 1\rangle,$ so that when the phase kickback occurs, it happens for the $\vert 1\rangle$ state and not the $\vert 0\rangle$ state, creating a relative phase difference that affects the measurement outcomes. If we didn't do this, and the phase kickback produced a global phase instead, it would have no effect on the probabilities of obtaining different measurement outcomes.
The second Hadamard gate allows us to learn something about the number $\theta$ through the phenomenon of interference. Prior to the second Hadamard gate, the state of the top qubit is
$$\frac{1}{\sqrt{2}}\vert 0\rangle + \frac{e^{2\pi i\theta}}{\sqrt{2}}\vert 1\rangle,$$
and if we were to measure this state, we would obtain $0$ and $1$ each with probability $1/2,$ which tells us nothing at all about $\theta.$ Performing the second Hadamard gate allows the number $\theta$ to affect the output probabilities.
Qiskit implementation
Here's an implementation of this circuit in Qiskit. (For this implementation we're using a phase gate for the unitary operation, just in the interest of simplicity, so the relevant eigenvector is the $\vert 1\rangle$ state.)
Output:
Now we'll run the circuit and examine the output probabilities.
Output:
{0: 0.345491502812526, 1: 0.654508497187474}
We can now compare the results to the predicted values to see that they're correct.
Output:
{0: 0.34549150281252616, 1: 0.6545084971874737}
Doubling the phase
The circuit described above uses the phase kickback phenomenon to approximate $\theta$ to a single bit of accuracy. One bit of accuracy may be all we need in some situations, but for factoring we're going to need a lot more accuracy than that. The natural question is, how can we learn more about $\theta$?
One simple thing we could do is to replace the controlled-$U$ operation in our circuit with two copies of this operation, like in this circuit:
Two copies of a controlled-$U$ operation are equivalent to a controlled-$U^2$ operation. If $\vert\psi\rangle$ is an eigenvector of $U$ having eigenvalue $e^{2\pi i\theta},$ then this state is also an eigenvector of $U^2,$ this time having eigenvalue $e^{2\pi i(2\theta)}.$
So, if we run this version of the circuit, we're effectively performing the same computation as before, except that the number $\theta$ is replaced by $2\theta.$ The following plot illustrates the output probabilities as $\theta$ ranges from $0$ to $1.$
Doing this can indeed provide us with some additional information about $\theta.$ If the binary representation of $\theta$ is
$$\theta = 0.a_1 a_2 a_3\cdots,$$
then doubling $\theta$ effectively shifts the binary point one position to the right:
$$2\theta = a_1.a_2 a_3\cdots$$
And because we're equating $\theta = 1$ with $\theta = 0$ as we move around the unit circle, we see that the bit $a_1$ has no influence on our probabilities. So, we're effectively obtaining a guess for what we would get for the second bit after the binary point if we were to round $\theta$ to two bits. For instance, if we knew in advance that $\theta$ was either $0$ or $1/4,$ then we could fully trust the measurement outcome to tell us which.
It's not immediately clear, though, how this estimation should be reconciled with what we learned from the original (non-doubled) phase kickback circuit to give us the most accurate information possible about $\theta.$ So let's take a step back and consider how to proceed.
Two-qubit phase estimation
Rather than considering the two options described above separately, let's combine them into a single circuit like this:
The Hadamard gates after the controlled operations have been removed, and there are no measurements here yet. We'll add more to the circuit as we consider our options for learning as much as we can about $\theta.$
If we run this circuit for $\vert\psi\rangle$ being an eigenvector of $U,$ the state of the bottom qubits will remain $\vert\psi\rangle$ throughout the entire circuit, and phases will be "kicked" into the state of the top two qubits. Let's analyze the circuit carefully, by means of the following figure.
After the two Hadamard gates, we can write the state like this:
$$\vert\psi\rangle\otimes\frac{1}{2}\sum_{a_0=0}^{1}\sum_{a_1=0}^{1}\vert a_1\rangle\vert a_0\rangle.$$
When the first controlled-$U$ operation is performed, the eigenvalue $e^{2\pi i\theta}$ gets kicked into the phase when $a_0$ (the top qubit) is equal to $1,$ but not when it's $0.$ So, we can express the resulting state like this:
$$\vert\psi\rangle\otimes\frac{1}{2}\sum_{a_0=0}^{1}\sum_{a_1=0}^{1}e^{2\pi i a_0\theta}\vert a_1\rangle\vert a_0\rangle.$$
The second and third controlled-$U$ gates do something similar, except for $a_1$ rather than $a_0,$ and with $\theta$ replaced by $2\theta.$ We can express the resulting state like this:
$$\vert\psi\rangle\otimes\frac{1}{2}\sum_{a_0=0}^{1}\sum_{a_1=0}^{1}e^{2\pi i(2a_1 + a_0)\theta}\vert a_1\rangle\vert a_0\rangle.$$
If we think about the binary string $a_1 a_0$ as representing an integer $x = 2a_1 + a_0$ in binary notation, we can alternatively express this state as follows:
$$\vert\psi\rangle\otimes\frac{1}{2}\sum_{x=0}^{3}e^{2\pi i x\theta}\vert x\rangle.$$
Our goal is to extract as much information about $\theta$ as we can from this state.
At this point, we'll consider a special case, where we're promised that $\theta = y/4$ for some integer $y\in\{0,1,2,3\}.$ In other words, we have $\theta\in\{0, 1/4, 1/2, 3/4\},$ so we can express this number exactly using binary notation with two bits, as $0.00,$ $0.01,$ $0.10,$ or $0.11.$ In general, $\theta$ might not be one of these four values, but thinking about this special case will help us to figure out how to most effectively extract information about this value.
Let's define one two-qubit state vector for each possible value $y\in\{0,1,2,3\}$:
$$\vert\phi_y\rangle = \frac{1}{2}\sum_{x=0}^{3}e^{2\pi i x y/4}\vert x\rangle.$$
After simplifying the exponentials, we can write these vectors as follows:
$$\begin{aligned}
\vert\phi_0\rangle &= \tfrac{1}{2}\bigl(\vert 0\rangle + \vert 1\rangle + \vert 2\rangle + \vert 3\rangle\bigr)\\
\vert\phi_1\rangle &= \tfrac{1}{2}\bigl(\vert 0\rangle + i\vert 1\rangle - \vert 2\rangle - i\vert 3\rangle\bigr)\\
\vert\phi_2\rangle &= \tfrac{1}{2}\bigl(\vert 0\rangle - \vert 1\rangle + \vert 2\rangle - \vert 3\rangle\bigr)\\
\vert\phi_3\rangle &= \tfrac{1}{2}\bigl(\vert 0\rangle - i\vert 1\rangle - \vert 2\rangle + i\vert 3\rangle\bigr)
\end{aligned}$$
These vectors are orthogonal: if we choose any pair of them and compute their inner product, we get $0.$ Each one is also a unit vector, so this implies that $\{\vert\phi_0\rangle, \vert\phi_1\rangle, \vert\phi_2\rangle, \vert\phi_3\rangle\}$ is an orthonormal basis. So, we know right away that there is a measurement that can discriminate them perfectly, meaning that if we're given one of them but we don't know which, then we can figure out which one it is without error.
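The orthonormality claim is easy to confirm numerically: the Gram matrix of inner products between the four vectors should be the identity. Here's a NumPy sketch:

```python
import numpy as np

# |phi_y> = (1/2) * sum_x exp(2 pi i x y / 4) |x>  for y = 0, 1, 2, 3
phi = np.array([[np.exp(2j * np.pi * x * y / 4) / 2 for x in range(4)]
                for y in range(4)])

# Gram matrix of inner products <phi_y|phi_z>; orthonormality means it's the identity.
gram = phi.conj() @ phi.T
```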
To perform such a discrimination with a quantum circuit, let's first define a unitary operation $V$ that transforms standard basis states into the four states listed above:
$$V\vert y\rangle = \vert\phi_y\rangle \qquad\text{for } y = 0, 1, 2, 3.$$
To write down $V$ as a matrix, it's just a matter of taking the columns of $V$ to be the states $\vert\phi_0\rangle,\ldots,\vert\phi_3\rangle$:
$$V = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix}.$$
We saw this matrix as an example at the end of Lesson 1. It's a special matrix, and some readers will have seen it before: it's the matrix associated with the $4$-dimensional discrete Fourier transform.
In light of this fact, let us call it by the name $\mathrm{QFT}_4$ rather than $V.$ The name QFT is short for quantum Fourier transform, which is essentially just the discrete Fourier transform, viewed as a quantum operation. We'll discuss the quantum Fourier transform in greater detail and generality shortly.
So, the operation $\mathrm{QFT}_4$ maps standard basis states to the four possible states we have above. We can perform this operation in reverse to transform the states $\vert\phi_0\rangle,\ldots,\vert\phi_3\rangle$ into the standard basis states $\vert 0\rangle,\ldots,\vert 3\rangle.$ If we do this, then we can measure to learn which value $y\in\{0,1,2,3\}$ describes $\theta$ as $\theta = y/4.$
Here's a diagram of the quantum circuit that does this.
To summarize, if we run this circuit when $\theta = y/4$ for $y\in\{0,1,2,3\},$ the state immediately before the measurements take place will be $\vert\psi\rangle\vert y\rangle$ (for $y$ encoded as a two-bit binary string), so the measurements will reveal the value $y$ without error.
This circuit is motivated by the special case that $\theta\in\{0, 1/4, 1/2, 3/4\},$ but we can run it for any choice of $U$ and $\vert\psi\rangle,$ and hence any value of $\theta,$ that we wish. Here's a plot of the output probabilities the circuit produces for arbitrary choices of $\theta.$
This is a clear improvement over the single-qubit variant described earlier in the lesson. It's not perfect, as it can give us the wrong answer, but the answer is heavily skewed toward values of $y$ for which $y/4$ is close to $\theta.$ In particular, the most likely outcome $y$ always corresponds to the closest value of $y/4$ to $\theta$ (equating $\theta = 0$ and $\theta = 1$ as before), and it appears from the plot that this closest value of $y$ always appears with probability above $40\%.$ When $\theta$ is exactly halfway between two such values, like $\theta = 0.375$ for instance, the two equally close values of $y$ are equally likely.
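The plot just described can be reproduced by simulating the top two qubits directly with NumPy: prepare the state $\frac{1}{2}\sum_x e^{2\pi i x\theta}\vert x\rangle,$ apply the inverse of $\mathrm{QFT}_4,$ and read off the outcome probabilities. This is a numerical sketch, separate from the Qiskit implementation below:

```python
import numpy as np

def two_qubit_pe_probs(theta):
    """Outcome probabilities of the two-qubit phase-estimation circuit."""
    N = 4
    x = np.arange(N)
    state = np.exp(2j * np.pi * x * theta) / np.sqrt(N)        # top two qubits
    QFT = np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)
    amplitudes = QFT.conj().T @ state                          # inverse QFT
    return np.abs(amplitudes) ** 2
```

For instance, `two_qubit_pe_probs(0.25)` is concentrated entirely on outcome $y = 1,$ matching the special-case analysis.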
Qiskit implementation
Here's an implementation of this procedure in Qiskit. Similar to the previous implementation, we'll use a phase gate with a chosen angle for the unitary operation and $\vert 1\rangle$ for the eigenvector.
Output:
Output:
Generalizing to many qubits
Given the improvement we've just obtained by using two control qubits rather than one, in conjunction with the $4$-dimensional quantum Fourier transform, it's natural to consider generalizing it further by adding more control qubits. When we do this, we obtain the general phase-estimation procedure. We'll see how this works shortly, but in order to describe it precisely, we're going to need to discuss the quantum Fourier transform in greater generality, to see how it's defined for other dimensions and to see how we can implement it with a quantum circuit.
Quantum Fourier transform
The quantum Fourier transform is a unitary operation that can be defined for any positive integer dimension $N.$ In this subsection we'll see how this operation is defined, and we'll see how it can be implemented with a quantum circuit on $m$ qubits, at cost $O(m^2),$ when $N = 2^m.$
The matrices that describe this operation are derived from an analogous operation on $N$-dimensional vectors known as the discrete Fourier transform. We can think about the discrete Fourier transform abstractly, in purely mathematical terms, but we can also think about it as a computational problem, where we're given an $N$-dimensional vector of complex numbers (using binary notation to encode the real and imaginary parts of the entries, let us suppose), and our goal is to calculate the result, which is again an $N$-dimensional vector. An efficient algorithm for performing this computation, known as the fast Fourier transform, is considered by many to be one of the most important algorithms ever discovered; it's critical in signal processing and has wide-ranging applications.
Our focus, however, is on viewing this transform as a unitary operation that can be performed on a quantum system.
Definition of the quantum Fourier transform
To define the quantum Fourier transform, we'll first define a complex number $\omega_N,$ for each positive integer $N,$ like this:
$$\omega_N = e^{2\pi i/N}.$$
This is the number on the complex unit circle we obtain if we start at $1$ and move counter-clockwise by an angle of $2\pi/N$, or "one click" if we imagine that $N$ discrete "clicks" would take us all the way around the circle.
Here are a few examples:
$$\omega_1 = 1,\qquad \omega_2 = -1,\qquad \omega_3 = -\tfrac{1}{2} + \tfrac{\sqrt{3}}{2}i,\qquad \omega_4 = i,\qquad \omega_8 = \tfrac{1+i}{\sqrt{2}}.$$
Now we can define the $N$-dimensional quantum Fourier transform, which is described by an $N\times N$ matrix whose rows and columns are associated with the standard basis states $\vert 0\rangle,\ldots,\vert N-1\rangle$:
$$\mathrm{QFT}_N = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\sum_{y=0}^{N-1}\omega_N^{xy}\vert x\rangle\langle y\vert.$$
For phase estimation, we're only going to need this operation for when $N$ is a power of $2,$ but the operation can be defined for any positive integer $N.$
As was already stated, this is the matrix associated with the $N$-dimensional discrete Fourier transform. (Often the leading factor of $1/\sqrt{N}$ is not included in the definition of the matrix associated with the discrete Fourier transform, but we need to include it to obtain a unitary matrix. Sometimes a minus sign appears in the exponent of $\omega_N$ as well; different people define it in slightly different ways, but these differences are superficial and easily reconciled.)
Here's the quantum Fourier transform written as a matrix for some small values of $N$:
$$\mathrm{QFT}_1 = \begin{pmatrix} 1 \end{pmatrix},\qquad
\mathrm{QFT}_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix},\qquad
\mathrm{QFT}_4 = \frac{1}{2}\begin{pmatrix} 1 & 1 & 1 & 1 \\ 1 & i & -1 & -i \\ 1 & -1 & 1 & -1 \\ 1 & -i & -1 & i \end{pmatrix}.$$
Notice, in particular, that $\mathrm{QFT}_2$ is another name for a Hadamard operation.
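The defining formula translates directly into a few lines of NumPy, which also lets us check unitarity numerically for small dimensions (a sketch; the function name is our own):

```python
import numpy as np

def qft_matrix(N):
    """The N-dimensional QFT matrix: entries omega_N^{xy} / sqrt(N), omega_N = exp(2 pi i / N)."""
    x = np.arange(N)
    return np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # QFT_2 is the Hadamard operation
```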
Unitarity
Let's check that $\mathrm{QFT}_N$ is indeed unitary, for any selection of $N.$ One way to do this is to show that the columns form an orthonormal basis. We can define column number $y,$ starting from $y = 0$ and going up to $y = N - 1,$ like this:
$$\vert\phi_y\rangle = \frac{1}{\sqrt{N}}\sum_{x=0}^{N-1}\omega_N^{xy}\vert x\rangle.$$
Taking the inner product between any two columns gives us this expression:
$$\langle\phi_z\vert\phi_y\rangle = \frac{1}{N}\sum_{x=0}^{N-1}\omega_N^{x(y-z)}.$$
One way to evaluate sums like this is to use the following formula for the sum of the first $N$ terms of a geometric series:
$$1 + \alpha + \alpha^2 + \cdots + \alpha^{N-1} = \begin{cases}\dfrac{\alpha^N - 1}{\alpha - 1} & \alpha\neq 1\\[1mm] N & \alpha = 1.\end{cases}$$
Specifically, we can use this formula when $\alpha = \omega_N^{y-z}.$ When $y = z,$ we have $\alpha = 1,$ so using the formula and dividing by $N$ gives
$$\langle\phi_y\vert\phi_y\rangle = \frac{N}{N} = 1.$$
When $y\neq z,$ we have $\alpha\neq 1,$ so using the formula reveals this:
$$\langle\phi_z\vert\phi_y\rangle = \frac{1}{N}\cdot\frac{\omega_N^{(y-z)N} - 1}{\omega_N^{y-z} - 1} = 0.$$
This happens because $\omega_N^{(y-z)N} = e^{2\pi i(y-z)} = 1,$ so the numerator becomes $0$ (while the denominator is nonzero because $\omega_N^{y-z}\neq 1$ when $0 < |y - z| < N$).
At a more intuitive level, what we're effectively doing in the formula for $\langle\phi_z\vert\phi_y\rangle$ is summing a bunch of points that are evenly spread around the unit circle, so that they cancel each other out, leaving $0$ when we sum them.
So, we have established that $\{\vert\phi_0\rangle,\ldots,\vert\phi_{N-1}\rangle\}$ is an orthonormal set.
This reveals that $\mathrm{QFT}_N$ is unitary.
Controlled-phase gates
In order to implement the quantum Fourier transform as a quantum circuit, we're going to need to make use of controlled-phase gates.
Recall from Lesson 1 that a phase operation is a single-qubit quantum operation of the form
$$P_\alpha = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\alpha}\end{pmatrix}$$
for any real number $\alpha.$ (In Lesson 1 we used the name $\theta$ in place of $\alpha,$ but in this lesson we'll reserve the letter $\theta$ for the parameter in phase estimation.)
A controlled version of this gate has the following matrix:
$$\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & e^{i\alpha}\end{pmatrix}.$$
For this controlled gate, it doesn't actually matter which qubit is the control and which is the target; the two possibilities are equivalent. We can use any of the following symbols to represent this gate in quantum circuit diagrams:
For the third form, the label $\alpha$ is also sometimes placed on the side of the control line or under the lower control when that's convenient.
Using controlled-phase gates we can perform the following transformation, where $a$ is a bit and $x$ is a number encoded in binary notation as a string of $k$ bits:
$$\vert x\rangle\vert a\rangle \mapsto e^{2\pi i a x/2^{k+1}}\vert x\rangle\vert a\rangle.$$
As an example, here's how this is done for a small value of $k.$
This can be naturally generalized for any $k$: using the qubit storing $a$ as a control, the phase gates always start from $P_{\pi/2}$ on the most significant bit of $x$ and go down to $P_{\pi/2^k}$ on the least significant bit.
Circuit implementation of the QFT
Now we'll see how we can implement the quantum Fourier transform with a circuit when the dimension $N = 2^m$ is a power of $2.$ There are, in fact, multiple ways to implement the quantum Fourier transform, but this is arguably the simplest method.
The implementation is recursive in nature, and so that's how it's most naturally described. The base case is that the quantum Fourier transform on a single qubit is a Hadamard operation.
To perform the quantum Fourier transform on $m$ qubits when $m\ge 2,$ we can perform the following steps, whose actions we'll describe for standard basis states of the form $\vert x\rangle\vert a\rangle,$ where $x$ is an integer encoded as $m - 1$ bits using binary notation and $a$ is a single bit.
First apply the $2^{m-1}$-dimensional quantum Fourier transform to the bottom/leftmost $m - 1$ qubits to obtain this state:
$$\frac{1}{\sqrt{2^{m-1}}}\sum_{y=0}^{2^{m-1}-1}\omega_{2^{m-1}}^{xy}\vert y\rangle\vert a\rangle.$$
This is done by recursively applying the method being described for one fewer qubit, using the Hadamard operation on a single qubit as the base case.
Use the top/rightmost qubit as a control to inject the phase $\omega_{2^m}^{ay}$ for each standard basis state $\vert y\rangle$ of the remaining qubits (as described above) to obtain this state:
$$\frac{1}{\sqrt{2^{m-1}}}\sum_{y=0}^{2^{m-1}-1}\omega_{2^{m-1}}^{xy}\,\omega_{2^m}^{ay}\vert y\rangle\vert a\rangle.$$
Perform a Hadamard gate on the top/rightmost qubit to obtain this state:
$$\frac{1}{\sqrt{2^{m}}}\sum_{y=0}^{2^{m-1}-1}\sum_{b=0}^{1}\omega_{2^{m-1}}^{xy}\,\omega_{2^m}^{ay}(-1)^{ab}\vert y\rangle\vert b\rangle.$$
Permute the order of the qubits so that the least significant bit becomes the most significant bit, with all others shifted:
$$\frac{1}{\sqrt{2^{m}}}\sum_{y=0}^{2^{m-1}-1}\sum_{b=0}^{1}\omega_{2^{m-1}}^{xy}\,\omega_{2^m}^{ay}(-1)^{ab}\vert b\rangle\vert y\rangle.$$
For example, here's the circuit we obtain for $m = 3.$ In this diagram, the qubits are given names that correspond to the input and output standard basis states for clarity.
Analysis
The key formula we need to verify that the circuit just described implements the $2^m$-dimensional quantum Fourier transform is this one:
$$\omega_{2^m}^{(2x+a)(2^{m-1}b+y)} = \omega_{2^{m-1}}^{xy}\,\omega_{2^m}^{ay}\,(-1)^{ab}.$$
This formula works for any choice of integers $x,$ $y,$ $a,$ and $b,$ but we'll only need it for $0\le x, y < 2^{m-1}$ and $a, b\in\{0,1\}.$ We can check the formula by expanding the product in the exponent on the left-hand side,
$$(2x + a)(2^{m-1}b + y) = 2^m xb + 2xy + 2^{m-1}ab + ay,$$
and observing that
$$\omega_{2^m}^{2^m xb} = 1,\qquad \omega_{2^m}^{2xy} = \omega_{2^{m-1}}^{xy},\qquad \omega_{2^m}^{2^{m-1}ab} = (-1)^{ab},$$
where the first equality makes use of the observation that $\omega_{2^m}^{2^m} = 1.$
Now, the $2^m$-dimensional quantum Fourier transform is defined as follows for every $u\in\{0,\ldots,2^m - 1\}$:
$$\mathrm{QFT}_{2^m}\vert u\rangle = \frac{1}{\sqrt{2^m}}\sum_{v=0}^{2^m-1}\omega_{2^m}^{uv}\vert v\rangle.$$
If we write $u$ and $v$ as
$$u = 2x + a \qquad\text{and}\qquad v = 2^{m-1}b + y$$
for $a, b\in\{0,1\}$ and $x, y\in\{0,\ldots,2^{m-1}-1\},$ we obtain
$$\mathrm{QFT}_{2^m}\vert u\rangle = \frac{1}{\sqrt{2^m}}\sum_{y=0}^{2^{m-1}-1}\sum_{b=0}^{1}\omega_{2^{m-1}}^{xy}\,\omega_{2^m}^{ay}\,(-1)^{ab}\vert 2^{m-1}b + y\rangle.$$
Finally, by thinking about the standard basis states $\vert x\rangle\vert a\rangle$ and $\vert b\rangle\vert y\rangle$ as binary encodings of the integers $u = 2x + a$ and $v = 2^{m-1}b + y$ in the range $\{0,\ldots,2^m - 1\},$
we see that the circuit above implements the required operation.
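As a sanity check, the four recursive steps can be mirrored with small matrices in NumPy and compared against the defining formula. This is a numerical sketch; the convention that the single bit $a$ is the least significant qubit is an assumption made for concreteness:

```python
import numpy as np

def qft_matrix(N):
    """The defining N x N matrix: entries omega_N^{xy} / sqrt(N)."""
    x = np.arange(N)
    return np.exp(2j * np.pi * np.outer(x, x) / N) / np.sqrt(N)

def qft_recursive(m):
    """Build the QFT on m qubits (N = 2^m) by the four recursive steps described above."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    if m == 1:
        return H  # base case: Hadamard
    N = 2 ** m
    half = N // 2
    # Step 1: QFT on the m-1 qubits holding x; the remaining qubit holds the bit a
    # (taken here as the least significant bit, an assumed ordering convention).
    step1 = np.kron(qft_recursive(m - 1), np.eye(2))
    # Step 2: controlled phases injecting omega_N^{a*y} on the basis state |y>|a>.
    step2 = np.diag([np.exp(2j * np.pi * a * y / N)
                     for y in range(half) for a in range(2)])
    # Step 3: Hadamard on the qubit holding a.
    step3 = np.kron(np.eye(half), H)
    # Step 4: permute |y>|b> (index 2y + b) to |b>|y> (index b*half + y).
    perm = np.zeros((N, N))
    for y in range(half):
        for b in range(2):
            perm[b * half + y, 2 * y + b] = 1
    return perm @ step3 @ step2 @ step1
```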
If this method for performing the quantum Fourier transform seems remarkable, it's because it is. It's essentially the same methodology that underlies the fast Fourier transform — one of the most important and useful classical algorithms ever discovered — in the form of a quantum circuit.
Computational cost
Now let's count how many gates are used in the circuit just described. The controlled-phase gates aren't in the standard gate set that we discussed in the previous lesson, but to begin we'll ignore this and count each of them as a single gate.
Let's let $s_m$ denote the number of gates we need for each possible choice of $m.$ If $m = 1,$ the quantum Fourier transform is just a Hadamard operation, so
$$s_1 = 1.$$
If $m\ge 2,$ then in the circuit above we need $s_{m-1}$ gates for the quantum Fourier transform on $m - 1$ qubits, plus $m - 1$ controlled-phase gates, plus a Hadamard gate, plus $m - 1$ swap gates for the permutation of the qubits, so
$$s_m = s_{m-1} + (2m - 1).$$
We can obtain a closed-form expression by summing:
$$s_m = \sum_{k=1}^{m}(2k - 1) = m^2.$$
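The recurrence and its closed form are simple enough to check directly in code (a small sketch):

```python
def qft_gate_count(m):
    """Gate-count recurrence: s_1 = 1 and s_m = s_{m-1} + 2m - 1, which sums to m^2."""
    return 1 if m == 1 else qft_gate_count(m - 1) + 2 * m - 1
```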
We don't actually need as many swap gates as the method describes. If we rearrange the gates just a bit, we can push all of the swap gates out to the right and reduce the number of swap gates required to $\lfloor m/2\rfloor.$ Asymptotically speaking, this isn't a major improvement: we still obtain circuits with size $O(m^2)$ for performing $\mathrm{QFT}_{2^m}.$
If we wish to implement the quantum Fourier transform using only gates from our standard gate set, we need to either build or approximate each of the controlled-phase gates with gates from our set. The number required depends on how much accuracy we require, but the total number remains quadratic in the number of qubits $m.$ (We can even come up with a pretty good approximation to the quantum Fourier transform with a sub-quadratic number of gates by using the fact that when $\alpha$ is very small, we have $P_\alpha \approx \mathbb{1},$ so the controlled-phase gate can be very well approximated by doing nothing at all in such cases.)
QFTs in Qiskit
A circuit implementation of the QFT on any number of qubits can be obtained from Qiskit's circuit library. (Note that for three or more qubits the circuit will differ slightly from the general description above because it incorporates some minor optimizations. In particular, the number of swap gates required can be reduced significantly by effectively pushing them to the end of the circuit and adjusting the controlled-phase gates accordingly.)
Output:
General procedure and analysis
Now we'll examine the phase-estimation procedure in general. The idea is to extend the single- and double-qubit versions of phase estimation that we considered above in the natural way, as suggested by the following diagram.
Notice that for each new control qubit added on the top, we double the number of times the unitary operation $U$ is performed. Rather than drawing however many copies of the controlled-$U$ operation are needed to do this in the diagram, we've instead raised $U$ to the required powers.
In general, adding additional control qubits on the top like this will contribute significantly to the size of the circuit: if we have $m$ control qubits, like the diagram depicts, a total of $2^m - 1$ copies of the controlled-$U$ operation are required. This means that a significant computational cost is incurred as $m$ is increased, but as we will see, it also leads to a significantly more accurate approximation of $\theta.$
It is important to note, however, that for some choices of $U$ it may be possible to create a circuit that implements the operation $U^k$ for large values of $k$ in a more efficient way than simply repeating $k$ times the circuit for $U.$ We'll see a specific example of this in the context of integer factorization later in the lesson, where the efficient algorithm for modular exponentiation discussed in the previous lesson comes to the rescue.
Now let us analyze the circuit just described. The state of the top $m$ qubits immediately prior to the inverse of the quantum Fourier transform looks like this:
$$\frac{1}{\sqrt{2^m}}\sum_{x=0}^{2^m-1}e^{2\pi i x\theta}\vert x\rangle.$$
A special case
Along similar lines to what we did in the two-qubit case above, we can consider the special case that $\theta = y/2^m$ for some $y\in\{0,\ldots,2^m - 1\},$ and we see that this state can alternatively be written like this:
$$\frac{1}{\sqrt{2^m}}\sum_{x=0}^{2^m-1}\omega_{2^m}^{xy}\vert x\rangle = \mathrm{QFT}_{2^m}\vert y\rangle.$$
So, when the inverse of the quantum Fourier transform is applied, the state becomes
$$\vert y\rangle,$$
and the measurements reveal $y$ (encoded in binary).
Bounding the probabilities
For other values of $\theta,$ meaning ones that don't take the form $y/2^m$ for an integer $y,$ the measurement outcomes won't be certain, but we can prove certain bounds on the probabilities for different outcomes. Going forward, let's consider an arbitrary choice of $\theta$ satisfying $0\le\theta<1.$
After the inverse of the quantum Fourier transform is performed, the state of the top $m$ qubits is this:
$$\sum_{y=0}^{2^m-1}\left(\frac{1}{2^m}\sum_{x=0}^{2^m-1}e^{2\pi i x(\theta - y/2^m)}\right)\vert y\rangle.$$
So, when the measurements on the top $m$ qubits are performed, we see each outcome $y$ with probability
$$p_y = \left|\frac{1}{2^m}\sum_{x=0}^{2^m-1}e^{2\pi i x(\theta - y/2^m)}\right|^2.$$
To get a better handle on these probabilities, we'll make use of the same formula that we saw before, for the sum of the initial portion of a geometric series:
$$1 + \alpha + \cdots + \alpha^{2^m - 1} = \begin{cases}\dfrac{\alpha^{2^m} - 1}{\alpha - 1} & \alpha\neq 1\\[1mm] 2^m & \alpha = 1.\end{cases}$$
We can simplify the sum appearing in the formula for $p_y$ by taking $\alpha = e^{2\pi i(\theta - y/2^m)}.$ Here's what we obtain:
$$\sum_{x=0}^{2^m-1}e^{2\pi i x(\theta - y/2^m)} = \begin{cases}\dfrac{e^{2\pi i 2^m\theta} - 1}{e^{2\pi i(\theta - y/2^m)} - 1} & \theta\neq y/2^m\\[1mm] 2^m & \theta = y/2^m.\end{cases}$$
So, in the case that $\theta = y/2^m,$ we find that $p_y = 1$ (as we already knew from considering this special case), and in the case that $\theta\neq y/2^m,$ we find that
$$p_y = \frac{1}{4^m}\cdot\frac{\left|e^{2\pi i 2^m\theta} - 1\right|^2}{\left|e^{2\pi i(\theta - y/2^m)} - 1\right|^2}.$$
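The outcome distribution $p_y$ can be computed numerically straight from this formula, which is a convenient way to explore how it behaves for different values of $\theta$ and $m$ (a sketch, using the geometric-sum expression directly):

```python
import numpy as np

def pe_probs(theta, m):
    """Outcome probabilities p_y for m-qubit phase estimation.

    p_y = |(1/2^m) * sum_x exp(2 pi i x (theta - y/2^m))|^2
    """
    N = 2 ** m
    x = np.arange(N)
    return np.array([abs(np.exp(2j * np.pi * x * (theta - y / N)).sum() / N) ** 2
                     for y in range(N)])
```

For instance, with $m = 3$ and $\theta = 0.3,$ the most likely outcome is $y = 2,$ corresponding to the best $3$-bit approximation $2/8 = 0.25.$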
We can learn more about these probabilities by thinking about how arc length and chord length on the unit circle are related. Here's a figure that illustrates the relationships we need for any real number $\delta$ with $|\delta|\le\tfrac{1}{2}.$
First, the chord length (drawn in blue) can't possibly be larger than the arc length (drawn in purple):
$$\left|e^{2\pi i\delta} - 1\right| \le 2\pi|\delta|.$$
Relating these lengths in the other direction, we see that the ratio of the arc length to the chord length is greatest when $|\delta| = \tfrac{1}{2},$ and in this case the ratio is half the circumference of the circle divided by the diameter, which is $\pi/2.$ Thus, we can write
$$\left|e^{2\pi i\delta} - 1\right| \ge \frac{2\pi|\delta|}{\pi/2} = 4|\delta|.$$
An analysis based on this relation reveals the following two facts.
Suppose that $\delta = \theta - y/2^m$ is a real number and satisfies
$$|\delta| \le \frac{1}{2^{m+1}}.$$
This means that $y/2^m$ is either the best $m$-bit approximation to $\theta,$ or it's one of the two best approximations in case $\theta$ is exactly halfway between $y/2^m$ and either $(y+1)/2^m$ or $(y-1)/2^m.$
We'll prove that $p_y$ has to be pretty large in this case. By the assumption we're considering, it follows that $|2^m\delta|\le\tfrac{1}{2},$ so we can use the second observation above relating arc and chord lengths to conclude that
$$\left|e^{2\pi i 2^m\theta} - 1\right| = \left|e^{2\pi i 2^m\delta} - 1\right| \ge 4\cdot 2^m|\delta|,$$
where the first equality holds because $2^m\theta$ and $2^m\delta$ differ by the integer $y.$
We can also use the first observation about arc and chord lengths to conclude that
$$\left|e^{2\pi i\delta} - 1\right| \le 2\pi|\delta|.$$
Putting these two inequalities to use on $p_y$ reveals
$$p_y \ge \frac{1}{4^m}\cdot\frac{\bigl(4\cdot 2^m|\delta|\bigr)^2}{\bigl(2\pi|\delta|\bigr)^2} = \frac{4}{\pi^2} \approx 0.405.$$
(So, our observation that the best outcome occurs with probability greater than $40\%$ in the version of phase estimation discussed earlier in fact holds for every choice of $m.$)
Now suppose that $\delta = \theta - y/2^m$ satisfies
$$\frac{1}{2^m} < |\delta| \le \frac{1}{2}.$$
This means that there's a better approximation to $\theta$ of the form $z/2^m$ in between $y/2^m$ and $\theta.$
This time we'll prove that $p_y$ can't be too big. We can start with the simple observation that
$$\left|e^{2\pi i 2^m\theta} - 1\right| \le 2,$$
which follows from the fact that any two points on the unit circle can differ in absolute value by at most $2.$
We can also use the second observation about arc and chord lengths from above, this time working with the denominator of $p_y$ rather than the numerator, to conclude
$$\left|e^{2\pi i\delta} - 1\right| \ge 4|\delta| > \frac{4}{2^m}.$$
Putting the two inequalities together reveals
$$p_y \le \frac{1}{4^m}\cdot\frac{2^2}{\bigl(4/2^m\bigr)^2} = \frac{1}{4}.$$
This is quite good, in the sense that very close approximations to $\theta$ are likely to occur, with probability greater than $40\%,$ whereas approximations off by more than $2^{-m}$ are less likely, with probability at most $1/4.$
We may, however, wish to boost our confidence. One way to do this is to repeat the phase estimation procedure several times to gather statistical evidence about $\theta.$ Notice that the state $\vert\psi\rangle$ of the bottom collection of qubits is unchanged by the phase estimation procedure, so it can be used to run the procedure as many times as we like.
Each time we run the circuit, we get a best $m$-bit approximation to $\theta$ with probability greater than $40\%,$ while the probability of each outcome that's off by more than $2^{-m}$ is bounded by $1/4.$ So, if we run the circuit several times and take the most commonly appearing outcome of the runs, it's exceedingly likely that the outcome that appears most commonly will not be one that occurs at most $1/4$ of the time. As a result, we'll be very likely to obtain an approximation that's within $2^{-m}$ of the value $\theta,$ and indeed the unlikely chance that we're off by more than $2^{-m}$ decreases exponentially in the number of times the procedure is run.
Qiskit implementation
Here's an implementation of phase estimation in Qiskit. Try adjusting and the number of control qubits to see how the results change.
Output:
Most probable output: 6
Estimated theta: 0.75
Shor's algorithm
Now we'll turn our attention to the integer factorization problem, and see how it can be solved efficiently on a quantum computer using phase estimation. The algorithm we'll obtain is Shor's algorithm for integer factorization. Shor didn't describe his algorithm specifically in terms of phase estimation, but phase estimation provides a natural and intuitive way to explain how it works.
In the two subsections that follow we'll describe the two main parts of Shor's algorithm. In the first subsection we'll define an intermediate problem known as the order-finding problem and see how phase estimation provides a solution to this problem. In the second subsection we'll explain how an efficient solution to the order-finding problem also gives us an efficient solution to the integer factorization problem. (When a solution to one problem provides a solution to another problem like this, we say that the second problem reduces to the first — so in this case we're reducing integer factorization to order finding.) This part of Shor's algorithm doesn't make use of quantum computing at all — it's completely classical.
Order finding
Some basic number theory
To explain the order-finding problem and how it can be solved using phase estimation, it will be very helpful to explain a couple of basic concepts in number theory and introduce some handy notation along the way.
To begin, for any given positive integer we'll define a set
For instance, and so on.
These are sets of numbers, but we can think of them as more than sets. In particular, we can think about arithmetic operations on such as addition and multiplication — and if we agree to always take our answers modulo we'll always stay within this set when we perform these operations. (The two operations of addition and multiplication, both taken modulo turn this set into a ring, which is a fundamentally important type of object in algebra.)
For example, and are elements of and if we multiply them together we get which leaves a remainder of when divided by Sometimes we express this as follows.
But we can also simply write provided that it's been made clear that we're working in just to keep our notation as simple and clear as possible.
As an example, here are the addition and multiplication tables for
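Tables like these are easy to generate for any modulus with a couple of lines of Python. In this sketch the modulus N = 6 is an arbitrary illustrative choice:

```python
N = 6  # arbitrary illustrative modulus

print(f"Addition table for Z_{N}:")
for a in range(N):
    print(" ".join(str((a + b) % N) for b in range(N)))

print(f"Multiplication table for Z_{N}:")
for a in range(N):
    print(" ".join(str((a * b) % N) for b in range(N)))
```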
Among the elements of the elements that satisfy are special. Frequently the set containing these elements is denoted with a star like this:
If we focus our attention on the operation of multiplication, the set forms a group — and specifically an abelian group — which is another important type of object in algebra. It's a basic fact about these sets (and indeed about finite groups in general), that if we pick any element and repeatedly multiply to itself, we'll always eventually get the number
For a first example, let's take We have that because and if we multiply to itself we get (as the table above confirms).
As a second example, let's take If we go through the numbers from to these are the ones that have GCD equal to with
For each of these elements, it is possible to raise that number to a positive integer power to get Here are the smallest powers for which this works:
Naturally we're working within for all of these equations, which we haven't bothered to write — we take it to be implicit to avoid cluttering things up. We'll continue to do that throughout the rest of the lesson.
You can check each of these equations above, as well as the fact that these are the smallest positive integer powers for which the equations work, using the following code cell (changing the numbers as needed).
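A code cell along the following lines produces the output below. The specific values N = 21 and a = 17 are assumptions, chosen to be consistent with that output:

```python
N, a = 21, 17  # assumed values, consistent with the output below
print("k", "a^k")
for k in range(1, 13):
    print(k, pow(a, k, N))  # pow with a third argument reduces modulo N
```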
Output:
k a^k
1 17
2 16
3 20
4 4
5 5
6 1
7 17
8 16
9 20
10 4
11 5
12 1
Notice that after we get back to the cycle repeats — which makes sense because multiplying by brings us back to which is where we started.
Although it isn't essential for the sake of the lesson, we can also check that we never get back to when — so we're relying on the fact that for this to work.
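For this check, the same loop can be run with a choice of a that shares a factor with the modulus. The values N = 21 and a = 18 (for which the GCD is 3) are assumptions consistent with the output below:

```python
from math import gcd

N, a = 21, 18  # assumed values; gcd(18, 21) = 3, so a is not invertible mod N
print("k", "a^k")
for k in range(1, 13):
    print(k, pow(a, k, N))
```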
Output:
k a^k
1 18
2 9
3 15
4 18
5 9
6 15
7 18
8 9
9 15
10 18
11 9
12 15
Problem statement
Now we can state the order-finding problem.
Order finding
Input: positive integers and satisfying
Output: the smallest positive integer such that
Alternatively, in terms of the notation we just introduced above, we're given and we're looking for the smallest positive integer such that This number is called the order of modulo
Multiplication by an element in
To connect the order-finding problem to phase estimation, let's think about the operation defined on a system whose classical states correspond to where we multiply by a fixed element
To be clear, we're doing the multiplication in so it's implicit that we're taking the remainder modulo inside of the ket on the right-hand side of the equation.
For example, if and we have
So long as this is a unitary operation. It shuffles the elements of the standard basis so as a matrix it's a permutation matrix. It's evident from its definition that this operation is deterministic, and a simple way to see that it's invertible is to think about the order of modulo and to recognize that the inverse of is
There's another way to think about the inverse that doesn't require any knowledge of — which, after all, is what we're trying to compute. For every element there's always a unique element that satisfies We denote this element by and it can be computed efficiently; an extension of Euclid's GCD algorithm does it at cost quadratic in And thus
So, the operation is both deterministic and invertible. That implies that it's described by a permutation matrix, and it's therefore unitary.
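As a sketch of how the inverse can be computed without any knowledge of the order, here's the extended Euclidean algorithm in Python. (Python's built-in `pow(b, -1, N)` performs the same computation.)

```python
def mod_inverse(b, N):
    # Extended Euclidean algorithm: returns x with b*x = 1 (mod N).
    # Such an x exists exactly when gcd(b, N) = 1.
    old_r, r = b, N
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("b is not invertible modulo N")
    return old_s % N

print(mod_inverse(17, 21))  # 5, since 17 * 5 = 85 = 4 * 21 + 1
```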
Eigenvectors and eigenvalues of multiplication operations
Now let's think about the eigenvectors and eigenvalues of the operation assuming that As was just argued, this assumption tells us that is unitary.
There are eigenvalues of possibly including the same eigenvalue repeated multiple times, and in general there's some freedom in selecting corresponding eigenvectors — but we won't need to worry about all of the possibilities. Let's start simple and identify just one eigenvector of
The number is the order of modulo — here and throughout the remainder of the lesson. The eigenvalue associated with this eigenvector is because it isn't changed when we multiply by
This happens because so each standard basis state gets shifted to for and gets shifted back to Informally speaking, it's like we're slowly stirring but it's already completely stirred so nothing changes.
Here's another example of an eigenvector of This one happens to be more interesting in the context of order finding and phase estimation.
Alternatively, we can write this vector using a summation as follows.
Here we see the complex number showing up naturally, due to the underlying structure of multiplication by modulo This time the corresponding eigenvalue is To see this, we can first compute like this:
Then, because and we see that
so
Using the same reasoning, we can identify additional eigenvector/eigenvalue pairs for Indeed, for any choice of we have that
is an eigenvector of whose corresponding eigenvalue is
There are other eigenvectors, such as which has eigenvalue but we'll only be concerned with the eigenvectors that we've just identified.
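These eigenvector/eigenvalue pairs can be checked numerically. Here's a NumPy sketch using the illustrative values N = 21 and a = 17 (so the order is r = 6), testing the eigenvector whose eigenvalue is omega = e^(2*pi*i/r):

```python
import numpy as np

N, a, r = 21, 17, 6  # illustrative values; 17 has order 6 modulo 21

# Permutation matrix for multiplication by a modulo N
M = np.zeros((N, N))
for b in range(N):
    M[(a * b) % N, b] = 1

# |psi> = (1/sqrt(r)) * sum_k omega^(-k) |a^k mod N>, with omega = e^(2 pi i / r),
# should be an eigenvector of M with eigenvalue omega.
omega = np.exp(2j * np.pi / r)
psi = np.zeros(N, dtype=complex)
for k in range(r):
    psi[pow(a, k, N)] = omega ** (-k) / np.sqrt(r)

print(np.allclose(M @ psi, omega * psi))  # True
```

Multiplying by a shifts each term of the sum one step along the cycle, which pulls out exactly one factor of omega — the same reasoning given above.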
Applying phase estimation
To solve the order-finding problem for a given choice of we can apply the phase-estimation procedure to the operation
To do this, we need to implement not only efficiently with a quantum circuit, but also and so on, going as far as needed to obtain a precise enough estimate from the phase estimation procedure. Here we'll explain how this can be done, and we'll figure out exactly how much precision is needed a bit later.
Let's start with the operation by itself. Naturally, because we're working with the quantum circuit model, we'll use binary notation to encode the numbers between and The largest number we need to encode is so the number of bits we need is
For example, if we have Here's what the encoding of elements of as binary strings of length looks like.
And now, here's a precise definition of how is defined as an -qubit operation.
The point is that although we only care about how works for we do have to specify how it works for the remaining standard basis states — and we need to do this in a way that still gives us a unitary operation. Defining so that it does nothing to the remaining standard basis states accomplishes this.
Using the algorithms for integer multiplication and division discussed in the previous lesson, together with the methodology for reversible, garbage-free implementations of them, we can build a quantum circuit that performs for any choice of at cost Here's one way that this can be done.
We build a circuit for performing the operation
where
using the method described in the previous lesson. This gives us a circuit of size
We swap the two -qubit systems using swap gates to swap the qubits individually.
Along similar lines to the first step, we can build a circuit for the operation
where is the inverse of in
By initializing the bottom qubits and composing the three steps, we obtain this transformation:
The total cost of the circuit we obtain is
To perform and so on, we can use exactly the same method, except that we replace with and so on, as elements of That is, for any power we choose, we can create a circuit for not by iterating times the circuit for but instead by computing and then using the circuit for
The computation of these powers is the modular exponentiation problem discussed in the previous lesson, and it can be performed classically using the standard algorithm for modular exponentiation (often called the power algorithm in computational number theory). This time we don't need to implement the algorithm reversibly with a quantum circuit; we just need to run it classically.
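As a reminder of how the classical computation goes, here's a sketch of the iterative square-and-multiply method, which is equivalent to Python's built-in three-argument `pow`:

```python
def power_mod(a, k, N):
    # Square-and-multiply: computes a^k mod N with O(log k) multiplications.
    result = 1
    base = a % N
    while k > 0:
        if k & 1:                     # multiply when the current bit of k is 1
            result = (result * base) % N
        base = (base * base) % N      # repeatedly square the base
        k >>= 1
    return result

print(power_mod(17, 2**20, 21) == pow(17, 2**20, 21))  # True
```

Note that the exponent 2^20 is handled with only about 20 squarings — this is what makes the offloading described below efficient.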
And we're fortunate that this is possible. We're effectively offloading the problem of iterating a huge number of times, which can be exponential in the number we choose in phase estimation, to an efficient classical computation. In terms of the quantum circuit we're running, the cost of iterated times is simply the cost of for — and so the cost is
For an arbitrary choice of a quantum circuit in the phase estimation problem this kind of offloading won't be possible, resulting in a cost for phase estimation that's exponential in the number of control qubits we use. Thanks to computational number theory, the cost in the case at hand is linear in that number.
A convenient eigenvector/eigenvalue pair
To understand how we can solve the order-finding problem using phase estimation, let's start by supposing that we run the phase estimation procedure on the operation using the eigenvector Getting our hands on this eigenvector isn't easy, as it turns out, so this won't be the end of the story — but it's helpful to start here.
The eigenvalue of corresponding to the eigenvector is
That is, for So, if we run the phase estimation procedure on using the eigenvector we'll get an approximation to By computing the reciprocal we'll be able to learn — provided that our approximation is good enough.
To be more precise, when we run the phase-estimation procedure using control qubits, what we obtain is a number and we take as a guess for which is in the case at hand. To figure out what is from this approximation, the natural thing to do is to compute the reciprocal of our approximation and round to the nearest integer.
For example, let's suppose and we perform phase estimation on with the eigenvector using control bits. The best -bit approximation to is and we have a pretty good chance (about in this case) to obtain the outcome from phase estimation. We have
and rounding to the nearest integer gives which is the correct answer.
On the other hand, if we don't use enough precision we may not get the right answer. For instance, if we take control qubits in phase estimation, we might obtain the best -bit approximation to which is Taking the reciprocal yields
and rounding to the nearest integer gives an incorrect answer of
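Here's a small numerical illustration of this phenomenon with assumed values: an order of r = 6 (so the phase is 1/6), comparing 5 control qubits against 3.

```python
r = 6  # assumed order, so theta = 1/r = 1/6

def best_approximation(theta, m):
    # Best m-bit approximation y / 2^m to theta
    return round(theta * 2**m) / 2**m

good = best_approximation(1 / r, 5)  # 5/32 = 0.15625
bad = best_approximation(1 / r, 3)   # 1/8  = 0.125
print(round(1 / good))  # 6 -- correct
print(round(1 / bad))   # 8 -- incorrect
```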
How much precision do we need to get the right answer? We know that the order is an integer, and intuitively speaking what we need is enough precision to distinguish from nearby possibilities, including and The closest number to that we need to be concerned with is and the distance between these two numbers is
So, if we want to make sure that we don't mistake for it suffices to use enough precision to guarantee that a best approximation to is closer to than it is to If we use enough precision so that
so that the error is less than half of the distance between and then will be closer to than to any other possibility, including and
We can double-check this as follows. Suppose that
for satisfying
When we take the reciprocal we obtain
By maximizing in the numerator and minimizing in the denominator, we can bound how far away we are from as follows.
We're less than away from so as expected we'll get when we round.
Unfortunately, because we don't yet know what is, we can't use it to tell us how much accuracy we need. What we can do instead is to use the fact that must be smaller than to ensure that we use enough precision. In particular, if we use enough accuracy to guarantee that the best approximation to satisfies
then we'll have enough precision to correctly determine when we take the reciprocal. Taking ensures that we have a high chance to obtain an estimation with this precision using the method described previously. (Taking is good enough if we're comfortable with a lower bound of 40% on the probability of success.)
Other eigenvector/eigenvalue pairs
As we just saw, if we had the eigenvector of we would be able to learn through phase estimation, so long as we use enough control qubits to get sufficient precision to do this. Unfortunately it's not easy to get our hands on the eigenvector so we need to figure out how to proceed.
Let's suppose we proceed just like we did above, except with the eigenvector in place of for any choice of we care to consider. The result we get from the phase estimation procedure will be an approximation
Working under the assumption that we don't know either or this might or might not allow us to identify For example, if we'll get an approximation to which unfortunately tells us nothing. This, however, is an unusual case; for other values of we'll at least be able to learn something about
We can use an algorithm known as the continued fraction algorithm to turn our approximation into nearby fractions — including if the approximation is good enough. We won't explain the continued fraction algorithm here. Instead, here's a statement of a known fact about this algorithm.
Fact. Given an integer and a real number there is at most one choice of integers with and satisfying Given and the continued fraction algorithm finds and (or reports that they don't exist). This algorithm can be implemented as a Boolean circuit having size
If we have a very close approximation to and we run the continued fraction algorithm for and we'll get and as they're described in the fact. A careful reading of the fact allows us to conclude that
So we don't necessarily learn and we only learn in lowest terms.
For example, and as we've already noticed, we're not going to learn anything from But that's the only value of where that happens. When is nonzero, it might have common factors with — but the number we obtain from the continued fraction algorithm must divide
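In Python, `Fraction.limit_denominator` performs exactly this continued-fraction computation. Here's a sketch with assumed values: suppose phase estimation with 10 control qubits returned the outcome y = 341, approximating a ratio k/r with k = 2 and r = 6 (so approximately 0.333), for a modulus N = 21:

```python
from fractions import Fraction

N = 21          # assumed modulus
y, m = 341, 10  # assumed measurement outcome and number of control qubits

approx = Fraction(y, 2**m)               # 341/1024, close to 2/6
frac = approx.limit_denominator(N - 1)   # continued-fraction approximation
print(frac)  # 1/3 -- that is, k/r in lowest terms
```

Notice that we recover 1/3 rather than 2/6, so the denominator we learn is 3 — not r itself, but a divisor of r.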
It's far from obvious, but it is a known fact that if we have the ability to learn and for for chosen uniformly at random, then we're very likely to recover after just a few samples. In particular, if our guess for is the least common multiple of all the values for that we observe, we'll be right with high probability. Some values of aren't good because they share common factors with and those common factors are hidden to us when we learn and But random choices of aren't likely to hide factors of for long, and the probability that we don't guess correctly drops exponentially in the number of samples.
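A sketch of this recovery step, again with an assumed order of r = 6: the denominators obtained from a few independent runs are combined with the least common multiple.

```python
from math import lcm

# Assumed denominators recovered from several runs with r = 6:
# k = 2 yields 3, k = 3 yields 2, and k = 5 yields 6.
observed = [3, 2, 6]
print(lcm(*observed))  # 6
```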
Proceeding without an eigenvector
So far we haven't addressed the issue of how we get our hands on an eigenvector of to run the phase estimation procedure on. As it turns out, we don't need to create one at all. Instead, we'll run the phase estimation procedure on the state by which we mean the -bit binary encoding of the number in place of an eigenvector of
So far, we've talked about running the phase estimation procedure on a particular eigenvector, but nothing prevents us from running the procedure on an input state that isn't an eigenvector of and that's what we're doing here with the state (This isn't an eigenvector of for unless which is a choice for that we'll avoid.)
The following equation helps to explain why we choose the state in place of an eigenvector.
This can be verified by taking the inner product of the right-hand side with standard basis states and using formulas mentioned previously.
In greater detail, let's imagine that we run the phase estimation procedure with the state in place of one of the eigenvectors After the inverse of the quantum Fourier transform is performed, this leaves us with the state
where
represents the state of the top qubits after the inverse of the quantum Fourier transform is performed. When the top qubits are measured, we therefore obtain an approximation to the value where is chosen uniformly at random.
As we've already discussed, this allows us to learn with a high degree of confidence after several independent runs, which was our goal.
Total cost
The cost to implement each controlled-unitary is There are controlled-unitary operations, so the total cost for the controlled-unitary operations is In addition, we have Hadamard gates (which contribute to the cost), and the quantum Fourier transform contributes to the cost. Thus, the cost of the controlled-unitary operations dominates the cost of the entire procedure — which is therefore
In addition to the quantum circuit itself, there are a few classical computations that need to be performed along the way. This includes computing the powers in for which are needed to create the controlled-unitary gates, as well as the continued fraction algorithm that converts approximations of into fractions. In both cases, these computations can be performed by Boolean circuits having cost
As is typical, all of these bounds can be improved using asymptotically fast algorithms; these bounds assume we're using standard algorithms for basic arithmetic operations.
Factoring by order-finding
The very last thing we need to discuss is how solving the order-finding problem helps us to factor. This part is completely classical — it has nothing specifically to do with quantum computing.
Here's the basic idea. We want to factorize the number and we can do this recursively. Specifically, we can focus on the task of splitting which means finding any two integers for which This isn't possible if is a prime number, but we can efficiently test to see if is prime using a primality testing algorithm first, and if isn't prime we'll try to split it. Once we split we can simply recurse on and until all of our factors are prime and we obtain the prime factorization of
Splitting even integers is easy: we just output and
It's also easy to split perfect powers, meaning numbers of the form for integers just by approximating the roots etc., and checking nearby integers as suspects for We don't need to go further than steps into this sequence, because at that point the root drops below and won't reveal additional candidates.
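A sketch of this check in Python — floating-point roots are approximate, so a few nearby integers are tested as candidates:

```python
def split_perfect_power(N):
    # Look for integers c >= 2 and k >= 2 with c^k = N.
    k = 2
    while 2**k <= N:
        c = round(N ** (1 / k))
        for cand in (c - 1, c, c + 1):  # account for floating-point error
            if cand >= 2 and cand**k == N:
                return cand, k
        k += 1
    return None

print(split_perfect_power(3**5))  # (3, 5)
print(split_perfect_power(21))    # None
```

The loop stops once 2^k exceeds N, matching the observation above that the root drops below 2 at that point.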
It's good that we can do both of these things because order-finding won't help us for even numbers or for prime powers, where the number happens to be prime.
If is odd and not a prime power, order-finding allows us to split
Input an odd, composite integer N that is not a prime power.
Iterate the following steps:
- Randomly choose
- Compute
- If then output and and stop. Otherwise continue to the next step knowing that
- Let be the order of modulo (Here's where we need order-finding.)
- If is even:
  - 5.1 Compute (modulo )
  - 5.2 Compute
  - 5.3 If then output and and stop.
- If this point is reached, the iteration has failed to find a factor of
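The iteration above can be sketched as follows, with a brute-force order computation standing in for the quantum order-finding subroutine (which is why this sketch is feasible only for small numbers):

```python
import random
from math import gcd

def order(a, N):
    # Brute-force order finding -- a stand-in for the quantum subroutine.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def split(N):
    # Split an odd composite N that is not a prime power.
    while True:
        a = random.randrange(2, N)
        d = gcd(a, N)
        if d > 1:
            return d, N // d  # lucky guess: a shares a factor with N
        r = order(a, N)
        if r % 2 == 0:
            x = pow(a, r // 2, N)
            d = gcd(x - 1, N)
            if 1 < d < N:
                return d, N // d
        # otherwise this iteration failed; try another random a

print(split(21))  # a nontrivial factorization such as (3, 7)
```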
An iteration of this algorithm may fail to find a factor of Specifically, this happens in two situations:
- The order of modulo is odd.
- The order of modulo is even and
Using basic number theory it can be proved that, for a random choice of with probability at least neither of these events happens. In fact, the probability is at most for being the number of distinct prime factors of This is why the assumption that is not a prime power is important. The assumption that is odd is also required for this to be true, which is why the (easy) case that is even has to be handled separately.
So, if we repeat the process times, randomly choosing each time, we'll succeed in splitting with probability at least
We won't go through this analysis in detail, but here's the basic idea. If we have a choice of for which the order of modulo is even, then it makes sense to consider the numbers
Using the formula we conclude that
We know that by the definition of the order, which is another way of saying that evenly divides So, evenly divides the number
which means that every prime factor of must divide either or (or both). For a randomly selected we're likely to have prime factors of dividing both terms, which allows us to split by computing the GCD.
Implementation in Qiskit
Here we'll implement Shor's algorithm in Qiskit to factor a number. First, we'll hard-code a controlled-multiplication operation for a given element
No output produced
Here's the phase estimation procedure from earlier implemented as a function.
No output produced
We can't easily prepare eigenvectors of the multiplication by operation, so we use as suggested above.
Output:
And finally we can run the circuit to try to find a nontrivial factor of
Output:
Attempt 1
Non-trivial factor found: 3