The focus of this lesson is on the basics of quantum information when there are multiple systems being considered. Such situations arise naturally in the context of information processing, both classical and quantum. Large information-carrying systems are often most easily constructed using collections of smaller systems, such as bits or qubits.
A simple, yet critically important, idea to keep in mind going into this lesson is that we can always choose to view multiple systems together as if they form a single, compound system — to which the discussion in the previous lesson applies. Indeed, this idea very directly leads to a description of how quantum states, measurements, and operations work for multiple systems.
There is more to understanding multiple quantum systems, however, than to simply recognize that they may be viewed collectively as single systems. For instance, we may have multiple quantum systems that are collectively in a particular quantum state, and then choose to measure just one (or a proper subset) of the individual systems. In general, this will affect the state of the remaining systems, and it is important to understand exactly how when analyzing quantum algorithms and protocols. An understanding of the sorts of correlations among multiple systems — and particularly a type of correlation known as entanglement — is also important in quantum information and computation.
Classical information
As in the previous lesson, we will begin with a discussion of classical information. Once again, the probabilistic and quantum descriptions are mathematically similar, and recognizing how the mathematics works in the familiar setting of classical information is helpful in understanding why quantum information is described in the way that it is.
Classical states via the Cartesian product
We will start at a very basic level, with classical states of multiple systems. For simplicity, we will begin by discussing just two systems, and then generalize to more than two systems.
To be precise, let us suppose that X is a system whose classical state set is Σ, and Y is a second system having classical state set Γ. As in the previous lesson, because we have referred to these sets as classical state sets, our assumption is that Σ and Γ are both finite and nonempty. It could be that Σ=Γ, but this is not necessarily so — and regardless it is helpful to use different names to refer to these sets in the interest of clarity.
Now imagine that the two systems, X and Y, are placed side-by-side, with X on the left and Y on the right. If we so choose, we can view these two systems as if they form a single system, which we can denote by (X,Y) or XY depending on our preference.
A natural question to ask about this compound system (X,Y) is, "What are its classical states?"
The answer is that the set of classical states of (X,Y) is the Cartesian product of Σ and Γ, which is the set defined as
Σ×Γ = {(a,b) : a∈Σ and b∈Γ}.
In simple terms, the Cartesian product is precisely the mathematical notion that captures the idea of viewing an element of one set and an element of a second set together, as if they form a single element of a single set.
In the case at hand, to say that (X,Y) is in the classical state (a,b)∈Σ×Γ means that X is in the classical state a∈Σ and Y is in the classical state b∈Γ; and if the classical state of X is a∈Σ and the classical state of Y is b∈Γ, then the classical state of the joint system (X,Y) is (a,b).
For more than two systems, the situation generalizes in a natural way. If we suppose that X1,…,Xn are systems having classical state sets Σ1,…,Σn, respectively, for any positive integer n, the classical state set of the n-tuple (X1,…,Xn), viewed as a single joint system, is the Cartesian product
Σ1×⋯×Σn={(a1,…,an):a1∈Σ1,…,an∈Σn}.
Note that we're free to use whatever names we wish for systems, and we're free to order them as we choose. In particular, if we have n systems like above, we can instead choose to name them Xn−1,…,X0 and order them in this way, meaning as (Xn−1,…,X0). Mimicking the same pattern for naming the associated classical states and classical state sets, we might then refer to a classical state (an−1,…,a0)∈Σn−1×⋯×Σ0 of this compound system. Indeed, this is the standard ordering convention used by Qiskit for naming qubits, and we'll come back to this in the next lesson as we turn our focus to quantum circuits.
Representing states as strings
It is often convenient to write a classical state (a1,…,an) as a string a1⋯an for the sake of brevity, particularly in the (very typical) situation that the classical state sets Σ1,…,Σn are associated with sets of symbols or characters.
Indeed, the notion of a string, which is a fundamentally important concept in computer science, is formalized in mathematical terms through Cartesian products. The term alphabet is commonly used to refer to sets of symbols used to form strings, but the mathematical definition of an alphabet is precisely the same as the definition of a classical state set: it is a finite and nonempty set.
For example, suppose that X1,…,X10 are bits, so that the classical state sets of these systems are all the same.
Σ1=Σ2=⋯=Σ10={0,1}
(The set {0,1} is commonly referred to as the binary alphabet.) There are then 2^10 = 1024 classical states of the joint system (X1,…,X10), which are the elements of the set
Σ1×Σ2×⋯×Σ10 = {0,1}^10.
Written as strings, these classical states look like this:

0000000000
0000000001
0000000010
⋮
1111111111
For the classical state 0001010000, for instance, we see that X4 and X6 are in the state 1, while all other systems are in the state 0.
Probabilistic states
Recall from the previous lesson that a probabilistic state associates a probability with each classical state of a system. Thus, a probabilistic state of multiple systems — viewed collectively as if they form a single system — associates a probability with each element of the Cartesian product of the classical state sets of the individual systems.
For example, suppose that X and Y are both bits, so that their corresponding classical state sets are Σ={0,1} and Γ={0,1}, respectively. Here is a probabilistic state of the pair (X,Y):

Pr((X,Y) = (0,0)) = 1/2,   Pr((X,Y) = (0,1)) = 0,
Pr((X,Y) = (1,0)) = 0,     Pr((X,Y) = (1,1)) = 1/2.
This probabilistic state is one in which both X and Y are random bits — each is 0 with probability 1/2 and 1 with probability 1/2 — but the classical states of the two bits always agree. This is an example of a correlation between these systems.
Ordering Cartesian product state sets
Probabilistic states of systems are represented by probability vectors, which are column vectors having indices that have been placed in correspondence with the underlying classical state set of the system being considered.
The same situation arises for multiple systems. To represent a probabilistic state of multiple systems, the entries of the probability vector are placed in correspondence with the Cartesian product of the individual classical state sets, so one must decide on an ordering of that set's elements. Assuming the individual classical state sets Σ and Γ of systems X and Y are already ordered, there is a simple convention for doing this: alphabetical ordering. More precisely, the entries in each n-tuple (or, equivalently, the symbols in each string) are viewed as being ordered by significance that decreases from left to right.
For example, according to this convention, the Cartesian product {1,2,3}×{0,1} is ordered like this:
(1,0),(1,1),(2,0),(2,1),(3,0),(3,1).
When n-tuples are written as strings and ordered in this way, we observe familiar patterns, such as {0,1}×{0,1} being ordered as 00, 01, 10, 11, and the set {0,1}^10 being ordered as was suggested above. We also see {0,1,…,9}×{0,1,…,9} ordered as the numbers 0 through 99. This is not a coincidence: the ordinary decimal number system uses precisely this ordering, where "alphabetical" is understood in the broader sense that allows a collection of numeric symbols.
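As a quick illustration of this convention (a small Python sketch, not part of the lesson's required tooling), the standard library's itertools.product function enumerates a Cartesian product in exactly this order when the individual sets are listed from most significant to least significant:

```python
from itertools import product

# Elements of {1,2,3} x {0,1}: the rightmost position is least significant,
# so it advances fastest, giving the "alphabetical" ordering described above.
print(list(product([1, 2, 3], [0, 1])))
# [(1, 0), (1, 1), (2, 0), (2, 1), (3, 0), (3, 1)]
```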
Returning to the example of two bits from above, the probabilistic state is represented by the following probability vector (where the entries are labeled explicitly for the sake of clarity).
[ 1/2 ]   ← probability associated with state 00
[  0  ]   ← probability associated with state 01
[  0  ]   ← probability associated with state 10
[ 1/2 ]   ← probability associated with state 11        (1)
Independence of two systems
A special type of probabilistic state of two systems is one in which the systems are independent. Intuitively speaking, two systems are independent if learning the classical state of either system has no effect on the probabilities associated with the other. That is, learning what classical state one of the systems is in provides no information at all about the classical state of the other.
To define this notion precisely, let us suppose once again that X and Y are systems having classical state sets Σ and Γ, respectively. With respect to a given probabilistic state of these systems, they are said to be independent if it is the case that
Pr((X,Y) = (a,b)) = Pr(X = a) Pr(Y = b)        (2)
for every choice of a∈Σ and b∈Γ.
To express this condition in terms of probability vectors, assume that the given probabilistic state of (X,Y) is described by a probability vector, written in the Dirac notation as
∑_{(a,b)∈Σ×Γ} p_{ab} ∣ab⟩.
The condition (2) for independence is then equivalent to the existence of two probability vectors
∣ϕ⟩ = ∑_{a∈Σ} q_a ∣a⟩   and   ∣ψ⟩ = ∑_{b∈Γ} r_b ∣b⟩,        (3)
representing the probabilities associated with the classical states of X and Y, respectively, such that
p_{ab} = q_a r_b        (4)
for all a∈Σ and b∈Γ.
For example, the probabilistic state of a pair of bits (X,Y) represented by the vector
(1/6)∣00⟩ + (1/12)∣01⟩ + (1/2)∣10⟩ + (1/4)∣11⟩
is one in which X and Y are independent. Specifically, the condition required for independence is true for the probability vectors
∣ϕ⟩ = (1/4)∣0⟩ + (3/4)∣1⟩   and   ∣ψ⟩ = (2/3)∣0⟩ + (1/3)∣1⟩.
For example, to match the 00 entry, we need 1/6 = (1/4)×(2/3), and indeed this is the case. Other entries can be verified in a similar manner.
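The check can also be carried out numerically. Here is a small NumPy sketch (our own code and variable names, assuming the orderings used above):

```python
import numpy as np

# Joint probability vector of (X, Y), entries ordered 00, 01, 10, 11.
p = np.array([1/6, 1/12, 1/2, 1/4])

# Candidate marginal probability vectors for X and Y.
q = np.array([1/4, 3/4])   # |phi> = 1/4|0> + 3/4|1>
r = np.array([2/3, 1/3])   # |psi> = 2/3|0> + 1/3|1>

# Independence requires p_{ab} = q_a * r_b for all a, b, which is the same
# as saying that p equals the tensor (Kronecker) product of q and r.
print(np.allclose(p, np.kron(q, r)))   # True
```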
On the other hand, the probabilistic state (1), which we may write as
(1/2)∣00⟩ + (1/2)∣11⟩,        (5)
does not represent independence between the systems X and Y. A simple way to argue this is as follows.
Suppose that there did exist probability vectors ∣ϕ⟩ and ∣ψ⟩, as in equation (3) above, for which the condition (4) is satisfied for every choice of a and b. It would then necessarily be that
q0r1=Pr((X,Y)=(0,1))=0.
This implies that either q0=0 or r1=0, because if both were nonzero, the product q0r1 would also not be zero. This leads to the conclusion that either q0r0=0 (in case q0=0) or q1r1=0 (in case r1=0). We see, however, that neither of those equalities can be true because we must have q0r0=1/2 and q1r1=1/2. Hence, there do not exist vectors ∣ϕ⟩ and ∣ψ⟩ satisfying the property required for independence.
Having defined independence between two systems, we can now define correlation precisely as a lack of independence. For example, because the two bits in the probabilistic state represented by the vector (5) are not independent, they are, by definition, correlated.
Tensor products of vectors
The condition of independence just described can be expressed more succinctly through the notion of a tensor product. Although this is a very general notion that can be defined quite abstractly and applied to a variety of mathematical structures, in the case at hand it can be defined in simple, concrete terms. Given two vectors
∣ϕ⟩ = ∑_{a∈Σ} α_a ∣a⟩   and   ∣ψ⟩ = ∑_{b∈Γ} β_b ∣b⟩,
the tensor product ∣ϕ⟩⊗∣ψ⟩ is a new vector over the joint state set Σ×Γ, defined as
∣ϕ⟩ ⊗ ∣ψ⟩ = ∑_{(a,b)∈Σ×Γ} α_a β_b ∣ab⟩.
Equivalently, the vector ∣π⟩=∣ϕ⟩⊗∣ψ⟩ is defined by the equation
⟨ab∣π⟩=⟨a∣ϕ⟩⟨b∣ψ⟩
being true for every a∈Σ and b∈Γ.
We can now recast the condition for independence as requiring the probability vector ∣π⟩ of the joint system (X,Y) to be representable as a tensor product
∣π⟩=∣ϕ⟩⊗∣ψ⟩
of probability vectors ∣ϕ⟩ and ∣ψ⟩ on each of the subsystems X and Y. In this situation it is said that ∣π⟩ is a product state or product vector.
We often omit the symbol ⊗ when taking the tensor product of kets, such as writing ∣ϕ⟩∣ψ⟩ rather than ∣ϕ⟩⊗∣ψ⟩. This convention captures the idea that the tensor product is, in this context, the most natural or default way to take the product of two vectors. Although it is less common, the notation ∣ϕ⊗ψ⟩ is also sometimes used.
When we use the alphabetical convention for ordering elements of Cartesian products, this definition gives the following specification for the tensor product of two column vectors: the entry of ∣ϕ⟩⊗∣ψ⟩ indexed by the pair (a,b) is α_a β_b, which for column vectors is exactly the Kronecker product of the vector representing ∣ϕ⟩ with the vector representing ∣ψ⟩.
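For column vectors this prescription coincides with the Kronecker product, which NumPy computes with numpy.kron. A brief sketch with example vectors of our own choosing:

```python
import numpy as np

# phi is indexed by {1, 2, 3} and psi by {0, 1}.
phi = np.array([0.1, 0.2, 0.7])
psi = np.array([0.6, 0.4])

# Under the ordering (1,0), (1,1), (2,0), (2,1), (3,0), (3,1), the entry of
# the tensor product indexed by (a, b) is phi[a] * psi[b].
print(np.kron(phi, psi))
# [0.06 0.04 0.12 0.08 0.42 0.28]
```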
As an important aside, we observe the following expression for tensor products of standard basis vectors:
∣a⟩⊗∣b⟩=∣ab⟩.
Alternatively, writing (a,b) as an ordered pair rather than a string, we could write
∣a⟩⊗∣b⟩=∣(a,b)⟩,
but it is more common to write
∣a⟩⊗∣b⟩=∣a,b⟩
following a practice in mathematics of removing parentheses that do not add clarity or remove ambiguity.
The tensor product of two vectors has the important property that it is bilinear, which means that it is linear in each of the two arguments separately, assuming that the other argument is fixed. This property can be expressed through these equations:

Linearity in the first argument:
(∣ϕ1⟩ + ∣ϕ2⟩) ⊗ ∣ψ⟩ = ∣ϕ1⟩ ⊗ ∣ψ⟩ + ∣ϕ2⟩ ⊗ ∣ψ⟩
(α∣ϕ⟩) ⊗ ∣ψ⟩ = α(∣ϕ⟩ ⊗ ∣ψ⟩)

Linearity in the second argument:
∣ϕ⟩ ⊗ (∣ψ1⟩ + ∣ψ2⟩) = ∣ϕ⟩ ⊗ ∣ψ1⟩ + ∣ϕ⟩ ⊗ ∣ψ2⟩
∣ϕ⟩ ⊗ (α∣ψ⟩) = α(∣ϕ⟩ ⊗ ∣ψ⟩)
Considering the second equation in each of these pairs of equations, we see that scalars "float freely" within tensor products:
(α∣ϕ⟩)⊗∣ψ⟩=∣ϕ⟩⊗(α∣ψ⟩)=α(∣ϕ⟩⊗∣ψ⟩).
There is therefore no ambiguity in simply writing α∣ϕ⟩⊗∣ψ⟩, or alternatively α∣ϕ⟩∣ψ⟩ or α∣ϕ⊗ψ⟩, to refer to this vector.
Independence and tensor products for three or more systems
The notions of independence and tensor products generalize straightforwardly to three or more systems. If X1,…,Xn are systems having classical state sets Σ1,…,Σn, respectively, then a probabilistic state of the combined system (X1,…,Xn) is a product state if the associated probability vector takes the form
∣ψ⟩=∣ϕ1⟩⊗⋯⊗∣ϕn⟩
for probability vectors ∣ϕ1⟩,…,∣ϕn⟩ describing probabilistic states of X1,…,Xn.
Here, the definition of the tensor product generalizes in a natural way: the vector
∣ψ⟩=∣ϕ1⟩⊗⋯⊗∣ϕn⟩
is defined by the equation
⟨a1⋯an∣ψ⟩=⟨a1∣ϕ1⟩⋯⟨an∣ϕn⟩
being true for every a1∈Σ1,…an∈Σn. A different, but equivalent, way to define the tensor product of three or more vectors is recursively in terms of tensor products of two vectors:
∣ϕ1⟩⊗⋯⊗∣ϕn⟩=(∣ϕ1⟩⊗⋯⊗∣ϕn−1⟩)⊗∣ϕn⟩,
assuming n≥3.
Similar to the tensor product of just two vectors, the tensor product of three or more vectors is linear in each of the arguments individually, assuming that all other arguments are fixed. In this case, we say that the tensor product of three or more vectors is multilinear.
As we did in the case of two systems, we could say that the systems X1,…,Xn are independent when they are in a product state, but the term mutually independent is more precise. There happen to be other notions of independence for three or more systems, such as pairwise independence, that we will not be concerned with at this time.
Generalizing the observation earlier concerning tensor products of standard basis vectors, for any positive integer n and any classical states a1,…,an we have
∣a1⟩⊗⋯⊗∣an⟩=∣a1⋯an⟩=∣a1,…,an⟩.
Measurements of probabilistic states
Now let us move on to measurements of probabilistic states of multiple systems. By choosing to view multiple systems together as single systems, we immediately obtain a specification of how measurements must work for multiple systems — provided that all systems are measured.
For example, if the probabilistic state of two bits (X,Y) is described by the probability vector
21∣00⟩+21∣11⟩,
then the outcome 00 — meaning 0 for the measurement of X and 0 for the measurement of Y — is obtained with probability 1/2 and the outcome 11 is also obtained with probability 1/2. In each case we update the probability vector description of our knowledge accordingly, so that the probabilistic state becomes ∣00⟩ or ∣11⟩, respectively.
Partial measurements
Suppose, however, that we choose not to measure every system, but instead we just measure some proper subset of the systems. This will result in a measurement outcome for each system that gets measured, and will also (in general) affect our knowledge of the remaining systems.
Let us focus on the case of two systems, one of which is measured. The more general situation — in which some proper subset of three or more systems is measured — effectively reduces to the case of two systems when we view the systems that are measured collectively as if they form one system and the systems that are not measured as if they form a second system.
To be precise, let us suppose (as usual) that X is a system having classical state set Σ, that Y is a system having classical state set Γ, and the two systems together are in some probabilistic state. We will consider what happens when we just measure X and do nothing to Y. The situation where just Y is measured and nothing happens to X is handled symmetrically.
First, we know that the probability to observe a particular classical state a∈Σ when just X is measured must be consistent with the probabilities we would obtain under the assumption that Y was also measured. That is, we must have
Pr(X = a) = ∑_{b∈Γ} Pr((X,Y) = (a,b)).
This is the formula for the so-called reduced (or marginal) probabilistic state of X alone.
This formula makes perfect sense at an intuitive level; something very strange would need to happen for it to be wrong. It would mean that the probabilities for X measurements are influenced simply by whether or not Y is also measured, irrespective of the outcome on Y. If Y happened to be in a distant location, say, another galaxy, this would allow for faster-than-light signaling, which we reject based on our understanding of physics. Another way to understand this comes from an interpretation of probability as reflecting a degree of belief about the state of the system. Since a measurement on Y is taken to simply reveal a preexisting state, a different observer looking at X, unaware of the Y measurement, should not have their probabilities changed.
Given the assumption that only X is measured and Y is not, there may in general still exist uncertainty over the classical state of Y. For this reason, rather than updating our description of the probabilistic state of (X,Y) to ∣ab⟩ for some selection of a∈Σ and b∈Γ, we must update our description so that this uncertainty about Y is properly reflected.
The following conditional probability formula reflects this uncertainty.
Pr(Y = b ∣ X = a) = Pr((X,Y) = (a,b)) / Pr(X = a)
Here, the expression Pr(Y = b ∣ X = a) denotes the probability that Y = b conditioned on (or given that) X = a.
It should be noted that the expression above is only defined if Pr(X=a) is nonzero, for if
Pr(X=a)=0,
then we obtain the indeterminate form 0/0. This is not a problem, though, because if the probability associated with a is zero, then we'll never observe a as an outcome of a measurement of X, so we don't need to be concerned with this possibility.
To express these formulas in terms of probability vectors, consider a probability vector ∣ψ⟩ describing the joint state of (X,Y).
∣ψ⟩ = ∑_{(a,b)∈Σ×Γ} p_{ab} ∣ab⟩
Measuring X alone yields each possible outcome with probabilities
Pr(X = a) = ∑_{b∈Γ} p_{ab}.
Thus, the vector representing the probabilistic state of X alone (i.e., the reduced probabilistic state of X) is given by
∑_{a∈Σ} ( ∑_{c∈Γ} p_{ac} ) ∣a⟩.
Having obtained a particular outcome a∈Σ of the measurement of X, the probabilistic state of Y is updated according to the formula for conditional probabilities, so that it is represented by this probability vector:
∣π_a⟩ = ( ∑_{b∈Γ} p_{ab} ∣b⟩ ) / ( ∑_{c∈Γ} p_{ac} ).
In the event that the measurement of X resulted in the classical state a, we therefore update our description of the probabilistic state of the joint system (X,Y) to ∣a⟩⊗∣πa⟩.
One way to think about this definition of ∣πa⟩ is to see it as a normalization of the vector ∑b∈Γpab∣b⟩, where we divide by the sum of the entries in this vector to obtain a probability vector. This normalization effectively accounts for a conditioning on the event that the measurement of X has resulted in the outcome a.
For a specific example, suppose that the classical state set of X is Σ={0,1}, the classical state set of Y is Γ={1,2,3}, and the probabilistic state of (X,Y) is

∣ψ⟩ = (1/2)∣0,1⟩ + (1/12)∣0,3⟩ + (1/12)∣1,1⟩ + (1/6)∣1,2⟩ + (1/6)∣1,3⟩.
Our goal will be to determine the probabilities of the two possible outcomes (0 and 1), and to calculate what the resulting probabilistic state of Y is for the two outcomes, assuming the system X is measured.
Using the bilinearity of the tensor product, and specifically the fact that it is linear in the second argument, we may rewrite the vector ∣ψ⟩ as follows:

∣ψ⟩ = ∣0⟩ ⊗ ( (1/2)∣1⟩ + (1/12)∣3⟩ ) + ∣1⟩ ⊗ ( (1/12)∣1⟩ + (1/6)∣2⟩ + (1/6)∣3⟩ ).
We have isolated the distinct standard basis vectors for the system being measured, collecting together all of the terms for the second system. A moment's thought reveals that this is always possible, regardless of what vector we started with.
Having reorganized as such, the measurement outcomes become easy to analyze. The probabilities of the two outcomes are given by
Pr(X = 0) = 1/2 + 1/12 = 7/12
Pr(X = 1) = 1/12 + 1/6 + 1/6 = 5/12.
Note that these probabilities sum to one as expected, a useful check on our calculations.
Moreover, the probabilistic state of Y, conditioned on each possible outcome, can also be quickly inferred by normalizing the vectors in parentheses (by dividing by the associated probability just calculated), so that these vectors become probability vectors. That is, conditioned on X being 0, the probabilistic state of Y becomes
( (1/2)∣1⟩ + (1/12)∣3⟩ ) / (7/12) = (6/7)∣1⟩ + (1/7)∣3⟩,
and conditioned on the measurement of X being 1, the probabilistic state of Y becomes
( (1/12)∣1⟩ + (1/6)∣2⟩ + (1/6)∣3⟩ ) / (5/12) = (1/5)∣1⟩ + (2/5)∣2⟩ + (2/5)∣3⟩.
Operations on probabilistic states
To conclude this discussion of classical information for multiple systems, we will consider operations on multiple systems in probabilistic states. Following the same idea as we did for probabilistic states and measurements, we can view multiple systems collectively as forming single, compound systems and look to the previous lesson to see how this works.
Returning to the typical set-up where we have two systems X and Y, let us consider classical operations on the compound system (X,Y). Based on the previous lesson and the discussion above, we conclude that any such operation is represented by a stochastic matrix whose rows and columns are indexed by the Cartesian product Σ×Γ.
For example, suppose that X and Y are bits, and consider an operation with the following description.
If X=1, then perform a NOT operation on Y. Otherwise do nothing.
This is a deterministic operation known as a controlled-NOT operation, where X is the control bit that determines whether or not a NOT operation should be applied to the target bit Y. Here is the matrix representation of this operation:
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  0  1 ]
[ 0  0  1  0 ].
Its action on standard basis states is as follows.
∣00⟩ ↦ ∣00⟩
∣01⟩ ↦ ∣01⟩
∣10⟩ ↦ ∣11⟩
∣11⟩ ↦ ∣10⟩
If we were to exchange the roles of X and Y, taking Y to be the control bit and X to be the target bit, then the matrix representation of the operation would become
[ 1  0  0  0 ]
[ 0  0  0  1 ]
[ 0  0  1  0 ]
[ 0  1  0  0 ]
and its action on standard basis states would be like this:
∣00⟩ ↦ ∣00⟩
∣01⟩ ↦ ∣11⟩
∣10⟩ ↦ ∣10⟩
∣11⟩ ↦ ∣01⟩
Another example is the operation having this description:
Perform one of the following two operations, each with probability 1/2:
Set Y to be equal to X.
Set X to be equal to Y.
The matrix representation of this operation is as follows:

[ 1  1/2  1/2  0 ]
[ 0   0    0   0 ]
[ 0   0    0   0 ]
[ 0  1/2  1/2  1 ]
In these examples, we are simply viewing two systems together as a single system and proceeding as in the previous lesson.
The same thing can be done for any number of systems. For example, imagine that we have three bits, and we increment the three bits modulo 8 — meaning that we think about the three bits as encoding a number between 0 and 7 using binary notation, add 1, and then take the remainder after dividing by 8. We can write this operation like this:

∑_{k=0}^{7} ∣(k+1) mod 8⟩⟨k∣,
assuming we have agreed that a number j∈{0,1,…,7} inside of a ket refers to that number's three-bit binary encoding. A third option is to express this operation as a matrix.
Now suppose that we have multiple systems and we independently perform separate operations on the systems.
For example, taking our usual set-up of two systems X and Y having classical state sets Σ and Γ, respectively, let us suppose that we perform one operation on X and, completely independently, another operation on Y. As we know from the previous lesson, these operations are represented by stochastic matrices — and to be precise, let us say that the operation on X is represented by the matrix M and the operation on Y is represented by the matrix N. Thus, the rows and columns of M have indices that are placed in correspondence with the elements of Σ and, likewise, the rows and columns of N correspond to the elements of Γ.
A natural question to ask is this: if we view X and Y together as a single, compound system (X,Y), what is the matrix that represents the combined action of the two operations on this compound system? To answer this question, we must first introduce the tensor product of matrices — which is similar to the tensor product of vectors and is defined analogously.
Tensor products of matrices
The tensor product M⊗N of the matrices
M = ∑_{a,b∈Σ} α_{ab} ∣a⟩⟨b∣
and
N = ∑_{c,d∈Γ} β_{cd} ∣c⟩⟨d∣
is the matrix
M ⊗ N = ∑_{a,b∈Σ} ∑_{c,d∈Γ} α_{ab} β_{cd} ∣ac⟩⟨bd∣.
Equivalently, M ⊗ N is defined by the equation
⟨ac∣M⊗N∣bd⟩=⟨a∣M∣b⟩⟨c∣N∣d⟩
being true for every selection of a,b∈Σ and c,d∈Γ.
An alternative, but equivalent, way to describe M⊗N is that it is the unique matrix that satisfies the equation
(M⊗N)(∣ϕ⟩⊗∣ψ⟩)=(M∣ϕ⟩)⊗(N∣ψ⟩)
for every possible choice of vectors ∣ϕ⟩ and ∣ψ⟩. Here we are assuming that the indices of ∣ϕ⟩ correspond to the elements of Σ and the indices of ∣ψ⟩ correspond to Γ.
Following the convention described previously for ordering the elements of Cartesian products, we can also write the tensor product of two matrices explicitly as follows:
Tensor products of three or more matrices are defined in an analogous way. If M1,…,Mn are matrices whose indices correspond to classical state sets Σ1,…,Σn, then the tensor product M1⊗⋯⊗Mn is defined by the condition that

⟨a1⋯an∣ M1⊗⋯⊗Mn ∣b1⋯bn⟩ = ⟨a1∣M1∣b1⟩ ⋯ ⟨an∣Mn∣bn⟩

for every choice of classical states a1,b1∈Σ1,…,an,bn∈Σn.
Alternatively, we could also define the tensor product of three or more matrices recursively, in terms of tensor products of two matrices, similar to what we observed for vectors.
The tensor product of matrices is sometimes said to be multiplicative because the equation
(M1⊗⋯⊗Mn)(N1⊗⋯⊗Nn)=(M1N1)⊗⋯⊗(MnNn)
is always true, for any choice of matrices M1,…,Mn and N1,…,Nn, provided that the products M1N1,…,MnNn make sense.
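This multiplicative property is easy to spot-check numerically; the following NumPy sketch (our own code, with randomly chosen matrices) verifies it for the case n = 2:

```python
import numpy as np

rng = np.random.default_rng(7)
M1, N1 = rng.random((3, 3)), rng.random((3, 3))
M2, N2 = rng.random((2, 2)), rng.random((2, 2))

# (M1 ⊗ M2)(N1 ⊗ N2) should equal (M1 N1) ⊗ (M2 N2).
lhs = np.kron(M1, M2) @ np.kron(N1, N2)
rhs = np.kron(M1 @ N1, M2 @ N2)
print(np.allclose(lhs, rhs))   # True
```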
Independent operations (continued)
To summarize the above discussion, we found that if M is a probabilistic operation on X, N is a probabilistic operation on Y, and the two operations are performed independently, then the resulting operation on the compound system (X,Y) is the tensor product M⊗N.
What we see, both here and for probabilistic states, is that tensor products represent independence: if we have two systems X and Y that are independently in the probabilistic states ∣ϕ⟩ and ∣π⟩, then the compound system (X,Y) is in the probabilistic state ∣ϕ⟩⊗∣π⟩; and if we apply probabilistic operations M and N to the two systems independently, then the resulting action on the compound system (X,Y) is described by the operation M⊗N.
Let us take a look at an example, which recalls a probabilistic operation on a single bit from the previous lesson: if the classical state of the bit is 0, it is left alone; and if the classical state of the bit is 1, it is flipped to 0 with probability 1/2. As we observed, this operation is represented by the matrix
[ 1  1/2 ]
[ 0  1/2 ].
If this operation is performed on a bit X, and a NOT operation is (independently) performed on a second bit Y, then the joint operation on the compound system (X,Y) has the matrix representation

[ 0  1   0   1/2 ]
[ 1  0  1/2   0  ]
[ 0  0   0   1/2 ]
[ 0  0  1/2   0  ].
By inspection, we see that this is a stochastic matrix.
This will always be the case: the tensor product of two or more stochastic matrices is always stochastic.
A common situation that we encounter is one in which one operation is performed on one system and nothing is done to another. In such a case, exactly the same prescription is followed, noting that doing nothing is represented by the identity matrix. For example, resetting the bit X to the 0 state and doing nothing to Y yields the probabilistic (and in fact deterministic) operation on (X,Y) represented by the matrix
[ 1  1 ]     [ 1  0 ]     [ 1  0  1  0 ]
[ 0  0 ]  ⊗  [ 0  1 ]  =  [ 0  1  0  1 ]
                          [ 0  0  0  0 ]
                          [ 0  0  0  0 ].
Quantum information
We are now prepared to move on to quantum information in the setting of multiple systems. Much like in the previous lesson on single systems, the mathematical description of quantum information for multiple systems is quite similar to the probabilistic case and makes use of similar concepts and techniques.
Quantum states
Multiple systems can be viewed collectively as single, compound systems. We have already observed this in the probabilistic setting, and the quantum setting is analogous.
That is, quantum states of multiple systems are represented by column vectors having complex number entries and Euclidean norm equal to 1 — just like quantum states of single systems. In the multiple system case, the indices of these vectors are placed in correspondence with the Cartesian product of the classical state sets associated with each of the individual systems (because that is the classical state set of the compound system).
For instance, if X and Y are qubits, then the classical state set of the pair of qubits (X,Y), viewed collectively as a single system, is the Cartesian product {0,1}×{0,1}. By representing pairs of binary values as binary strings of length two, we associate this Cartesian product set with the set {00,01,10,11}. The following vector is therefore an example of a quantum state vector of the pair (X,Y):

(1/√2)∣00⟩ − (1/√6)∣01⟩ + (i/√6)∣10⟩ + (1/√6)∣11⟩.

There are variations on how quantum state vectors of multiple systems are expressed, and we can choose whichever variation suits our preferences. Here are some examples, all referring to the quantum state vector just given.
We may use the fact that ∣ab⟩=∣a⟩∣b⟩ (for any classical states a and b) to instead write
(1/√2)∣0⟩∣0⟩ − (1/√6)∣0⟩∣1⟩ + (i/√6)∣1⟩∣0⟩ + (1/√6)∣1⟩∣1⟩.
We may choose to write the tensor product symbol explicitly like this:
(1/√2)∣0⟩⊗∣0⟩ − (1/√6)∣0⟩⊗∣1⟩ + (i/√6)∣1⟩⊗∣0⟩ + (1/√6)∣1⟩⊗∣1⟩.
We may subscript the kets to indicate how they correspond to the systems being considered, like this:

(1/√2)∣0⟩_X ∣0⟩_Y − (1/√6)∣0⟩_X ∣1⟩_Y + (i/√6)∣1⟩_X ∣0⟩_Y + (1/√6)∣1⟩_X ∣1⟩_Y.
Of course, we may also write quantum state vectors explicitly as column vectors:
[  1/√2 ]
[ −1/√6 ]
[  i/√6 ]
[  1/√6 ].
Depending upon the context in which it appears, one of these variations may be preferred — but they are all equivalent in the sense that they describe the same vector.
Tensor products of quantum state vectors
Similar to what we have for probability vectors, tensor products of quantum state vectors are also quantum state vectors — and again they represent independence among systems.
In greater detail, and beginning with the case of two systems, suppose that ∣ϕ⟩ is a quantum state vector of a system X and ∣ψ⟩ is a quantum state vector of a system Y. The tensor product ∣ϕ⟩⊗∣ψ⟩, which may alternatively be written as ∣ϕ⟩∣ψ⟩ or as ∣ϕ⊗ψ⟩, is then a quantum state vector of the joint system (X,Y). We refer to a state of this form as being a product state.
Intuitively speaking, when a pair of systems (X,Y) is in a product state ∣ϕ⟩⊗∣ψ⟩, we may interpret this as meaning that X is in the quantum state ∣ϕ⟩, Y is in the quantum state ∣ψ⟩, and the states of the two systems have nothing to do with one another.
The fact that the tensor product vector ∣ϕ⟩⊗∣ψ⟩ is indeed a quantum state vector is consistent with the Euclidean norm being multiplicative with respect to tensor products:

∥∣ϕ⟩⊗∣ψ⟩∥ = ∥∣ϕ⟩∥ ∥∣ψ⟩∥.
Thus, because ∣ϕ⟩ and ∣ψ⟩ are quantum state vectors, we have ∥∣ϕ⟩∥=1 and ∥∣ψ⟩∥=1, and therefore ∥∣ϕ⟩⊗∣ψ⟩∥=1, so ∣ϕ⟩⊗∣ψ⟩ is also a quantum state vector.
This discussion may be generalized to more than two systems. If ∣ψ1⟩,…,∣ψn⟩ are quantum state vectors of systems X1,…,Xn, then ∣ψ1⟩⊗⋯⊗∣ψn⟩ is a quantum state vector representing a product state of the joint system (X1,…,Xn). Again, we know that this is a quantum state vector because
∥∣ψ1⟩⊗⋯⊗∣ψn⟩∥ = ∥∣ψ1⟩∥ ⋯ ∥∣ψn⟩∥ = 1^n = 1.
Entangled states
Not all quantum state vectors of multiple systems are product states. For example, the quantum state vector
(1/√2)∣00⟩ + (1/√2)∣11⟩        (6)
of two qubits is not a product state. To see this, we may follow exactly the same argument that we used to prove that the probabilistic state represented by the vector (5) is not a product state.
That is, if (6) was a product state, there would exist quantum state vectors ∣ϕ⟩ and ∣ψ⟩ for which
∣ϕ⟩ ⊗ ∣ψ⟩ = (1/√2)∣00⟩ + (1/√2)∣11⟩.
But then it would necessarily be the case that
⟨0∣ϕ⟩⟨1∣ψ⟩=⟨01∣ϕ⊗ψ⟩=0
implying that ⟨0∣ϕ⟩=0 or ⟨1∣ψ⟩=0 (or both). That contradicts the fact that
⟨0∣ϕ⟩⟨0∣ψ⟩ = ⟨00∣ϕ⊗ψ⟩ = 1/√2
and
⟨1∣ϕ⟩⟨1∣ψ⟩ = ⟨11∣ϕ⊗ψ⟩ = 1/√2
are both nonzero.
Notice that the specific value 1/√2 is not important to this argument — what is important is that this value is nonzero. Thus, for instance, the quantum state
(3/5)∣00⟩ + (4/5)∣11⟩
is also not a product state, by the same argument.
It follows that the quantum state vector (6) represents a correlation between two systems, and specifically we say that the systems are entangled.
Entanglement is a quintessential feature of quantum information that will be discussed in much greater detail in later lessons. Entanglement can be complicated, particularly for the sorts of noisy quantum states that can be described in the general, density matrix formulation of quantum information that was mentioned in Lesson 1 — but for quantum state vectors in the simplified formulation that we are focusing on in this unit, entanglement is equivalent to correlation. That is, any quantum state vector that is not a product vector represents an entangled state.
We will now take a look at some important examples of multiple-qubit quantum states, beginning with the Bell states. These are the following four two-qubit states:

∣ϕ+⟩ = (1/√2)∣00⟩ + (1/√2)∣11⟩
∣ϕ−⟩ = (1/√2)∣00⟩ − (1/√2)∣11⟩
∣ψ+⟩ = (1/√2)∣01⟩ + (1/√2)∣10⟩
∣ψ−⟩ = (1/√2)∣01⟩ − (1/√2)∣10⟩
Notice that the same argument that establishes that ∣ϕ+⟩ is not a product state reveals that none of the other Bell states is a product state either — all four of the Bell states represent entanglement between two qubits.
The collection of all four Bell states
{∣ϕ+⟩,∣ϕ−⟩,∣ψ+⟩,∣ψ−⟩}
is known as the Bell basis; any quantum state vector of two qubits, or indeed any complex vector at all having entries corresponding to the four classical states of two bits, can be expressed as a linear combination of the four Bell states. For example,
∣00⟩ = (1/√2)∣ϕ+⟩ + (1/√2)∣ϕ−⟩.
GHZ and W states
Next we will consider two interesting examples of states of three qubits.
The first example, which represents a quantum state of three qubits (X,Y,Z), is the GHZ state (so named in honor of Daniel Greenberger, Michael Horne, and Anton Zeilinger, who first studied some of its properties):
(1/√2)∣000⟩ + (1/√2)∣111⟩.
The second example is the so-called W state:
(1/√3)∣001⟩ + (1/√3)∣010⟩ + (1/√3)∣100⟩.
Neither of these states is a product state, meaning that they cannot be written as a tensor product of three qubit quantum state vectors.
We will examine both of these two states further when we discuss partial measurements of quantum states of multiple systems.
Additional examples
The examples of quantum states of multiple systems we have seen so far are states of two or three qubits, but we can also have quantum states of multiple systems having different classical state sets.
For example, here is a quantum state of three systems, X,Y, and Z, where the classical state set of X is the binary alphabet (so X is a qubit) and the classical state set of Y and Z is {♣,♢,♡,♠}:
(1/2)∣0⟩∣♡⟩∣♡⟩ + (1/2)∣1⟩∣♠⟩∣♡⟩ − (1/√2)∣0⟩∣♡⟩∣♢⟩.
And, here is an example of a quantum state of three systems (X,Y,Z), where X,Y, and Z all share the same classical state set {0,1,2}:
( ∣012⟩ − ∣021⟩ + ∣120⟩ − ∣102⟩ + ∣201⟩ − ∣210⟩ ) / √6.
Systems having the classical state set {0,1,2} are often called trits or, assuming we consider the possibility that they are in quantum states, qutrits. The term qudit refers to a system having classical state set {0,…,d−1} for an arbitrary choice of d.
Measurements of quantum states
Standard basis measurements of quantum states of single systems were discussed in the previous lesson: if a system having classical state set Σ is in a quantum state represented by the vector ∣ψ⟩, and that system is measured (with respect to a standard basis measurement), then each classical state a∈Σ appears with probability ∣⟨a∣ψ⟩∣2.
This tells us what happens when we have a quantum state of multiple systems and choose to measure the entire compound system (which is equivalent to measuring all of the systems). To state this precisely, let us suppose that X1,…,Xn are systems having classical state sets Σ1,…,Σn, respectively. We may then view (X1,…,Xn) collectively as a single system whose classical state set is the Cartesian product Σ1×⋯×Σn. If a quantum state of this system is represented by the quantum state vector ∣ψ⟩, and all of the systems are measured, then each possible outcome (a1,…,an)∈Σ1×⋯×Σn appears with probability ∣⟨a1⋯an∣ψ⟩∣2.
For example, if systems X and Y are jointly in the quantum state
(3/5)∣0⟩∣♡⟩ − (4i/5)∣1⟩∣♠⟩,
then measuring both systems with respect to a standard basis measurement yields the outcome (0,♡) with probability 9/25 and the outcome (1,♠) with probability 16/25.
Partial measurements for two systems
Now let us consider the situation in which we have multiple systems in some quantum state, and we measure a proper subset of the systems. As before, we will begin with two systems X and Y having classical state sets Σ and Γ, respectively.
In general, a quantum state vector of (X,Y) takes the form
∣ψ⟩ = ∑_{(a,b)∈Σ×Γ} α_{ab} ∣ab⟩,
where {αab:(a,b)∈Σ×Γ} is a collection of complex numbers satisfying
∑_{(a,b)∈Σ×Γ} ∣α_{ab}∣² = 1
(which is equivalent to ∣ψ⟩ being a unit vector).
We already know, from the discussion above, that if both X and Y were measured, then each possible outcome (a,b)∈Σ×Γ would appear with probability
∣⟨ab∣ψ⟩∣² = ∣α_{ab}∣².
Supposing that just the first system X is measured, the probability for each outcome a∈Σ to appear must therefore be equal to
∑_{b∈Γ} ∣⟨ab∣ψ⟩∣² = ∑_{b∈Γ} ∣α_{ab}∣².
This is consistent with what we already saw in the probabilistic setting, and is once again consistent with our understanding of physics. That is, the probability for each particular outcome to appear when X is measured cannot possibly depend on whether or not Y was also measured, as that would otherwise allow for faster-than-light communication.
Having obtained a particular outcome a∈Σ of this measurement of X, we expect that the quantum state of X changes so that it is equal to ∣a⟩, like we had for single systems. But what happens to the quantum state of Y?
To answer this question, let us describe the joint quantum state of (X,Y) under the assumption that X was measured (with respect to a standard basis measurement) and the result was the classical state a.
First we express the vector ∣ψ⟩ as
∣ψ⟩ = ∑_{a∈Σ} ∣a⟩ ⊗ ∣ϕ_a⟩,
where
∣ϕ_a⟩ = ∑_{b∈Γ} α_{ab} ∣b⟩
for each a∈Σ. Notice that the probability that the standard basis measurement of X results in each outcome a may be written as follows:
∑_{b∈Γ} ∣α_{ab}∣² = ∥∣ϕ_a⟩∥².
Now, as a result of the standard basis measurement of X resulting in the outcome a, we have that the quantum state of the pair (X,Y) together becomes
∣a⟩ ⊗ ( ∣ϕ_a⟩ / ∥∣ϕ_a⟩∥ ).
That is, the state "collapses" like in the single-system case, but only as far as is required for the state to be consistent with the measurement of X having produced the outcome a.
Informally speaking, ∣a⟩⊗∣ϕ_a⟩ represents the component of ∣ψ⟩ that is consistent with the measurement of X resulting in the outcome a. We normalize this vector — by dividing it by its Euclidean norm, which is equal to ∥∣ϕ_a⟩∥ — to yield a valid quantum state vector having Euclidean norm equal to 1. This normalization step is analogous to what we did in the probabilistic setting when we divided vectors by the sum of their entries to obtain a probability vector.
As an example, let us consider the state of two qubits (X,Y) from the beginning of the section:
∣ψ⟩ = (1/√2)∣00⟩ − (1/√6)∣01⟩ + (i/√6)∣10⟩ + (1/√6)∣11⟩.
To understand what happens when the first system X is measured, we begin by writing
∣ψ⟩ = ∣0⟩ ⊗ ( (1/√2)∣0⟩ − (1/√6)∣1⟩ ) + ∣1⟩ ⊗ ( (i/√6)∣0⟩ + (1/√6)∣1⟩ ).
We now see, based on the description above, that the probability for the measurement to result in the outcome 0 is
∥(1/√2)∣0⟩ − (1/√6)∣1⟩∥² = 1/2 + 1/6 = 2/3,
in which case the state of (X,Y) becomes
∣0⟩ ⊗ ( (1/√2)∣0⟩ − (1/√6)∣1⟩ ) / √(2/3) = ∣0⟩ ⊗ ( (√3/2)∣0⟩ − (1/2)∣1⟩ );
and the probability for the measurement to result in the outcome 1 is
∥(i/√6)∣0⟩ + (1/√6)∣1⟩∥² = 1/6 + 1/6 = 1/3,
in which case the state of (X,Y) becomes
∣1⟩ ⊗ ( (i/√6)∣0⟩ + (1/√6)∣1⟩ ) / √(1/3) = ∣1⟩ ⊗ ( (i/√2)∣0⟩ + (1/√2)∣1⟩ ).
The same technique, used in a symmetric way, describes what happens if the second system Y is measured rather than the first. We rewrite the vector ∣ψ⟩ as
∣ψ⟩ = ( (1/√2)∣0⟩ + (i/√6)∣1⟩ ) ⊗ ∣0⟩ + ( −(1/√6)∣0⟩ + (1/√6)∣1⟩ ) ⊗ ∣1⟩.
The probability that the measurement of Y yields the outcome 0 is
∥(1/√2)∣0⟩ + (i/√6)∣1⟩∥² = 1/2 + 1/6 = 2/3,
in which case the state of (X,Y) becomes
( (1/√2)∣0⟩ + (i/√6)∣1⟩ ) / √(2/3) ⊗ ∣0⟩ = ( (√3/2)∣0⟩ + (i/2)∣1⟩ ) ⊗ ∣0⟩;
and the probability that the measurement outcome is 1 is
∥−(1/√6)∣0⟩ + (1/√6)∣1⟩∥² = 1/6 + 1/6 = 1/3,
in which case the state of (X,Y) becomes
( −(1/√6)∣0⟩ + (1/√6)∣1⟩ ) / √(1/3) ⊗ ∣1⟩ = ( −(1/√2)∣0⟩ + (1/√2)∣1⟩ ) ⊗ ∣1⟩.
Remark on reduced quantum states
This example shows a limitation of the simplified description of quantum information: it offers us no way to describe the reduced (or marginal) quantum state of just one of two systems (or a proper subset of any number of systems) like we did in the probabilistic case.
Specifically, we said that for a probabilistic state of two systems (X,Y) described by a probability vector
∣ψ⟩ = ∑_{(a,b)∈Σ×Γ} p_{ab} ∣ab⟩,
the reduced (or marginal) state of X alone is described by the probability vector
∑_{(a,b)∈Σ×Γ} p_{ab} ∣a⟩.
For quantum state vectors, there is no analog — for a quantum state vector
∣ϕ⟩ = ∑_{(a,b)∈Σ×Γ} α_{ab} ∣ab⟩,
the vector
∑_{(a,b)∈Σ×Γ} α_{ab} ∣a⟩
is not a quantum state vector in general, and does not properly represent the concept of a reduced or marginal state. It could be, in fact, that this vector is the zero vector.
So, what we must do instead is turn to the general description of quantum information. As we will see in Unit 3, the general description of quantum information provides us with a meaningful way to define reduced quantum states that is analogous to the probabilistic setting.
Partial measurements for three or more systems
Partial measurements for three or more systems, where some proper subset of the systems are measured, can be reduced to the case of two systems by dividing the systems into two collections: those that are measured and those that are not.
Here is a specific example that illustrates how this can be done. It demonstrates how subscripting kets by the names of the systems they represent can be useful — in this case because it gives us a simple way to describe permutations of the systems.
For the example, we have a quantum state of 5 systems X1,…,X5, all sharing the same classical state set {♣,♢,♡,♠}:
We will consider the situation in which the first and third systems are measured, and the remaining systems are left alone. Conceptually speaking, there is no fundamental difference between this situation and one in which one of two systems is measured — but unfortunately, because the measured systems are interspersed with the unmeasured systems, we face a hurdle in writing down the expressions needed to perform these calculations. A way to proceed is, as mentioned above, to subscript the kets to indicate which systems they refer to. This gives us the freedom to change their ordering, as we will now describe.
First, the quantum state vector above can alternatively be written as
Nothing here has changed except that each ket now has a subscript indicating which system it corresponds to. Here we have used the subscripts 1,…,5, but the names of the systems themselves could also be used (in a situation where we have system names such as X,Y, and Z, for instance).
We can then re-order the kets and collect terms as follows:
For other measurement outcomes the state can be determined in a similar way.
Now, it must be understood that the tensor product is not commutative: if ∣ϕ⟩ and ∣π⟩ are vectors, then, in general, ∣ϕ⟩⊗∣π⟩ is different from ∣π⟩⊗∣ϕ⟩, and likewise for tensor products of three or more vectors. For instance, ∣♡⟩∣♣⟩∣♢⟩∣♠⟩∣♠⟩ is a different vector than ∣♡⟩∣♢⟩∣♣⟩∣♠⟩∣♠⟩. The technique just described of re-ordering kets should not be interpreted as suggesting otherwise. Rather, for the sake of performing calculations and expressing the results, we are simply making a decision that it is more convenient to collect the systems X1,…,X5 together as (X1,X3,X2,X4,X5) rather than (X1,X2,X3,X4,X5). The subscripts on the kets serve to keep this all straight.
Analogously, in the closely related but simpler setting of Cartesian products and ordered pairs, if a and b are different classical states, then (a,b) and (b,a) are also different. Nevertheless, saying that the classical state of two bits (X,Y) is (1,0) is equivalent to saying that the classical state of (Y,X) is (0,1); when every system has its own unique name, it doesn't really matter what order we choose to list them, so long as the ordering is made clear.
Finally, here are two examples involving the GHZ and W states, as promised earlier. First let us consider the GHZ state
(1/√2)∣000⟩ + (1/√2)∣111⟩.
If just the first system is measured, we obtain the outcome 0 with probability 1/2, in which case the state of the three qubits becomes ∣000⟩; and we also obtain the outcome 1 with probability 1/2, in which case the state of the three qubits becomes ∣111⟩.
Next let us consider the W state and a measurement of its first qubit. Isolating the first qubit, the W state can be written like this:

∣0⟩ ⊗ ( (1/√3)∣01⟩ + (1/√3)∣10⟩ ) + ∣1⟩ ⊗ (1/√3)∣00⟩.

The probability that the measurement outcome is 0 is therefore 2/3, in which case the state of the three qubits becomes ∣0⟩ ⊗ ( (1/√2)∣01⟩ + (1/√2)∣10⟩ ).
The probability that the measurement outcome is 1 is 1/3, in which case the state of the three qubits becomes ∣100⟩.
Unitary operations
In previous sections of this lesson, we used the Cartesian product to treat individual systems as a larger, single system. Following the same line of thought, we can represent operations on multiple systems as unitary matrices acting on the state vector of this larger system.
In principle, any unitary matrix whose rows and columns correspond to the classical states of whatever system we're thinking about represents a valid quantum operation — and this holds true for compound systems whose classical state sets happen to be Cartesian products of the classical state sets of the individual systems.
Focusing on two systems, if X is a system having classical state set Σ and Y is a system having classical state set Γ, then the classical state set of the joint system (X,Y) is Σ×Γ — and therefore the set of operations that can be performed on this joint system are represented by unitary matrices whose rows and columns are placed in correspondence with the set Σ×Γ. The ordering of the rows and columns of these matrices is the same as the ordering used for quantum state vectors of the system (X,Y).
For example, let us suppose that Σ={1,2,3} and Γ={0,1}, and recall that the standard convention for ordering the elements of the Cartesian product {1,2,3}×{0,1} is (1,0),(1,1),(2,0),(2,1),(3,0),(3,1). Here is an example of a unitary matrix representing an operation on (X,Y):
This particular unitary operation isn't important; it's just an example. To check that U is unitary, it suffices to compute U†U and to verify that the result is equal to the identity matrix.
The action of U on the standard basis vector ∣11⟩, for instance, is
U∣11⟩ = (1/2)∣10⟩ + (i/2)∣11⟩ − (1/2)∣20⟩ − (i/2)∣30⟩,
which we can see by examining the second column of U, considering our ordering of the set {1,2,3}×{0,1}.
As with any matrix, it's possible to express U using the Dirac notation, with one term for each of the 20 nonzero entries of U. However, if we did write down all of these terms rather than writing a 6×6 matrix, we might miss certain patterns that are evident from the matrix expression. Simply put, the Dirac notation is not always the best choice for how to represent matrices.
Unitary operations on three or more systems work in a similar way, with the unitary matrices having rows and columns corresponding to the Cartesian product of the classical state sets of the systems.
We have already seen an example in this lesson: the three-qubit operation
∑_{k=0}^{7} ∣(k+1) mod 8⟩⟨k∣
from before, where ∣j⟩ refers to the three-bit binary encoding of the number j, is unitary. Operations that are both deterministic and unitary are called reversible operations. The conjugate transpose of this matrix can be written like this:
∑_{k=0}^{7} ∣k⟩⟨(k+1) mod 8∣ = ∑_{k=0}^{7} ∣(k−1) mod 8⟩⟨k∣.
This matrix represents the reverse, or in mathematical terms the inverse, of the original operation — which is what we expect from the conjugate transpose of a unitary matrix.
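As a small sketch (our own NumPy code, not part of the lesson), the increment-modulo-8 operation can be built explicitly as an 8×8 permutation matrix, and one can confirm both that it is unitary and that its conjugate transpose is the decrement-modulo-8 operation:

```python
import numpy as np

# Build U = sum over k of |(k+1) mod 8><k| as an 8x8 permutation matrix.
U = np.zeros((8, 8))
for k in range(8):
    U[(k + 1) % 8, k] = 1

# U is unitary: its conjugate transpose times U gives the identity.
print(np.allclose(U.conj().T @ U, np.eye(8)))   # True

# The conjugate transpose equals sum over k of |(k-1) mod 8><k|, i.e. decrement.
D = np.zeros((8, 8))
for k in range(8):
    D[(k - 1) % 8, k] = 1
print(np.allclose(U.conj().T, D))               # True
```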
We will see other examples of unitary operations on multiple systems as the lesson continues.
Unitary operations performed independently on individual systems
When unitary operations are performed independently on a collection of individual systems, the combined action of these independent operations is described by the tensor product of the unitary matrices that represent them. That is, if X1,…,Xn are quantum systems, U1,…,Un are unitary matrices representing operations on these systems, and the operations are performed independently on the systems, the combined action on (X1,…,Xn) is represented by the matrix U1⊗⋯⊗Un. Once again, we find that the probabilistic and quantum settings are analogous in this regard.
One would naturally expect, from reading the previous paragraph, that the tensor product of any collection of unitary matrices is unitary. Indeed this is true, and we can verify it as follows.
Notice first that the conjugate transpose operation satisfies
(M1⊗⋯⊗Mn)†=M1†⊗⋯⊗Mn†
for any collection of matrices M1,…,Mn. This can be checked by going back to the definition of the tensor product and of the conjugate transpose, and checking that each entry of the two sides of the equation are in agreement. This means that
(U1⊗⋯⊗Un)†(U1⊗⋯⊗Un)=(U1†⊗⋯⊗Un†)(U1⊗⋯⊗Un).
Because the tensor product of matrices is multiplicative, we find that

(U1†⊗⋯⊗Un†)(U1⊗⋯⊗Un) = (U1†U1)⊗⋯⊗(Un†Un) = I1⊗⋯⊗In.
Here we have written I1,…,In to refer to the matrices representing the identity operation on the systems X1,…,Xn — which is to say that these are identity matrices whose sizes agree with the number of classical states of X1,…,Xn.
Finally, the tensor product I1⊗⋯⊗In is equal to the identity matrix, where we have a number of rows and columns that agrees with the product of the number of rows and columns of the matrices I1,…,In. We may view this larger identity matrix as representing the identity operation on the joint system (X1,…,Xn).
In summary, we have the following sequence of equalities:

(U1⊗⋯⊗Un)†(U1⊗⋯⊗Un) = (U1†⊗⋯⊗Un†)(U1⊗⋯⊗Un) = (U1†U1)⊗⋯⊗(Un†Un) = I1⊗⋯⊗In = I.

We conclude that U1⊗⋯⊗Un is unitary.
An important situation that often arises is one in which a unitary operation is applied to just one system — or a proper subset of systems — within a larger joint system. For instance, suppose that X and Y are systems that we can view together as forming a single, compound system (X,Y), and we perform an operation just on the system X. To be precise, let us suppose that U is a unitary matrix representing an operation on X, so that its rows and columns have been placed in correspondence with the classical states of X.
To say that we perform the operation represented by U just on the system X implies that we do nothing to Y, meaning that we independently perform U on X and the identity operation on Y. That is, "doing nothing" to Y is equivalent to performing the identity operation on Y, which is represented by the identity matrix IY. (Here, by the way, the subscript Y tells us that IY refers to the identity matrix having a number of rows and columns in agreement with the classical state set of Y.) The operation on (X,Y) that is obtained when we perform U on X and do nothing to Y is therefore represented by the unitary matrix
U⊗IY.
For example, if X and Y are qubits, performing a Hadamard operation on X (and doing nothing to Y) is equivalent to performing the operation H ⊗ I on the pair (X,Y), which is represented by the matrix

        [ 1  0  1  0 ]
(1/√2)  [ 0  1  0  1 ]
        [ 1  0 −1  0 ]
        [ 0  1  0 −1 ].
Along similar lines, we may consider that an operation represented by a unitary matrix U is applied to Y and nothing is done to X, in which case the resulting operation on (X,Y) is represented by the unitary matrix
IX⊗U.
For example, if we again consider the situation in which both X and Y are qubits and U is a Hadamard operation, the resulting operation on (X,Y) is I ⊗ H, which is represented by the matrix

        [ 1  1  0  0 ]
(1/√2)  [ 1 −1  0  0 ]
        [ 0  0  1  1 ]
        [ 0  0  1 −1 ].
Not every unitary operation on a collection of systems X1,…,Xn can be written as a tensor product of unitary operations U1⊗⋯⊗Un, just as not every quantum state vector of these systems is a product state. For example, neither the swap operation nor the controlled-NOT operation on two qubits, which are described below, can be expressed as a tensor product of unitary operations.
The swap operation
To conclude the lesson, let's take a look at two classes of examples of unitary operations on multiple systems, beginning with the swap operation.
Suppose that X and Y are systems that share the same classical state set Σ. The swap operation on the pair (X,Y) is the operation that exchanges the contents of the two systems, but otherwise leaves the systems alone (so that X remains on the left and Y remains on the right).
We will denote this operation as SWAP. It operates like this for every choice of classical states a,b∈Σ:
SWAP∣a⟩∣b⟩=∣b⟩∣a⟩.
One way to write the matrix associated with this operation using the Dirac notation is as follows:
SWAP = ∑_{c,d∈Σ} ∣c⟩⟨d∣ ⊗ ∣d⟩⟨c∣.
It may not be immediately clear that this matrix represents SWAP, but we can check that it satisfies the condition SWAP∣a⟩∣b⟩ = ∣b⟩∣a⟩ for every choice of classical states a,b∈Σ.
As a simple example, when X and Y are qubits, we find that
SWAP = [ 1  0  0  0 ]
       [ 0  0  1  0 ]
       [ 0  1  0  0 ]
       [ 0  0  0  1 ].
Controlled-unitary operations
Now let us suppose that Q is a qubit and R is an arbitrary system, having whatever classical state set we wish.
For every unitary operation U acting on the system R, a controlled-U operation is a unitary operation on the pair (Q,R) defined as follows:
CU=∣0⟩⟨0∣⊗IR+∣1⟩⟨1∣⊗U.
For example, if R is also a qubit and we think about the Pauli X operation on R, then a controlled-X operation is given by
CX = ∣0⟩⟨0∣ ⊗ I_R + ∣1⟩⟨1∣ ⊗ X =
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  0  1 ]
[ 0  0  1  0 ].
We already encountered this operation in the context of classical information and probabilistic operations earlier in the lesson.
If instead we consider the Pauli Z operation on R in place of the X operation, we obtain this operation:
CZ = ∣0⟩⟨0∣ ⊗ I_R + ∣1⟩⟨1∣ ⊗ Z =
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
[ 0  0  0 −1 ].
If instead we take R to be two qubits, and we take U to be the swap operation between these two qubits, we obtain this operation:

CSWAP = ∣0⟩⟨0∣ ⊗ I ⊗ I + ∣1⟩⟨1∣ ⊗ SWAP =
[ 1  0  0  0  0  0  0  0 ]
[ 0  1  0  0  0  0  0  0 ]
[ 0  0  1  0  0  0  0  0 ]
[ 0  0  0  1  0  0  0  0 ]
[ 0  0  0  0  1  0  0  0 ]
[ 0  0  0  0  0  0  1  0 ]
[ 0  0  0  0  0  1  0  0 ]
[ 0  0  0  0  0  0  0  1 ].
This operation is also known as a Fredkin operation (or, more commonly, a Fredkin gate), named for Edward Fredkin. Its action on standard basis states can be described as follows:
CSWAP∣0bc⟩ = ∣0bc⟩
CSWAP∣1bc⟩ = ∣1cb⟩
Finally, the controlled-controlled-NOT operation, which we may denote as CCX, is called a Toffoli operation (or Toffoli gate), named for Tommaso Toffoli. Its matrix representation looks like this:

[ 1  0  0  0  0  0  0  0 ]
[ 0  1  0  0  0  0  0  0 ]
[ 0  0  1  0  0  0  0  0 ]
[ 0  0  0  1  0  0  0  0 ]
[ 0  0  0  0  1  0  0  0 ]
[ 0  0  0  0  0  1  0  0 ]
[ 0  0  0  0  0  0  0  1 ]
[ 0  0  0  0  0  0  1  0 ]
We may alternatively express it using the Dirac notation as follows:
CCX=(∣00⟩⟨00∣+∣01⟩⟨01∣+∣10⟩⟨10∣)⊗I+∣11⟩⟨11∣⊗X.
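To connect this expression with the matrix form, the following sketch (our own NumPy code) assembles CCX from single-qubit projectors and checks that it swaps ∣110⟩ and ∣111⟩ while leaving the other standard basis states fixed:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
P0 = np.array([[1, 0], [0, 0]])   # |0><0|
P1 = np.array([[0, 0], [0, 1]])   # |1><1|

# CCX = (|00><00| + |01><01| + |10><10|) ⊗ I + |11><11| ⊗ X
proj_not_11 = np.kron(P0, P0) + np.kron(P0, P1) + np.kron(P1, P0)
proj_11 = np.kron(P1, P1)
CCX = np.kron(proj_not_11, I2) + np.kron(proj_11, X)

# Expected matrix: the identity with its last two rows swapped (|110> <-> |111>).
expected = np.eye(8)
expected[[6, 7]] = expected[[7, 6]]
print(np.allclose(CCX, expected))   # True
```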
Qiskit examples
In the previous lesson, we learned about Qiskit's Statevector and Operator classes, and used them to simulate quantum systems. In this section, we'll use them to explore the behavior of multiple systems. We'll start by importing these classes, as well as the square root function from NumPy.
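A minimal import cell along these lines suffices (a sketch; the lesson's own cell may differ in detail):

```python
from qiskit.quantum_info import Statevector, Operator
from numpy import sqrt
```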
No output produced
Tensor products
The Statevector class has a tensor method which returns the tensor product of itself and another Statevector.
For example, below we create two state vectors representing ∣0⟩ and ∣1⟩, and use the tensor method to create a new vector, ∣0⟩⊗∣1⟩.
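A cell along these lines produces the output shown below (the variable names zero and one are our own choice):

```python
zero = Statevector.from_label("0")
one = Statevector.from_label("1")

# Tensor the two single-qubit states to obtain |0> ⊗ |1> = |01>.
zero.tensor(one).draw("latex")
```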
Output:
∣01⟩
In another example below, we create state vectors representing the ∣+⟩ and (1/√2)(∣0⟩ + i∣1⟩) states, and combine them to create a new state vector. We'll assign this new vector to the variable psi.
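One way to write this cell is as follows (psi is the variable name referenced in the text; constructing the second state from an explicit list of amplitudes is our own choice):

```python
plus = Statevector.from_label("+")
i_state = Statevector([1 / sqrt(2), 1j / sqrt(2)])   # (|0> + i|1>) / sqrt(2)

psi = plus.tensor(i_state)
psi.draw("latex")
```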
Output:
(1/2)∣00⟩ + (i/2)∣01⟩ + (1/2)∣10⟩ + (i/2)∣11⟩
The Operator class also has a tensor method. In the example below, we create the X and I gates and display their tensor product.
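A sketch of such a cell (X and I are our own variable names):

```python
X = Operator([[0, 1], [1, 0]])
I = Operator([[1, 0], [0, 1]])

# In a notebook, the final expression displays the matrix of X ⊗ I.
X.tensor(I)
```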
We can then treat these compound states and operations as we did single systems in the previous lesson. For example, in the cell below we calculate
(I⊗X)∣ψ⟩
for the state psi we defined above. (The ^ operator tensors matrices together.)
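Continuing with the objects defined in the cells above, a cell along these lines produces the output shown below:

```python
# The ^ operator tensors two Operators together, so I ^ X is I ⊗ X.
psi.evolve(I ^ X).draw("latex")
```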
Output:
(i/2)∣00⟩ + (1/2)∣01⟩ + (i/2)∣10⟩ + (1/2)∣11⟩
Below, we create a CX operator and calculate CX∣ψ⟩.
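A sketch of this cell, using the controlled-NOT matrix given earlier in the lesson (the leftmost qubit, in the ordering convention used throughout this lesson, is the control):

```python
CX = Operator([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])

psi.evolve(CX).draw("latex")
```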
Output:
(1/2)∣00⟩ + (i/2)∣01⟩ + (i/2)∣10⟩ + (1/2)∣11⟩
Partial measurements
In the previous page, we used the measure method to simulate a measurement of the quantum state vector. This method returns two items: the simulated measurement result, and the new Statevector given this measurement.
By default, measure measures all qubits in the state vector, but we can provide a list of integers to only measure the qubits at those indices. To demonstrate, the cell below creates the state
W = (1/√3)(∣001⟩ + ∣010⟩ + ∣100⟩).
(Note that Qiskit is primarily designed for use with qubit-based quantum computers. As such, Statevector will try to interpret any vector with 2n elements as a system of n qubits. You can override this by passing a dims argument to the constructor. For example, dims=(4,2) would tell Qiskit the system has one four-level system, and one two-level system (qubit).)
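A cell along these lines creates the state (the variable name w is our own choice):

```python
# Amplitude 1/sqrt(3) on each of |001>, |010>, |100>, ordered 000, 001, ..., 111.
w = Statevector([0, 1, 1, 0, 1, 0, 0, 0] / sqrt(3))
w.draw("latex")
```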
Output:
(√3/3)∣001⟩ + (√3/3)∣010⟩ + (√3/3)∣100⟩
The cell below simulates a measurement on the rightmost qubit (which has index 0). The other two qubits are not measured.
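A sketch of this cell, passing a list containing only the index 0 to the measure method:

```python
# Measure qubit 0 (the rightmost qubit); the other two qubits are not measured.
result, new_sv = w.measure([0])
print(f"Measured: {result}")
print("State after measurement:")
new_sv.draw("latex")
```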
Output:
Measured: 0
State after measurement:
(√2/2)∣010⟩ + (√2/2)∣100⟩
Try running the cell a few times to see different results. Notice that measuring a 1 means that we know both the other qubits are ∣0⟩, but measuring a 0 means the remaining two qubits are in the state (1/√2)(∣01⟩ + ∣10⟩).