In the general formulation of quantum information, quantum states are not represented by vectors like we have in the simplified formulation, but instead are represented by a special class of matrices called density matrices.
At first glance it may seem peculiar that quantum states are represented by matrices, which more typically represent actions or operations rather than states. For example, unitary matrices describe quantum operations in the simplified formulation of quantum information, and stochastic matrices describe probabilistic operations in the context of classical information. Density matrices, in contrast, represent states rather than actions or operations, even though they are indeed matrices.
Nevertheless, the fact that density matrices can (like all matrices) be associated with linear mappings is a critically important aspect of them. For example, the eigenvalues of density matrices describe the randomness or uncertainty inherent to the states they represent.
Before we proceed to the definition of density matrices, here are a few key points that motivate their use.
Density matrices can represent a broader class of quantum states than quantum state vectors. This includes states that arise in practical settings, such as states of quantum systems that have been subjected to noise, as well as random choices of quantum states.
Density matrices allow us to describe states of isolated parts of systems, such as the state of one system that happens to be entangled with another system that we wish to ignore. This isn't easily done in the simplified formulation of quantum information.
Classical (probabilistic) states can also be represented by density matrices, specifically ones that are diagonal. This is important because it allows quantum and classical information to be described together within a single mathematical framework, with classical information essentially being a special case of quantum information.
Basics
We'll begin by describing what density matrices are in mathematical terms, and then we'll take a look at some examples. After that, we'll discuss a few basic aspects of how density matrices work and how they relate to quantum state vectors in the simplified formulation of quantum information.
Definition
Suppose that we have a quantum system named X, and let Σ be the (finite and nonempty) classical state set of this system. Here we're mirroring the naming conventions used in the Basics of quantum information course, which we'll continue to do when the opportunity arises.
In the general formulation of quantum information, a quantum state of the system X is described by a density matrix ρ whose entries are complex numbers and whose indices (for both its rows and columns) have been placed in correspondence with the classical state set Σ. The lowercase Greek letter ρ is a conventional first choice for the name of a density matrix; σ and ξ are also common choices.
Here are a few examples of density matrices that describe states of qubits:

∣0⟩⟨0∣ = [[1, 0], [0, 0]],  ∣+⟩⟨+∣ = [[1/2, 1/2], [1/2, 1/2]],  I/2 = [[1/2, 0], [0, 1/2]].

In general, a matrix ρ with complex number entries is a density matrix if it satisfies two conditions:

1. Tr(ρ) = 1, where Tr denotes the trace, meaning the sum of the diagonal entries.
2. ρ is positive semidefinite.
The trace is a linear function: for any two square matrices A and B of the same size and any two complex numbers α and β, the following equation is always true.
Tr(αA+βB)=αTr(A)+βTr(B)
The trace is an extremely important function and there's a lot more that can be said about it, but we'll wait until the need arises to say more.
The second condition refers to the property of a matrix being positive semidefinite, which is a truly fundamental concept in quantum information theory (and in many other subjects). A matrix P is positive semidefinite if there exists a matrix M such that
P=M†M.
Here we can either demand that M is a square matrix of the same size as P or allow it to be non-square — we obtain the same class of matrices either way.
There are several alternative (but equivalent) ways to define this condition, including these:
A matrix P is positive semidefinite if and only if P is Hermitian (i.e., equal to its own conjugate transpose) and all of its eigenvalues are nonnegative real numbers. Checking that a matrix is Hermitian and all of its eigenvalues are nonnegative is a simple computational way to verify that it's positive semidefinite.
A matrix P is positive semidefinite if and only if ⟨ψ∣P∣ψ⟩≥0 for every complex vector ∣ψ⟩ having the same indices as P.
An intuitive way to think about positive semidefinite matrices is that they're like matrix analogues of nonnegative real numbers. That is, positive semidefinite matrices are to complex square matrices as nonnegative real numbers are to complex numbers. For example, a complex number α is a nonnegative real number if and only if
α = β*β (writing β* for the complex conjugate of β)
for some complex number β, which matches the definition of positive semidefiniteness when we replace matrices with scalars. While matrices are more complicated objects than scalars in general, this is nevertheless a helpful way to think about positive semidefinite matrices. This also explains why the notation P≥0 is used to mean that P is positive semidefinite. (Notice in particular that the notation P≥0 does not mean that each entry of P is nonnegative in this context. There are positive semidefinite matrices having negative entries as well as matrices whose entries are all positive that are not positive semidefinite.)
At this point the definition of density matrices may seem rather arbitrary and abstract, as we have not yet associated any meaning with these matrices or their entries. The way density matrices work and can be interpreted will be clarified as the lesson continues, but for now it may be helpful to think about the entries of density matrices in the following (rather informal) way.
The diagonal entries of a density matrix give us the probabilities for each classical state to appear if we perform a standard basis measurement — so we can think about these entries as describing the "weight" associated with each classical state.
The off-diagonal entries of a density matrix describe the degree to which the two classical states corresponding to that entry (meaning the one corresponding to the row and the one corresponding to the column) are in quantum superposition, as well as the relative phase between them.
It is certainly not obvious a priori that quantum states should be represented by density matrices. There is, in fact, a sense in which the choice to represent quantum states by density matrices leads naturally to the entire mathematical description of quantum information: everything else about quantum information follows quite logically from this one choice.
Random examples
We'll see several examples of density matrices throughout the lesson, including ones that represent states encountered earlier in the series. To begin, let's take a look at some randomly generated examples. We'll begin with some random examples of positive semidefinite matrices, and from these examples we can obtain examples of density matrices by simply normalizing, which in this context means dividing by the trace.
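The original lesson interleaves runnable code cells with the text. A minimal setup cell might look like this (a sketch; the exact imports in the original notebook may differ, and Qiskit's array_to_latex helper is only needed for prettier display inside Jupyter):

```python
# Setup for the numerical examples in this section (a sketch).
import numpy as np

print("Imports loaded.")
```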
The code cell that follows randomly generates a positive semidefinite matrix by first generating an n×n matrix M whose entries have real and imaginary parts chosen independently and uniformly from the set {−9,…,9} and then outputting the positive semidefinite matrix P=M†M. Through this method we'll only obtain matrices whose entries have integer real and imaginary parts that aren't too large, which will make the examples more readable — but be aware that not every positive semidefinite matrix has this property. Changing the dimension n and running the cells multiple times may help to develop a sense for what positive semidefinite matrices look like.
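A sketch of such a cell in NumPy follows; the display line using Qiskit's array_to_latex is optional and only relevant inside Jupyter.

```python
import numpy as np

n = 3  # the dimension; try different values

# Random n x n matrix M whose entries have real and imaginary parts
# drawn independently and uniformly from {-9, ..., 9}
rng = np.random.default_rng()
M = rng.integers(-9, 10, size=(n, n)) + 1j * rng.integers(-9, 10, size=(n, n))

# P = M†M is positive semidefinite by construction
P = M.conj().T @ M
print(P)
# display(array_to_latex(P))  # prettier output in Jupyter with Qiskit installed
```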
(Here we're using Qiskit's array_to_latex function to obtain a more human-readable output format for the matrix. Substituting display(P) for display(array_to_latex(P)) shows the result using the standard matrix representation in Python.)
The fact that each randomly generated positive semidefinite matrix is Hermitian can be checked by inspection. We can also compute the eigenvalues to see that they're always nonnegative real numbers.
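A sketch of the eigenvalue check (numpy.linalg.eigvalsh is appropriate here because P is Hermitian, and it returns real eigenvalues):

```python
import numpy as np

# Generate a random positive semidefinite matrix as before
rng = np.random.default_rng()
M = rng.integers(-9, 10, size=(3, 3)) + 1j * rng.integers(-9, 10, size=(3, 3))
P = M.conj().T @ M

# The eigenvalues of a positive semidefinite matrix are nonnegative reals
print(np.linalg.eigvalsh(P))
```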
Output:
[258.3556572641  31.4884836828   9.1558590531]
Random examples generated in this way are naturally limited in what they can tell us, but we can observe some features that are true in general for positive semidefinite matrices. In particular, the diagonal entries are always nonnegative real numbers, and the off-diagonal entries are never "too large" in comparison to the two corresponding diagonal entries (meaning the diagonal entries in the same row and the same column as the chosen off-diagonal entry).
As was already suggested, to generate a random density matrix we can use the same procedure to generate a random positive semidefinite matrix and then divide this matrix by its trace. The following code cell does this. (Note that the cell will throw a warning if by chance P is the all-zero matrix, which is possible but unlikely — this can only happen when M is the all-zero matrix.)
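A sketch of this normalization step in NumPy:

```python
import numpy as np

rng = np.random.default_rng()
M = rng.integers(-9, 10, size=(3, 3)) + 1j * rng.integers(-9, 10, size=(3, 3))
P = M.conj().T @ M

# Divide by the trace to obtain a density matrix.
# (If M happened to be all-zero, this division would raise a warning,
# as the text notes; that outcome is possible but extremely unlikely.)
rho = P / np.trace(P)
print(rho)
print(np.trace(rho).real)  # 1.0 up to rounding
```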
(The array_to_latex function does its best to provide a symbolic representation of the matrix, but it's not exact and will occasionally produce unusual expressions that happen to closely approximate the actual results.)
Notice that the diagonal entries are always nonnegative and sum to 1, so they form a probability vector. This probability vector specifies the probabilities for obtaining each possible classical state from a standard basis measurement, as was already suggested.
We can also compute the eigenvalues of these randomly generated density matrices. Although the eigenvalues are usually different from the diagonal entries, they also form a probability vector. This is a consequence of the following basic fact from matrix theory.
Theorem. The trace of a square matrix is equal to the sum of its eigenvalues, with each eigenvalue being included in the sum a number of times equal to its multiplicity.
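A sketch of the eigenvalue computation for a randomly generated density matrix:

```python
import numpy as np

rng = np.random.default_rng()
M = rng.integers(-9, 10, size=(3, 3)) + 1j * rng.integers(-9, 10, size=(3, 3))
rho = (M.conj().T @ M) / np.trace(M.conj().T @ M)

# Nonnegative reals summing to 1: a probability vector
eigenvalues = np.linalg.eigvalsh(rho)
print(eigenvalues)
print(eigenvalues.sum())
```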
Output:
[0.4977478518 0.4840907784 0.0181613698]
Qiskit also includes a DensityMatrix class that includes some useful methods for working with density matrices.
Output:
[[3/4, −i/8],
 [i/8,  1/4]]
Connection to quantum state vectors
Recall that a quantum state vector ∣ψ⟩ describing a quantum state of X is a column vector having Euclidean norm equal to 1 whose entries have been placed in correspondence with the classical state set Σ. The density matrix representation ρ of the same state is defined as follows.
ρ=∣ψ⟩⟨ψ∣
To be clear, we're multiplying a column vector by a row vector, so the result is a square matrix whose rows and columns correspond to Σ. Matrices of this form, in addition to being density matrices, are always projections and have rank equal to 1.
For example, let us define two qubit state vectors as follows.

∣+i⟩ = (1/√2)∣0⟩ + (i/√2)∣1⟩
∣−i⟩ = (1/√2)∣0⟩ − (i/√2)∣1⟩

Together with ∣0⟩, ∣1⟩, ∣+⟩, and ∣−⟩, these vectors give us six states, whose density matrix representations are as follows.

∣0⟩⟨0∣ = [[1, 0], [0, 0]]    ∣1⟩⟨1∣ = [[0, 0], [0, 1]]
∣+⟩⟨+∣ = [[1/2, 1/2], [1/2, 1/2]]    ∣−⟩⟨−∣ = [[1/2, −1/2], [−1/2, 1/2]]
∣+i⟩⟨+i∣ = [[1/2, −i/2], [i/2, 1/2]]    ∣−i⟩⟨−i∣ = [[1/2, i/2], [−i/2, 1/2]]
To check these density matrix representations, we can compute them by hand or ask Qiskit to perform the conversion using the .to_operator method of the Statevector class. Here we also use the .from_label method to define the first six state vectors for convenience.
Density matrices that take the form ρ=∣ψ⟩⟨ψ∣ for some quantum state vector ∣ψ⟩ are known as pure states. Not every density matrix can be written in this form; some states are not pure.
As density matrices, pure states always have one eigenvalue equal to 1 and all other eigenvalues equal to 0. This is consistent with the interpretation that the eigenvalues of a density matrix describe the randomness or uncertainty inherent to that state. A way to think about this is that there's no uncertainty for a pure state ρ=∣ψ⟩⟨ψ∣ — the state is definitely ∣ψ⟩.
In general, for a quantum state vector
∣ψ⟩ = (α_0, α_1, …, α_{n−1})ᵀ
for a system with n classical states, the density matrix representation of the same state is ρ = ∣ψ⟩⟨ψ∣, which is the matrix whose (j,k) entry is α_j α_k* (writing α_k* for the complex conjugate of α_k). In particular, the diagonal entries of ρ are ∣α_0∣², ∣α_1∣², …, ∣α_{n−1}∣².
Thus, for the special case of pure states, we can verify that the diagonal entries of a density matrix describe the probabilities that a standard basis measurement would output each possible classical state.
A final remark about pure states is that density matrices eliminate the degeneracy concerning global phases found for quantum state vectors. Suppose we have two quantum state vectors that differ by a global phase: ∣ψ⟩ and ∣ϕ⟩=eiθ∣ψ⟩, for some real number θ. Because they differ by a global phase, these vectors represent exactly the same quantum state, despite the fact that the vectors may be different. The density matrices that we obtain from these two state vectors, on the other hand, are identical.
∣ϕ⟩⟨ϕ∣=(eiθ∣ψ⟩)(eiθ∣ψ⟩)†=ei(θ−θ)∣ψ⟩⟨ψ∣=∣ψ⟩⟨ψ∣
In general, density matrices provide a unique representation of quantum states: two quantum states are identical, generating exactly the same outcome statistics for every possible measurement that can be performed on them, if and only if their density matrix representations are equal. Using mathematical parlance, we can express this by saying that density matrices offer a faithful representation of quantum states.
Convex combinations of density matrices
A key aspect of density matrices is that probabilistic selections of quantum states are represented by convex combinations of their associated density matrices.
For example, if we have two density matrices, ρ and σ, representing quantum states of a system X, and we prepare the system in the state ρ with probability p∈[0,1] and σ with probability 1−p, then the resulting quantum state is represented by the density matrix
pρ+(1−p)σ.
More generally, if we have m quantum states represented by density matrices ρ0,…,ρm−1, and a system is prepared in the state ρk with probability pk for some probability vector (p0,…,pm−1), the resulting state is represented by the density matrix
∑_{k=0}^{m−1} p_k ρ_k.
This is a convex combination of the density matrices ρ0,…,ρm−1.
If we suppose that we have m quantum state vectors ∣ψ0⟩,…,∣ψm−1⟩, and we prepare a system in the state ∣ψk⟩ with probability pk for each k∈{0,…,m−1}, the state we obtain is represented by the density matrix
∑_{k=0}^{m−1} p_k ∣ψ_k⟩⟨ψ_k∣.
For example, if a qubit is prepared in the state ∣0⟩ with probability 1/2 and in the state ∣+⟩ with probability 1/2, the density matrix representation of the state we obtain is given by

(1/2)∣0⟩⟨0∣ + (1/2)∣+⟩⟨+∣ = [[3/4, 1/4], [1/4, 1/4]].

Note that we're averaging density matrices here, not state vectors; the vector (1/2)∣0⟩ + (1/2)∣+⟩ is not a valid quantum state vector because its Euclidean norm is not equal to 1.
A more extreme example that shows that this doesn't work for quantum state vectors is that we fix any quantum state vector ∣ψ⟩ that we wish, and then we take our state to be ∣ψ⟩ with probability 1/2 and −∣ψ⟩ with probability 1/2. These states differ by a global phase, so they're actually the same state — but averaging gives us the zero vector, which is not a valid quantum state vector.
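A quick numerical check of this contrast, sketched in NumPy:

```python
import numpy as np

ket0 = np.array([1, 0])
ket_plus = np.array([1, 1]) / np.sqrt(2)

# Averaging density matrices gives a valid density matrix...
rho = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket_plus, ket_plus.conj())
print(rho)  # [[0.75, 0.25], [0.25, 0.25]]

# ...but averaging the state vectors themselves does not give a unit vector
avg = 0.5 * ket0 + 0.5 * ket_plus
print(np.linalg.norm(avg))  # about 0.924, not 1
```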
The completely mixed state
Suppose we set the state of a qubit to be ∣0⟩ or ∣1⟩ randomly, each with probability 1/2. The density matrix representing the resulting state is as follows. (In this equation the symbol I denotes the 2×2 identity matrix.)

(1/2)∣0⟩⟨0∣ + (1/2)∣1⟩⟨1∣ = [[1/2, 0], [0, 1/2]] = I/2
This is a special state known as the completely mixed state. It represents complete uncertainty about the state of a qubit, similar to a uniform random bit in the probabilistic setting.
Now suppose that we change the procedure: in place of the states ∣0⟩ and ∣1⟩ we'll use the states ∣+⟩ and ∣−⟩. We can compute the density matrix that describes the resulting state in a similar way.

(1/2)∣+⟩⟨+∣ + (1/2)∣−⟩⟨−∣ = [[1/2, 0], [0, 1/2]] = I/2
It's the same density matrix as before, even though we changed the states. We would again obtain the same result — the completely mixed state — by substituting any two orthogonal qubit state vectors for ∣0⟩ and ∣1⟩.
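A sketch verifying both computations numerically:

```python
import numpy as np

ket0, ket1 = np.array([1, 0]), np.array([0, 1])
ket_plus, ket_minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)

def mix(a, b):
    # Equal mixture of the pure states |a><a| and |b><b|
    return 0.5 * np.outer(a, a.conj()) + 0.5 * np.outer(b, b.conj())

print(mix(ket0, ket1))           # I/2
print(mix(ket_plus, ket_minus))  # also I/2
```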
This is a feature, not a bug! We do in fact obtain exactly the same state either way. That is, there's no way to distinguish the two procedures by measuring the qubit they produce, even in a statistical sense — so we've simply described the same state in two different ways.
We can verify that this makes sense by thinking about what we could hope to learn given a random selection of a state from one of the two possible state sets {∣0⟩,∣1⟩} and {∣+⟩,∣−⟩}. To keep things simple, let's suppose that we perform a unitary operation U on our qubit and then measure in the standard basis.
In the first scenario, the state of the qubit is chosen uniformly from the set {∣0⟩,∣1⟩}. If the state is ∣0⟩, we obtain the outcomes 0 and 1 with probabilities
∣⟨0∣U∣0⟩∣²  and  ∣⟨1∣U∣0⟩∣²
respectively. If the state is ∣1⟩, we obtain the outcomes 0 and 1 with probabilities
∣⟨0∣U∣1⟩∣²  and  ∣⟨1∣U∣1⟩∣².
Because the two possibilities each happen with probability 1/2, we obtain the outcome 0 with probability
(1/2)∣⟨0∣U∣0⟩∣² + (1/2)∣⟨0∣U∣1⟩∣²
and the outcome 1 with probability
(1/2)∣⟨1∣U∣0⟩∣² + (1/2)∣⟨1∣U∣1⟩∣².
Both of these expressions are equal to 1/2. One way to argue this is to use a fact from linear algebra that can be seen as a generalization of the Pythagorean theorem.
Theorem. Suppose {∣ψ_1⟩, …, ∣ψ_n⟩} is an orthonormal basis of a (real or complex) vector space V. For every vector ∣ϕ⟩ ∈ V we have ∣⟨ψ_1∣ϕ⟩∣² + ⋯ + ∣⟨ψ_n∣ϕ⟩∣² = ∥∣ϕ⟩∥².
We can apply this theorem to determine the probabilities as follows. The probability to get 0 is

(1/2)∣⟨0∣U∣0⟩∣² + (1/2)∣⟨0∣U∣1⟩∣² = (1/2)∥U†∣0⟩∥²

and, similarly, the probability to get 1 is (1/2)∥U†∣1⟩∥².
Because U is unitary we know that U† is unitary as well, implying that both U†∣0⟩ and U†∣1⟩ are unit vectors. Both probabilities are therefore equal to 1/2. This means that no matter how we choose U, we're just going to get a uniform random bit from the measurement.
We can perform a similar verification for any other pair of orthonormal states in place of ∣0⟩ and ∣1⟩. For example, because {∣+⟩,∣−⟩} is an orthonormal basis, the probability to obtain the measurement outcome 0 in the second procedure is
(1/2)∣⟨0∣U∣+⟩∣² + (1/2)∣⟨0∣U∣−⟩∣² = (1/2)∥U†∣0⟩∥² = 1/2
and the probability to get 1 is
(1/2)∣⟨1∣U∣+⟩∣² + (1/2)∣⟨1∣U∣−⟩∣² = (1/2)∥U†∣1⟩∥² = 1/2.
In particular, we obtain exactly the same output statistics as we did for the states ∣0⟩ and ∣1⟩.
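We can also check this numerically for a randomly chosen unitary U. (A sketch: the QR decomposition of a random complex matrix yields a unitary matrix.)

```python
import numpy as np

rng = np.random.default_rng()
U, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))

# Probability of outcome 0 when the input is |0> or |1> with probability 1/2 each
p0 = 0.5 * abs(U[0, 0]) ** 2 + 0.5 * abs(U[0, 1]) ** 2
print(p0)  # 0.5 for every unitary U
```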
Probabilistic states
Classical states can be represented by density matrices. In particular, for each classical state a of a system X, the density matrix
ρ=∣a⟩⟨a∣
represents X being definitively in the classical state a. For qubits we have
∣0⟩⟨0∣ = [[1, 0], [0, 0]]  and  ∣1⟩⟨1∣ = [[0, 0], [0, 1]],
and in general we have a single 1 on the diagonal in the position corresponding to the classical state we have in mind, with all other entries zero.
We can then take convex combinations of these density matrices to represent probabilistic states. Supposing for simplicity that our classical state set is {0,…,n−1}, if we have that X is in the state a with probability p_a for each a∈{0,…,n−1}, then the density matrix we obtain is

∑_{a=0}^{n−1} p_a ∣a⟩⟨a∣,

which is the diagonal matrix with the probabilities p_0, …, p_{n−1} on its diagonal.
Going in the other direction, any diagonal density matrix can naturally be identified with the probabilistic state we obtain by simply reading the probability vector off from the diagonal. To be clear, when a density matrix is diagonal, it's not necessarily the case that we're talking about a classical system, or that the system must have been prepared through the random selection of a classical state, but rather that the state could have been obtained through the random selection of a classical state.
The fact that probabilistic states are represented by diagonal density matrices is consistent with the intuition suggested at the start of the lesson that off-diagonal entries describe the degree to which the two classical states corresponding to the row and column of that entry are in quantum superposition. Here all of the off-diagonal entries are zero, so we just have classical randomness and nothing is in quantum superposition.
Density matrices and the spectral theorem
We've seen that if we take a convex combination of pure states,
ρ = ∑_{k=0}^{m−1} p_k ∣ψ_k⟩⟨ψ_k∣,
we obtain a density matrix. Every density matrix ρ, in fact, can be expressed as a convex combination of pure states like this. That is, there will always exist a collection of unit vectors {∣ψ0⟩,…,∣ψm−1⟩} and a probability vector (p0,…,pm−1) for which the equation above is true.
We can, moreover, always choose the number m so that it agrees with the number of classical states of the system being considered, and we can select the quantum state vectors to be orthogonal. The spectral theorem allows us to conclude this. (The statement of this theorem that follows refers to a normal matrix M. This is a matrix that satisfies M†M=MM†, or in words, commutes with its own conjugate transpose.)
Theorem (spectral theorem). Let M be a normal n×n complex matrix. There exists an orthonormal basis of n-dimensional complex vectors {∣ψ0⟩, …, ∣ψn−1⟩} along with complex numbers λ0, …, λn−1 such that M = λ0∣ψ0⟩⟨ψ0∣ + ⋯ + λn−1∣ψn−1⟩⟨ψn−1∣.
We can apply this theorem to a given density matrix ρ because density matrices are Hermitian and therefore normal, which allows us to write
ρ=λ0∣ψ0⟩⟨ψ0∣+⋯+λn−1∣ψn−1⟩⟨ψn−1∣
for some orthonormal basis {∣ψ0⟩,…,∣ψn−1⟩}. It remains to verify that (λ0,…,λn−1) is a probability vector, which we can then rename to (p0,…,pn−1) if we wish.
The numbers λ0,…,λn−1 are the eigenvalues of ρ, and because ρ is positive semidefinite these numbers must therefore be nonnegative real numbers. We can conclude that λ0+⋯+λn−1=1 from the fact that ρ has trace equal to 1. Going through the details will give us an opportunity to point out an important and useful property of the trace.
Theorem (cyclic property of the trace). For any two matrices A and B that give us a square matrix AB by multiplying, the equality Tr(AB)=Tr(BA) is true.
Note that this theorem works even if A and B are not themselves square matrices — we may have that A is n×m and B is m×n, for some choice of positive integers n and m, so that AB is an n×n square matrix and BA is m×m. So, if we let A be a column vector ∣ϕ⟩ and let B be the row vector ⟨ϕ∣, then we see that
Tr(∣ϕ⟩⟨ϕ∣)=Tr(⟨ϕ∣ϕ⟩)=⟨ϕ∣ϕ⟩.
The second equality follows from the fact that ⟨ϕ∣ϕ⟩ is a scalar, which we can also think of as a 1×1 matrix whose trace is its single entry. Using this fact, we can conclude that λ0+⋯+λn−1=1 by the linearity of the trace function.
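A quick numerical illustration of the cyclic property with non-square matrices (a sketch):

```python
import numpy as np

rng = np.random.default_rng()
A = rng.normal(size=(3, 2))  # 3x2
B = rng.normal(size=(2, 3))  # 2x3

# AB is 3x3 while BA is 2x2, yet the traces agree
print(np.trace(A @ B), np.trace(B @ A))
```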
Alternatively, we can use a fact that was mentioned previously, which is that the trace of a square matrix is equal to the sum of its eigenvalues, to reach the same conclusion.
We have therefore concluded that any given density matrix ρ can be expressed as a convex combination of pure states. We also see that we can, moreover, take the pure states to be orthogonal. This means, in particular, that we never need the number n to be larger than the size of the classical state set of X.
It must be understood that there will in general be many different ways to write a density matrix as a convex combination of pure states, not just the ways that the spectral theorem provides. A previous example illustrates this.
(1/2)∣0⟩⟨0∣ + (1/2)∣+⟩⟨+∣ = [[3/4, 1/4], [1/4, 1/4]]
This is not a spectral decomposition of this matrix because ∣0⟩ and ∣+⟩ are not orthogonal. Here's a spectral decomposition:

(1/2)∣0⟩⟨0∣ + (1/2)∣+⟩⟨+∣ = cos²(π/8)∣ψ_{π/8}⟩⟨ψ_{π/8}∣ + sin²(π/8)∣ψ_{5π/8}⟩⟨ψ_{5π/8}∣,

where ∣ψ_α⟩ = cos(α)∣0⟩ + sin(α)∣1⟩, so that ∣ψ_{π/8}⟩ and ∣ψ_{5π/8}⟩ are orthogonal unit vectors.
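We can confirm the spectral decomposition of this matrix numerically (a sketch; numpy.linalg.eigh returns eigenvalues in ascending order):

```python
import numpy as np

rho = np.array([[0.75, 0.25], [0.25, 0.25]])
eigenvalues, eigenvectors = np.linalg.eigh(rho)
print(eigenvalues)  # sin^2(pi/8) and cos^2(pi/8)

# Rebuild rho from its spectral decomposition
rebuilt = sum(
    eigenvalues[k] * np.outer(eigenvectors[:, k], eigenvectors[:, k].conj())
    for k in range(2)
)
print(np.allclose(rebuilt, rho))  # True
```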
As another, more general example, suppose ∣ϕ0⟩,…,∣ϕ99⟩ are quantum state vectors representing states of a qubit, chosen arbitrarily — so we're not assuming any particular relationships among these vectors. We could then consider the state we obtain by choosing one of these 100 states uniformly at random:
ρ = (1/100) ∑_{k=0}^{99} ∣ϕ_k⟩⟨ϕ_k∣.
Because we're talking about a qubit, the density matrix ρ is 2×2, so by the spectral theorem we could alternatively write
ρ=p∣ψ0⟩⟨ψ0∣+(1−p)∣ψ1⟩⟨ψ1∣
for some real number p∈[0,1] and an orthonormal basis {∣ψ0⟩,∣ψ1⟩} — but naturally the existence of this expression doesn't prohibit us from writing ρ as an average of 100 pure states if we choose to do that.
Bloch sphere
There's a useful geometric way to represent pure states of qubits known as the Bloch sphere. It's very convenient, but unfortunately it only works for qubits — once we have three or more classical states in our system, the analogous representation no longer corresponds to a spherical object.
Let's start by thinking about a quantum state vector of a qubit: α∣0⟩+β∣1⟩. We can restrict our attention to vectors for which α is a nonnegative real number because every qubit state vector is equivalent up to a global phase to one for which α≥0. This allows us to write
∣ψ⟩=cos(θ/2)∣0⟩+eiϕsin(θ/2)∣1⟩
for two real numbers θ∈[0,π] and ϕ∈[0,2π). Here we're allowing θ to range from 0 to π and dividing by 2 in the expression of the vector because this is a conventional way to parameterize vectors of this sort, and it will make things simpler a bit later on.
It isn't quite the case that the numbers θ and ϕ are uniquely determined by a given quantum state vector α∣0⟩+β∣1⟩, but it is nearly so. In particular, if θ=0, then we have ∣ψ⟩=∣0⟩, and it doesn't make any difference what value ϕ takes, so it can be chosen arbitrarily. Similarly, if θ=π, then we have ∣ψ⟩=eiϕ∣1⟩, which is equivalent up to a global phase to ∣1⟩, so once again ϕ is irrelevant. If, however, neither α nor β is 0, then there's a unique choice for the pair (θ,ϕ) for which ∣ψ⟩ is equivalent to α∣0⟩+β∣1⟩ up to a global phase.
Now let's think about the density matrix representation of this state. Performing the multiplication ∣ψ⟩⟨ψ∣ and writing the result in terms of the Pauli matrices σx, σy, and σz, we obtain

∣ψ⟩⟨ψ∣ = (I + sin(θ)cos(ϕ)σx + sin(θ)sin(ϕ)σy + cos(θ)σz)/2.
Next let's take a look at the three coefficients of σx,σy, and σz in the numerator of this expression. They're all real numbers and we can collect them together to form a 3-dimensional vector.
(sin(θ)cos(ϕ),sin(θ)sin(ϕ),cos(θ))
This vector is written (1,θ,ϕ) in spherical coordinates: the first coordinate 1 represents the radius or radial distance, θ represents the polar angle, and ϕ represents the azimuthal angle. In words, the polar angle θ is how far we rotate south from the north pole, from 0 to π=180∘, while the azimuthal angle ϕ is how far we rotate east from the prime meridian, from 0 to 2π=360∘, assuming that the prime meridian is defined to be the curve on the surface of the sphere from one pole to the other that passes through the positive x-axis.
We can describe every point on the sphere in this way, which is to say that the points we obtain when we range over all possible pure states of a qubit correspond precisely to a sphere in 3 real dimensions. (This sphere is typically called the unit 2-sphere because the surface of this sphere is two-dimensional.) When we associate points on the unit 2-sphere with pure states of qubits, we obtain the Bloch sphere representation of these states.
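The correspondence between the angles (θ, ϕ) and the Cartesian coordinates can be checked numerically (a sketch):

```python
import numpy as np

theta, phi = 1.1, 2.3  # arbitrary polar and azimuthal angles

psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
rho = np.outer(psi, psi.conj())

sigma_x = np.array([[0, 1], [1, 0]])
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# The Pauli coefficients of rho are Tr(rho sigma_k)
coords = [np.trace(rho @ s).real for s in (sigma_x, sigma_y, sigma_z)]
expected = [np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)]
print(np.allclose(coords, expected))  # True
```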
Six important states
The standard basis {∣0⟩,∣1⟩}. Let's start with the state ∣0⟩. As a density matrix it can be written like this.
∣0⟩⟨0∣ = (I + σz)/2
By collecting the coefficients of the Pauli matrices in the numerator, we see that the corresponding point on the unit 2-sphere using Cartesian coordinates is (0,0,1). In spherical coordinates this point is (1,0,ϕ) where ϕ can be any angle. This is consistent with the expression
∣0⟩=cos(0)∣0⟩+eiϕsin(0)∣1⟩,
which also works for any ϕ. Intuitively speaking, the polar angle θ is zero, so we're at the north pole of the Bloch sphere, where the azimuthal angle is irrelevant. Along similar lines, a density matrix for the state ∣1⟩ can be written like so.
∣1⟩⟨1∣ = (I − σz)/2
This time the Cartesian coordinates are (0,0,−1). In spherical coordinates this point is (1,π,ϕ) where ϕ can be any angle. Intuitively speaking, the polar angle is all the way to π, so we're at the south pole where the azimuthal angle is again irrelevant.
The basis {∣+⟩,∣−⟩}. This time we have these expressions.
∣+⟩⟨+∣ = (I + σx)/2  and  ∣−⟩⟨−∣ = (I − σx)/2
The corresponding points on the unit 2-sphere have Cartesian coordinates (1,0,0) and (−1,0,0), and spherical coordinates (1,π/2,0) and (1,π/2,π), respectively. In words, ∣+⟩ corresponds to the point where the positive x-axis intersects the unit 2-sphere and ∣−⟩ to the point where the negative x-axis intersects it. More intuitively, ∣+⟩ is on the equator of the Bloch sphere where it meets the prime meridian, and ∣−⟩ is on the equator at the opposite side of the sphere.
The basis {∣+i⟩,∣−i⟩}. As we saw earlier in the lesson, these two states are defined like this:
∣+i⟩ = (1/√2)∣0⟩ + (i/√2)∣1⟩  and  ∣−i⟩ = (1/√2)∣0⟩ − (i/√2)∣1⟩.
This time we have these expressions.
∣+i⟩⟨+i∣ = (I + σy)/2  and  ∣−i⟩⟨−i∣ = (I − σy)/2
The corresponding points on the unit 2-sphere have Cartesian coordinates (0,1,0) and (0,−1,0), and spherical coordinates (1,π/2,π/2) and (1,π/2,3π/2), respectively. In words, ∣+i⟩ corresponds to the point where the positive y-axis intersects the unit 2-sphere and ∣−i⟩ to the point where the negative y-axis intersects it.
Here's another class of quantum state vectors that has appeared from time to time throughout this series, including previously in this lesson.
∣ψ_α⟩ = cos(α)∣0⟩ + sin(α)∣1⟩  (for α ∈ [0, π))
The density matrix representation of each of these states is as follows.

∣ψ_α⟩⟨ψ_α∣ = (I + sin(2α)σx + cos(2α)σz)/2
The following figure illustrates the corresponding points on the Bloch sphere for a few choices for α.
Convex combinations of points
Similar to what we already discussed for density matrices, we can take convex combinations of points on the Bloch sphere to obtain representations of qubit density matrices. In general this results in points inside of the Bloch sphere, which represent density matrices of states that are not pure. Sometimes we refer to the Bloch ball when we wish to be explicit about the inclusion of points inside of the Bloch sphere as representations of qubit density matrices.
For example, we have seen that the density matrix (1/2)I, which represents the completely mixed state of a qubit, can be written in these two alternative ways:

(1/2)I = (1/2)∣0⟩⟨0∣ + (1/2)∣1⟩⟨1∣ = (1/2)∣+⟩⟨+∣ + (1/2)∣−⟩⟨−∣,
and more generally we can use any two orthogonal qubit state vectors (which will always correspond to two antipodal points on the Bloch sphere). If we average the corresponding points on the Bloch sphere in a similar way we obtain the same point, which in this case is at the center of the sphere. This is consistent with the observation that
(1/2)I = (I + 0⋅σx + 0⋅σy + 0⋅σz)/2,
giving us the Cartesian coordinates (0,0,0).
A different example concerning convex combinations of Bloch sphere points is the one discussed in the previous subsection.
The following figure illustrates these two different ways of obtaining this density matrix as a convex combination of pure states.
Plotting Bloch sphere points in Qiskit
Qiskit provides two functions for plotting points in the Bloch ball: plot_bloch_vector and plot_bloch_multivector.
The function plot_bloch_vector displays a Bloch ball point using either Cartesian or spherical coordinates.
The center of the ball is indicated by the lack of an arrow (or an arrow of length zero).
The plot_bloch_multivector function takes a Statevector or DensityMatrix as input and outputs a Bloch sphere illustration (for each qubit in isolation).
This can equivalently be done through the draw method for Statevector and DensityMatrix objects.
Multiple systems and reduced states
Now we'll turn our attention to how density matrices work for multiple systems, including examples of different types of correlations they can express and how they can be used to describe the states of isolated parts of compound systems.
Multiple systems
Density matrices can represent states of multiple systems in an analogous way to state vectors in the simplified formulation of quantum information, following the same basic idea that multiple systems can be viewed as if they're single, compound systems. In mathematical terms, the rows and columns of density matrices representing states of multiple systems are placed in correspondence with the Cartesian product of the classical state sets of the individual systems.
For example, recall the state vector representations of the four Bell states.

∣ϕ+⟩ = (1/√2)∣00⟩ + (1/√2)∣11⟩
∣ϕ−⟩ = (1/√2)∣00⟩ − (1/√2)∣11⟩
∣ψ+⟩ = (1/√2)∣01⟩ + (1/√2)∣10⟩
∣ψ−⟩ = (1/√2)∣01⟩ − (1/√2)∣10⟩
Similar to what we had for state vectors, tensor products of density matrices represent independence between the states of multiple systems. For instance, if X is prepared in the state represented by the density matrix ρ and Y is independently prepared in the state represented by σ, then the density matrix describing the state of (X,Y) is the tensor product ρ⊗σ.
The same terminology is used here as in the simplified formulation of quantum information: states of this form are referred to as product states.
Correlated and entangled states
States that cannot be expressed as product states represent correlations between systems. There are, in fact, different types of correlations that can be represented by density matrices. Here are a few examples.
Correlated classical states. For example, we can express the situation in which Alice and Bob share a random bit like this:

$$\frac{1}{2}\,|0\rangle\langle 0| \otimes |0\rangle\langle 0| + \frac{1}{2}\,|1\rangle\langle 1| \otimes |1\rangle\langle 1|$$
Ensembles of quantum states. Suppose we have m density matrices ρ0,…,ρm−1, all representing states of a system X, and we randomly choose one of these states according to a probability vector (p0,…,pm−1). Such a process is represented by an ensemble of states, which includes the specification of the density matrices ρ0,…,ρm−1 as well as the probabilities (p0,…,pm−1). We can associate an ensemble of states with a single density matrix, describing both the random choice of k and the corresponding density matrix ρk, like this:
$$\sum_{k=0}^{m-1} p_k\, |k\rangle\langle k| \otimes \rho_k$$
To be clear, this is the state of a pair (Y,X) where Y represents the classical selection of k — so we're assuming its classical state set is {0,…,m−1}. States of this form are sometimes called classical-quantum states.
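The construction above can be sketched in NumPy as follows (`cq_state` is a hypothetical helper name, and the two-state ensemble is an illustrative choice):

```python
import numpy as np

def cq_state(probs, rhos):
    """Hypothetical helper: build sum_k p_k |k><k| (x) rho_k."""
    m, d = len(probs), rhos[0].shape[0]
    out = np.zeros((m * d, m * d), dtype=complex)
    for k in range(m):
        ket = np.zeros((m, 1)); ket[k, 0] = 1
        out += probs[k] * np.kron(ket @ ket.T, rhos[k])
    return out

# Ensemble: |0><0| with probability 3/4 and |+><+| with probability 1/4.
plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho1 = plus @ plus.conj().T
state = cq_state([0.75, 0.25], [rho0, rho1])

# A classical-quantum state is a density matrix: unit trace, for instance.
assert np.isclose(np.trace(state), 1)
```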
Separable states. We can imagine situations in which we have a classical correlation among the quantum states of two systems like this.
$$\sum_{k=0}^{m-1} p_k\, \rho_k \otimes \sigma_k$$
In words, for each k from 0 to m−1, we have that with probability pk the system on the left is in the state ρk and the system on the right is in the state σk. States like this are called separable states. This concept can also be extended to more than two systems.
Entangled states. Not all states of pairs of systems are separable. In the general formulation of quantum information this is how entanglement is defined: states that are not separable are said to be entangled. This terminology is consistent with the terminology we used in Basics of quantum information. There we said that quantum state vectors that are not product states represent entangled states — and indeed, for any quantum state vector ∣ψ⟩ that is not a product state, we find that the state represented by the density matrix ∣ψ⟩⟨ψ∣ is not separable. Entanglement is much more complicated than this for states that are not pure.
Reduced states and the partial trace
There's a simple but important thing we can do with density matrices in the context of multiple systems, which is to describe the states we obtain by ignoring some of the systems. When multiple systems are in a quantum state, and we discard or choose to ignore one or more of the systems, the state of the remaining systems is called the reduced state of those systems. Density matrix descriptions of reduced states are easily obtained through a mapping, known as the partial trace, from the density matrix describing the state of the whole.
Example: reduced states for an e-bit
Suppose that we have a pair of qubits (A,B) that are together in the state
$$|\phi^+\rangle = \frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle.$$
We can imagine that Alice holds the qubit A and Bob holds B, which is to say that together they share an e-bit. We'd like to have a density matrix description of Alice's qubit A in isolation, as if Bob decided to take his qubit and visit the stars, never to be seen again.
First let's think about what would happen if Bob decided somewhere on his journey to measure his qubit with respect to a standard basis measurement. If he did this, he would obtain the outcome 0 with probability
$$\bigl\|(I_A \otimes \langle 0|)\,|\phi^+\rangle\bigr\|^2 = \Bigl\|\tfrac{1}{\sqrt{2}}|0\rangle\Bigr\|^2 = \frac{1}{2},$$
in which case the state of Alice's qubit becomes ∣0⟩; and he would obtain the outcome 1 with probability
$$\bigl\|(I_A \otimes \langle 1|)\,|\phi^+\rangle\bigr\|^2 = \Bigl\|\tfrac{1}{\sqrt{2}}|1\rangle\Bigr\|^2 = \frac{1}{2},$$
in which case the state of Alice's qubit becomes ∣1⟩.
So, if we ignore Bob's measurement outcome and focus on Alice's qubit, we conclude that she obtains the state ∣0⟩ with probability 1/2 and the state ∣1⟩ with probability 1/2. This leads us to describe the state of Alice's qubit in isolation by the density matrix
$$\frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1| = \frac{1}{2} I_A.$$
That is, Alice's qubit is in the completely mixed state. To be clear, this description of the state of Alice's qubit doesn't include Bob's measurement outcome; we're ignoring Bob altogether.
Now, it might seem like the density matrix description of Alice's qubit in isolation that we've just obtained relies on the assumption that Bob has measured his qubit, but this is not actually so. What we've done is to use the possibility that Bob measures his qubit to argue that the completely mixed state arises as the state of Alice's qubit, based on what we've already learned. Of course, nothing says that Bob must measure his qubit, but nothing says that he doesn't. And if he's light years away, then nothing he does or doesn't do can possibly influence the state of Alice's qubit viewed in isolation. That is to say, the description we've obtained for the state of Alice's qubit is the only description consistent with the impossibility of faster-than-light communication.
We can also consider the state of Bob's qubit B, which happens to be the completely mixed state as well. Indeed, for all four Bell states we find that the reduced state of both Alice's qubit and Bob's qubit is the completely mixed state.
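This can be verified numerically (a NumPy sketch, where the partial trace is taken by reindexing the 4×4 density matrix with one row and one column index per qubit):

```python
import numpy as np

# |phi+> = (|00> + |11>) / sqrt(2), as a length-4 vector.
phi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# Trace out B: view rho with indices (iA, iB, jA, jB), then sum over iB = jB.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Alice's qubit in isolation is the completely mixed state I/2.
assert np.allclose(rho_A, np.eye(2) / 2)
```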
Reduced states for a general quantum state vector
Now let's generalize the example just discussed to two arbitrary systems A and B, not necessarily qubits in the state ∣ϕ+⟩. We'll assume the classical state sets of A and B are Σ and Γ, respectively. A density matrix ρ representing a state of the combined system (A,B) therefore has row and column indices corresponding to the Cartesian product Σ×Γ.
Suppose that the state of (A,B) is described by the quantum state vector ∣ψ⟩, so the density matrix describing this state is ρ=∣ψ⟩⟨ψ∣. We'll obtain a density matrix description of the state of A in isolation, which is conventionally denoted ρA. (A superscript is also sometimes used rather than a subscript.)
The state vector ∣ψ⟩ can be expressed in the form
$$|\psi\rangle = \sum_{b\in\Gamma} |\phi_b\rangle \otimes |b\rangle$$
for a uniquely determined collection of vectors {∣ϕb⟩:b∈Γ}. In particular, these vectors can be determined through a simple formula.
$$|\phi_b\rangle = (I_A \otimes \langle b|)\,|\psi\rangle$$
Reasoning similarly to the previous example of an e-bit, if we were to measure the system B with a standard basis measurement, we would obtain each outcome b∈Γ with probability ∥∣ϕb⟩∥2, in which case the state of A becomes
$$\frac{|\phi_b\rangle}{\bigl\||\phi_b\rangle\bigr\|}.$$
As a density matrix, this state can be written as follows.

$$\frac{|\phi_b\rangle\langle\phi_b|}{\bigl\||\phi_b\rangle\bigr\|^2}$$

Averaging these density matrices, with each weighted by the probability $\||\phi_b\rangle\|^2$ of the corresponding outcome, gives $\sum_{b\in\Gamma}|\phi_b\rangle\langle\phi_b|$ as the state of A in isolation. Rewriting each term using $|\phi_b\rangle = (I_A \otimes \langle b|)|\psi\rangle$ and then substituting an arbitrary density matrix ρ in place of $|\psi\rangle\langle\psi|$ leads us to the description of the reduced state of A for any density matrix ρ of the pair (A,B), not just a pure state.
$$\rho_A = \sum_{b\in\Gamma} (I_A \otimes \langle b|)\,\rho\,(I_A \otimes |b\rangle)$$
This formula must work, simply by linearity together with the fact that every density matrix can be written as a convex combination of pure states.
The operation being performed on ρ to obtain ρA in this equation is known as the partial trace, and to be more precise we say that the partial trace is performed on B, or that B is traced out. This operation is denoted TrB, so we can write
$$\mathrm{Tr}_B(\rho) = \sum_{b\in\Gamma} (I_A \otimes \langle b|)\,\rho\,(I_A \otimes |b\rangle).$$
We can also define the partial trace on A, so it's the system A that gets traced out rather than B, like this.
$$\mathrm{Tr}_A(\rho) = \sum_{a\in\Sigma} (\langle a| \otimes I_B)\,\rho\,(|a\rangle \otimes I_B)$$
This gives us the density matrix description ρB of the state of B in isolation rather than A.
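The formula for the partial trace translates directly into code (a NumPy sketch; `trace_out_B` is a hypothetical helper name):

```python
import numpy as np

def trace_out_B(rho, dA, dB):
    """Partial trace on B: sum_b (I_A (x) <b|) rho (I_A (x) |b>)."""
    IA = np.eye(dA)
    out = np.zeros((dA, dA), dtype=complex)
    for b in range(dB):
        ket_b = np.zeros((dB, 1)); ket_b[b, 0] = 1
        out += np.kron(IA, ket_b.T) @ rho @ np.kron(IA, ket_b)
    return out

# Sanity check on a product state: Tr_B(rho (x) sigma) = Tr(sigma) rho = rho.
rho = np.array([[0.75, 0], [0, 0.25]], dtype=complex)
sigma = np.eye(3, dtype=complex) / 3
assert np.allclose(trace_out_B(np.kron(rho, sigma), 2, 3), rho)
```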
To recapitulate, if (A,B) is any pair of systems and we have a density matrix ρ describing a state of (A,B), the reduced states of the systems A and B are as follows.

$$\rho_A = \mathrm{Tr}_B(\rho) \qquad \text{and} \qquad \rho_B = \mathrm{Tr}_A(\rho)$$
If ρ is a density matrix, then ρA and ρB will also necessarily be density matrices.
Generalization to three or more systems
These notions can be generalized to any number of systems in place of two in a natural way. In general, we can put the names of whatever systems we choose in the subscript of a density matrix ρ to describe the reduced state of just those systems. For example, if A, B, and C are systems and ρ is a density matrix describing a state of (A,B,C), then we can define

$$\rho_{AB} = \mathrm{Tr}_C(\rho), \qquad \rho_{AC} = \mathrm{Tr}_B(\rho), \qquad \rho_A = \mathrm{Tr}_B\bigl(\mathrm{Tr}_C(\rho)\bigr),$$

and so on, where each partial trace sums over the standard basis states of the systems being traced out.
An alternative way to describe the partial trace mappings TrA and TrB is that they are the unique linear mappings that satisfy the formulas
$$\mathrm{Tr}_A(M \otimes N) = \mathrm{Tr}(M)\,N \qquad \text{and} \qquad \mathrm{Tr}_B(M \otimes N) = \mathrm{Tr}(N)\,M.$$
In these formulas, N and M are square matrices of the appropriate sizes: the rows and columns of M correspond to the classical states of A and the rows and columns of N correspond to the classical states of B.
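Both defining identities are easy to confirm numerically (a NumPy sketch with randomly generated M and N, taking A to be 2-dimensional and B to be 3-dimensional):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
N = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Reindex M (x) N as (iA, iB, jA, jB), then trace over one pair of indices.
MN = np.kron(M, N).reshape(2, 3, 2, 3)
TrB = MN.trace(axis1=1, axis2=3)  # trace out the second (B) factor
TrA = MN.trace(axis1=0, axis2=2)  # trace out the first (A) factor

assert np.allclose(TrB, np.trace(N) * M)
assert np.allclose(TrA, np.trace(M) * N)
```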
This characterization of the partial trace is not only fundamental from a mathematical viewpoint, but can also allow for quick calculations in some situations. For example, consider this state of a pair of qubits (A,B).
$$\rho = \frac{1}{2}\,|0\rangle\langle 0| \otimes |0\rangle\langle 0| + \frac{1}{2}\,|1\rangle\langle 1| \otimes |+\rangle\langle +|$$
To compute the reduced state ρA, for instance, we can use linearity together with the fact that ∣0⟩⟨0∣ and ∣+⟩⟨+∣ have unit trace.

$$\rho_A = \frac{1}{2}\,\mathrm{Tr}\bigl(|0\rangle\langle 0|\bigr)\,|0\rangle\langle 0| + \frac{1}{2}\,\mathrm{Tr}\bigl(|+\rangle\langle +|\bigr)\,|1\rangle\langle 1| = \frac{1}{2}|0\rangle\langle 0| + \frac{1}{2}|1\rangle\langle 1|$$
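This calculation can be double-checked numerically (a NumPy sketch of the same two-qubit state):

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)

# rho = (1/2)|0><0| (x) |0><0| + (1/2)|1><1| (x) |+><+|
rho = 0.5 * np.kron(ket0 @ ket0.conj().T, ket0 @ ket0.conj().T) \
    + 0.5 * np.kron(ket1 @ ket1.conj().T, plus @ plus.conj().T)

# Tracing out B replaces each right-hand factor by its (unit) trace,
# so rho_A = (1/2)|0><0| + (1/2)|1><1| = I/2.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
assert np.allclose(rho_A, np.eye(2) / 2)
```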
The partial trace can also be described explicitly in terms of matrices. Here we'll do this just for two qubits, but this can also be generalized to larger systems. Assume that we have two qubits (A,B), so that any density matrix describing a state of these two qubits can be written as