In the general formulation of quantum information, operations on quantum states are represented by a special class of mappings called channels. This includes useful operations, such as ones corresponding to unitary gates and circuits, as well as operations we might prefer to avoid, such as noise. We can also describe measurements as channels, which we'll do in the next lesson. In short, any change in states that is physically realizable (in an idealized sense) can be described by a channel.
The term channel comes to us from information theory, which (among other things) studies the information-carrying capacities of noisy communication channels. In this context, a quantum channel could specify the quantum state that's received when a given quantum state is sent, perhaps through a quantum network of some sort. It should be understood, however, that the terminology merely reflects this historical motivation and is used in a more general way. Indeed, we can describe a wide variety of things (such as complicated quantum computations) as channels, even though they have nothing to do with communication and would be unlikely to arise naturally in such a setting.
In mathematical terms, channels are linear mappings from density matrices to density matrices that satisfy certain requirements. Because channels are linear mappings from matrices to matrices — as opposed to linear mappings from vectors to vectors — we'll require some additional mathematical machinery to describe them in general. We'll see that channels can, in fact, be described mathematically in a few different ways, including representations named in honor of three individuals who played key roles in their development: Stinespring, Kraus, and Choi. Together, these different ways of describing channels offer different angles from which they can be viewed and analyzed.
We'll begin the lesson with a discussion of some basic aspects of channels along with a small selection of examples, and then we'll move on to Stinespring, Kraus, and Choi representations of channels later in the lesson. In the final section of the lesson we'll see that, although these representations are different, they all offer equivalent mathematical characterizations of channels.
Basics of channels
Throughout this lesson we'll use uppercase Greek letters, including Φ and Ψ, as well as some other letters in specific cases, to refer to channels. Every channel Φ has an input system and an output system, and we'll typically use the name X to refer to the input system and Y to refer to the output system. It's common that the output system of a channel is the same as the input system, and in this case we can use the same letter X to refer to both.
Channels are linear mappings
Channels are described by linear mappings, just like probabilistic operations in the standard formulation of classical information and unitary operations in the simplified formulation of quantum information.
If a channel Φ is performed on an input system X whose state is described by a density matrix ρ, then the output system of the channel is described by the density matrix Φ(ρ). In the situation in which the output system of Φ is also X, we can simply view the channel as representing a change in the state of X, from ρ to Φ(ρ). When the output system of Φ is a different system Y rather than X, it should be understood that Y is a new system that is created by the process of applying the channel, and that the input system X is no longer available once the channel is applied — as if the channel itself transformed X into Y, leaving it in the state Φ(ρ).
The assumption that channels are described by linear mappings can be viewed as being an axiom — or in other words, a basic postulate of the theory rather than something that is proved. We can, however, see the need for channels to act linearly on convex combinations of density matrix inputs in order for them to be consistent with probability theory and what we've already learned about density matrices.
To be more specific, suppose that we have a channel Φ and we apply it to a system when it's in one of the two states represented by the density matrices ρ and σ. If we apply the channel to ρ we obtain the density matrix Φ(ρ) and if we apply it to σ we obtain the density matrix Φ(σ). Thus, if we randomly choose the input state of X to be ρ with probability p∈[0,1] and σ with probability 1−p, we'll obtain the output state Φ(ρ) with probability p and Φ(σ) with probability 1−p, which we represent by a weighted average of density matrices as pΦ(ρ)+(1−p)Φ(σ). On the other hand, we could alternatively think about the input state of the channel as being represented by the weighted average pρ+(1−p)σ, in which case the output is Φ(pρ+(1−p)σ). It's the same state regardless of how we choose to think about it, so we must have
Φ(pρ+(1−p)σ)=pΦ(ρ)+(1−p)Φ(σ).
Whenever we have a mapping that satisfies this condition for every choice of density matrices ρ and σ and scalars p∈[0,1], there's always a unique way to extend that mapping to every matrix input (i.e., not just density matrix inputs) so that it's linear.
Channels transform density matrices into density matrices
Naturally, in addition to being linear mappings, channels must also transform density matrices into density matrices. If a channel Φ is applied to an input system while it's in a state represented by a density matrix ρ, then we obtain a system whose state is represented by Φ(ρ), which must be a valid density matrix in order for us to interpret it as a state.
It is critically important, though, that we consider a more general situation, where a channel Φ transforms a system X into a system Y in the presence of an additional system Z (to which nothing happens). That is, if we start with the pair of systems (Z,X) in a state described by some density matrix and then apply Φ just to X, transforming it into Y, we must obtain a density matrix describing a state of the pair (Z,Y).
We can describe in mathematical terms how a channel Φ having an input system X and an output system Y transforms a state of the pair (Z,X) into a state of (Z,Y) when nothing is done to Z. To keep things simple, we'll assume that the classical state set of Z is {0,…,m−1}. This allows us to write an arbitrary density matrix ρ, representing a state of (Z,X), in the following form.

ρ = ∑_{a,b=0}^{m−1} ∣a⟩⟨b∣ ⊗ ρ_{a,b} =
( ρ_{0,0}    ⋯  ρ_{0,m−1}
  ⋮          ⋱  ⋮
  ρ_{m−1,0}  ⋯  ρ_{m−1,m−1} )
On the right-hand side of this equation we have a block matrix, which we can think of as a matrix of matrices except that the inner parentheses are removed. This leaves us with an ordinary matrix that can alternatively be described using Dirac notation as we have in the middle expression. Each matrix ρa,b has rows and columns corresponding to the classical states of X, and these matrices can be determined by a simple formula.
ρ_{a,b} = (⟨a∣ ⊗ I_X) ρ (∣b⟩ ⊗ I_X)
Note that these are not density matrices in general — it's only when they're arranged together to form ρ that we obtain a density matrix. The following equation describes the state of (Z,Y) that is obtained when Φ is applied to X.

∑_{a,b=0}^{m−1} ∣a⟩⟨b∣ ⊗ Φ(ρ_{a,b})
Notice that in order to evaluate this expression for a given choice of Φ and ρ, we must understand how Φ works as a linear mapping on non-density matrix inputs, as each ρa,b generally won't be a density matrix on its own. The equation is consistent with the expression (IdZ⊗Φ)(ρ), in which IdZ denotes the identity channel on the system Z. This presumes that we've extended the notion of a tensor product to linear mappings from matrices to matrices, which is straightforward but isn't really essential to the lesson and won't be explained further.
Reiterating a statement made above, in order for a linear mapping Φ to be a valid channel it must be the case that for every choice for Z and every density matrix ρ of the pair (Z,X) we always obtain a density matrix when Φ is applied to X. In mathematical terms, the properties a mapping must possess to be a channel are that it must be trace-preserving — so that the matrix we obtain by applying the channel has trace equal to one — as well as completely positive — so that the resulting matrix is positive semidefinite. These are both important properties that can be considered and studied separately, but it isn't critical for the sake of this lesson to consider these properties in isolation. There are in fact linear mappings that always output a density matrix when given a density matrix as input, but fail to map density matrices to density matrices for compound systems, so we do eliminate some linear mappings from the class of channels in this way. (The linear mapping given by matrix transposition is the simplest example.)
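To make the last point concrete, here is a small numerical sketch (our own illustration, not from the text) showing that the transpose map fails this test: applying it to just one qubit of an e-bit produces a matrix with a negative eigenvalue.

```python
import numpy as np

# The transpose map sends density matrices to density matrices, but applying it
# to one half of an entangled pair does not: the output has a negative eigenvalue.
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # |phi+> on a pair of qubits
rho = np.outer(phi, phi)

# Transpose each 2x2 block rho_{a,b}, which applies transposition to the
# right-hand qubit while doing nothing to the left-hand qubit.
out = np.zeros((4, 4))
for a in range(2):
    for b in range(2):
        out[2*a:2*a+2, 2*b:2*b+2] = rho[2*a:2*a+2, 2*b:2*b+2].T

eigenvalues = np.linalg.eigvalsh(out)
```

The smallest eigenvalue is −1/2, certifying that the output is not positive semidefinite, so transposition cannot be a channel.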
We have an analogous formula to one above in the case that the two systems X and Z are swapped, so that Φ is applied to the system on the left rather than the right.
(Φ ⊗ Id_Z)(ρ) = ∑_{a,b=0}^{m−1} Φ(ρ_{a,b}) ⊗ ∣a⟩⟨b∣
This assumes that ρ is a state of (X,Z) rather than (Z,X). This time the block matrix description doesn't work because the matrices ρa,b don't fall into consecutive rows and columns in ρ, but it's the same underlying mathematical structure.
Any linear mapping that satisfies the requirement that it always transforms density matrices into density matrices, even when it's applied to just one part of a compound system, represents a valid channel. So, in an abstract sense, the notion of a channel is determined by the notion of a density matrix together with the assumption that channels act linearly. In this regard, channels are analogous to unitary operations in the simplified formulation of quantum information, which are precisely the linear mappings that always transform quantum state vectors to quantum state vectors for a given system; as well as to probabilistic operations (represented by stochastic matrices) in the standard formulation of classical information, which are precisely the linear mappings that always transform probability vectors into probability vectors.
Unitary operations as channels
Suppose X is a system and U is a unitary matrix representing an operation on X. The channel Φ that describes this operation on density matrices is defined as follows for every density matrix ρ representing a quantum state of X.
Φ(ρ) = UρU†        (1)
This action, where we multiply by U on the left and U† on the right, is commonly referred to as conjugation by U.
This description is consistent with the fact that the density matrix that represents a given quantum state vector ∣ψ⟩ is ∣ψ⟩⟨ψ∣. In particular, if the unitary operation U is performed on ∣ψ⟩, then the output state is represented by the vector U∣ψ⟩, and so the density matrix describing this state is equal to
(U∣ψ⟩)(U∣ψ⟩)†=U∣ψ⟩⟨ψ∣U†.
Once we know that, as a channel, the operation U has the action ∣ψ⟩⟨ψ∣↦U∣ψ⟩⟨ψ∣U† on pure states, we can conclude by linearity that it must work as is specified by the equation (1) above for any density matrix ρ.
The particular channel we obtain when we take U=I is the identity channel Id, which we can also give a subscript (such as IdZ, as we've already encountered) when we wish to indicate explicitly what system this channel acts on. Its output is always equal to its input: Id(ρ)=ρ. This might not seem like an interesting channel, but it's actually a very important one, and it's fitting that this is our first example. The identity channel is the perfect channel in some contexts, representing an ideal memory or a noiseless transmission of information from a sender to a receiver.
Every unitary channel is indeed a valid channel. Conjugation by a matrix U gives us a linear map — and if ρ is a density matrix of a system (Z,X) and U is unitary, then the result, which we can express as
(I_Z ⊗ U) ρ (I_Z ⊗ U†),
is also a density matrix. Specifically, this matrix must be positive semidefinite, for if ρ=M†M then
(I_Z ⊗ U) ρ (I_Z ⊗ U†) = K†K

for K = M(I_Z ⊗ U†), and it must have unit trace by the cyclic property of the trace.
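As a quick sanity check, the following sketch (the particular U and ρ are illustrative choices of our own) confirms numerically that conjugation by a unitary preserves both the unit trace and positive semidefiniteness.

```python
import numpy as np

# Conjugate an example density matrix by the Hadamard gate and confirm the
# result is again a density matrix (unit trace, nonnegative eigenvalues).
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # Hadamard
rho = np.array([[0.75, 0.25], [0.25, 0.25]])           # an example density matrix

out = U @ rho @ U.conj().T
trace = np.trace(out).real
eigs = np.linalg.eigvalsh(out)
```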
Suppose we have two channels Φ0 and Φ1 that share the same input system and the same output system. For any real number p∈[0,1], we could decide to apply Φ0 with probability p and Φ1 with probability 1−p, which gives us a new channel that can be written as pΦ0+(1−p)Φ1. Explicitly, the way that this channel acts on a given density matrix is specified by the following simple equation.
(pΦ0+(1−p)Φ1)(ρ)=pΦ0(ρ)+(1−p)Φ1(ρ)
More generally, if we have channels Φ0,…,Φm−1 and a probability vector (p0,…,pm−1), then we can average these channels together to obtain a new channel.
∑_{k=0}^{m−1} p_k Φ_k
This is a convex combination of channels, and we always obtain a valid channel through this process. A simple way to say this in mathematical terms is that, for a given choice of an input and output system, the set of all channels is a convex set.
As an example, we could choose to apply one of a collection of unitary operations to a certain system. We obtain what's known as a mixed unitary channel, which is a channel that can be expressed in the following form.
Φ(ρ) = ∑_{k=0}^{m−1} p_k U_k ρ U_k†
Mixed unitary channels for which all of the unitary operations are Pauli matrices (or tensor products of Pauli matrices) are called Pauli channels, and are commonly encountered in quantum computing.
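As an illustration (the particular unitaries and probabilities are our own choice, not from the text), here is a sketch of a two-term mixed-unitary channel that applies I or σz, each with probability 1/2:

```python
import numpy as np

# A mixed-unitary channel: apply the identity or sigma_z, each with probability 1/2.
unitaries = [np.eye(2), np.diag([1.0, -1.0])]
probs = [0.5, 0.5]

def mixed_unitary(rho):
    return sum(p * U @ rho @ U.conj().T for p, U in zip(probs, unitaries))

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
out = mixed_unitary(plus)                   # off-diagonal entries cancel
```

Because σz is a Pauli matrix, this is a Pauli channel; it coincides with the completely dephasing channel discussed later in the lesson.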
Examples of qubit channels
Now we'll take a look at a few specific examples of channels that aren't unitary. For all of these examples, the input and output systems are both single qubits, which is to say that these are examples of qubit channels.
The qubit reset channel
This channel does something very simple: it resets a qubit to the ∣0⟩ state. As a linear mapping this channel can be expressed as follows for every qubit density matrix ρ.
Λ(ρ)=Tr(ρ)∣0⟩⟨0∣
Although the trace of every density matrix ρ is equal to 1, writing the channel in this way makes it clear that it's a linear mapping that could be applied to any 2×2 matrix, not just a density matrix. As we already observed, we need to understand how channels work as linear mappings on non-density matrix inputs to describe what happens when they're applied to just one part of a compound system.
For example, suppose that A and B are qubits and together the pair (A,B) is in the Bell state ∣ϕ+⟩. As a density matrix, this state is given by
∣ϕ+⟩⟨ϕ+∣ =
( 1/2  0  0  1/2
  0    0  0  0
  0    0  0  0
  1/2  0  0  1/2 )
Using Dirac notation we can alternatively express this state as follows.

∣ϕ+⟩⟨ϕ+∣ = (∣00⟩⟨00∣ + ∣00⟩⟨11∣ + ∣11⟩⟨00∣ + ∣11⟩⟨11∣)/2
If the qubit A is reset, meaning that the channel Λ is applied to A while nothing is done to B, the state of the pair (A,B) becomes ∣0⟩⟨0∣ ⊗ I/2 (with A corresponding to the left tensor factor). It might be tempting to say that resetting A has had an effect on B — but in some sense it's actually the opposite. Prior to A being reset, the reduced state of B was the completely mixed state, and that doesn't change as a result of resetting A.
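The claim above can be checked numerically. This sketch (our own illustration) uses the fact that Λ(M) = Tr(M)∣0⟩⟨0∣ implies (Λ ⊗ Id)(ρ) = ∣0⟩⟨0∣ ⊗ Tr_A(ρ) when Λ is applied to the left-hand qubit A:

```python
import numpy as np

# Reset qubit A of an e-bit: since Lambda(M) = Tr(M)|0><0|, applying it to the
# left-hand qubit gives (Lambda ⊗ Id)(rho) = |0><0| ⊗ Tr_A(rho).
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(phi, phi)                  # state of (A, B), with A on the left

ket0 = np.array([[1.0], [0.0]])
trace_A = rho[0:2, 0:2] + rho[2:4, 2:4]   # partial trace over A
out = np.kron(ket0 @ ket0.T, trace_A)     # |0><0| ⊗ (completely mixed state)
```

The reduced state of B is I/2 both before and after the reset, matching the discussion above.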
The completely dephasing channel
Here's an example of a qubit channel called Δ, described by its action on 2×2 matrices:
Δ ( α_{0,0}  α_{0,1}
    α_{1,0}  α_{1,1} )  =  ( α_{0,0}  0
                             0        α_{1,1} )
In words, Δ zeros out the off-diagonal entries of a 2×2 matrix. This example can be generalized to arbitrary systems, as opposed to qubits: for whatever density matrix is input, the channel zeros out all of the off-diagonal entries and leaves the diagonal alone.
This channel is called the completely dephasing channel, and it can be thought of as representing an extreme form of the process known as decoherence — which essentially ruins quantum superpositions and turns them into classical probabilistic states. Another way to think about this channel is that it describes a standard basis measurement on a qubit, where an input qubit is measured and then discarded, and where the output is a density matrix describing the measurement outcome. (Alternatively, but equivalently, we can imagine that the measurement outcome is discarded, leaving the qubit in its post-measurement state.)
Let us again consider an e-bit, and see what happens when Δ is applied to just one of the two qubits. Specifically, we have qubits A and B for which (A,B) is in the state ∣ϕ+⟩, and this time let's apply the channel to the second qubit. Here's the state we obtain.

(Id ⊗ Δ)(∣ϕ+⟩⟨ϕ+∣) = (∣00⟩⟨00∣ + ∣11⟩⟨11∣)/2
We can also consider a qubit channel that only slightly dephases a qubit, as opposed to completely dephasing it, which is a less extreme form of decoherence than what is represented by the completely dephasing channel. In particular, suppose that ε∈(0,1) is a small but nonzero real number. We can define a channel
Δε=(1−ε)Id+εΔ,
which transforms a given qubit density matrix ρ like this:
Δε(ρ)=(1−ε)ρ+εΔ(ρ).
That is, nothing happens with probability 1−ε, and with probability ε the qubit dephases. In terms of matrices, this action can be expressed as follows, where the diagonal entries are left alone and the off-diagonal entries are multiplied by 1−ε.

Δε ( α_{0,0}  α_{0,1}
     α_{1,0}  α_{1,1} )  =  ( α_{0,0}       (1−ε)α_{0,1}
                              (1−ε)α_{1,0}  α_{1,1} )
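Here is a minimal sketch of Δε acting on ∣+⟩⟨+∣ (the value ε = 0.1 is an arbitrary illustrative choice):

```python
import numpy as np

# The slightly-dephasing channel: off-diagonal entries shrink by a factor 1 - eps.
eps = 0.1

def dephase_eps(rho):
    return (1 - eps) * rho + eps * np.diag(np.diag(rho))

plus = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
out = dephase_eps(plus)
```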
The completely depolarizing channel
Here's another example of a qubit channel called Ω.
Ω(ρ) = Tr(ρ) I/2
Here I denotes the 2×2 identity matrix. In words, for any density matrix input ρ, the channel Ω outputs the completely mixed state. It doesn't get any noisier than this! This channel is called the completely depolarizing channel, and like the completely dephasing channel it can be generalized to arbitrary systems in place of qubits.
We can also consider a less extreme variant of this channel where depolarizing happens with probability ε, similar to what we saw for the dephasing channel.
Ωε(ρ)=(1−ε)ρ+εΩ(ρ).
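A sketch of Ωε in code, with an illustrative ε = 0.25: with probability 1−ε the state is untouched, and with probability ε it is replaced by the completely mixed state.

```python
import numpy as np

# The partially depolarizing channel on a qubit.
eps = 0.25

def depolarize_eps(rho):
    return (1 - eps) * rho + eps * np.trace(rho) * np.eye(2) / 2

out = depolarize_eps(np.array([[1.0, 0.0], [0.0, 0.0]]))   # apply to |0><0|
```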
Channel representations
Next we'll discuss mathematical representations of channels, starting with a basic issue that was suggested at the start of the lesson. Linear mappings from vectors to vectors can be represented by matrices in a familiar way, where the action of the linear mapping is described by matrix-vector multiplication. But channels are linear mappings from matrices to matrices, not vectors to vectors. So, in general, how can we express channels in mathematical terms?
For some channels we may have a simple formula that describes them, like for the three examples of non-unitary qubit channels described in the previous section. But an arbitrary channel may not have such a nice formula, so it isn't practical in general to express a channel in this way. As a point of comparison, in the simplified formulation of quantum information we use unitary matrices to represent operations on quantum state vectors: every unitary matrix represents a valid operation and every valid operation can be expressed as a unitary matrix. In essence, the question being asked is: How is this done for channels?
The answer to this question is that there are in fact multiple ways to represent channels in mathematical terms. We'll discuss three specific ways of representing channels, named after three individuals whose work was important to their development: Stinespring, Kraus, and Choi.
Stinespring representations
Stinespring representations are based on the idea that every channel can be implemented in a standard way, where an input system is first combined with an initialized workspace system, forming a compound system; then a unitary operation is performed on the compound system; and finally the workspace system is discarded (or traced out), leaving the output of the channel.
The following figure depicts such an implementation, in the form of a circuit diagram, for a channel whose input and output systems are the same system X.
Note that in this diagram the wires represent arbitrary systems, as indicated by the labels above the wires, and not necessarily single qubits.
In words, the way the implementation works is as follows. The input system X begins in some state ρ, while a workspace system W is initialized to the standard basis state ∣0⟩. (We're presuming that 0 is a classical state of W and we choose it to be the initialized state of this system, which will help to simplify the mathematics. One could, however, choose any fixed pure state to represent the initialized state of W without changing the basic properties of the representation.) A unitary operation U is performed on the pair (W,X), and finally the workspace system W is traced out, leaving X as the output. In the diagram, the ground symbol commonly used in electrical engineering indicates explicitly that W is discarded.
A mathematical expression of the resulting channel Φ is as follows.
Φ(ρ) = Tr_W( U (∣0⟩⟨0∣_W ⊗ ρ) U† )
Notice that, as usual, we're using Qiskit's ordering convention: the system X is on top in the diagram, and therefore corresponds to the right-hand tensor factor in the formula.
In general, the input and output systems of a channel need not be the same. Here's a figure depicting an implementation of a channel Φ whose input system is X and whose output system is Y.
This time the unitary operation transforms (W,X) into a pair (G,Y), where G is a new "garbage" system that gets traced out, leaving Y as the output system. In order for U to be unitary, it must be a square matrix. This requires that the pair (G,Y) has the same number of classical states as the pair (W,X), and so the systems W and G must be chosen in a way that allows this. We obtain a similar mathematical expression of the resulting channel Φ to what we had before.
Φ(ρ) = Tr_G( U (∣0⟩⟨0∣_W ⊗ ρ) U† )
When a channel is described in this way, as a unitary operation along with a specification of how the workspace system is initialized and how the output system is selected, we say that it is expressed in Stinespring form or that it's a Stinespring representation of the channel. It's not at all obvious, but every channel does in fact have a Stinespring representation, as we will see by the end of the lesson. We'll also see that Stinespring representations aren't unique; there will always be different ways to implement the same channel in the manner that's been described.
Remark. In the context of quantum information, the term Stinespring representation commonly refers to a slightly more general expression of a channel having the form Φ(ρ) = Tr_G(AρA†) for an isometry A, which is a matrix whose columns are orthonormal but that might not be a square matrix. For Stinespring representations having the form that we've adopted as a definition, we can obtain an expression of this other form by taking A = U(∣0⟩_W ⊗ I_X), so that AρA† = U(∣0⟩⟨0∣_W ⊗ ρ)U†.
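The isometry form can be checked numerically. In this sketch U is a randomly generated 4×4 unitary (an arbitrary choice, since any unitary works), W and G are single qubits on the left tensor factor, and we confirm that A = U(∣0⟩_W ⊗ I_X) satisfies A†A = I_X and that Tr_G(AρA†) is a density matrix:

```python
import numpy as np

# Build a random unitary U on (W, X) via a QR decomposition.
rng = np.random.default_rng(7)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(M)

ket0 = np.array([[1.0], [0.0]])
A = U @ np.kron(ket0, np.eye(2))        # the isometry A = U(|0>_W ⊗ I_X)
isometry_check = A.conj().T @ A         # should equal I_X

rho = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+| as an example input
big = A @ rho @ A.conj().T                 # a matrix on (G, Y)
out = big[0:2, 0:2] + big[2:4, 2:4]        # trace out G (the left factor)
trace = np.trace(out).real
eigs = np.linalg.eigvalsh(out)
```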
Example: completely dephasing channel
Here's a Stinespring representation of the qubit dephasing channel Δ. In this diagram, both wires represent single qubits — so this is an ordinary quantum circuit diagram.
To see that the effect that this circuit has on the input qubit is indeed described by the completely dephasing channel, we can go through the circuit one step at a time, using the explicit matrix representation of the partial trace discussed in the previous lesson. We'll refer to the top qubit as X — this is the input and output of the channel — and we'll assume that X starts out in some arbitrary state ρ.
The first step is the introduction of a workspace qubit W. Prior to the controlled-NOT gate being performed, the state of the pair (W,X) is represented by the following density matrix.

∣0⟩⟨0∣ ⊗ ρ
As per Qiskit's ordering convention, the top qubit X is on the right and the bottom qubit W is on the left. We're using density matrices rather than quantum state vectors, but they're tensored together in a similar way to what's done in the simplified formulation of quantum information.
The next step is to perform the controlled-NOT operation, where X is the control and W is the target. Still keeping in mind the Qiskit ordering convention, the matrix representation of this gate is as follows.
( 1  0  0  0
  0  0  0  1
  0  0  1  0
  0  1  0  0 )
This is a unitary operation, and to apply it to a density matrix we conjugate by the unitary matrix. The conjugate-transpose doesn't happen to change this particular matrix, so writing ρ = ( α_{0,0}  α_{0,1} ; α_{1,0}  α_{1,1} ), the result is as follows.

( α_{0,0}  0  0  α_{0,1}
  0        0  0  0
  0        0  0  0
  α_{1,0}  0  0  α_{1,1} )
Finally, the partial trace is performed on W. Recalling the action of this operation on 4×4 matrices, which was described in the previous lesson, we obtain the following density matrix output.

( α_{0,0}  0
  0        α_{1,1} )  =  Δ(ρ)
Tracing out the qubit on the left-hand side yields the same answer as before.
⟨0∣ρ∣0⟩∣0⟩⟨0∣+⟨1∣ρ∣1⟩∣1⟩⟨1∣=Δ(ρ)
An intuitive way to think about this circuit is that the controlled-NOT operation effectively copies the classical state of the input qubit, and when the copy is thrown in the trash the input qubit "collapses" probabilistically to one of the two possible classical states, which is equivalent to complete dephasing.
An alternative implementation is based on a simple idea: dephasing is equivalent to either doing nothing (i.e., applying an identity operation) or applying a σz gate, each with probability 1/2.
That is, the completely dephasing channel is an example of a mixed-unitary channel, and more specifically a Pauli channel.
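Both descriptions of the completely dephasing channel can be verified numerically. The following sketch compares the CNOT-based Stinespring implementation with the mixed-unitary form (ρ + σz ρ σz)/2 on an example input of our own choosing:

```python
import numpy as np

rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])   # an example density matrix

# CNOT with control X (the right factor) and target W (the left factor),
# matching the ordering used in the text.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=complex)

ket0 = np.array([[1.0], [0.0]])
start = np.kron(ket0 @ ket0.T, rho)                  # |0><0|_W ⊗ rho
after = CNOT @ start @ CNOT.conj().T
stinespring_out = after[0:2, 0:2] + after[2:4, 2:4]  # trace out W (the left factor)

Z = np.diag([1.0 + 0j, -1.0])
mixed_unitary_out = (rho + Z @ rho @ Z) / 2
```

Both computations produce the dephased matrix diag(0.7, 0.3).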
Example: qubit reset channel
The qubit reset channel can be implemented as follows.
The swap gate simply shifts the ∣0⟩ initialized state of the workspace qubit so that it gets output, while the input state ρ gets moved to the bottom qubit and then traced out.
Alternatively, if we don't demand that the output of the channel is left on top, we can take this very simple circuit as our representation.
In words, resetting a qubit to the ∣0⟩ state is equivalent to throwing the qubit in the trash and getting a new one.
Kraus representations
Now we'll discuss Kraus representations, which offer a convenient formulaic way to express the action of a channel through matrix multiplication and addition. In particular, a Kraus representation is a specification of a channel Φ in the following form.
Φ(ρ) = ∑_{k=0}^{N−1} A_k ρ A_k†
Here, A0,…,AN−1 are matrices that all have the same dimensions: their columns correspond to the classical states of the input system X and their rows correspond to the classical states of the output system, whether it's X or some other system Y. In order for Φ to be a valid channel these matrices must satisfy the following condition.
∑_{k=0}^{N−1} A_k† A_k = I_X
(This condition is equivalent to the condition that Φ preserves trace. The other property required of a channel — which is complete positivity — follows from the general form of the equation for Φ, as a sum of conjugations.)
Sometimes it's convenient to name the matrices A0,…,AN−1 in a different way. For instance, we could number them starting from 1, or we could use states in some arbitrary classical state set Γ instead of numbers as subscripts:
Φ(ρ) = ∑_{a∈Γ} A_a ρ A_a†    where    ∑_{a∈Γ} A_a† A_a = I.
These different ways of naming these matrices, which are called Kraus matrices, are all common and can be convenient in different situations — but we'll stick with the names A0,…,AN−1 in this lesson for the sake of simplicity.
The number N can be an arbitrary positive integer, but it never needs to be too large: if the input system X has n classical states and the output system Y has m classical states, then any given channel from X to Y will always have a Kraus representation for which N is at most the product nm.
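As a sketch of how Kraus representations are used in practice, the following code applies a channel given by two Kraus matrices and checks the completeness condition. The particular matrices (an amplitude-damping style example with an illustrative parameter γ = 0.3) are our own choice, not taken from the text:

```python
import numpy as np

gamma = 0.3
A0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - gamma)]])
A1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
kraus = [A0, A1]

# The completeness condition: sum_k A_k† A_k must equal the identity.
completeness = sum(A.conj().T @ A for A in kraus)

def channel(rho):
    return sum(A @ rho @ A.conj().T for A in kraus)

out = channel(np.array([[0.0, 0.0], [0.0, 1.0]]))   # apply to |1><1|
```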
Example: completely dephasing channel
We obtain a Kraus representation of the completely dephasing channel by taking A0=∣0⟩⟨0∣ and A1=∣1⟩⟨1∣.
Example: completely depolarizing channel
A Kraus representation of the completely depolarizing channel Ω is obtained by choosing Kraus matrices like so.
A_0 = I/2,   A_1 = σ_x/2,   A_2 = σ_y/2,   A_3 = σ_z/2
To verify that these Kraus matrices do in fact represent the completely depolarizing channel, let's first observe that conjugating an arbitrary 2×2 matrix by a Pauli matrix works as follows.

σ_x ( α_{0,0}  α_{0,1} ; α_{1,0}  α_{1,1} ) σ_x = ( α_{1,1}  α_{1,0} ; α_{0,1}  α_{0,0} )

σ_y ( α_{0,0}  α_{0,1} ; α_{1,0}  α_{1,1} ) σ_y = ( α_{1,1}  −α_{1,0} ; −α_{0,1}  α_{0,0} )

σ_z ( α_{0,0}  α_{0,1} ; α_{1,0}  α_{1,1} ) σ_z = ( α_{0,0}  −α_{0,1} ; −α_{1,0}  α_{1,1} )

Averaging these three matrices together with the original matrix, the off-diagonal entries cancel while both diagonal entries become (α_{0,0}+α_{1,1})/2, so the sum of the four conjugations divided by 4 equals Tr(ρ) I/2.
This Kraus representation expresses an important idea, which is that the state of a qubit can be completely randomized by applying to it one of the four Pauli matrices (including the identity matrix) chosen uniformly at random. Thus, the completely depolarizing channel is another example of a Pauli channel.
It is not possible to find a Kraus representation for Ω having three or fewer Kraus matrices — at least four are required.
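A quick numerical check (our own sketch) that the four Pauli matrices divided by 2 are Kraus matrices for the completely depolarizing channel:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kraus = [P / 2 for P in (I2, X, Y, Z)]

rho = np.array([[0.9, 0.3], [0.3, 0.1]])        # an example density matrix
out = sum(A @ rho @ A.conj().T for A in kraus)  # should be Tr(rho) I / 2
```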
Example: unitary channels
If we have a unitary matrix U representing an operation on a system X, we can express the action of this unitary operation as a channel:
Φ(ρ)=UρU†.
This expression is already a valid Kraus representation of the channel Φ where we happen to have just one Kraus matrix A0=U. In this case, the required condition
∑_{k=0}^{N−1} A_k† A_k = I_X
takes the much simpler form U†U=IX, which we know is true because U is unitary.
Choi representations
Now we'll discuss a third way that channels can be described, through the Choi representation. The way it works is that each channel is represented by a single matrix known as its Choi matrix. If the input system has n classical states and the output system has m classical states, then the Choi matrix of the channel will have nm rows and nm columns.
Choi matrices provide a faithful representation of channels, meaning that two channels are the same if and only if they have the same Choi matrix. One reason why this is important is that it provides us with a way of determining whether two different descriptions correspond to the same channel or to different channels — we simply compute the Choi matrices and compare them to see if they're equal. In contrast, Stinespring and Kraus representations are not unique in this way, as we have seen. Choi matrices are also useful in other regards for uncovering various mathematical properties of channels.
Definition
Let Φ be a channel from a system X to a system Y, and assume that the classical state set of the input system X is Σ. The Choi representation of Φ, which is denoted J(Φ), is defined by the following equation.
J(Φ) = ∑_{a,b∈Σ} ∣a⟩⟨b∣ ⊗ Φ(∣a⟩⟨b∣)
If we assume that Σ={0,…,n−1} for some positive integer n, then we can alternatively express J(Φ) as a block matrix:

J(Φ) = ( Φ(∣0⟩⟨0∣)     ⋯  Φ(∣0⟩⟨n−1∣)
         ⋮             ⋱  ⋮
         Φ(∣n−1⟩⟨0∣)   ⋯  Φ(∣n−1⟩⟨n−1∣) )
That is, as a block matrix the Choi matrix of a channel has one block Φ(∣a⟩⟨b∣) for each pair (a,b) of classical states of the input system, with the blocks arranged in a natural way. Notice that the set {∣a⟩⟨b∣:0≤a,b<n} forms a basis for the space of all n×n matrices — and because Φ is linear, it follows that its action can be recovered from its Choi matrix by taking linear combinations of the blocks.
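The defining equation translates directly into code. This sketch computes J(Φ) for the completely dephasing channel Δ (used here only as a convenient example):

```python
import numpy as np

def dephase(M):
    """The completely dephasing channel, extended linearly to all 2x2 matrices."""
    return np.diag(np.diag(M))

# J(Phi) = sum_{a,b} |a><b| ⊗ Phi(|a><b|)
n = 2
J = np.zeros((n * n, n * n), dtype=complex)
for a in range(n):
    for b in range(n):
        E = np.zeros((n, n), dtype=complex)
        E[a, b] = 1.0
        J += np.kron(E, dephase(E))
```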
The Choi state of a channel
Another way to think about the Choi matrix of a channel is that it's a density matrix if we divide by n=∣Σ∣. Let's focus on the situation that Σ={0,…,n−1} for simplicity, and imagine that we have two identical copies of X that are together in the entangled state
∣ψ⟩ = (1/√n) ∑_{a=0}^{n−1} ∣a⟩ ⊗ ∣a⟩.
As a density matrix this state is as follows.
∣ψ⟩⟨ψ∣ = (1/n) ∑_{a,b=0}^{n−1} ∣a⟩⟨b∣ ⊗ ∣a⟩⟨b∣
If we apply the channel Φ to the copy of X on the right-hand side, we obtain the Choi matrix divided by n.

(Id ⊗ Φ)(∣ψ⟩⟨ψ∣) = (1/n) ∑_{a,b=0}^{n−1} ∣a⟩⟨b∣ ⊗ Φ(∣a⟩⟨b∣) = J(Φ)/n
In words, up to a normalization factor 1/n, the Choi matrix of Φ is the density matrix we obtain by evaluating Φ on one-half of a maximally entangled pair of input systems, as the following figure depicts. Notice in particular that this implies that the Choi matrix of a channel must always be positive semidefinite.
We also see that because the channel Φ is applied to the second (or top) system alone, it cannot affect the reduced state of the first (or bottom) system. In the case at hand that state is the completely mixed state IX/n, and therefore
Tr_Y(J(Φ)/n) = I_X/n

Clearing the denominator n from both sides yields Tr_Y(J(Φ)) = I_X.
We can alternatively draw this same conclusion by using the fact that channels must always preserve trace, and therefore

Tr_Y(J(Φ)) = ∑_{a,b∈Σ} ∣a⟩⟨b∣ Tr(Φ(∣a⟩⟨b∣)) = ∑_{a,b∈Σ} ∣a⟩⟨b∣ Tr(∣a⟩⟨b∣) = ∑_{a∈Σ} ∣a⟩⟨a∣ = I_X.
In summary, the Choi representation J(Φ) for any channel Φ must be positive semidefinite and must satisfy
Tr_Y(J(Φ)) = I_X.
As we will see by the end of the lesson, these two conditions are not only necessary but also sufficient — meaning that any linear mapping from matrices to matrices that satisfies these requirements must in fact be a channel.
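These two conditions are easy to test numerically. The sketch below builds the Choi matrix of the slightly-dephasing channel Δε (with an illustrative ε = 0.25) and checks positive semidefiniteness and the partial-trace condition:

```python
import numpy as np

eps = 0.25

def channel(M):
    return (1 - eps) * M + eps * np.diag(np.diag(M))   # the channel Delta_eps

# Build the Choi matrix J(Phi) = sum_{a,b} |a><b| ⊗ Phi(|a><b|).
J = np.zeros((4, 4), dtype=complex)
for a in range(2):
    for b in range(2):
        E = np.zeros((2, 2), dtype=complex)
        E[a, b] = 1.0
        J += np.kron(E, channel(E))

eigenvalues = np.linalg.eigvalsh(J)   # all should be >= 0

# Tr_Y(J): trace each 2x2 block, leaving a matrix on X.
trace_Y = np.array([[np.trace(J[2*a:2*a+2, 2*b:2*b+2]) for b in range(2)]
                    for a in range(2)])
```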
Example: the completely dephasing channel
The Choi representation of the completely dephasing channel Δ is

J(Δ) = ∣0⟩⟨0∣ ⊗ ∣0⟩⟨0∣ + ∣1⟩⟨1∣ ⊗ ∣1⟩⟨1∣ =
( 1  0  0  0
  0  0  0  0
  0  0  0  0
  0  0  0  1 )
For comparison, the Choi representation of the identity channel Id on a qubit is

J(Id) = ∑_{a,b=0}^{1} ∣a⟩⟨b∣ ⊗ ∣a⟩⟨b∣ =
( 1  0  0  1
  0  0  0  0
  0  0  0  0
  1  0  0  1 )

Notice in particular that J(Id) is not the identity matrix. The Choi representation does not directly describe a channel's action in the usual way that a matrix represents a linear mapping.
Equivalence of the representations
We've now discussed three different ways to represent channels in mathematical terms, namely Stinespring representations, Kraus representations, and Choi representations. We also have the definition of a channel, which states that a channel is a linear mapping that always transforms density matrices into density matrices, even when the channel is applied to just part of a compound system. The remainder of the lesson is devoted to a mathematical proof that the three representations are, in fact, equivalent and precisely capture the definition.
Overview of the proof
Our goal is to establish the equivalence of a collection of four statements, and we'll begin by writing them down precisely. All four statements follow the same conventions that have been used throughout the lesson, namely that Φ is a linear mapping from square matrices to square matrices, the rows and columns of the input matrices have been placed in correspondence with the classical states of a system X (the input system), and the rows and columns of the output matrices have been placed in correspondence with the classical states of a system Y (the output system).
1. Φ is a channel from X to Y. That is, Φ always transforms density matrices to density matrices, even when it acts on one part of a larger compound system.
2. The Choi matrix J(Φ) is positive semidefinite and satisfies the condition Tr_Y(J(Φ)) = I_X.
3. There is a Kraus representation for Φ. That is, there exist matrices A_0,…,A_{N−1} for which the equation Φ(ρ) = ∑_{k=0}^{N−1} A_k ρ A_k^† is true for every input ρ, and that satisfy the condition ∑_{k=0}^{N−1} A_k^† A_k = I_X.
4. There is a Stinespring representation for Φ. That is, there exist systems W and G for which the pairs (W,X) and (G,Y) have the same number of classical states, along with a unitary matrix U representing a unitary operation from (W,X) to (G,Y), such that Φ(ρ) = Tr_G(U(∣0⟩⟨0∣ ⊗ ρ)U^†).
The way the proof works is that a cycle of implications is proved: the first statement in our list implies the second, the second implies the third, the third implies the fourth, and the fourth statement implies the first. This establishes that all four statements are equivalent — which is to say that they're either all true or all false for a given choice of Φ — because the implications can be followed transitively from any one statement to any other. This is a common strategy when proving that a collection of statements are equivalent, and a useful trick to use in such a context is to set the implications up in a way that makes them as easy to prove as possible. That is the case here, and in fact we've already encountered two of the four implications.
First implication: channels to Choi matrices
Referring to the statements listed above by their numbers, the first implication to be proved is 1 ⇒ 2. This implication was already discussed in the context of the Choi state of a channel. Here we'll summarize the mathematical details.
Assume that the classical state set of the input system X is Σ and let n = ∣Σ∣. Consider the situation in which Φ is applied to the second of two copies of X together in the state

∣ϕ⟩⟨ϕ∣, where ∣ϕ⟩ = (1/√n) ∑_{a∈Σ} ∣a⟩ ⊗ ∣a⟩.

The state that results is

(Id ⊗ Φ)(∣ϕ⟩⟨ϕ∣) = (1/n) ∑_{a,b∈Σ} ∣a⟩⟨b∣ ⊗ Φ(∣a⟩⟨b∣) = J(Φ)/n,

and by the assumption that Φ is a channel this must be a density matrix. Like all density matrices it must be positive semidefinite, and because multiplying a positive semidefinite matrix by a positive real number yields another positive semidefinite matrix, we conclude that J(Φ) ≥ 0.
Moreover, under the assumption that Φ is a channel, it must preserve trace, and therefore

Tr_Y(J(Φ)) = ∑_{a,b∈Σ} Tr(Φ(∣a⟩⟨b∣)) ∣a⟩⟨b∣ = ∑_{a∈Σ} ∣a⟩⟨a∣ = I_X,

using the fact that Tr(Φ(∣a⟩⟨b∣)) = Tr(∣a⟩⟨b∣), which is 1 when a = b and 0 otherwise.
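This argument can be checked numerically. The Python sketch below applies a channel to the second of two copies of a qubit in the maximally entangled state and confirms that the result is a density matrix whose partial trace over Y is I/n; the channel used (conjugation by the Hadamard matrix) is an arbitrary choice made for the demonstration.

```python
import numpy as np

n = 2
# maximally entangled state of two copies of X: |phi> = (1/sqrt(n)) sum_a |a>|a>
phi = sum(np.kron(np.eye(n)[a], np.eye(n)[a]) for a in range(n)) / np.sqrt(n)
rho = np.outer(phi, phi.conj())

# an example channel: conjugation by the Hadamard matrix (a unitary channel)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Phi = lambda M: H @ M @ H.conj().T

# apply Id (x) Phi block by block: block (a, b) of rho is the n x n matrix rho_ab
out = np.block([[Phi(rho[a*n:(a+1)*n, b*n:(b+1)*n]) for b in range(n)]
                for a in range(n)])

# out equals J(Phi)/n: a density matrix (trace 1, positive semidefinite) ...
assert np.isclose(np.trace(out), 1)
assert np.all(np.linalg.eigvalsh(out) >= -1e-12)

# ... whose partial trace over Y is I_X / n
trY = np.array([[np.trace(out[a*n:(a+1)*n, b*n:(b+1)*n]) for b in range(n)]
                for a in range(n)])
assert np.allclose(trY, np.eye(n) / n)
```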
Second implication: Choi representation to Kraus representation
The second implication, again referring to the statements in our list by their numbers, is 2 ⇒ 3. To be clear, we're ignoring the other statements — and in particular we cannot make the assumption that Φ is a channel. All we have to work with is that Φ is a linear mapping whose Choi representation satisfies J(Φ)≥0 and TrY(J(Φ))=IX. This, however, is all we need to conclude that Φ has a Kraus representation
Φ(ρ) = ∑_{k=0}^{N−1} A_k ρ A_k^†
for which the condition
∑_{k=0}^{N−1} A_k^† A_k = I_X
is satisfied.
We begin with the critically important assumption that J(Φ) is positive semidefinite, which means that it is possible to express it in the form
J(Φ) = ∑_{k=0}^{N−1} ∣ψ_k⟩⟨ψ_k∣    (2)
for some way of choosing the vectors ∣ψ0⟩,…,∣ψN−1⟩. In general there will be multiple ways to do this — and in fact this directly mirrors the freedom one has in choosing a Kraus representation for Φ.
One way to obtain such an expression is to first use the spectral theorem to write
J(Φ) = ∑_{k=0}^{N−1} λ_k ∣γ_k⟩⟨γ_k∣,
in which λ_0,…,λ_{N−1} are the eigenvalues of J(Φ) (which are necessarily nonnegative real numbers because J(Φ) is positive semidefinite) and ∣γ_0⟩,…,∣γ_{N−1}⟩ are unit eigenvectors corresponding to the eigenvalues λ_0,…,λ_{N−1}. Note that while there's no freedom in choosing the eigenvalues (except for how they're ordered), there is freedom in the choice of the eigenvectors, particularly when there are eigenvalues with multiplicity larger than one. So, this is not a unique expression of J(Φ); we're just assuming we have one such expression. Regardless, because the eigenvalues are nonnegative real numbers they have nonnegative square roots, and so we can select
∣ψ_k⟩ = √λ_k ∣γ_k⟩
for each k=0,…,N−1 to obtain an expression of the form (2).
It is, however, not essential that the expression (2) comes from a spectral decomposition in this way, and in particular the vectors ∣ψ_0⟩,…,∣ψ_{N−1}⟩ need not be orthogonal in general. It is noteworthy, though, that we can choose these vectors to be orthogonal if we wish, and moreover we never need N to be larger than nm (recalling that n and m denote the numbers of classical states of X and Y, respectively).
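As a quick numerical illustration in Python, the spectral decomposition produces exactly such a collection of vectors; the matrix used here is the Choi matrix of the completely dephasing qubit channel, chosen as a convenient positive semidefinite example.

```python
import numpy as np

# a positive semidefinite Choi matrix (here, that of the completely
# dephasing qubit channel)
J = np.diag([1.0, 0.0, 0.0, 1.0])

# spectral theorem: J = sum_k lambda_k |gamma_k><gamma_k| with lambda_k >= 0
lams, gammas = np.linalg.eigh(J)

# choose |psi_k> = sqrt(lambda_k) |gamma_k>
psis = [np.sqrt(lam) * gammas[:, k] for k, lam in enumerate(lams)]

# then J = sum_k |psi_k><psi_k|, as in equation (2)
assert np.allclose(sum(np.outer(p, p.conj()) for p in psis), J)
```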
Next, each of the vectors ∣ψ0⟩,…,∣ψN−1⟩ can be further decomposed as
∣ψ_k⟩ = ∑_{a∈Σ} ∣a⟩ ⊗ ∣ϕ_{k,a}⟩,
where the vectors {∣ϕk,a⟩} have entries corresponding to the classical states of Y and can be explicitly determined by the equation
∣ϕ_{k,a}⟩ = (⟨a∣ ⊗ I_Y) ∣ψ_k⟩
for each a∈Σ and k=0,…,N−1. Although ∣ψ_0⟩,…,∣ψ_{N−1}⟩ are not necessarily unit vectors, this is the same process we would use to analyze what would happen if a standard basis measurement were performed on the system X, given a quantum state vector of the pair (X,Y).
And now we come to the trick that makes the proof work. We define our Kraus matrices A0,…,AN−1 according to the following equation.
A_k = ∑_{a∈Σ} ∣ϕ_{k,a}⟩⟨a∣
We can think about this formula purely symbolically: ∣a⟩ effectively gets flipped around to form ⟨a∣ and moved to the right-hand side, forming a matrix. For the purposes of verifying the proof, the formula is all we need.
There is, however, a simple and intuitive relationship between the vector ∣ψ_k⟩ and the matrix A_k, which is that by vectorizing A_k we get ∣ψ_k⟩. What it means to vectorize A_k is that we stack its columns on top of one another (with the leftmost column on top, proceeding to the rightmost on the bottom) to form a vector. For instance, if X and Y are both qubits, and for some choice of k we have

A_k = [ α_00  α_01
        α_10  α_11 ],

then vectorizing A_k yields

∣ψ_k⟩ = (α_00, α_10, α_01, α_11)^T.
(Beware: sometimes the vectorization of a matrix is defined in a slightly different way, which is that the rows of the matrix are transposed and stacked on top of one another to form a column vector.)
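In Python, column-stacking vectorization is a one-liner, and we can confirm that it agrees with the formula ∣ψ_k⟩ = ∑_a ∣a⟩ ⊗ ∣ϕ_{k,a}⟩; the matrix entries below are arbitrary, chosen only for illustration.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])

# stack the columns of A, leftmost on top, to form a vector
vec_A = A.T.reshape(-1)              # same as A.flatten(order="F")

# agrees with sum_a |a> (x) (column a of A)
psi = sum(np.kron(np.eye(2)[a], A[:, a]) for a in range(2))
assert np.array_equal(psi, vec_A)    # both are [1, 3, 2, 4]
```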
First we'll verify that this choice of Kraus matrices correctly describes the mapping Φ, after which we'll verify the other required condition. To keep things straight, let's define a new mapping Ψ as follows.
Ψ(ρ) = ∑_{k=0}^{N−1} A_k ρ A_k^†
Thus, our goal is to verify that Ψ=Φ.
The way we can do this is to compare the Choi representations of these mappings. Choi representations are faithful, so we have Ψ=Φ if and only if J(Φ)=J(Ψ). At this point we can simply compute J(Ψ) using the expressions
∣ψ_k⟩ = ∑_{a∈Σ} ∣a⟩ ⊗ ∣ϕ_{k,a}⟩    and    A_k = ∑_{a∈Σ} ∣ϕ_{k,a}⟩⟨a∣
together with the bilinearity of the tensor product:

J(Ψ) = ∑_{a,b∈Σ} ∣a⟩⟨b∣ ⊗ Ψ(∣a⟩⟨b∣) = ∑_{k=0}^{N−1} ∑_{a,b∈Σ} (∣a⟩ ⊗ A_k∣a⟩)(⟨b∣ ⊗ ⟨b∣A_k^†) = ∑_{k=0}^{N−1} ∣ψ_k⟩⟨ψ_k∣ = J(Φ).
It remains to check the required condition on A0,…,AN−1, which turns out to be equivalent to the assumption TrY(J(Φ))=IX (which we haven't used yet). What we'll show is this relationship:
( ∑_{k=0}^{N−1} A_k^† A_k )^T = Tr_Y(J(Φ))    (3)
(in which we're referring to the matrix transpose on the left-hand side). Starting on the left, we can first observe that the entry in row a and column b of the matrix ( ∑_{k=0}^{N−1} A_k^† A_k )^T is

⟨b∣ ( ∑_{k=0}^{N−1} A_k^† A_k ) ∣a⟩ = ∑_{k=0}^{N−1} ⟨ϕ_{k,b}∣ϕ_{k,a}⟩,

where we've used the fact that A_k∣a⟩ = ∣ϕ_{k,a}⟩ for each a∈Σ. Turning to the right-hand side, the entry in row a and column b of

Tr_Y(J(Φ)) = ∑_{k=0}^{N−1} Tr_Y(∣ψ_k⟩⟨ψ_k∣) = ∑_{k=0}^{N−1} ∑_{a,b∈Σ} ⟨ϕ_{k,b}∣ϕ_{k,a}⟩ ∣a⟩⟨b∣

is ∑_{k=0}^{N−1} ⟨ϕ_{k,b}∣ϕ_{k,a}⟩ as well.
We've obtained the same result, and therefore the equation (3) has been verified. It follows, by the assumption TrY(J(Φ))=IX, that
( ∑_{k=0}^{N−1} A_k^† A_k )^T = I_X
and therefore, because the identity matrix is its own transpose, the required condition

∑_{k=0}^{N−1} A_k^† A_k = I_X

is true.
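The whole construction can be carried out numerically. The Python sketch below extracts Kraus matrices from a Choi matrix and verifies both required properties; the example input is the completely depolarizing qubit channel ρ ↦ I/2, whose Choi matrix works out to I ⊗ I/2.

```python
import numpy as np

n = m = 2
# Choi matrix of the completely depolarizing qubit channel rho -> I/2
J = np.eye(n * m) / 2

# step 1: obtain J = sum_k |psi_k><psi_k| from the spectral decomposition
lams, gammas = np.linalg.eigh(J)

# step 2: "de-vectorize" each |psi_k> into a Kraus matrix A_k
#         (column a of A_k is |phi_{k,a}> = (<a| (x) I_Y)|psi_k>)
kraus = []
for k in range(n * m):
    psi = np.sqrt(max(lams[k], 0.0)) * gammas[:, k]
    A = np.column_stack([psi[a*m:(a+1)*m] for a in range(n)])
    kraus.append(A)

# the condition sum_k A_k^dagger A_k = I_X holds ...
assert np.allclose(sum(A.conj().T @ A for A in kraus), np.eye(n))

# ... and the Kraus matrices reproduce the channel's action
rho = np.array([[0.75, 0.25], [0.25, 0.25]])
out = sum(A @ rho @ A.conj().T for A in kraus)
assert np.allclose(out, np.eye(2) / 2)
```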
Third implication: from Kraus to Stinespring representations
Now suppose that we have a Kraus representation of a mapping
Φ(ρ) = ∑_{k=0}^{N−1} A_k ρ A_k^†
for which
∑_{k=0}^{N−1} A_k^† A_k = I_X.
Our goal is to find a Stinespring representation for Φ.
What we'd like to do first is to choose the garbage system G so that its classical state set is {0,…,N−1}. In order for (W,X) and (G,Y) to have the same size, however, it must be the case that n divides mN, allowing us to take W to have classical states {0,…,d−1} for d = mN/n. For an arbitrary choice of n, m, and N, it may not be the case that mN/n is an integer, so we're not actually free to choose G so that its classical state set is {0,…,N−1}. We can, however, always increase N arbitrarily in the Kraus representation by choosing A_k = 0 for however many additional values of k we wish. And so, if we tacitly assume that mN/n is an integer, which is equivalent to N being a multiple of n/gcd(n,m), then we're free to take G so that its classical state set is {0,…,N−1}. As an aside, notice that if it is the case that N = nm, then we may take W to have m² classical states.
It remains to choose U, and we'll do this by matching the following pattern.
U = [ A_0      ?   ⋯   ?
      A_1      ?   ⋯   ?
       ⋮       ⋮   ⋱   ⋮
      A_{N−1}  ?   ⋯   ? ]
To be clear, this pattern is meant to suggest a block matrix, where each block (including A_0,…,A_{N−1} as well as the blocks marked with a question mark) has m rows and n columns. There are N rows of blocks, which means that there are d = mN/n columns of blocks. Expressed in more formulaic terms, we will define U as

U = ∑_{k=0}^{N−1} ∑_{j=0}^{d−1} ∣k⟩⟨j∣ ⊗ M_{k,j},

where each matrix M_{k,j} has m rows and n columns, and in particular we shall take M_{k,0} = A_k for k=0,…,N−1. This must be a unitary matrix, and the blocks labeled with a question mark, or equivalently M_{k,j} for j > 0, must be selected with this in mind. Aside from allowing U to be unitary, however, these blocks won't have any relevance to the proof.
Let's momentarily disregard the concern that U is unitary and focus on the expression
TrG(U(∣0⟩⟨0∣W⊗ρ)U†)
that describes the output state of Y given the input state ρ of X for our Stinespring representation. We can alternatively write

U(∣0⟩⟨0∣_W ⊗ ρ)U^† = ∑_{k,l=0}^{N−1} ∣k⟩⟨l∣ ⊗ M_{k,0} ρ M_{l,0}^†,

so that tracing out G leaves only the diagonal blocks:

Tr_G(U(∣0⟩⟨0∣_W ⊗ ρ)U^†) = ∑_{k=0}^{N−1} M_{k,0} ρ M_{k,0}^† = ∑_{k=0}^{N−1} A_k ρ A_k^† = Φ(ρ).
We therefore have a correct representation for the mapping Φ, and it remains to verify that we can choose U to be unitary.
Consider the first n columns of U when it is selected according to the pattern above — which is to say that by taking these columns alone we have a block matrix
[ A_0
  A_1
   ⋮
  A_{N−1} ].
There are n columns, one for each classical state of X, and as vectors let us name them as ∣γa⟩ for each a∈Σ. Here's a formula for these vectors that can be matched to the block matrix representation above.
∣γ_a⟩ = ∑_{k=0}^{N−1} ∣k⟩ ⊗ A_k∣a⟩
Now let's compute the inner product between any two of these vectors, meaning the ones corresponding to any choice of a,b∈Σ:

⟨γ_a∣γ_b⟩ = ∑_{k=0}^{N−1} ⟨a∣A_k^† A_k∣b⟩ = ⟨a∣( ∑_{k=0}^{N−1} A_k^† A_k )∣b⟩.

By the assumption that this last sum is I_X, the inner product is ⟨a∣b⟩, and so
we conclude that the n column vectors {∣γa⟩:a∈Σ} form an orthonormal set:
⟨γ_a∣γ_b⟩ = 1 when a = b, and ⟨γ_a∣γ_b⟩ = 0 when a ≠ b,
for all a,b∈Σ. This implies that it is possible to fill out the remaining columns of U so that it becomes a unitary matrix. (In particular, the Gram-Schmidt orthogonalization process can be used to select the remaining columns. Recall that something similar was done in Lesson 3 of the Basics of quantum information course, in the context of the state discrimination problem.)
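Here's a sketch of the whole construction in Python, using the Kraus matrices of the completely dephasing qubit channel as an example; the helper `complete_to_unitary`, defined here for the demonstration, carries out the Gram-Schmidt step that fills out the remaining columns of U.

```python
import numpy as np

def complete_to_unitary(V):
    """Extend a matrix V with orthonormal columns to a unitary matrix by
    running Gram-Schmidt over the standard basis (a simple sketch)."""
    d = V.shape[0]
    cols = [V[:, j].astype(complex) for j in range(V.shape[1])]
    for e in np.eye(d, dtype=complex):
        v = e - sum(c * (c.conj() @ e) for c in cols)
        if np.linalg.norm(v) > 1e-10:
            cols.append(v / np.linalg.norm(v))
    return np.column_stack(cols[:d])

# Kraus matrices of the completely dephasing qubit channel
A0, A1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
N, (m, n) = 2, A0.shape

# the first n columns of U form the block column (A_0; A_1);
# its columns are the vectors |gamma_a>
V = np.vstack([A0, A1])
U = complete_to_unitary(V)
assert np.allclose(U.conj().T @ U, np.eye(m * N))
assert np.allclose(U[:, :n], V)

# Stinespring: Phi(rho) = Tr_G(U (|0><0|_W (x) rho) U^dagger)
rho = np.array([[0.5, 0.5], [0.5, 0.5]])
big = U @ np.kron(np.outer([1, 0], [1, 0]), rho) @ U.conj().T
phi_rho = sum(big[k*m:(k+1)*m, k*m:(k+1)*m] for k in range(N))

# agrees with the Kraus form: dephasing kills the off-diagonal entries
assert np.allclose(phi_rho, np.diag(np.diag(rho)))
```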
Fourth implication: Stinespring representation back to the definition
The final implication is 4 ⇒ 1. That is, we assume that we have a unitary matrix U representing a unitary operation that transforms a pair of systems (W,X) into a pair (G,Y), and our goal is to conclude that the mapping
Φ(ρ)=TrG(U(∣0⟩⟨0∣W⊗ρ)U†)
is a valid channel. From its form it is evident that Φ is linear, and it remains to verify that it always transforms density matrices into density matrices. This is pretty straightforward and we've already discussed the key points.
In particular, if we start with a density matrix σ of a compound system (Z,X), and then add on an additional workspace system W, we will certainly be left with a density matrix. If we order the systems (W,Z,X) for convenience we can write this state as ∣0⟩⟨0∣W⊗σ. We then apply the unitary operation U, and as we already discussed this is a valid channel, and hence maps density matrices to density matrices. Finally, the partial trace of a density matrix is another density matrix.
Another way to say this is first to observe that each of these things is a valid channel:
Introducing an initialized workspace system.
Performing a unitary operation.
Tracing out a system.
And finally, any composition of channels is another channel — which is immediate from the definition but certainly a fact worth observing in its own right.
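For instance, here is a Python sketch that composes two example channels given in Kraus form; it relies on the standard fact (not proved in this lesson) that Kraus matrices for a composition Ψ∘Φ are given by the products B_j A_k.

```python
import numpy as np

# Phi: completely dephasing channel; Psi: the unitary Hadamard channel
phi_kraus = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi_kraus = [H]

# Kraus matrices of the composition Psi(Phi(rho)) are the products B A
composed = [B @ A for B in psi_kraus for A in phi_kraus]

# the composition is again a channel: sum_k C_k^dagger C_k = I ...
assert np.allclose(sum(C.conj().T @ C for C in composed), np.eye(2))

# ... and it sends density matrices to density matrices
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
out = sum(C @ rho @ C.conj().T for C in composed)
assert np.isclose(np.trace(out), 1)
assert np.all(np.linalg.eigvalsh(out) >= -1e-12)
```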
This completes the proof of the final implication, and therefore we've established the equivalence of the four statements listed at the start of the section.