We now turn to the prototypical eigenvalue problem in modern physics, the time-independent Schrödinger equation:
\hat{H}\ket{\psi}=E\ket{\psi} \tag{1}
where \hat{H} is the Hamiltonian operator, \ket{\psi} is a state vector (called a ket by Dirac) in a Hilbert space, and E is the energy.
In practice, when solving Equation 1 for a given physical problem, we typically end up with a differential equation, as we will see later in this course. In the present section, we limit ourselves to the case of one or more spin-half particles with no orbital degrees of freedom (the approach can be generalized to higher spins).
As we will see below, our problem maps onto a (reasonably) straightforward matrix form, where you don’t have to worry about non-matrix features. Thus, the problem of spin-half particles becomes a direct application of the eigenproblem machinery we built earlier.
1 One particle
1.1 Hilbert space
In quantum mechanics you typically denote the spin angular momentum operator by \hat{\boldsymbol{S}}, this being made up of the three Cartesian components \hat{S}_x, \hat{S}_y, and \hat{S}_z. The two most important relations in this context are the ones for the square of the spin operator and for its z component:
\begin{gather*}
\hat{S}^2 \ket{s m_s} = s(s+1) \ket{sm_s} \\
\hat{S}_z \ket{s m_s} = m_s \ket{sm_s}.
\end{gather*}
Note that we have chosen \hbar = 1 for convenience. Here s is the spin quantum number, and m_s is the spin projection quantum number (m_s = -s, -s+1, \cdots, s).
In the following, we shall focus on spin s=1/2 systems, which means m_s = \pm 1/2, just two possibilities. We shall introduce the notation \ket{\uparrow} and \ket{\downarrow} for these two states. We have
\hat{S}_z \ket{\uparrow} = \frac{1}{2}\ket{\uparrow},\quad \hat{S}_z \ket{\downarrow} = -\frac{1}{2}\ket{\downarrow}.
We can thus use \ket{\uparrow} and \ket{\downarrow} as an orthonormal basis for the Hilbert space.
An arbitrary spin state can thus be written as a linear combination
\ket{\psi} = \psi_{\uparrow}\ket{\uparrow} + \psi_{\downarrow}\ket{\downarrow} = \sum_{i = \uparrow,\downarrow} \psi_i \ket{i},
where \psi_\uparrow = \braket{\uparrow |\psi} and \psi_\downarrow = \braket{\downarrow |\psi} are complex numbers.
1.2 Matrix Representation
We now turn to the matrix representation of spin-half particles. This is very convenient, since it involves 2 \times 2 matrices for spin operators. You may have even heard that a 2 \times 1 column vector (which represents a spin state vector) is called a spinor.
Let’s try to form all the possible matrix elements, sandwiching \hat{S}_z between the basis states: this leads to \bra{i}\hat{S}_z\ket{j}, where i and j take on the values \uparrow and \downarrow. This means that there are four possibilities (i.e., four matrix elements) in total, so it is natural to collect them into a 2 \times 2 matrix. Thus, we group together all the matrix elements and denote the resulting matrix with a bold symbol,
\boldsymbol{S}_z =
\begin{pmatrix}
\bra{\uparrow}\hat{S}_z\ket{\uparrow} & \bra{\uparrow}\hat{S}_z\ket{\downarrow} \\
\bra{\downarrow}\hat{S}_z\ket{\uparrow} & \bra{\downarrow}\hat{S}_z\ket{\downarrow}
\end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
Similarly, we can derive the matrices for the other spin components:
\boldsymbol{S}_x =
\begin{pmatrix}
\bra{\uparrow}\hat{S}_x\ket{\uparrow} & \bra{\uparrow}\hat{S}_x\ket{\downarrow} \\
\bra{\downarrow}\hat{S}_x\ket{\uparrow} & \bra{\downarrow}\hat{S}_x\ket{\downarrow}
\end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
and
\boldsymbol{S}_y =
\begin{pmatrix}
\bra{\uparrow}\hat{S}_y\ket{\uparrow} & \bra{\uparrow}\hat{S}_y\ket{\downarrow} \\
\bra{\downarrow}\hat{S}_y\ket{\uparrow} & \bra{\downarrow}\hat{S}_y\ket{\downarrow}
\end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix}.
We can introduce the so-called Pauli matrices, in terms of which \boldsymbol{S}_i = \frac{1}{2}\boldsymbol{\sigma}_i for i = x, y, z:
\boldsymbol{\sigma}_x =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix},\quad
\boldsymbol{\sigma}_y =
\begin{pmatrix}
0 & -i\\
i & 0
\end{pmatrix},\quad
\boldsymbol{\sigma}_z =
\begin{pmatrix}
1 & 0\\
0 & -1
\end{pmatrix}.
You should probably memorize these matrices.
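As a quick numerical sanity check (a minimal sketch using NumPy; the variable names are our own), we can verify that the Pauli matrices are Hermitian and satisfy, e.g., the commutation relation \boldsymbol{\sigma}_x\boldsymbol{\sigma}_y - \boldsymbol{\sigma}_y\boldsymbol{\sigma}_x = 2i\boldsymbol{\sigma}_z:

import numpy as np

# Pauli matrices, entered directly
sigx = np.array([[0., 1], [1, 0]])
sigy = np.array([[0., -1j], [1j, 0]])
sigz = np.array([[1., 0], [0, -1]])

# each matrix equals its own conjugate transpose (Hermitian)
print(np.allclose(sigx, sigx.conj().T))
# the commutator [sigma_x, sigma_y] equals 2i*sigma_z
print(np.allclose(sigx@sigy - sigy@sigx, 2j*sigz))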
We now turn to the representation of the state vectors. Let’s first approach this as a linear algebra problem: we need to diagonalize the 2\times 2 matrix \boldsymbol{S}_z. As you know well from our earlier work on the eigenvalue problem, that implies finding the eigenvalues (which turn out to be \pm 1/2) and the eigenvectors, which we calculate to be
\ket{\uparrow} \sim \boldsymbol{\zeta}_\uparrow =
\begin{pmatrix}
1 \\ 0
\end{pmatrix}, \quad
\ket{\downarrow} \sim \boldsymbol{\zeta}_\downarrow =
\begin{pmatrix}
0 \\ 1
\end{pmatrix}.
Since the matrix we were diagonalizing was 2 by 2, it comes as no surprise that the eigenvectors are 2\times 1 column vectors. You should test your understanding by finding the eigenvectors corresponding to, say, \boldsymbol{S}_y.
With this, we are no longer dealing with operators and state vectors (no hats and no kets), but with matrices and column vectors, respectively. As a result, relations that in the Hilbert-space language involved actions on kets, now turn into relations involving matrices. For example,
\boldsymbol{S}_z \boldsymbol{\zeta}_\uparrow = \frac{1}{2}\boldsymbol{\zeta}_\uparrow.
We can combine our two eigenvectors to produce the matrix representation of an arbitrary spin state
\boldsymbol{\psi} = \psi_{\uparrow}\boldsymbol{\zeta}_\uparrow + \psi_{\downarrow}\boldsymbol{\zeta}_\downarrow = (\psi_\uparrow \quad \psi_\downarrow)^T.
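If you want to check this correspondence numerically, here is a minimal sketch (using NumPy's built-in Hermitian eigensolver rather than our own routines) that diagonalizes \boldsymbol{S}_z and recovers the eigenvalues \pm 1/2 together with the basis spinors:

import numpy as np

Sz = 0.5*np.array([[1., 0], [0, -1]])

# eigh returns the eigenvalues in ascending order, eigenvectors as columns
vals, vecs = np.linalg.eigh(Sz)
print(vals)             # [-0.5  0.5]
print(vecs[:, 1])       # spin-up spinor (1, 0), up to an overall sign
print(Sz @ vecs[:, 1])  # equals 0.5 times that spinor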
1.3 Hamiltonian
We start with the eigenvalue equation for operators and kets
\hat{H}\ket{\psi} = E\ket{\psi}.
Using the completeness of the basis, \hat{\mathcal{I}} = \ket{\uparrow}\bra{\uparrow} + \ket{\downarrow}\bra{\downarrow}, we have
\hat{H}\ket{\uparrow}\braket{\uparrow|\psi} + \hat{H}\ket{\downarrow}\braket{\downarrow|\psi} = E\ket{\psi}.
Multiplying from the left by \bra{\uparrow}, we obtain
\bra{\uparrow}\hat{H}\ket{\uparrow}\braket{\uparrow|\psi} + \bra{\uparrow}\hat{H}\ket{\downarrow}\braket{\downarrow|\psi} = E\braket{\uparrow|\psi}.
One can repeat this by using \bra{\downarrow} as well. Then we obtain a matrix equation
\boldsymbol{H}\boldsymbol{\psi} = E \boldsymbol{\psi},
where \boldsymbol{H} is the Hamiltonian matrix with matrix elements \bra{i}\hat{H}\ket{j} for i,j = \uparrow, \downarrow.
As an example, let us consider a spin-half particle interacting with an external magnetic field \boldsymbol{B}. Associated with the spin angular momentum \hat{\boldsymbol{S}} there will be a spin magnetic moment operator, \hat{\boldsymbol{\mu}}: since this operator needs to be a combination of the spin operators and the identity (and we know it has to be a vector operator), it follows that \hat{\boldsymbol{\mu}} is proportional to \hat{\boldsymbol{S}}. It is customary to write the proportionality between the two operators as follows:
\hat{\boldsymbol{\mu}} = g\left(\frac{q}{2m}\right)\hat{\boldsymbol{S}}
where q is the electric charge of the particle and m is its mass. The proportionality constant is known as the g-factor: its value is roughly -2 for electrons.
The Hamiltonian can then be written as
\hat{H} = -\hat{\boldsymbol{\mu}} \cdot \boldsymbol{B} =-\frac{gq}{2m}\hat{\boldsymbol{S}}\cdot \boldsymbol{B} = -\frac{gqB}{2m}\hat{S}_z,
where we have taken the z axis as pointing in the direction of the magnetic field. Combining our earlier prescription for going from operators to matrices with the explicit matrix representation of \hat{S}_z, we find
\boldsymbol{H} = -\frac{gqB}{4m}\boldsymbol{\sigma}_z
= -\frac{gqB}{4m}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}.
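As a small illustration (a sketch in which all the physical constants are lumped into a single placeholder frequency \omega = gqB/2m, whose value we simply set to 1), the two energy eigenvalues are \mp\omega/2:

import numpy as np

omega = 1.0                        # placeholder for g*q*B/(2*m)
sigz = np.array([[1., 0], [0, -1]])
H = -0.5*omega*sigz                # same as -(g q B / 4 m) * sigma_z

print(np.linalg.eigvalsh(H))       # [-0.5  0.5], i.e. -omega/2 and +omega/2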
2 Two particles
2.1 Hilbert spaces
For a single spin-half particle, we can introduce two states \ket{\uparrow}, \ket{\downarrow} forming the basis of its Hilbert space. For a system of two spin-half particles, we should be careful about the notation: we need to label the two particles. Let’s call them particle I and particle II. Particle I comes with a Hilbert space spanned by the two kets \ket{\uparrow^{(I)}} and \ket{\downarrow^{(I)}}. Similarly, the Hilbert space of the second particle is spanned by the two kets \ket{\uparrow^{(II)}} and \ket{\downarrow^{(II)}}.
We now wish to start from these single-particle vector spaces and generalize to a two-particle space. To do this, we employ the concept of a tensor product (denoted by \otimes): this allows us to express the product between state vectors belonging to different Hilbert spaces. In short, the two-particle Hilbert space is a four-dimensional complex vector space which is spanned by the vectors:
\begin{gather*}
\ket{\uparrow^{(I)}}\otimes\ket{\uparrow^{(II)}} \equiv \ket{\uparrow\uparrow}, \quad \ket{\uparrow^{(I)}}\otimes\ket{\downarrow^{(II)}} \equiv \ket{\uparrow\downarrow} \\
\ket{\downarrow^{(I)}}\otimes\ket{\uparrow^{(II)}} \equiv \ket{\downarrow\uparrow}, \quad \ket{\downarrow^{(I)}}\otimes\ket{\downarrow^{(II)}} \equiv \ket{\downarrow\downarrow}.
\end{gather*}
Thus, we can write the basis of the two-particle Hilbert space in a compact form as \ket{i}, with i = \uparrow\uparrow, \uparrow\downarrow, \downarrow\uparrow, \downarrow\downarrow.
In terms of the Hilbert spaces themselves, we started from the space of the first particle (\mathscr{H}^{(I)}) and the space of the second particle (\mathscr{H}^{(II)}) and have produced the larger, two-particle Hilbert space \mathscr{H}^{(I)}\otimes\mathscr{H}^{(II)}. The four state vectors \ket{i^{(I)}}\otimes\ket{j^{(II)}}, i,j = \uparrow, \downarrow form the product basis of this Hilbert space \mathscr{H}^{(I)}\otimes\mathscr{H}^{(II)}.
Let us turn to the operators in the two-particle Hilbert space, focusing on the z-projection operator for concreteness. We already know the one-particle operator \hat{S}_z^{(I)} which acts on the vector of particle I and, similarly, the one-particle operator \hat{S}_z^{(II)} which acts on the vector space of particle II. Each of these operators measures the z-projection of the spin for the respective particle. What we wish to do is come up with operators for the composite system. We do this by, again, employing the tensor product. For example
\hat{S}_{Iz} = \hat{S}_z^{(I)} \otimes \hat{\mathcal{I}}^{(II)}.
On the left-hand side we are introducing a new entity, \hat{S}_{Iz}, acting on the two-particle Hilbert space. On the right-hand side, it is made up of two one-particle operators, each of which knows how to act on its respective one-particle space. It should be easy to see why we have taken the tensor product with the identity operator \hat{\mathcal{I}}: the two-particle operator \hat{S}_{Iz} measures the z component of the spin for particle I, so it does nothing to any particle-II ket it encounters. In complete analogy to this, the two-particle operator that measures the z component of the spin for particle II is:
\hat{S}_{IIz} =\hat{\mathcal{I}}^{(I)} \otimes \hat{S}_z^{(II)}.
Let’s see what happens when a two-particle operator acts on a given two-particle state vector:
\hat{S}_{IIz}\ket{\uparrow\downarrow} = (\hat{\mathcal{I}}^{(I)}\otimes \hat{S}_z^{(II)})(\ket{\uparrow^{(I)}}\otimes\ket{\downarrow^{(II)}}) = \ket{\uparrow^{(I)}} \otimes (-\frac{1}{2}\ket{\downarrow^{(II)}}) = -\frac{1}{2}\ket{\uparrow\downarrow}.
Finally, an arbitrary spin state can be expressed as a linear superposition:
\ket{\psi} = \sum_{a}\psi_a \ket{a}, \quad a = \uparrow\uparrow, \uparrow\downarrow, \downarrow\uparrow, \downarrow\downarrow.
As a remark, in QM we typically consider the coupled representation, where we need to add angular momenta using the Clebsch-Gordan coefficients. For the specific case of two spin-half particles, this leads to one spin-singlet state and a spin-triplet (made up of three states). In contradistinction to this, here we are interested in the uncoupled representation, where we consider the two-particle system as being made up of two individual particles. Below, we will show you how to build up the matrix representation of a two-particle operator using the matrix representation of one-particle operators: this will give us a tool that is then trivial to generalize to larger numbers of particles.
2.2 Matrix Representation
Turning to the matrix representation of two spin-half particles, you will not be surprised to hear that it involves 4 \times 4 matrices for spin operators and 4\times 1 column vectors for the state vectors.
We shall use the two-particle basis \ket{\uparrow\uparrow},\ket{\uparrow\downarrow},\ket{\downarrow\uparrow},\ket{\downarrow\downarrow}, corresponding to the following column vectors
\boldsymbol{\zeta}_{\uparrow\uparrow} =
\begin{pmatrix}
1 \\ 0 \\ 0 \\ 0
\end{pmatrix}, \quad
\boldsymbol{\zeta}_{\uparrow\downarrow} =
\begin{pmatrix}
0 \\ 1 \\ 0 \\ 0
\end{pmatrix}, \quad
\boldsymbol{\zeta}_{\downarrow\uparrow} =
\begin{pmatrix}
0 \\ 0 \\ 1 \\ 0
\end{pmatrix}, \quad
\boldsymbol{\zeta}_{\downarrow\downarrow} =
\begin{pmatrix}
0 \\ 0 \\ 0 \\ 1
\end{pmatrix}.
For an arbitrary state vector, we can represent it as the following column vector
\boldsymbol{\psi} = \psi_{\uparrow\uparrow}\boldsymbol{\zeta}_{\uparrow\uparrow} + \psi_{\uparrow\downarrow}\boldsymbol{\zeta}_{\uparrow\downarrow}
+ \psi_{\downarrow\uparrow}\boldsymbol{\zeta}_{\downarrow\uparrow} + \psi_{\downarrow\downarrow}\boldsymbol{\zeta}_{\downarrow\downarrow}=
\begin{pmatrix}
\psi_{\uparrow\uparrow} \\ \psi_{\uparrow\downarrow} \\ \psi_{\downarrow\uparrow} \\ \psi_{\downarrow\downarrow}
\end{pmatrix}.
We have seen that the two-particle basis can be constructed via the tensor product of the single-particle bases. For matrices, there exists a similar operation that builds larger matrices from smaller ones. We introduce this operation below.
Kronecker Product
The Kronecker product between two matrices is denoted by the same symbol as the tensor product. Let us take an n\times n matrix \boldsymbol{U} and a p \times p matrix \boldsymbol{V}. The Kronecker product \boldsymbol{W} = \boldsymbol{U}\otimes\boldsymbol{V} is the np\times np matrix:
\boldsymbol{W} = \boldsymbol{U} \otimes \boldsymbol{V} =
\begin{pmatrix}
U_{00}\boldsymbol{V} & U_{01}\boldsymbol{V} & \dots & U_{0,n-1}\boldsymbol{V} \\
U_{10}\boldsymbol{V} & U_{11}\boldsymbol{V} & \dots & U_{1,n-1}\boldsymbol{V} \\
\vdots & \vdots & \ddots & \vdots \\
U_{n-1,0}\boldsymbol{V} & U_{n-1,1}\boldsymbol{V} & \dots & U_{n-1,n-1}\boldsymbol{V}
\end{pmatrix}.
The presence of a \boldsymbol{V} in each slot is to be interpreted as follows: to produce \boldsymbol{U}\otimes \boldsymbol{V}, take each element of \boldsymbol{U}, namely U_{ik}, and replace it by U_{ik}\boldsymbol{V}, which is a p \times p matrix. For example, we have
\boldsymbol{\zeta}_{\uparrow\downarrow} =
\begin{pmatrix}
1 \\ 0
\end{pmatrix}
\otimes
\begin{pmatrix}
0 \\ 1
\end{pmatrix}
=
\begin{pmatrix}
1\begin{pmatrix} 0 \\ 1 \end{pmatrix} \\ 0 \begin{pmatrix} 0 \\ 1 \end{pmatrix}
\end{pmatrix}
=
\begin{pmatrix}
0 \\ 1 \\ 0 \\ 0
\end{pmatrix}.
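Incidentally, NumPy provides a built-in Kronecker product, np.kron, which reproduces this result (a quick sketch; below we will also write our own implementation):

import numpy as np

up = np.array([1., 0])     # one-particle spin-up spinor
down = np.array([0., 1])   # one-particle spin-down spinor

print(np.kron(up, down))   # [0. 1. 0. 0.], i.e. the column vector zeta_{up,down}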
We can also use the Kronecker product to build the matrix representation of \hat{S}_{Iz}. We have
\boldsymbol{S}_{Iz} = \boldsymbol{S}_z \otimes \boldsymbol{I} = \frac{1}{2}
\begin{pmatrix}
1 & 0 \\
0 & -1
\end{pmatrix}
\otimes
\begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}
=
\frac{1}{2}
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1
\end{pmatrix}.
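We can check this matrix, and its action on the basis spinors, with a short sketch based on np.kron (our own kron routine appears in the implementation section below):

import numpy as np

Sz = 0.5*np.array([[1., 0], [0, -1]])
iden = np.identity(2)

SIz = np.kron(Sz, iden)    # z-spin of particle I in the two-particle space
SIIz = np.kron(iden, Sz)   # z-spin of particle II

zeta_ud = np.kron(np.array([1., 0]), np.array([0., 1]))  # |up down>
print(SIz @ zeta_ud)       # (+1/2) * zeta_ud
print(SIIz @ zeta_ud)      # (-1/2) * zeta_ud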
Now let us try to implement the Kronecker product programmatically, for general matrices \boldsymbol{U} and \boldsymbol{V}. We need an equation connecting the indices of the \boldsymbol{U} and \boldsymbol{V} matrix elements, on the one hand, with the indices of the \boldsymbol{W} matrix, on the other. This is:
W_{ab} =(\boldsymbol{U}\otimes \boldsymbol{V})_{ab} = U_{ik}V_{jl}, \quad a = pi+j, b = pk + l
where p is the dimension of \boldsymbol{V}. The original four indices take on the values:
i = 0,1,\dots, n-1, \quad k = 0, 1, \dots, n-1, \quad j = 0, 1, \dots, p-1, \quad l = 0,1,\dots, p-1.
As a result, the new indices take on the values
a = 0,1,\dots,np-1, \quad b = 0,1,\dots, np-1.
You should spend some time thinking about our new equation: you will benefit from applying it by hand to one or two simple cases (say, the Kronecker product of a 2 \times 2 matrix with a 3 \times 3 matrix).
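Here is a quick numerical check of the index relation (a sketch that compares a four-loop implementation against NumPy's built-in np.kron, for a 2 \times 2 matrix \boldsymbol{U} and a 3 \times 3 matrix \boldsymbol{V} filled with random entries):

import numpy as np

rng = np.random.default_rng(314)
U = rng.standard_normal((2, 2))    # n = 2
V = rng.standard_normal((3, 3))    # p = 3
n, p = U.shape[0], V.shape[0]

W = np.zeros((n*p, n*p))
for i in range(n):
    for k in range(n):
        for j in range(p):
            for l in range(p):
                W[p*i + j, p*k + l] = U[i, k]*V[j, l]   # W_ab = U_ik V_jl

print(np.allclose(W, np.kron(U, V)))                    # True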
2.3 Interacting Spins
Let us now consider the interaction between two spins:
\hat{\boldsymbol{S}}_{I} \cdot \hat{\boldsymbol{S}}_{II} = \hat{S}_{Ix}\hat{S}_{IIx} + \hat{S}_{Iy}\hat{S}_{IIy} +\hat{S}_{Iz}\hat{S}_{IIz}.
The matrix representation of this interaction can be constructed with the Kronecker product:
\boldsymbol{S}_{I\cdot II} = \{\bra{a}\hat{\boldsymbol{S}}_{I} \cdot \hat{\boldsymbol{S}}_{II} \ket{b}\}
= \boldsymbol{S}_{x}\otimes\boldsymbol{S}_{x} + \boldsymbol{S}_{y}\otimes\boldsymbol{S}_{y} + \boldsymbol{S}_{z}\otimes\boldsymbol{S}_{z}.
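As a sanity check (a sketch using NumPy built-ins), the eigenvalues of this 4\times 4 matrix are -3/4 (the spin singlet) and +1/4 (threefold degenerate, the spin triplet), consistent with the coupled representation mentioned above:

import numpy as np

# one-particle spin matrices (hbar = 1)
Sx = 0.5*np.array([[0., 1], [1, 0]])
Sy = 0.5*np.array([[0., -1j], [1j, 0]])
Sz = 0.5*np.array([[1., 0], [0, -1]])

SdotS = np.kron(Sx, Sx) + np.kron(Sy, Sy) + np.kron(Sz, Sz)
print(np.linalg.eigvalsh(SdotS))   # [-0.75  0.25  0.25  0.25]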
2.4 Hamiltonian
We end our discussion of two spin-half particles with the Schrödinger equation:
\hat{H}\ket{\psi} = E\ket{\psi}.
We can use its matrix representation for numerical implementation
\boldsymbol{H}\boldsymbol{\psi} = E \boldsymbol{\psi},
where \boldsymbol{H} is a 4\times 4 matrix, and \boldsymbol{\psi} is a 4 \times 1 column vector.
A Hamiltonian describing the interaction between two spins in the presence of an external magnetic field is of the following form:
\begin{align*}
\hat{H} &= -\frac{g_I q_I B}{2m_I}\hat{S}_{Iz} - \frac{g_{II} q_{II} B}{2m_{II}}\hat{S}_{IIz} + \gamma \hat{\boldsymbol{S}}_{I} \cdot \hat{\boldsymbol{S}}_{II} \\
& = -\omega_{I}\hat{S}_{Iz} - \omega_{II}\hat{S}_{IIz} + \gamma (\hat{S}_{Ix}\hat{S}_{IIx} + \hat{S}_{Iy}\hat{S}_{IIy} +\hat{S}_{Iz}\hat{S}_{IIz}).
\end{align*}
Based on this, we can construct its matrix representation using the Kronecker product:
\boldsymbol{H} = -\omega_I \boldsymbol{S}_z\otimes\boldsymbol{I} - \omega_{II}\boldsymbol{I}\otimes\boldsymbol{S}_z + \gamma(\boldsymbol{S}_{x}\otimes\boldsymbol{S}_x + \boldsymbol{S}_{y}\otimes\boldsymbol{S}_y + \boldsymbol{S}_{z}\otimes\boldsymbol{S}_z).
\tag{2}
In your homework, you will show that
\boldsymbol{H} = -\frac{1}{2}
\begin{pmatrix}
\omega_{I}+\omega_{II}-\frac{\gamma}{2} & 0 & 0 & 0 \\
0 &\omega_{I} - \omega_{II}+\frac{\gamma}{2} & -\gamma & 0 \\
0 & -\gamma & -\omega_I+\omega_{II}+\frac{\gamma}{2} & 0 \\
0 & 0 & 0 & -\omega_I-\omega_{II}-\frac{\gamma}{2}
\end{pmatrix}
\tag{3}
3 Three Particles
We will now see the benefits of the theoretical machinery we established in the previous sections: we can study the problem of three spin-half particles, interacting with a magnetic field and with each other. The matrix formulation of this problem gives rise to 8 \times 8 matrices (so 64 matrix elements per matrix): since there are three particles and three Cartesian components, we need to deal with at least nine matrices, each of which is 8 \times 8. In other words, this is not a task that is comfortably carried out using paper and pencil, which is why it traditionally doesn't appear in QM textbooks.
The basis for the three-particle Hilbert space can be chosen naturally as
\ket{i^{(I)}}\otimes\ket{j^{(II)}}\otimes\ket{k^{(III)}} \equiv \ket{\mu}
with all possible
\mu = \uparrow\uparrow\uparrow, \uparrow\uparrow\downarrow, \uparrow\downarrow\uparrow, \downarrow\uparrow\uparrow, \uparrow\downarrow\downarrow, \downarrow\uparrow\downarrow, \downarrow\downarrow\uparrow,\downarrow\downarrow\downarrow.
You may thus think of \mu as the ordered triple (i,j,k).
The operators acting on the three-particle Hilbert space can be constructed analogously. For example,
\hat{S}_{Iz} = \hat{S}_{z}^{(I)}\otimes\hat{\mathcal{I}}^{(II)}\otimes\hat{\mathcal{I}}^{(III)},
is the z projection for the first particle. Similarly, the matrix representation of this operator can be constructed with the Kronecker product
\boldsymbol{S}_{Iz} = \boldsymbol{S}_z \otimes \boldsymbol{I} \otimes \boldsymbol{I}.
Here, we see two Kronecker products in a row, so you may be wondering how to interpret such an operation. Luckily, the Kronecker product is associative:
(\boldsymbol{T}\otimes \boldsymbol{U})\otimes \boldsymbol{V} =
\boldsymbol{T} \otimes (\boldsymbol{U} \otimes \boldsymbol{V})
meaning that you simply carry out one Kronecker product after the other and it doesn’t matter which Kronecker product you carry out first. (Note that the Kronecker product is not commutative: \boldsymbol{U}\otimes \boldsymbol{V} \neq \boldsymbol{V} \otimes \boldsymbol{U}).
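Both statements are easy to spot-check numerically (a sketch with random 2\times 2 matrices and NumPy's np.kron):

import numpy as np

rng = np.random.default_rng(7)
T, U, V = (rng.standard_normal((2, 2)) for _ in range(3))

# associativity: both groupings give the same 8x8 matrix
print(np.allclose(np.kron(np.kron(T, U), V), np.kron(T, np.kron(U, V))))   # True
# non-commutativity: the two orderings generically differ
print(np.allclose(np.kron(U, V), np.kron(V, U)))                           # False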
3.1 Hamiltonian
With these preparations, let us consider the following Hamiltonian
\hat{H} = -\sum_{\alpha=I,II,III} \omega_\alpha \hat{S}_{\alpha z}
+ \gamma \sum_{\alpha<\beta}\hat{\boldsymbol{S}}_{\alpha} \cdot \hat{\boldsymbol{S}}_{\beta}
which describes three interacting spins in an external magnetic field. Note that in the second summation, the two indices have to satisfy \alpha < \beta, and hence the only possible (\alpha,\beta) pairs are (I,II), (II,III), and (I,III).
Note that, in evaluating products such as \hat{S}_{Ix}\hat{S}_{IIx} in matrix form, we exploit the following property of the Kronecker product (see the homework):
(\boldsymbol{A}\otimes \boldsymbol{B})(\boldsymbol{C}\otimes \boldsymbol{D})
= (\boldsymbol{A}\boldsymbol{C}) \otimes (\boldsymbol{B}\boldsymbol{D}).
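A numerical spot check of this property is straightforward (a sketch with random matrices; of course, this is not a substitute for the proof asked for in the homework):

import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

lhs = np.kron(A, B) @ np.kron(C, D)   # ordinary matrix product of two 4x4 matrices
rhs = np.kron(A @ C, B @ D)
print(np.allclose(lhs, rhs))          # True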
4 Implementation
4.1 Kronecker product
We first implement the Kronecker product in the following code.
import numpy as np

def paulimatrices():
    sigx = np.array([0.,1,1,0]).reshape(2,2)
    sigy = np.array([0.,-1j,1j,0]).reshape(2,2)
    sigz = np.array([1.,0,0,-1]).reshape(2,2)
    return sigx, sigy, sigz

def kron(U,V):
    n = U.shape[0]
    p = V.shape[0]
    W = np.zeros((n*p,n*p), dtype=np.complex64)
    for i in range(n):
        for k in range(n):
            for j in range(p):
                for l in range(p):
                    W[p*i+j,p*k+l] = U[i,k]*V[j,l]
    return W

if __name__ == '__main__':
    sigx, sigy, sigz = paulimatrices()
    allones = np.ones((3,3))
    kronprod = kron(sigx,allones); print(kronprod.real)
4.2 Two particles
We implement the Hamiltonian for two spins in the following code. This program also calls our earlier routines for calculating eigenvalues using the QR method (qrmet()). The inputs for twospins(omI,omII,gam) are \omega_I, \omega_{II}, and \gamma.
Note that we first call paulimatrices() to get a list of Pauli matrices. We then use a list comprehension to store the x, y, and z components of \boldsymbol{S}_I as a list in SIs.
import numpy as np

def twospins(omI,omII,gam):
    hbar = 1.
    paulis = paulimatrices()
    iden = np.identity(2)
    SIs = [hbar*kron(pa,iden)/2 for pa in paulis]
    SIIs = [hbar*kron(iden,pa)/2 for pa in paulis]
    SIdotII = sum([SIs[i]@SIIs[i] for i in range(3)])
    H = -omI*SIs[2] - omII*SIIs[2] + gam*SIdotII
    H = H.real
    return H

def paulimatrices():
    sigx = np.array([0.,1,1,0]).reshape(2,2)
    sigy = np.array([0.,-1j,1j,0]).reshape(2,2)
    sigz = np.array([1.,0,0,-1]).reshape(2,2)
    return sigx, sigy, sigz

def kron(U,V):
    n = U.shape[0]
    p = V.shape[0]
    W = np.zeros((n*p,n*p), dtype=np.complex64)
    for i in range(n):
        for k in range(n):
            for j in range(p):
                for l in range(p):
                    W[p*i+j,p*k+l] = U[i,k]*V[j,l]
    return W

def qrmet(inA,kmax=100):
    A = np.copy(inA)
    for k in range(1,kmax):
        Q, R = qrdec(A)
        A = R@Q
        # print(k, np.diag(A))
    qreigvals = np.diag(A)
    return qreigvals

def qrdec(A):
    n = A.shape[0]
    Ap = np.copy(A)
    Q = np.zeros((n,n))
    R = np.zeros((n,n))
    for j in range(n):
        for i in range(j):
            R[i,j] = Q[:,i]@A[:,j]
            Ap[:,j] -= R[i,j]*Q[:,i]
        R[j,j] = mag(Ap[:,j])
        Q[:,j] = Ap[:,j]/R[j,j]
    return Q, R

def mag(xs):
    return np.sqrt(np.sum(xs*xs))

if __name__ == '__main__':
    H = twospins(1.,2.,0.5)
    qreigvals = qrmet(H); print(qreigvals)
[-1.375 -0.68401701 0.434017 1.625 ]
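As a cross-check (a sketch that rebuilds Equation 2 with NumPy's built-in np.kron and np.linalg.eigvalsh instead of our own routines), we can confirm these eigenvalues independently:

import numpy as np

Sx = 0.5*np.array([[0., 1], [1, 0]])
Sy = 0.5*np.array([[0., -1j], [1j, 0]])
Sz = 0.5*np.array([[1., 0], [0, -1]])
iden = np.identity(2)

omI, omII, gam = 1., 2., 0.5
H = -omI*np.kron(Sz, iden) - omII*np.kron(iden, Sz)
H += gam*(np.kron(Sx, Sx) + np.kron(Sy, Sy) + np.kron(Sz, Sz))

print(np.linalg.eigvalsh(H))   # [-1.375 -0.684  0.434  1.625]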
4.3 Three particles
Let us now implement the Hamiltonian for three particles.
import numpy as np

def threespins(omI,omII,omIII,gam):
    hbar = 1.
    paulis = paulimatrices()
    iden = np.identity(2)
    SIs = [hbar*kron(kron(pa,iden),iden)/2 for pa in paulis]
    SIIs = [hbar*kron(kron(iden,pa),iden)/2 for pa in paulis]
    SIIIs = [hbar*kron(kron(iden,iden),pa)/2 for pa in paulis]
    SIdotII = sum([SIs[i]@SIIs[i] for i in range(3)])
    SIdotIII = sum([SIs[i]@SIIIs[i] for i in range(3)])
    SIIdotIII = sum([SIIs[i]@SIIIs[i] for i in range(3)])
    H = -omI*SIs[2] - omII*SIIs[2] - omIII*SIIIs[2]
    H += gam*(SIdotII+SIdotIII+SIIdotIII)
    H = H.real
    return H

def paulimatrices():
    sigx = np.array([0.,1,1,0]).reshape(2,2)
    sigy = np.array([0.,-1j,1j,0]).reshape(2,2)
    sigz = np.array([1.,0,0,-1]).reshape(2,2)
    return sigx, sigy, sigz

def kron(U,V):
    n = U.shape[0]
    p = V.shape[0]
    W = np.zeros((n*p,n*p), dtype=np.complex64)
    for i in range(n):
        for k in range(n):
            for j in range(p):
                for l in range(p):
                    W[p*i+j,p*k+l] = U[i,k]*V[j,l]
    return W

def qrmet(inA,kmax=100):
    A = np.copy(inA)
    for k in range(1,kmax):
        Q, R = qrdec(A)
        A = R@Q
        # print(k, np.diag(A))
    qreigvals = np.diag(A)
    return qreigvals

def qrdec(A):
    n = A.shape[0]
    Ap = np.copy(A)
    Q = np.zeros((n,n))
    R = np.zeros((n,n))
    for j in range(n):
        for i in range(j):
            R[i,j] = Q[:,i]@A[:,j]
            Ap[:,j] -= R[i,j]*Q[:,i]
        R[j,j] = mag(Ap[:,j])
        Q[:,j] = Ap[:,j]/R[j,j]
    return Q, R

def mag(xs):
    return np.sqrt(np.sum(xs*xs))

if __name__ == '__main__':
    np.set_printoptions(precision=3)
    H = threespins(1.,2.,3.,0.5)
    qreigvals = qrmet(H); print(qreigvals)
5 Problems
(no programming) Prove that the Kronecker product of arbitrary matrices satisfies
(\boldsymbol{A}\otimes \boldsymbol{B})(\boldsymbol{C}\otimes \boldsymbol{D})
= (\boldsymbol{A}\boldsymbol{C}) \otimes (\boldsymbol{B}\boldsymbol{D}).
Note that (\boldsymbol{A}\otimes \boldsymbol{B})(\boldsymbol{C}\otimes \boldsymbol{D}) is a matrix product between \boldsymbol{A}\otimes \boldsymbol{B} and \boldsymbol{C}\otimes \boldsymbol{D}, and \boldsymbol{A}\boldsymbol{C} is the matrix product between \boldsymbol{A} and \boldsymbol{C}.
(programming) Take the two-particle Hamiltonian. Fix \omega_{II}=2\omega_{I} and \gamma = 1. Plot the eigenvalues as a function of \omega_I over the interval 0\leq \omega_I \leq 3.0.