Addendum: Linear Algebra Basics

Week 0 • September 1, 2025

Table of Contents

  1. Matrix Basics
  2. Matrix Operations
  3. Basis
  4. Eigenvalues and Eigenvectors
  5. Normalization

Matrix Basics

This is an extra page to help explain some of the basics of linear algebra to those who are unfamiliar with it. I highly recommend watching 3blue1brown's Essence of Linear Algebra series, as it does a great job providing a visual intuition for all of the concepts it covers. If you want a course to utilize for a greater understanding of linear algebra, I recommend MIT's Linear Algebra course. The course is provided for free via MIT's OpenCourseWare.

A matrix is simply a 2D list of numbers. We can write a size $m \times n$ matrix as

$$ A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ a_{31} & a_{32} & \cdots & a_{3n} \\ \vdots & \vdots & \vdots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} $$

When we say a matrix is size $m \times n$, we are saying that it has $m$ rows and $n$ columns. We write the position in the subscript, with the row coming first and then the column, so $A_{ij}$ represents the element at row $i$ and column $j$. A matrix with only 1 column is known as a column vector, or just a vector. A matrix with only one row is known as a row vector.

$$ \vec{v} = \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} \quad \text{(column vector)} $$

$$ \vec{r} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \end{bmatrix} \quad \text{(row vector)} $$

Matrix Operations

The transpose of a matrix, denoted with a superscript $\top$, is an operation which switches the row and column indices of the matrix, so $A_{ij}=A^\top_{ji}$. For example,

$$ A = \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix}, \quad A^\top = \begin{bmatrix} a & c & e \\ b & d & f \end{bmatrix} $$

In quantum computing, we will generally use the conjugate transpose, denoted with a superscript $\dag$, which also flips the sign of the imaginary part of every value in the matrix. You will see its use as we progress, but for now, just know that it is the same as the transpose, but with the imaginary part of each value negated.
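The transpose and conjugate transpose are one-liners in NumPy, so here is a quick sketch you can run to see both in action:

```python
import numpy as np

# Transpose: swap row and column indices, so a 3x2 matrix becomes 2x3.
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])
print(A.T)

# Conjugate transpose (dagger): transpose, then negate the imaginary parts.
B = np.array([[1 + 2j, 3 - 1j],
              [0 + 1j, 2 + 0j]])
B_dag = B.conj().T
print(B_dag)  # e.g. the entry 1j at row 2, column 1 becomes -1j at row 1, column 2
```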

If $A$ is an $m \times n$ matrix, and $B$ is an $n \times p$ matrix, the values of the matrix product $C=AB$, which is an $m \times p$ matrix, are defined as $C_{ij}=\sum_{k=1}^{n} A_{ik}B_{kj}$. The number of columns in $A$ must match the number of rows in $B$ for a matrix product to exist. For example,

$$ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} \begin{bmatrix} 7 & 8 \\ 9 & 10 \\ 11 & 12 \end{bmatrix} = \begin{bmatrix} 58 & 64 \\ 139 & 154 \end{bmatrix} $$
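You can verify this product with NumPy, where `@` is the matrix multiplication operator:

```python
import numpy as np

# C_ij = sum_k A_ik * B_kj; A is 2x3 and B is 3x2, so C is 2x2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = np.array([[7, 8],
              [9, 10],
              [11, 12]])
C = A @ B
print(C)  # [[ 58  64] [139 154]]
```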

The inner product of two vectors $a$ and $b$ of size $n$ is the sum of the products of the corresponding entries in each vector (for complex vectors, the $a^\dag$ means the entries of $a$ are conjugated first).

$$ a^\dag b = a \cdot b = a_1b_1+\cdots+a_nb_n $$

As you can see, there are a few different ways to write the inner product. In quantum computing, we generally use bra-ket notation, where we would write this operation as $\langle a | b \rangle$. For more on bra-ket notation, see the week 1 addendum.

The outer product of two vectors $a$ and $b$ is, in a sense, the opposite operation: instead of producing a single number, it produces a matrix containing the products of all possible pairs of entries, one from each vector. For two vectors of size 3,

$$ ab^\dag = \begin{bmatrix} a_1b_1 & a_1b_2 & a_1b_3 \\ a_2b_1 & a_2b_2 & a_2b_3 \\ a_3b_1 & a_3b_2 & a_3b_3 \end{bmatrix} $$

In bra-ket notation, we would write this operation as $|a\rangle\langle b|$.
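Both products are available in NumPy; a small sketch with real-valued vectors:

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# Inner product: a scalar, the sum of products of corresponding entries.
inner = np.dot(a, b)  # 1*4 + 2*5 + 3*6 = 32
# For complex vectors, np.vdot conjugates its first argument (a^dagger b).
inner_c = np.vdot(a, b)

# Outer product: a matrix with entries a_i * b_j.
outer = np.outer(a, b)
print(inner)
print(outer)
```

For complex vectors, the outer product $ab^\dag$ would be `np.outer(a, b.conj())`, since the dagger conjugates the entries of $b$.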

Basis

A set of vectors $\{v_1,\dots,v_n\}$ is a basis if every possible vector in the space¹ we're in (everywhere we can reach by scaling the vectors) can be written as a linear combination of the vectors in the basis. The elements of a basis must be linearly independent, i.e. it cannot be possible to write one vector in the basis as a linear combination of the others. In other words,

$$ a = c_1v_1+\cdots+c_nv_n $$

where $a$ is any vector in the space we're in and $c_1,\dots,c_n$ are some constant values. Consequently, if we are working with vectors in $n$ dimensions, any basis must be of size $n$.
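Finding the coefficients $c_1,\dots,c_n$ amounts to solving a linear system: stack the basis vectors as the columns of a matrix $V$ and solve $Vc = a$. A sketch with a hypothetical basis for two dimensions:

```python
import numpy as np

# Two linearly independent vectors form a basis for 2D space.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
V = np.column_stack([v1, v2])  # basis vectors as columns

# Express a = c1*v1 + c2*v2 by solving V c = a for the coefficients c.
a = np.array([3.0, 1.0])
c = np.linalg.solve(V, a)
print(c)  # c = [2, 1], since 2*v1 + 1*v2 = [3, 1]
```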

Eigenvalues and Eigenvectors

An eigenvalue and eigenvector of a matrix $A$ are a value $\lambda$ and nonzero vector $\vec{v}$ such that

$$A\vec{v} = \lambda \vec{v}$$

Conceptually, eigenvectors are vectors whose direction in space does not change when multiplied by the transformation matrix; they are only scaled, by their eigenvalue. In order to find these eigenvectors, we first find their associated eigenvalues using the characteristic polynomial of the matrix, which is given by the equation:

$$ \text{det}(A - \lambda I) = 0$$

where $\det$ is the determinant of the matrix.

Below is an example of how to find the eigenvalues and eigenvectors of the matrix

$$ A = \begin{pmatrix}4 & 2\\ 1 & 3\end{pmatrix} $$

The characteristic polynomial is

$$ \begin{align*} \det(A - \lambda I) &= \det\begin{pmatrix}4-\lambda & 2\\ 1 & 3-\lambda\end{pmatrix} \\ &= (4-\lambda)(3-\lambda) - 2 \cdot 1 \\ &= \lambda^2 - 7\lambda + 10. \end{align*} $$

Solving

$$ \lambda^2 - 7\lambda + 10 = 0 $$

gives the integer roots

$$ \lambda = 5, \: \lambda = 2. $$

Now, let's solve for the first eigenvector, $(A - 5I) \vec{v} = 0$.

$$ \begin{align*} (A - 5I) &= \begin{pmatrix}-1 & 2\\ 1 & -2\end{pmatrix} \\ \begin{pmatrix}-1 & 2\\ 1 & -2\end{pmatrix} \begin{pmatrix}x\\ y\end{pmatrix} &= \begin{pmatrix}0\\ 0\end{pmatrix} \\ -x + 2y &= 0 \\ x &= 2y. \end{align*} $$

Choosing $y = 1$ gives:

$$ \mathbf v_5 = \begin{pmatrix}2\\1\end{pmatrix}. $$

Now, let's solve for the other eigenvector, $(A - 2I) \vec{v} = 0$.

$$ \begin{align*} (A - 2I) &= \begin{pmatrix}2 & 2\\ 1 & 1\end{pmatrix} \\ \begin{pmatrix}2 & 2\\ 1 & 1\end{pmatrix} \begin{pmatrix}x\\ y\end{pmatrix} &= \begin{pmatrix}0\\ 0\end{pmatrix} \\ x + y &= 0 \\ y &= -x. \end{align*} $$

Choosing $x = 1$ gives:

$$ \mathbf v_2 = \begin{pmatrix}1\\-1\end{pmatrix}. $$

The matrix

$$ A = \begin{pmatrix}4 & 2\\ 1 & 3\end{pmatrix} $$

has eigenvalues 5 and 2, with corresponding eigenvectors

$$ \begin{pmatrix}2\\1\end{pmatrix} \text{ and } \begin{pmatrix}1\\-1\end{pmatrix} $$
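We can check this result numerically. Note that `np.linalg.eig` returns eigenvectors scaled to unit length, so they will be scalar multiples of the integer vectors above:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [1.0, 3.0]])

# eig returns the eigenvalues and the eigenvectors as matrix columns.
vals, vecs = np.linalg.eig(A)
print(vals)  # 5 and 2 (the order is not guaranteed)

# Verify A v = lambda v for each eigenpair.
for lam, v in zip(vals, vecs.T):
    assert np.allclose(A @ v, lam * v)
```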

This is arguably the most important concept in linear algebra, both inside and outside of quantum computing. Some examples of their applications include Schrödinger's equation and molecular orbitals.

We will be looking at a quantum algorithm to estimate the eigenvalues of a matrix later on in the semester.

Normalization

Another important concept is normalization. A vector is normalized if its magnitude (also called its norm, denoted as $\|v\|$) is equal to 1. In the world of linear algebra, we call such a vector a unit vector (like a unit circle, whose radius is 1). For a vector $\vec{v}$ with components $v_1, v_2, \dots, v_n$, the norm is given by

$$ \|v\| = \sqrt{|v_1|^2 + |v_2|^2 + \cdots + |v_n|^2} $$

To normalize a vector, divide each component by its norm:

$$ \hat{v} = \frac{\vec{v}}{\|v\|} $$

where $\hat{v}$ is the normalized vector.

Below is an example of a normalized vector calculation:

$$ \begin{align*} \vec{v} &= \begin{bmatrix} 3 \\ 4 \end{bmatrix} \\ \quad \|v\| &= \sqrt{3^2 + 4^2} = \sqrt{25} = 5 \\ \hat{v} &= \frac{1}{5}\begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 3/5 \\ 4/5 \end{bmatrix} \end{align*} $$
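The same calculation in NumPy, as a quick sanity check:

```python
import numpy as np

v = np.array([3.0, 4.0])
norm = np.linalg.norm(v)  # sqrt(3^2 + 4^2) = 5
v_hat = v / norm          # divide each component by the norm
print(v_hat)              # [0.6 0.8]
print(np.linalg.norm(v_hat))  # the normalized vector has norm 1
```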

In quantum computing, all statevectors must be normalized, since the sum of the probabilities of all possible outcomes must be equal to 1, representing 100%.

As we progress throughout the semester, you will see equations having been normalized for use in quantum computing. An example is $|+\rangle$ where:

$$ |+\rangle = \frac{|0\rangle+|1\rangle}{\sqrt{2}} $$

Don't worry about what $|+\rangle$ or other related symbols mean just yet; they will be covered as we progress, but notice the denominator. The $\sqrt2$ in the denominator ensures the state is normalized: the probabilities of measuring $|+\rangle$ as $|0\rangle$ or $|1\rangle$ are each $(1/\sqrt2)^2 = 1/2$, and they sum to 1.
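As a numerical sketch (treating $|+\rangle$ simply as the vector of its two amplitudes), the measurement probabilities are the squared magnitudes of the entries, and they sum to 1:

```python
import numpy as np

# The |+> state as a statevector: amplitudes 1/sqrt(2) for |0> and |1>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)

probs = np.abs(plus) ** 2  # measurement probabilities
print(probs)               # [0.5 0.5]
print(probs.sum())         # 1.0, so the state is normalized
```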

Ultimately, the best way to become more familiar with linear algebra is just to practice it. There are plenty of videos and resources online to help you master the topics. A very strong understanding of linear algebra will make a lot of quantum computing concepts easier to understand.



1. In the world of linear algebra, a space is the set of all vectors that can be reached by scaling and adding the vectors in the basis.