Here, I will discuss some linear algebra basics that provide sufficient background for Cross Section Econometrics. We will cover very basic linear algebra that by no means spans the full breadth of this topic. Why linear algebra? Linear algebra allows us to express relatively complex linear relationships in a very compact way. This compactness extends to computing as well: the estimation theory we cover is implemented using linear algebra techniques.

Being comfortable with the rules for scalar and matrix addition, subtraction, multiplication, and division (known as inversion) is important for our class.

To learn the basics, consider a small matrix of dimension $2 \times 2$, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A=\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$. Consider adding a scalar value (e.g. 3) to $A$.

$$

\begin{equation}

A+3=\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix}+3

=\begin{bmatrix}

a_{11}+3 & a_{12}+3 \\

a_{21}+3 & a_{22}+3

\end{bmatrix}

\end{equation}

$$

The same basic principle holds true for $A-3$:

$$

\begin{equation}

A-3=\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix}-3

=\begin{bmatrix}

a_{11}-3 & a_{12}-3 \\

a_{21}-3 & a_{22}-3

\end{bmatrix}

\end{equation}

$$

Notice that we add the scalar value to (or subtract it from) each element of the matrix $A$, and that $A$ can be of any dimension.
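As a quick sketch of this elementwise behavior, NumPy (which we will not require for the class, but which mirrors the notation here) applies a scalar to every element exactly as in the equations above. The numeric values are arbitrary placeholders for the symbolic $a_{ij}$:

```python
import numpy as np

# A concrete 2x2 matrix standing in for the symbolic A above
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# NumPy broadcasts the scalar to every element, matching the definition above
A_plus = A + 3   # each element increased by 3
A_minus = A - 3  # each element decreased by 3
print(A_plus)
print(A_minus)
```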

Consider two small $2 \times 2$ matrices, where $2 \times 2$ denotes the # of rows $\times$ the # of columns. Let $A=\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$ and $B=\bigl[ \begin{smallmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{smallmatrix} \bigr]$. To find the result of $A-B$, simply subtract each element of $B$ from the corresponding element of $A$:

$$

\begin{equation}

A -B =

\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix} -

\begin{bmatrix} b_{11} & b_{12} \\

b_{21} & b_{22}

\end{bmatrix}

=

\begin{bmatrix}

a_{11}-b_{11} & a_{12}-b_{12} \\

a_{21}-b_{21} & a_{22}-b_{22}

\end{bmatrix}

\end{equation}

$$

Addition works exactly the same way:

$$

\begin{equation}

A + B =

\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix} +

\begin{bmatrix} b_{11} & b_{12} \\

b_{21} & b_{22}

\end{bmatrix}

=

\begin{bmatrix}

a_{11}+b_{11} & a_{12}+b_{12} \\

a_{21}+b_{21} & a_{22}+b_{22}

\end{bmatrix}

\end{equation}

$$

An important point to know about matrix addition and subtraction is that they are only defined when $A$ and $B$ are of the same size; here, both are $2 \times 2$. Since the operations are performed element by element, the two matrices must be conformable, which for addition and subtraction means they must have the same numbers of rows and columns. I like to be explicit about the dimensions of matrices to check conformability as I write equations, so I write

$A_{2 \times 2} + B_{2 \times 2}= \begin{bmatrix}

a_{11}+b_{11} & a_{12}+b_{12} \\

a_{21}+b_{21} & a_{22}+b_{22}

\end{bmatrix}_{2 \times 2}$

Notice that the result of a matrix addition or subtraction operation is always of the same dimension as the two operands.
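A minimal sketch of these rules in NumPy, using arbitrary numeric values for the symbolic entries: the shape check mirrors the conformability condition, and the result keeps the operands' dimension.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Conformability check: addition/subtraction requires identical dimensions
assert A.shape == B.shape

S = A + B   # elementwise sum
D = A - B   # elementwise difference
print(S.shape)  # result has the same dimension as the operands: (2, 2)
```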

As before, let $A=\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$. Suppose we want to multiply $A$ by a scalar value (e.g. $3 \times A$)

$$

\begin{equation}

3 \times A = 3 \times \begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix}

=

\begin{bmatrix}

3a_{11} & 3a_{12} \\

3a_{21} & 3a_{22}

\end{bmatrix}

= A \times 3

\end{equation}

$$

Scalar multiplication is commutative, so that $3 \times A = A \times 3$. Notice that this product is defined for a matrix $A$ of any dimension.
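A one-line NumPy check of this commutativity, again with placeholder numeric values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Scalar multiplication scales every element, and the order does not matter
left = 3 * A
right = A * 3
print(np.array_equal(left, right))  # True
```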

Now, consider the $2 \times 1$ vector $C=\bigl[ \begin{smallmatrix} c_{11} \\ c_{21} \end{smallmatrix} \bigr]$.

Consider multiplying the matrix $A_{2 \times 2}$ and the vector $C_{2 \times 1}$. Unlike the addition and subtraction case, this product is defined. Here, conformability depends not on the row *and* column dimensions matching, but rather on the column dimension of the first operand matching the row dimension of the second operand. We can write this operation as follows

$$

\begin{equation}

A_{2 \times 2} \times C_{2 \times 1} =

\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix}_{2 \times 2}

\times

\begin{bmatrix}

c_{11} \\

c_{21}

\end{bmatrix}_{2 \times 1}

=

\begin{bmatrix}

a_{11}c_{11} + a_{12}c_{21} \\

a_{21}c_{11} + a_{22}c_{21}

\end{bmatrix}_{2 \times 1}

\end{equation}

$$
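As an illustrative sketch, the matrix-vector product above can be computed in NumPy with the `@` operator; the placeholder numbers below stand in for the symbolic entries, and the result's shape follows the rule just described ($2 \times 2$ times $2 \times 1$ gives $2 \times 1$):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
C = np.array([[5.0],
              [6.0]])   # a 2x1 column vector

# @ is NumPy's matrix-multiplication operator
result = A @ C
print(result.shape)  # (2, 1): rows of A by columns of C
print(result)        # [a11*c11 + a12*c21, a21*c11 + a22*c21]
```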

Alternatively, consider a matrix C of dimension $2 \times 3$ and a matrix A of dimension $3 \times 2$

$$

\begin{equation}

A_{3 \times 2}=\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22} \\

a_{31} & a_{32}

\end{bmatrix}_{3 \times 2}

,

C_{2 \times 3} =

\begin{bmatrix}

c_{11} & c_{12} & c_{13} \\

c_{21} & c_{22} & c_{23} \\

\end{bmatrix}_{2 \times 3}

\end{equation}

$$

Here, $A \times C$ is

$$

\begin{align}

A_{3 \times 2} \times C_{2 \times 3}=&

\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22} \\

a_{31} & a_{32}

\end{bmatrix}_{3 \times 2}

\times

\begin{bmatrix}

c_{11} & c_{12} & c_{13} \\

c_{21} & c_{22} & c_{23}

\end{bmatrix}_{2 \times 3} \\

=&

\begin{bmatrix}

a_{11} c_{11}+a_{12} c_{21} & a_{11} c_{12}+a_{12} c_{22} & a_{11} c_{13}+a_{12} c_{23} \\

a_{21} c_{11}+a_{22} c_{21} & a_{21} c_{12}+a_{22} c_{22} & a_{21} c_{13}+a_{22} c_{23} \\

a_{31} c_{11}+a_{32} c_{21} & a_{31} c_{12}+a_{32} c_{22} & a_{31} c_{13}+a_{32} c_{23}

\end{bmatrix}_{3 \times 3}

\end{align}

$$
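The same computation can be sketched numerically: a $3 \times 2$ matrix times a $2 \times 3$ matrix yields a $3 \times 3$ result, exactly as the dimension subscripts above indicate. The numeric entries are arbitrary stand-ins:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # 3 x 2
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])   # 2 x 3

# Inner dimensions match (2 == 2), so the product is defined
P = A @ C
print(P.shape)  # (3, 3): rows of A, columns of C
```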

So in general, for $X_{r_x \times c_x} \times Y_{r_y \times c_y}$, we have two important things to remember:

- For conformability in matrix multiplication, $c_x=r_y$, or the columns in the first operand must be equal to the rows of the second operand.
- The result will be of dimension $r_x \times c_y$, or of dimensions equal to the rows of the first operand and columns equal to columns of the second operand.

Given these facts, you should convince yourself that matrix multiplication is not generally commutative: the relationship $X \times Y = Y \times X$ does *not* hold in all cases. For this reason, we will always be very explicit about whether we are pre-multiplying ($X \times Y$) or post-multiplying ($Y \times X$).
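You can convince yourself of this non-commutativity with a small numeric example (values chosen arbitrarily): even for two square matrices of the same size, where both orderings are defined, the products differ.

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
Y = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Both X @ Y and Y @ X are defined (both 2x2), yet the results differ
print(X @ Y)  # pre-multiplying Y by X
print(Y @ X)  # post-multiplying Y by X
print(np.array_equal(X @ Y, Y @ X))  # False
```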

For more information on this topic, see this wiki page on matrix multiplication.

The term matrix division is actually a misnomer. To divide in a matrix algebra world, we first need to invert the matrix. It is useful to consider the analogous case in a scalar world. Suppose we want to divide $f$ by $g$. We could do this in two different ways:

$$

\begin{equation}

\frac{f}{g}=f \times g^{-1}.

\end{equation}

$$

These are equivalent ways of solving this problem in a scalar world. The second one requires two steps: first we invert $g$, and then we multiply $f$ by $g^{-1}$. In a matrix world, we need to think in terms of this second approach: first we invert the matrix, and then we pre- or post-multiply depending on the exact situation we encounter (this is intentionally vague for now).

As before, consider the square $2 \times 2$ matrix $A=\bigl[ \begin{smallmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{smallmatrix} \bigr]$. Let the inverse of matrix $A$ (denoted as $A^{-1}$) be

$$

\begin{equation}

A^{-1}=\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22}

\end{bmatrix}^{-1}=\frac{1}{a_{11}a_{22}-a_{12}a_{21}} \begin{bmatrix}

a_{22} & -a_{12} \\

-a_{21} & a_{11}

\end{bmatrix}

\end{equation}

$$

The inverted matrix $A^{-1}$ has a useful property:

$$

\begin{equation}

A \times A^{-1}=A^{-1} \times A=I

\end{equation}

$$

where $I$, the identity matrix (the matrix equivalent of the scalar value 1), is

$$

\begin{equation}

I_{2 \times 2}=\begin{bmatrix}

1 & 0 \\

0 & 1

\end{bmatrix}

\end{equation}

$$

Furthermore, $A \times I = A$ and $I \times A = A$.

An important feature of matrix inversion is that (in the $2 \times 2$ case) it is undefined if $a_{11}a_{22}-a_{12}a_{21}=0$: if this expression equals zero, the inverse of $A$ does not exist. If it is very close to zero, an inverse may exist, but $A^{-1}$ may be poorly conditioned, meaning it is prone to rounding error and is likely unreliable computationally. The term $a_{11}a_{22}-a_{12}a_{21}$ is the determinant of matrix $A$, and for square matrices of size greater than $2 \times 2$, a determinant equal to zero indicates that you have a problem with your data matrix (some columns are linearly dependent on other columns). The inverse of matrix $A$ exists if $A$ is square and of full rank (i.e., the columns of $A$ are not linear combinations of other columns of $A$).
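A quick numeric sketch of these ideas: NumPy's `linalg` routines compute the determinant and the inverse, and multiplying $A$ by $A^{-1}$ recovers the identity matrix (up to floating-point rounding, hence `allclose` rather than exact equality). The entries below are arbitrary values with a nonzero determinant:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det = np.linalg.det(A)      # a11*a22 - a12*a21 = 24 - 14 = 10
assert abs(det) > 1e-12     # the inverse exists only when the determinant is nonzero

A_inv = np.linalg.inv(A)
I = A @ A_inv               # should recover the identity matrix
print(np.allclose(I, np.eye(2)))  # True
```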

For more information on this topic, see this wiki page on inverting matrices.

At times it is useful to transpose a matrix for conformability: that is, in order to matrix divide or multiply, we need to switch the row and column dimensions of a matrix. Consider the matrix

$$

\begin{equation}

A_{3 \times 2}=\begin{bmatrix}

a_{11} & a_{12} \\

a_{21} & a_{22} \\

a_{31} & a_{32}

\end{bmatrix}_{3 \times 2}

\end{equation}

$$

The transpose of A (denoted as $A^{\prime}$) is

$$

\begin{equation}

A^{\prime}=\begin{bmatrix}

a_{11} & a_{21} & a_{31} \\

a_{12} & a_{22} & a_{32} \\

\end{bmatrix}_{2 \times 3}

\end{equation}

$$

An important property of transposition concerns the transpose of a product of two matrices. Let matrix $A$ be of dimension $N \times M$ and let $B$ be of dimension $M \times P$. Then

$$

\begin{equation}

(AB)^{\prime}=B^{\prime}A^{\prime}

\end{equation}

$$
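This property can be checked numerically: in NumPy, `.T` gives the transpose, and transposing the product matches multiplying the transposes in reverse order. The numeric entries are arbitrary stand-ins:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])        # N x M = 3 x 2
B = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])   # M x P = 2 x 3

# (AB)' = B'A': both sides are P x N = 3 x 3
lhs = (A @ B).T
rhs = B.T @ A.T
print(np.allclose(lhs, rhs))  # True
```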

For more information, see this wiki article on matrix transposition.

We really only use this condition in class: for vectors $\mathbf{x}$ and $\mathbf{y}$ and the product $M=\mathbf{x}^{\prime}\mathbf{y}$, $\frac{\partial{M}}{\partial{\mathbf{y}}}=\mathbf{x}$.
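This derivative rule can be sketched with a finite-difference check (an illustration only, with arbitrary numeric vectors): perturbing each element of $\mathbf{y}$ shows that the gradient of the scalar $M=\mathbf{x}^{\prime}\mathbf{y}$ with respect to $\mathbf{y}$ is exactly $\mathbf{x}$.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

# M = x'y is a scalar
def M(y):
    return x @ y

# Central finite-difference approximation of each partial derivative dM/dy_i
eps = 1e-6
grad = np.array([(M(y + eps * e) - M(y - eps * e)) / (2 * eps)
                 for e in np.eye(3)])
print(np.allclose(grad, x))  # True: the gradient is x itself
```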