SPECIAL NOTE

The material in this section of the lecture notes will be used in quiz 3; however, the midterm and final exams will not target detailed knowledge of this section. The exams will, however, require the practical knowledge of linear algebra needed for solving problems similar to the tutorial problems.

📖 Elements of linear algebra#


Important note

This lecture covers the fundamental concepts of linear algebra and contains more material than can be covered in a single lecture

  • It is strongly advised to study this lecture note in greater detail in preparation for the midterm and final exams

  • I also encourage you to watch some or all of the videos in the 3Blue1Brown: Essence of linear algebra series in your own time to get a deeper understanding of the linear algebra concepts through visualizations and excellent explanations

The lecture covers the following main concepts:

  1. Vector space

  2. Linear combination

  3. Span

  4. Linear independence

  5. Basis and dimension

  6. Matrices as linear transformations

  7. Column spaces and null space

  8. Rank of a matrix

  9. Singular matrices

  10. Matrix determinant

  11. Eigenvectors and eigenvalues

  12. Diagonalization of matrices

See the references at the bottom of this note

The midterm and final exams may include some, but not much, of the material from this lecture because it is mainly left for self-study. The open book quiz, however, will contain questions on this material.

Motivating example#

Linear algebra is most useful in economics for solving systems of linear equations

Definition

A general system of linear equations can be written as

\[\begin{split} \begin{array}{c} a_{11} x_1 + a_{12} x_2 + \cdots + a_{1K} x_K = b_1 \\ a_{21} x_1 + a_{22} x_2 + \cdots + a_{2K} x_K = b_2 \\ \vdots \\ a_{N1} x_1 + a_{N2} x_2 + \cdots + a_{NK} x_K = b_N \end{array} \end{split}\]

More often we write the system in matrix form

\[\begin{split} \left( \begin{array}{cccc} a_{11} & a_{12} & \cdots & a_{1K} \\ a_{21} & a_{22} & \cdots & a_{2K} \\ \vdots & \vdots & & \vdots \\ a_{N1} & a_{N2} & \cdots & a_{NK} \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_K \end{array} \right) = \left( \begin{array}{c} b_1 \\ b_2 \\ \vdots \\ b_N \end{array} \right) % \end{split}\]
\[ % A x = b % \]

Let's solve this system on a computer:

import numpy as np
from scipy.linalg import solve
A = [[0, 2, 4],
     [1, 4, 8],
     [0, 3, 7]]
b = (1, 2, 0)
A, b = np.asarray(A), np.asarray(b)
x = solve(A, b)
print(f'Solution is x={x}')
Solution is x=[-1.77635684e-15  3.50000000e+00 -1.50000000e+00]

This tells us that the solution is \(x = (0, 3.5, -1.5)\) (the tiny first component in the output is just floating-point error)

That is,

\[\begin{split} % x = \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \end{array} \right) = \left( \begin{array}{c} 0 \\ 3.5 \\ -1.5 \end{array} \right) % \end{split}\]

The usual question: if computers can solve such systems so easily, why do we need to study the theory at all?

But now let's try this similar-looking problem

Question: what changed?

import numpy as np
from scipy.linalg import solve
A = [[0, 2, 4],
     [1, 4, 8],
     [0, 3, 6]]
b = (1, 2, 0)
A, b = np.asarray(A), np.asarray(b)
x = solve(A, b)
print(f'Solution is x={x}')
Solution is x=[ 0.00000000e+00 -9.00719925e+15  4.50359963e+15]
/var/folders/5d/cjhpvdlx7lg8zt1ypw5fcyy40000gn/T/ipykernel_31662/3978225594.py:8: LinAlgWarning: Ill-conditioned matrix (rcond=4.11194e-18): result may not be accurate.
  x=solve(A, b)

This is the output that we get: LinAlgWarning: Ill-conditioned matrix

  • What does this mean?

  • How can we fix it?

We still need to understand the concepts after all!
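
To preview where we are heading: the second matrix is in fact singular. A quick numerical check (using NumPy routines that are discussed later in this lecture) already reveals the problem:

import numpy as np
A = np.array([[0, 2, 4],
              [1, 4, 8],
              [0, 3, 6]])
# the third column is exactly twice the second, so the columns are
# linearly dependent: the rank is 2 rather than 3 and the determinant is 0
print(np.linalg.matrix_rank(A))   # 2
print(np.linalg.det(A))           # 0.0 (up to floating point error)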

Vector Space#

Definition

A field (of numbers) is a set \(\mathbb{F}\) of numbers with the property that if \(a,b \in \mathbb{F}\), then \(a+b\), \(a-b\), \(ab\) and \(a/b\) (provided that \(b \ne 0\)) are also in \(\mathbb{F}\).

Example

\(\mathbb{Q}\) and \(\mathbb{R}\) are fields, but \(\mathbb{N}\) and \(\mathbb{Z}\) are not (why?)

Definition

A vector space over a field \(\mathbb{F}\) is a set \(V\) of objects called vectors, together with two operations:

  1. vector addition that takes two vectors \(v,w \in V\) and produces a vector \(v+w \in V\), and

  2. scalar multiplication that takes a scalar \(\alpha \in \mathbb{F}\) and a vector \(v \in V\) and produces a vector \(\alpha v \in V\),

which satisfy the following properties:

  1. Associativity of vector addition: \((u+v)+w = u+ (v+w)\) for all \(u,v,w \in V\)

  2. Existence of a zero vector \({\bf 0}\) such that \(v + {\bf 0} = v\) for all \(v \in V\)

  3. Existence of negatives: for each \(v \in V\) there exists a vector \(-v \in V\) such that \(v + (-v) = {\bf 0}\)

  4. Associativity of multiplication: \(\alpha (\beta v) = (\alpha \beta) v\) for all \(\alpha, \beta \in \mathbb{F}\) and \(v \in V\)

  5. Distributivity: \((\alpha + \beta) v = \alpha v + \beta v\) and \(\alpha(v+w) = \alpha v + \alpha w\) for all \(\alpha, \beta \in \mathbb{F}\) and \(v, w \in V\)

  6. Unitarity: \(1u = u\) for all \(u \in V\)

  • may seem like a bunch of abstract nonsense, but it actually describes something intuitive

  • it just puts what we already know about operations with vectors in \(\mathbb{R}^N\) into a formal definition

Example

\[\begin{split} % \left( \begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array} \right) + \left( \begin{array}{c} 2 \\ 4 \\ 6 \\ 8 \end{array} \right) = \left( \begin{array}{c} 3 \\ 6 \\ 9 \\ 12 \end{array} \right) % \end{split}\]

Example

\[\begin{split} % -1 \left( \begin{array}{c} 1 \\ 2 \\ 3 \\ 4 \end{array} \right) = \left( \begin{array}{c} -1 \\ -2 \\ -3 \\ -4 \end{array} \right) % \end{split}\]
_images/vec_scalar.png

Fig. 52 Scalar multiplication by a negative number#
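
These operations translate directly to NumPy arrays; here is a quick check of the two examples above:

import numpy as np
x = np.array([1, 2, 3, 4])
y = np.array([2, 4, 6, 8])
print(x + y)     # [ 3  6  9 12]  -- vector addition
print(-1 * x)    # [-1 -2 -3 -4]  -- scalar multiplication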

Dot product and norm#

Definition

The dot product of two vectors \(x, y \in \mathbb{R}^N\) is

\[ % x \cdot y = x^T y = \sum_{n=1}^N x_n y_n % \]

The notation \(\square^T\) flips the vector from a column vector to a row vector; this will make sense when we later talk about matrices, of which vectors are a special case.

Fact: Properties of dot product

For any \(\alpha, \beta \in \mathbb{R}\) and any \(x, y, z \in \mathbb{R}^N\), the following statements are true:

  1. \(x^T y = y^T x\)

  2. \((\alpha x)^T (\beta y) = \alpha \beta (x^T y)\)

  3. \(x^T (y + z) = x^T y + x^T z\)

Definition

The (Euclidean) norm of \(x \in \mathbb{R}^N\) is defined as

\[ % \| x \| = \sqrt{x^T x } = \left( \sum_{n=1}^N x_n^2 \right)^{1/2} % \]
  • \(\| x \|\) represents the "length" of \(x\)

  • \(\| x - y \|\) represents the distance between \(x\) and \(y\)

Fact

For any \(\alpha \in \mathbb{R}\) and any \(x, y \in \mathbb{R}^N\), the following statements are true:

  1. \(\| x \| \geq 0\) and \(\| x \| = 0\) if and only if \(x = 0\)

  2. \(\| \alpha x \| = |\alpha| \| x \|\)

  3. \(\| x + y \| \leq \| x \| + \| y \|\) (triangle inequality)

  4. \(| x^T y | \leq \| x \| \| y \|\) (Cauchy-Schwarz inequality)
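
These properties are easy to verify numerically. A small NumPy check of the dot product, the norm, the triangle inequality and the Cauchy-Schwarz inequality (an illustration only, with arbitrarily chosen vectors):

import numpy as np
x = np.array([1.0, 2.0, 3.0])
y = np.array([-2.0, 0.5, 1.0])
print(x @ y)                               # dot product x^T y
print(np.linalg.norm(x), np.sqrt(x @ x))   # two ways to compute ||x||
print(np.linalg.norm(x + y) <= np.linalg.norm(x) + np.linalg.norm(y))   # True (triangle inequality)
print(abs(x @ y) <= np.linalg.norm(x) * np.linalg.norm(y))              # True (Cauchy-Schwarz)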

Linear combinations#

Definition

A linear combination of vectors \(x_1,\ldots, x_K\) in \(\mathbb{R}^N\) is a vector

\[ % y = \sum_{k=1}^K \alpha_k x_k = \alpha_1 x_1 + \cdots + \alpha_K x_K % \]

where \(\alpha_1,\ldots, \alpha_K\) are scalars

Example

\[\begin{split} % 0.5 \left( \begin{array}{c} 6.0 \\ 2.0 \\ 8.0 \end{array} \right) + 3.0 \left( \begin{array}{c} 0 \\ 1.0 \\ -1.0 \end{array} \right) = \left( \begin{array}{c} 3.0 \\ 4.0 \\ 1.0 \end{array} \right) % \end{split}\]
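
The worked example above is a one-liner in NumPy:

import numpy as np
x1 = np.array([6.0, 2.0, 8.0])
x2 = np.array([0.0, 1.0, -1.0])
print(0.5 * x1 + 3.0 * x2)   # [3. 4. 1.]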

Fact

Inner products of linear combinations satisfy

\[ % \left( \sum_{k=1}^K \alpha_k x_k \right)^T \left( \sum_{j=1}^J \beta_j y_j \right) = \sum_{k=1}^K \sum_{j=1}^J \alpha_k \beta_j x_k^T y_j % \]

Span#

Let \(X \subset \mathbb{R}^N\) be any nonempty set (of points in \(\mathbb{R}^N\), i.e. vectors)

Definition

The set of all possible linear combinations of elements of \(X\) is called the span of \(X\), denoted by \(\mathrm{span}(X)\)

For finite \(X = \{x_1,\ldots, x_K\}\) the span can be expressed as

\[ \mathrm{span}(X)= \left\{ \text{ all } \sum_{k=1}^K \alpha_k x_k \text{ such that } (\alpha_1,\ldots, \alpha_K) \in \mathbb{R}^K \right\} \]

Example

Let's start with the span of a singleton

Let \(X = \{ (1,1) \} \subset \mathbb{R}^2\)

The span of \(X\) is all vectors of the form

\[\begin{split} % \alpha \left( \begin{array}{c} 1 \\ 1 \end{array} \right) = \left( \begin{array}{c} \alpha \\ \alpha \end{array} \right) \quad \text{ with } \quad \alpha \in \mathbb{R} % \end{split}\]

This constitutes a line in the plane that passes through

  • the vector \((1,1)\) (set \(\alpha = 1\))

  • the origin \((0,0)\) (set \(\alpha = 0\))

_images/span_of_one_vec.png

Fig. 53 The span of \((1,1)\) in \(\mathbb{R}^2\)#

Example

Let \(x_1 = (3, 4, 2)\) and let \(x_2 = (3, -4, 0.4)\)

By definition, the span is all vectors of the form

\[\begin{split} % y = \alpha \left( \begin{array}{c} 3 \\ 4 \\ 2 \end{array} \right) + \beta \left( \begin{array}{c} 3 \\ -4 \\ 0.4 \end{array} \right) \quad \text{where} \quad \alpha, \beta \in \mathbb{R} % \end{split}\]

It turns out to be a plane that passes through

  • the vector \(x_1\)

  • the vector \(x_2\)

  • the origin \(0\)

_images/span_plane.png

Fig. 54 Span of \(x_1, x_2\)#

Definition

Consider the vectors \(\{e_1, \ldots, e_N\} \subset \mathbb{R}^N\), where

\[\begin{split} % e_1 = \left( \begin{array}{c} 1 \\ 0 \\ \vdots \\ 0 \end{array} \right), \quad e_2 = \left( \begin{array}{c} 0 \\ 1 \\ \vdots \\ 0 \end{array} \right), \; \cdots, \; e_N = \left( \begin{array}{c} 0 \\ 0 \\ \vdots \\ 1 \end{array} \right) % \end{split}\]

That is, \(e_n\) has all zeros except for a \(1\) as the \(n\)-th element

Vectors \(e_1, \ldots, e_N\) are called the canonical basis vectors of \(\mathbb{R}^N\)

_images/vec_canon.png

Fig. 55 Canonical basis vectors in \(\mathbb{R}^2\)#

Fact

The span of \(\{e_1, \ldots, e_N\}\) is equal to all of \(\mathbb{R}^N\)

_images/vec_canon_x.png

Fig. 56 Canonical basis vectors in \(\mathbb{R}^2\)#

Linear subspace#

Definition

A nonempty set \(S \subset \mathbb{R}^N\) is called a linear subspace of \(\mathbb{R}^N\) if

\[ % x, y \in S \; \text{ and } \;\alpha, \beta \in \mathbb{R} \quad \implies \quad \alpha x + \beta y \in S % \]

In other words, \(S \subset \mathbb{R}^N\) is "closed" under vector addition and scalar multiplication

Note: Sometimes we just say subspace and drop "linear"

Example

\(\mathbb{R}^N\) itself is a linear subspace of \(\mathbb{R}^N\)

Fact

Fix \(a \in \mathbb{R}^N\) and let \(A = \{ x \in \mathbb{R}^N \colon a^T x = 0 \}\)

The set \(A\) is a linear subspace of \(\mathbb{R}^N\)

Linear independence#

Definition

A nonempty collection of vectors \(X = \{x_1,\ldots, x_K\} \subset \mathbb{R}^N\) is called linearly independent if

\[ % \sum_{k=1}^K \alpha_k x_k = 0 \; \implies \; \alpha_1 = \cdots = \alpha_K = 0 % \]
  • linear independence of a set of vectors determines how large a space they span

  • loosely speaking, linearly independent sets span large spaces

Example

Let \(x = (1, 2)\) and \(y = (-5, 3)\)

The set \(X = \{x, y\}\) is linearly independent in \(\mathbb{R}^2\)

Indeed, suppose \(\alpha_1\) and \(\alpha_2\) are scalars with

\[\begin{split} % \alpha_1 \left( \begin{array}{c} 1 \\ 2 \end{array} \right) + \alpha_2 \left( \begin{array}{c} -5 \\ 3 \end{array} \right) = 0 % \end{split}\]

Equivalently,

\[\begin{split} % \alpha_1 = 5 \alpha_2 \\ 2 \alpha_1 = -3 \alpha_2 % \end{split}\]

Then \(2(5\alpha_2) = 10 \alpha_2 = -3 \alpha_2\), implying \(\alpha_2 = 0\) and hence \(\alpha_1 = 0\)
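
We can also confirm this numerically: stack the vectors as columns of a matrix and check that its rank equals the number of vectors (using np.linalg.matrix_rank, which is discussed in the Rank section below):

import numpy as np
X = np.column_stack(([1, 2], [-5, 3]))   # x and y as columns
print(np.linalg.matrix_rank(X))          # 2 = number of vectors, so {x, y} is linearly independent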

Fact

The set of canonical basis vectors \(\{e_1, \ldots, e_N\}\) is linearly independent in \(\mathbb{R}^N\)

Fact

Take \(X = \{x_1,\ldots, x_K\} \subset \mathbb{R}^N\). For \(K > 1\) all of the following statements are equivalent

  1. \(X\) is linearly independent

  2. No \(x_i \in X\) can be written as a linear combination of the others

  3. \(X_0 \subsetneq X \implies \mathrm{span}(X_0) \subsetneq \mathrm{span}(X)\) (dropping any vector strictly shrinks the span)

Example

As another visual example of linear independence, consider the pair

\[\begin{split} % x_1 = \begin{pmatrix} 3 \\ 4 \\ 2 \end{pmatrix} \quad \text{and} \quad x_2 = \begin{pmatrix} 3 \\ -4 \\ 1 \end{pmatrix} % \end{split}\]

The span of this pair is a plane in \(\mathbb{R}^3\)

But if we drop either one, the span reduces to a line

_images/nonredundant1.png

Fig. 57 The span of \(\{x_1, x_2\}\) is a plane#

_images/nonredundant2.png

Fig. 58 The span of \(\{x_1\}\) alone is a line#

_images/nonredundant3.png

Fig. 59 The span of \(\{x_2\}\) alone is a line#

Fact

If \(X\) is linearly independent, then \(X\) does not contain \(0\)

Fact

If \(X\) is linearly independent, then every subset of \(X\) is linearly independent

Fact

If \(X= \{x_1,\ldots, x_K\} \subset \mathbb{R}^N\) is linearly independent and \(z\) is an \(N\)-vector not in \(\mathrm{span}(X)\), then \(X \cup \{ z \}\) is linearly independent

Fact

If \(X = \{x_1,\ldots, x_K\} \subset \mathbb{R}^N\) is linearly independent and \(y \in \mathbb{R}^N\), then there is at most one set of scalars \(\alpha_1,\ldots,\alpha_K\) such that \(y = \sum_{k=1}^K \alpha_k x_k\)

Here's one of the most fundamental results in linear algebra

Exchange Lemma

Let \(S\) be a linear subspace of \(\mathbb{R}^N\) that is spanned by \(K\) vectors.
Then any linearly independent subset of \(S\) has at most \(K\) vectors

Example

If \(X = \{x_1, x_2, x_3\} \subset \mathbb{R}^2\), then \(X\) is linearly dependent

  • because \(\mathbb{R}^2\) is spanned by the two vectors \(e_1, e_2\)

_images/vecs.png

Fig. 60 Must be linearly dependent#

Fact

Let \(X = \{ x_1, \ldots, x_N \}\) be any \(N\) vectors in \(\mathbb{R}^N\)

\(\mathrm{span}(X) = \mathbb{R}^N\) if and only if \(X\) is linearly independent

Example

The vectors \(x = (1, 2)\) and \(y = (-5, 3)\) span \(\mathbb{R}^2\)

Bases and dimension#

Definition

Let \(S\) be a linear subspace of \(\mathbb{R}^N\)

A set of vectors \(B = \{b_1, \ldots, b_K\} \subset S\) is called a basis of \(S\) if

  1. \(B\) is linearly independent

  2. \(\mathrm{span}(B) = S\)

Example

Canonical basis vectors form a basis of \(\mathbb{R}^N\)

Example

Recall the plane

\[ % P = \{ (x_1, x_2, 0) \in \mathbb{R}^3 \colon x_1, x_2 \in \mathbb{R} \} % \]

We showed before that \(\mathrm{span}\{e_1, e_2\} = P\) for

\[\begin{split} % e_1 = \left( \begin{array}{c} 1 \\ 0 \\ 0 \end{array} \right), \quad e_2 = \left( \begin{array}{c} 0 \\ 1 \\ 0 \end{array} \right) % \end{split}\]

Moreover, \(\{e_1, e_2\}\) is linearly independent (why?)

Hence \(\{e_1, e_2\}\) is a basis for \(P\)

_images/flat_plane_e_vecs.png

Fig. 61 The pair \(\{e_1, e_2\}\) form a basis for \(P\)#

What are the implications of \(B\) being a basis of \(S\)?

  • every element of \(S\) can be represented uniquely as a linear combination of the elements of the smaller set \(B\)

In more detail:

  • \(B\) spans \(S\) and, by linear independence, every element is needed to span \(S\), so it is a "minimal" spanning set

  • Since \(B\) spans \(S\), every \(y\) in \(S\) can be represented as a linear combination of the basis vectors

  • By independence, this representation is unique
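
Finding this unique representation is just solving a linear system: put the basis vectors into the columns of a matrix and solve for the coefficients. A small sketch using the basis \(\{(1,2), (-5,3)\}\) of \(\mathbb{R}^2\) from the earlier example and an arbitrarily chosen \(y\):

import numpy as np
from scipy.linalg import solve
B = np.column_stack(([1, 2], [-5, 3]))   # basis vectors as columns
y = np.array([4.0, -5.0])                # vector to represent
alpha = solve(B, y)                      # the unique coordinates of y in this basis
print(alpha)                             # [-1. -1.]
print(B @ alpha)                         # [ 4. -5.]  -- reconstructs y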

Fact

If \(B \subset \mathbb{R}^N\) is linearly independent, then \(B\) is a basis of \(\mathrm{span}(B)\)

Fact: Fundamental Properties of Bases

If \(S\) is a linear subspace of \(\mathbb{R}^N\) distinct from \(\{0\}\), then

  1. \(S\) has at least one basis, and

  2. every basis of \(S\) has the same number of elements

Definition

Let \(S\) be a linear subspace of \(\mathbb{R}^N\)

We now know that every basis of \(S\) has the same number of elements

This common number is called the dimension of \(S\)

Example

\(\mathbb{R}^N\) is \(N\) dimensional because the \(N\) canonical basis vectors form a basis

Example

\(P = \{ (x_1, x_2, 0) \in \mathbb{R}^3 \colon x_1, x_2 \in \mathbb{R} \}\) is two-dimensional because the first two canonical basis vectors of \(\mathbb{R}^3\) form a basis

Linear Mappings#

  • Linear functions are in one-to-one correspondence with matrices

  • Nonlinear functions are closely connected to linear functions through Taylor approximation

Definition

A function \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is called linear if

\[ % T(\alpha x + \beta y) = \alpha Tx + \beta Ty \qquad \forall \, x, y \in \mathbb{R}^K, \; \forall \, \alpha, \beta \in \mathbb{R} % \]
  • Linear functions are often written with upper case letters

  • Parentheses around arguments are typically omitted when convenient (\(Tx\) rather than \(T(x)\))

Example

\(T \colon \mathbb{R} \to \mathbb{R}\) defined by \(Tx = 2x\) is linear

Example

The function \(f \colon \mathbb{R} \to \mathbb{R}\) defined by \(f(x) = x^2\) is nonlinear

Example

Given constants \(c_1\) and \(c_2\), the function \(T \colon \mathbb{R}^2 \to \mathbb{R}\) defined by

\[ % T x = T (x_1, x_2) = c_1 x_1 + c_2 x_2 % \]

is linear

_images/linfunc.png

Fig. 62 The graph of \(T x = c_1 x_1 + c_2 x_2\) is a plane through the origin#

Note

Thinking of linear functions as those whose graph is a straight line is not correct!

Example

Function \(f \colon \mathbb{R} \to \mathbb{R}\) defined by \(f(x) = 1 + 2x\) is nonlinear

This kind of function is called an affine function

Fact

If \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is a linear mapping and \(x_1,\ldots, x_J\) are vectors in \(\mathbb{R}^K\), then for any linear combination we have

\[ T \left[ \alpha_1 x_1 + \cdots + \alpha_J x_J \right] = \alpha_1 T x_1 + \cdots + \alpha_J T x_J \]

Fact

If \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is a linear mapping, then

\[ % \mathrm{rng}(T) = \mathrm{span}(V) \quad \text{where} \quad V = \{Te_1, \ldots, Te_K\} % \]

Here \(e_k\) is the \(k\)-th canonical basis vector in \(\mathbb{R}^K\)

  • note that \(V = \{Te_1, \ldots, Te_K\}\) is the set of columns of the matrix representation of \(T\)

Example

Let \(T \colon \mathbb{R}^2 \to \mathbb{R}^2\) be defined by

\[\begin{split} % Tx = T(x_1, x_2) = x_1 \begin{pmatrix} 1 \\ 2 \end{pmatrix} + x_2 \begin{pmatrix} 0 \\ -2 \end{pmatrix} % \end{split}\]

Then

\[\begin{split} % Te_1 = \begin{pmatrix} 1 \\ 2 \end{pmatrix} \quad \text{and} \quad Te_2 = \begin{pmatrix} 0 \\ -2 \end{pmatrix} % \end{split}\]

Exercise: Show that \(V = \{Te_1, Te_2\}\) is linearly independent

We conclude that the range of \(T\) is all of \(\mathbb{R}^2\) (why?)

Definition

The null space or kernel of linear mapping \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) is

\[ % \mathrm{kernel}(T) = \{ x \in \mathbb{R}^K \colon Tx = 0\} % \]
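
Numerically, an orthonormal basis for the kernel can be computed with scipy.linalg.null_space (available in recent versions of SciPy). For the singular matrix from the introduction:

import numpy as np
from scipy.linalg import null_space
A = np.array([[0., 2., 4.],
              [1., 4., 8.],
              [0., 3., 6.]])
Z = null_space(A)     # basis for the kernel of x -> A x
print(Z)              # a single column, proportional to (0, 2, -1)
print(A @ Z)          # numerically zero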

Linearity and Bijections#

Recall that an arbitrary function can be

  • one-to-one (injections)

  • onto (surjections)

  • both (bijections)

  • neither

For linear functions from \(\mathbb{R}^N\) to \(\mathbb{R}^N\), the first three are all equivalent!

In particular,

\[ % \text{onto } \iff \text{ one-to-one } \iff \text{ bijection} % \]

Fact

If \(T\) is a linear function from \(\mathbb{R}^N\) to \(\mathbb{R}^N\) then all of the following are equivalent:

  1. \(T\) is a bijection

  2. \(T\) is onto/surjective

  3. \(T\) is one-to-one/injective

  4. \(\mathrm{kernel}(T) = \{ 0 \}\)

  5. The set of vectors \(V = \{Te_1, \ldots, Te_N\}\) is linearly independent

Definition

If any one of the above equivalent conditions is true, then \(T\) is called nonsingular

Fact

If \(T \colon \mathbb{R}^N \to \mathbb{R}^N\) is nonsingular then so is \(T^{-1}\).

Mappings Across Different Dimensions#

Remember that the above results apply to mappings from \(\mathbb{R}^N\) to \(\mathbb{R}^N\)

Things change when we look at linear mappings across dimensions

The general rules for linear mappings are

  • Mappings from lower to higher dimensions cannot be onto

  • Mappings from higher to lower dimensions cannot be one-to-one

  • In either case they cannot be bijections

Fact

For a linear mapping \(T\) from \(\mathbb{R}^K \to \mathbb{R}^N\), the following statements are true:

  1. If \(K < N\) then \(T\) is not onto

  2. If \(K > N\) then \(T\) is not one-to-one

Example

The cost function \(c(k, \ell) = rk + w\ell\) maps \(\mathbb{R}^2\) into \(\mathbb{R}\), so it cannot be one-to-one

_images/cost_min_2.png

Matrices as Mappings#

Any \(N \times K\) matrix \(A\) can be thought of as a function \(x \mapsto A x\)

  • In \(A x\) the \(x\) is understood to be a column vector

It turns out that every such mapping is linear!

To see this, fix an \(N \times K\) matrix \(A\) and let \(T\) be defined by

\[ % T \colon \mathbb{R}^K \to \mathbb{R}^N, \qquad Tx = A x % \]

Pick any \(x\), \(y\) in \(\mathbb{R}^K\), and any scalars \(\alpha\) and \(\beta\)

The rules of matrix arithmetic tell us that

\[ % T(\alpha x + \beta y) = A (\alpha x + \beta y) = \alpha A x + \beta A y =: \alpha Tx + \beta Ty % \]

So matrices make linear functions

How about examples of linear functions that don't involve matrices?

There are none!

Fact

If \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) then

\[ T \text{ is linear } \; \iff \; \exists \; N \times K \text{ matrix } A \text{ such that } Tx = A x, \;\forall \, x \in \mathbb{R}^K \]
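
As a quick numerical illustration, take the matrix whose columns are \(Te_1 = (1,2)\) and \(Te_2 = (0,-2)\) from the example above and check the defining property of linearity for some arbitrary vectors and scalars:

import numpy as np
A = np.array([[1.0, 0.0],
              [2.0, -2.0]])
x, y = np.array([1.0, 1.0]), np.array([3.0, -1.0])
alpha, beta = 2.0, -0.5
print(np.allclose(A @ (alpha * x + beta * y),
                  alpha * (A @ x) + beta * (A @ y)))   # True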

Matrix Product as Composition#

Fact

Let

  • \(A\) be \(N \times K\) and \(B\) be \(K \times M\)

  • \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) be the linear mapping \(Tx = Ax\)

  • \(U \colon \mathbb{R}^M \to \mathbb{R}^K\) be the linear mapping \(Ux = Bx\)

The matrix product \(A B\) corresponds exactly to the composition of \(T\) and \(U\)

This helps us understand a few things

For example, let

  • \(A\) be \(N \times K\) and \(B\) be \(J \times M\)

  • \(T \colon \mathbb{R}^K \to \mathbb{R}^N\) be the linear mapping \(Tx = Ax\)

  • \(U \colon \mathbb{R}^M \to \mathbb{R}^J\) be the linear mapping \(Ux = Bx\)

Then \(A B\) is only defined when \(K = J\)

This is because \(A B\) corresponds to \(T \circ U\)

But for \(T \circ U\) to be well defined we need \(K = J\)

Then \(U\) maps \(\mathbb{R}^M\) to \(\mathbb{R}^K\) and \(T\) maps \(\mathbb{R}^K\) to \(\mathbb{R}^N\)
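
Numerically, the product and the composition agree, as a quick check with random conformable matrices shows:

import numpy as np
rng = np.random.default_rng(1234)
N, K, M = 3, 4, 2
A = rng.normal(size=(N, K))   # T maps R^K to R^N
B = rng.normal(size=(K, M))   # U maps R^M to R^K
x = rng.normal(size=M)
print(np.allclose((A @ B) @ x, A @ (B @ x)))   # True: (AB)x = A(Bx) = T(Ux)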

Column Space#

Definition

Let \(A\) be an \(N \times K\) matrix.
The column space of \(A\) is defined as the span of its columns

\[\begin{split} % \mathrm{span}(A) = \mathrm{span} \{ \mathrm{col}_1 (A), \ldots, \mathrm{col}_K(A) \} \\ = \text{all vectors of the form } \sum_{k=1}^K x_k \mathrm{col}_k(A) % \end{split}\]

Equivalently,

\[ % \mathrm{span}(A) = \{ Ax \colon x \in \mathbb{R}^K \} % \]

This is exactly the range of the associated linear mapping

\(T \colon \mathbb{R}^K \to \mathbb{R}^N\) defined by \(T x = A x\)

Example

If

\[\begin{split} % A = \begin{pmatrix} 1 & -5 \\ 2 & 3 \end{pmatrix} % \end{split}\]

then the span is all linear combinations

\[\begin{split} % x_1 \left( \begin{array}{c} 1 \\ 2 \end{array} \right) + x_2 \left( \begin{array}{c} -5 \\ 3 \end{array} \right) \quad \text{where} \quad (x_1, x_2) \in \mathbb{R}^2 % \end{split}\]

These columns are linearly independent (shown earlier)

Hence the column space is all of \(\mathbb{R}^2\) (why?)

Exercise: Show that the column space of any \(N \times K\) matrix is a linear subspace of \(\mathbb{R}^N\)

Rank#

Equivalent questions

  • How large is the range of the linear mapping \(T x = A x\)?

  • How large is the column space of \(A\)?

The obvious measure of size for a linear subspace is its dimension

Definition

The dimension of \(\mathrm{span}(A)\) is known as the rank of \(A\)

\[ % \mathrm{rank}(A) = \mathrm{dim}(\mathrm{span}(A)) % \]

Because \(\mathrm{span}(A)\) is the span of \(K\) vectors, we have

\[ % \mathrm{rank}(A) = \mathrm{dim}(\mathrm{span}(A)) \leq K % \]

Definition

\(A\) is said to have full column rank if

\[ % \mathrm{rank}(A) = \text{ number of columns of } A % \]

Fact

For any matrix \(A\), the following statements are equivalent:

  1. \(A\) is of full column rank

  2. The columns of \(A\) are linearly independent

  3. If \(A x = 0\), then \(x = 0\)

Exercise: Check this, recalling that

\[ % \dim(\mathrm{span}\{a_1, \ldots, a_K\}) = K \, \iff \, \{a_1, \ldots, a_K\} \text{ linearly independent} % \]
import numpy as np
A = [[2.0, 1.0],
     [6.5, 3.0]]    # columns are linearly independent
print(f'Rank = {np.linalg.matrix_rank(A)}')
A = [[2.0, 1.0],
     [6.0, 3.0]]    # second column is half the first, so the columns are dependent
print(f'Rank = {np.linalg.matrix_rank(A)}')
Rank = 2
Rank = 1

Systems of Linear Equations#

Let's look at solving linear equations such as \(A x = b\) where \(A\) is a square matrix

  • Take \(N \times N\) matrix \(A\) and \(N \times 1\) vector \(b\) as given

  • Search for an \(N \times 1\) solution \(x\)

But does such a solution exist? If so is it unique?

The best way to think about this is to consider the corresponding linear mapping

\[ % T \colon \mathbb{R}^N \to \mathbb{R}^N, \qquad Tx = A x % \]

Equivalent:

  1. \(A x = b\) has a unique solution \(x\) for any given \(b\)

  2. \(T x = b\) has a unique solution \(x\) for any given \(b\)

  3. \(T\) is a bijection

We already have conditions for linear mappings to be bijections

Just need to translate these into the matrix setting

Recall that \(T\) is called nonsingular if \(T\) is a linear bijection

Definition

We say that \(A\) is nonsingular if \(T \colon x \mapsto Ax\) is nonsingular

Equivalent:

  • \(x \mapsto A x\) is a bijection from \(\mathbb{R}^N\) to \(\mathbb{R}^N\)

We now list equivalent conditions for nonsingularity

Fact

Let \(A\) be an \(N \times N\) matrix.
All of the following conditions are equivalent:

  1. \(A\) is nonsingular

  2. The columns of \(A\) are linearly independent

  3. \(\mathrm{rank}(A) = N\)

  4. \(\mathrm{span}(A) = \mathbb{R}^N\)

  5. If \(A x = A y\), then \(x = y\)

  6. If \(A x = 0\), then \(x = 0\)

  7. For each \(b \in \mathbb{R}^N\), the equation \(A x = b\) has a solution

  8. For each \(b \in \mathbb{R}^N\), the equation \(A x = b\) has a unique solution

All equivalent ways of saying that \(Tx = A x\) is a bijection!

Example

For condition 5 the equivalence is:
if \(A x = A y\), then \(x = y\) \(\iff\) if \(T x = T y\), then \(x = y\) \(\iff\) \(T\) is one-to-one \(\iff\) \(T\) is a bijection, since \(T\) is a linear mapping from \(\mathbb{R}^N\) to \(\mathbb{R}^N\)

Example

For condition 6 the equivalence is:
if \(A x = 0\), then \(x = 0\) \(\iff\) \(\{x: Ax = 0\} = \{0\}\) \(\iff\) \(\{x: Tx = 0\} = \{0\}\) \(\iff\) \(\mathrm{kernel}(T)=\{0\}\) \(\iff\) \(T\) is a bijection, since \(T\) is a linear mapping from \(\mathbb{R}^N\) to \(\mathbb{R}^N\)

Example

For condition 7 the equivalence is:
for each \(b\in\mathbb{R}^N\), the equation \(Ax = b\) has a solution \(\iff\) every \(b\in\mathbb{R}^N\) has an \(x\) such that \(Ax = b\) \(\iff\) every \(b\in\mathbb{R}^N\) has an \(x\) such that \(Tx = b\) \(\iff\) \(T\) is onto/surjective \(\iff\) \(T\) is a bijection, since \(T\) is a linear mapping from \(\mathbb{R}^N\) to \(\mathbb{R}^N\)

Example

Now consider condition 2:

The columns of \(A\) are linearly independent.

Let \(e_j\) be the \(j\)-th canonical basis vector in \(\mathbb{R}^N\).

Observe that \(Ae_j = \mathrm{col}_j(A)\) \(\implies\) \(Te_j = \mathrm{col}_j(A)\) \(\implies\) \(V = \{T e_1, \ldots, Te_N\} =\) columns of \(A\), and \(V\) is linearly independent if and only if \(T\) is a bijection

Example

Consider a one good linear market system

\[\begin{split} % q = a - b p \qquad (\text{demand}) \\ q = c + d p \qquad (\text{supply}) % \end{split}\]

Treating \(q\) and \(p\) as the unknowns, let's write the system in matrix form as

\[\begin{split} % \begin{pmatrix} 1 & b \\ 1 & -d \end{pmatrix} \begin{pmatrix} q\\ p \end{pmatrix} = \begin{pmatrix} a \\ c \end{pmatrix} % \end{split}\]

A unique solution exists whenever the columns are linearly independent

  • so \((b, -d)\) is not a scalar multiple of \((1, 1)\) \(\iff\) \(b \ne -d\)

_images/not_multiple_of_one.png

Fig. 63 \((b, -d)\) is not a scalar multiple of \((1, 1)\)#
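
For concrete parameter values the equilibrium can be computed directly; the numbers below are purely illustrative:

import numpy as np
from scipy.linalg import solve
a, b, c, d = 10.0, 2.0, 1.0, 1.0        # hypothetical demand/supply parameters, with b != -d
A = np.array([[1.0, b],
              [1.0, -d]])
q, p = solve(A, np.array([a, c]))       # equilibrium quantity and price
print(f'q = {q}, p = {p}')              # q = 4.0, p = 3.0 (up to floating point)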

Example

Recall that in the introduction we tried to solve a system \(A x = b\) with

\[\begin{split} A = \left[\begin{array}{ccc} 0 & 2 & 4 \\ 1 & 4 & 8 \\ 0 & 3 & 6 \\ \end{array} \right] \end{split}\]

The problem is that \(A\) is singular (not nonsingular)

  • In particular, \(\mathrm{col}_3(A) = 2 \mathrm{col}_2(A)\)

Definition

Given square matrix \(A\), suppose \(\exists\) square matrix \(B\) such that

\[A B = B A = I\]

Then

  • \(B\) is called the inverse of \(A\), and written \(A^{-1}\)

  • \(A\) is called invertible

Fact

A square matrix \(A\) is nonsingular if and only if it is invertible

  • \(A^{-1}\) is just the matrix corresponding to the linear mapping \(T^{-1}\)

Fact

Given nonsingular \(N \times N\) matrix \(A\) and \(b \in \mathbb{R}^N\), the unique solution to \(A x = b\) is given by

\[ x_b = A^{-1} b \]

Fact

In the \(2 \times 2\) case, the inverse has the form

\[\begin{split} % \left( \begin{array}{cc} a & b \\ c & d \\ \end{array} \right)^{-1} = \frac{1}{ad - bc} \left( \begin{array}{cc} d & -b \\ -c & a \\ \end{array} \right) % \end{split}\]

Example

\[\begin{split} % A = \left( \begin{array}{cc} 1 & 2 \\ 1 & -1.5 \\ \end{array} \right) \quad \implies \quad A^{-1} = \frac{1}{-3.5} \left( \begin{array}{cc} -1.5 & -2 \\ -1 & 1 \\ \end{array} \right) % \end{split}\]
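
We can check both the \(2 \times 2\) formula and the fact that \(x = A^{-1}b\) solves \(Ax = b\) numerically:

import numpy as np
A = np.array([[1.0, 2.0],
              [1.0, -1.5]])
A_inv = np.linalg.inv(A)
formula = np.array([[-1.5, -2.0],
                    [-1.0, 1.0]]) / -3.5        # 1/(ad - bc) * [[d, -b], [-c, a]]
print(np.allclose(A_inv, formula))              # True
b = np.array([1.0, 2.0])
print(np.allclose(A_inv @ b, np.linalg.solve(A, b)))   # True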

Fact

If \(A\) is nonsingular and \(\alpha \ne 0\), then

  1. \(A^{-1}\) is nonsingular and \((A^{-1})^{-1} = A\)

  2. \(\alpha A\) is nonsingular and \((\alpha A)^{-1} = \alpha^{-1} A^{-1}\)

Fact

If \(A\) and \(B\) are \(N \times N\) and nonsingular then

  1. \(A B\) is also nonsingular

  2. \((A B)^{-1} = B^{-1} A^{-1}\)

Singular case#

Example

The matrix \(A\) with columns

\[\begin{split} % a_1 = \begin{pmatrix} 3 \\ 4 \\ 2 \end{pmatrix}, \quad a_2 = \begin{pmatrix} 3 \\ -4 \\ 1 \end{pmatrix} \quad \text{and} \quad a_3 = \begin{pmatrix} -3 \\ 4 \\ -1 \end{pmatrix} % \end{split}\]

is singular (\(a_3 = - a_2\))

Its column space \(\mathrm{span}(A)\) is just a plane in \(\mathbb{R}^3\)

Recall \(b \in \mathrm{span}(A)\)

  • \(\iff\) \(\exists \, x_1, \ldots, x_N\) such that \(\sum_{k=1}^N x_k \mathrm{col}_k(A) = b\)

  • \(\iff\) \(\exists \, x\) such that \(A x = b\)

Thus if \(b\) is not in this plane then \(A x = b\) has no solution

_images/not_in_span.png

Fig. 64 The vector \(b\) is not in \(\mathrm{span}(A)\)#

Fact

If \(A\) is a singular matrix and \(A x = b\) has a solution then it has an infinity (in fact a continuum) of solutions
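
A numerical illustration of the singular case, using the three columns above:

import numpy as np
A = np.column_stack(([3, 4, 2], [3, -4, 1], [-3, 4, -1]))
print(np.linalg.matrix_rank(A))    # 2 < 3, so A is singular
z = np.array([0.0, 1.0, 1.0])      # A z = 0 because col_3(A) = -col_2(A)
print(A @ z)                       # [0. 0. 0.]
# hence if A x = b has a solution x, then x + t z is also a solution for every scalar t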

References and reading#

References
Further reading and self-learning