## Traces and Determinants

Last update: 13 August 2013

## Determinants

Let $a$ be an $n×n$ matrix with entries ${a}_{ij}$.

• The trace of $a$ is  $\mathrm{tr}\left(a\right)=\sum _{i=1}^{n}{a}_{ii}$.
• The determinant of $a$ is  $\mathrm{det}\left(a\right)=\sum _{w\in {S}_{n}}\mathrm{det}\left(w\right){a}_{1w\left(1\right)}{a}_{2w\left(2\right)}\cdots {a}_{nw\left(n\right)}$
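
As a quick illustration (a from-scratch sketch, not part of the notes; matrices are plain Python lists), both definitions can be computed directly, with $\mathrm{det}\left(w\right)$ realized as the sign of the permutation $w$:

```python
from itertools import permutations

def sign(w):
    # det(w): the sign of the permutation w, via the number of inversions
    s = 1
    for i in range(len(w)):
        for j in range(i + 1, len(w)):
            if w[i] > w[j]:
                s = -s
    return s

def tr(a):
    # tr(a) = sum of the diagonal entries
    return sum(a[i][i] for i in range(len(a)))

def det(a):
    # det(a) = sum over permutations w of det(w) a_{1,w(1)} ... a_{n,w(n)}
    n = len(a)
    total = 0
    for w in permutations(range(n)):
        term = sign(w)
        for i in range(n):
            term *= a[i][w[i]]
        total += term
    return total

a = [[1, 2], [3, 4]]
print(tr(a))   # 5
print(det(a))  # 1*4 - 2*3 = -2
```

This $n!$-term sum is only practical for small $n$; the row-operation rules later in this section give a faster method.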

Let $𝔽$ be a field and let $n\in {ℤ}_{>0}$.
(a)   Up to constant multiples, $\mathrm{tr}:{M}_{n}\left(𝔽\right)\to 𝔽$ is the unique function such that
(a1)   If $c\in 𝔽$ and $a,b\in {M}_{n}\left(𝔽\right)$ then
 $\mathrm{tr}\left(a+b\right)=\mathrm{tr}\left(a\right)+\mathrm{tr}\left(b\right)\phantom{\rule{2em}{0ex}}\text{and}\phantom{\rule{2em}{0ex}}\mathrm{tr}\left(ca\right)=c\mathrm{tr}\left(a\right).$
(a2)   If $a,b\in {M}_{n}\left(𝔽\right)$ then
 $\mathrm{tr}\left(ab\right)=\mathrm{tr}\left(ba\right)$.
(b)   Identify ${M}_{n}\left(𝔽\right)$ with the $𝔽$-module $\underset{\underset{n\phantom{\rule{0.5em}{0ex}}\text{times}}{⏟}}{{𝔽}^{n}×\cdots ×{𝔽}^{n}}$,
 $\begin{array}{ccc}{M}_{n}\left(𝔽\right)& \stackrel{\sim }{⟶}& \underset{\underset{n\phantom{\rule{0.5em}{0ex}}\text{times}}{⏟}}{{𝔽}^{n}×\cdots ×{𝔽}^{n}}\\ a& ⟼& \left({a}_{1}|{a}_{2}|\cdots |{a}_{n}\right)\end{array}$,      where ${a}_{i}$ are the columns of $a$.
The function $\mathrm{det}:{M}_{n}\left(𝔽\right)\to 𝔽$ is the unique function such that
(b1)   (columnwise linear) If $i\in \left\{1,2,\dots ,n\right\}$ and $c\in 𝔽$ then
 $\mathrm{det}\left({a}_{1}|\cdots |{a}_{i}+{b}_{i}|\cdots |{a}_{n}\right)=\mathrm{det}\left({a}_{1}|\cdots |{a}_{i}|\cdots |{a}_{n}\right)+\mathrm{det}\left({a}_{1}|\cdots |{b}_{i}|\cdots |{a}_{n}\right)$
and
 $\mathrm{det}\left({a}_{1}|\cdots |c{a}_{i}|\cdots |{a}_{n}\right)=c\mathrm{det}\left({a}_{1}|\cdots |{a}_{i}|\cdots |{a}_{n}\right)$,
(b2) If $i\in \left\{1,2,\dots ,n-1\right\}$ then
 $\mathrm{det}\left({a}_{1}|\cdots |{a}_{i}|{a}_{i+1}|\cdots |{a}_{n}\right)=-\mathrm{det}\left({a}_{1}|\cdots |{a}_{i+1}|{a}_{i}|\cdots |{a}_{n}\right)$,
(b3)   If $a,b\in {M}_{n}\left(𝔽\right)$ then
 $\mathrm{det}\left(ab\right)=\mathrm{det}\left(a\right)\mathrm{det}\left(b\right)$.
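
A numeric spot-check of properties (a2) and (b3) for $2\times 2$ matrices, using the closed forms $\mathrm{tr}\left(a\right)={a}_{11}+{a}_{22}$ and $\mathrm{det}\left(a\right)={a}_{11}{a}_{22}-{a}_{12}{a}_{21}$ (an illustration only, not a proof):

```python
def mat_mul(a, b):
    # product of two 2x2 matrices given as lists of rows
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr2(a):
    return a[0][0] + a[1][1]

def det2(a):
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
ab, ba = mat_mul(a, b), mat_mul(b, a)
print(tr2(ab) == tr2(ba))             # True: tr(ab) = tr(ba)
print(det2(ab) == det2(a) * det2(b))  # True: det(ab) = det(a)det(b)
```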

(Laplace expansion)
$\mathrm{det}\left(\begin{array}{cccc} {a}_{11}&{a}_{12}&\cdots &{a}_{1n}\\ {a}_{21}&{a}_{22}&\cdots &{a}_{2n}\\ \vdots &\vdots &\ddots &\vdots \\ {a}_{n1}&{a}_{n2}&\cdots &{a}_{nn} \end{array}\right) =\sum {p}_{i}\, \mathrm{det}\left(\begin{array}{ccc} {a}_{{i}_{1}{j}_{1}}&\cdots &{a}_{{i}_{1}{j}_{p}}\\ \vdots &\ddots &\vdots \\ {a}_{{i}_{p}{j}_{1}}&\cdots &{a}_{{i}_{p}{j}_{p}} \end{array}\right) \mathrm{det}\left(\begin{array}{ccc} {a}_{{i}_{p+1}{j}_{p+1}}&\cdots &{a}_{{i}_{p+1}{j}_{n}}\\ \vdots &\ddots &\vdots \\ {a}_{{i}_{n}{j}_{p+1}}&\cdots &{a}_{{i}_{n}{j}_{n}} \end{array}\right)$
where ${j}_{1},\dots ,{j}_{n}$ is a fixed permutation of $1,2,\dots ,n$, the sum is over all possible divisions of $1,2,\dots ,n$ into two sets ${i}_{1}<\cdots <{i}_{p}$ and ${i}_{p+1}<\cdots <{i}_{n}$, and ${p}_{i}=+1$ or $-1$ according as ${i}_{1},\dots ,{i}_{n}$ and ${j}_{1},\dots ,{j}_{n}$ are like or unlike derangements (i.e. permutations) of $1,2,\dots ,n\text{.}$

If $A$ and $B$ are $n×n$ matrices then $det(AB)= det(A)det(B).$

Proof.

 To show: $\text{det}\left(AB\right)=\text{det}\left(A\right)\text{det}\left(B\right)$.
$\begin{array}{rcl} \mathrm{det}\left(AB\right)&=&\displaystyle\sum _{w\in {S}_{n}}\mathrm{det}\left(w\right)\,{\left(AB\right)}_{1,w\left(1\right)}\cdots {\left(AB\right)}_{n,w\left(n\right)}\\ &=&\displaystyle\sum _{w\in {S}_{n}}\mathrm{det}\left(w\right)\sum _{{k}_{1},\dots ,{k}_{n}}{A}_{1,{k}_{1}}{B}_{{k}_{1},w\left(1\right)}\cdots {A}_{n,{k}_{n}}{B}_{{k}_{n},w\left(n\right)}\\ &=&\displaystyle\sum _{{k}_{1},\dots ,{k}_{n}}{A}_{1,{k}_{1}}\cdots {A}_{n,{k}_{n}}\sum _{w\in {S}_{n}}\mathrm{det}\left(w\right)\,{B}_{{k}_{1},w\left(1\right)}\cdots {B}_{{k}_{n},w\left(n\right)}\\ &=&\displaystyle\sum _{k=\left({k}_{1},\dots ,{k}_{n}\right)\in {S}_{n}}\mathrm{det}\left(k\right)\,{A}_{1,{k}_{1}}\cdots {A}_{n,{k}_{n}}\sum _{w\in {S}_{n}}\mathrm{det}\left({k}^{-1}\right)\mathrm{det}\left(w\right)\,{B}_{{k}_{1},w\left(1\right)}\cdots {B}_{{k}_{n},w\left(n\right)}\\ &=&\displaystyle\sum _{k=\left({k}_{1},\dots ,{k}_{n}\right)\in {S}_{n}}\mathrm{det}\left(k\right)\,{A}_{1,{k}_{1}}\cdots {A}_{n,{k}_{n}}\sum _{w\in {S}_{n}}\mathrm{det}\left(w{k}^{-1}\right)\,{B}_{{k}_{1},w{k}^{-1}\left({k}_{1}\right)}\cdots {B}_{{k}_{n},w{k}^{-1}\left({k}_{n}\right)}\\ &=&\displaystyle\sum _{k\in {S}_{n}}\mathrm{det}\left(k\right)\,{A}_{1,{k}_{1}}\cdots {A}_{n,{k}_{n}}\sum _{w{k}^{-1}\in {S}_{n}}\mathrm{det}\left(w{k}^{-1}\right)\,{B}_{1,w{k}^{-1}\left(1\right)}\cdots {B}_{n,w{k}^{-1}\left(n\right)}\\ &=&\mathrm{det}\left(A\right)\mathrm{det}\left(B\right). \end{array}$
(In the fourth line the sum restricts to tuples with distinct entries, i.e. to permutations $k$, since the inner alternating sum vanishes whenever two of the ${k}_{i}$ coincide; a factor $1=\mathrm{det}\left(k\right)\mathrm{det}\left({k}^{-1}\right)$ is then inserted.)

$\square$

 (a)   Let $B$ be the matrix obtained by switching two rows of $A\text{.}$ Then $\mathrm{det}\left(B\right)=-\mathrm{det}\left(A\right)$.
 (b)   Let $B$ be the matrix obtained by adding a multiple of a row of $A$ to another row of $A\text{.}$ Then $\mathrm{det}\left(B\right)=\mathrm{det}\left(A\right)$.
 (c)   Let $B$ be the matrix obtained by multiplying a row of $A$ by a constant $c\in R\text{.}$ Then $\mathrm{det}\left(B\right)=c\,\mathrm{det}\left(A\right)$.

HW: Show that if two rows of $A$ are the same then $\mathrm{det}\left(A\right)=0$.
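
The row-operation rules and the HW fact can be checked numerically. The sketch below (matrices as plain Python lists, an assumption) computes determinants by cofactor expansion along the first row:

```python
def det(a):
    # determinant by cofactor expansion along the first row
    n = len(a)
    if n == 1:
        return a[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in a[1:]]
        total += (-1) ** j * a[0][j] * det(minor)
    return total

a = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
swapped = [a[1], a[0], a[2]]                               # switch rows 1 and 2
added = [a[0], [a[1][j] + 2 * a[0][j] for j in range(3)], a[2]]  # row 2 += 2*(row 1)
print(det(swapped) == -det(a))   # True  (rule a)
print(det(added) == det(a))      # True  (rule b)
print(det([a[0], a[0], a[2]]))   # 0: two equal rows
```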

### Inverses and Cramer's rule

• The ${\left(i,j\right)}^{\text{th}}$ signed minor or cofactor, ${A}_{ij},$ of $A$ is ${\left(-1\right)}^{i+j}\text{det}\left(\stackrel{ˆ}{A}\right)$ where $\stackrel{ˆ}{A}$ is the matrix $A$ with the ${i}^{\text{th}}$ row and the ${j}^{\text{th}}$ column removed.

$\sum _{i=1}^{n}{a}_{ik}{A}_{ih}={\delta }_{hk}\,\mathrm{det}\,A.$
(For $h=k$ this is the expansion of $\mathrm{det}\,A$ along the ${k}^{\text{th}}$ column; for $h\ne k$ it is the alien cofactor sum, which vanishes since it expands a determinant with two equal columns.)

If $\text{det}\left(a\right)$ is a unit in $R$ then ${a}^{-1}=\text{det}{\left(a\right)}^{-1}{\left({A}_{ij}\right)}^{t}$, the transpose of the matrix of cofactors (the adjugate of $a$).
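
A spot-check of the adjugate formula with exact Fraction arithmetic (an illustrative sketch; note it is the transpose of the cofactor matrix that appears):

```python
from fractions import Fraction

def det(a):
    # determinant by cofactor expansion along the first row
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def cofactor(a, i, j):
    # (i,j)-th signed minor: (-1)^{i+j} det(A with row i, column j removed)
    minor = [row[:j] + row[j + 1:] for k, row in enumerate(a) if k != i]
    return (-1) ** (i + j) * det(minor)

a = [[2, 0, 1], [1, 1, 0], [0, 3, 1]]
d = det(a)
n = len(a)
# inverse = det(a)^{-1} times the TRANSPOSE of the cofactor matrix
inv = [[Fraction(cofactor(a, j, i), d) for j in range(n)] for i in range(n)]
prod = [[sum(a[i][k] * inv[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)]
print(prod == [[1, 0, 0], [0, 1, 0], [0, 0, 1]])  # True
```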

Cramer's rule for $AX=B\text{:}$ if $\text{det}\left(A\right)$ is a unit then ${x}_{i}=\text{det}{\left(A\right)}^{-1}\text{det}\left({A}_{i}\right)$, where ${A}_{i}$ is the matrix $A$ with its ${i}^{\text{th}}$ column replaced by $B\text{.}$
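
A small worked instance of Cramer's rule, solving each coordinate as a ratio of determinants (illustration only; exact arithmetic via Fraction):

```python
from fractions import Fraction

def det(a):
    # determinant by cofactor expansion along the first row
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j]
               * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

def cramer(A, B):
    # x_i = det(A_i) / det(A), where A_i is A with column i replaced by B
    d = det(A)
    xs = []
    for i in range(len(A)):
        Ai = [row[:i] + [B[k]] + row[i + 1:] for k, row in enumerate(A)]
        xs.append(Fraction(det(Ai), d))
    return xs

# solve 2x + y = 7, x + 3y = 11
print(cramer([[2, 1], [1, 3]], [7, 11]))  # x = 2, y = 3
```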

Put Thms VII-X of Hodge and Pedoe as exercises.

### $\lambda \text{-matrices}$

If $A$ is a $p×q$ $\lambda \text{-matrix}$ of rank $r$ and ${E}_{1}\left(\lambda \right),\dots ,{E}_{r}\left(\lambda \right)$ are its invariant factors then there exist $M\in {GL}_{p}\left(K\left[\lambda \right]\right)$ and $N\in {GL}_{q}\left(K\left[\lambda \right]\right)$ such that
$MAN=\left(\begin{array}{ccccccc} {E}_{1}\left(\lambda \right)&&&&&&\\ &{E}_{2}\left(\lambda \right)&&&&&\\ &&\ddots &&&&\\ &&&{E}_{r}\left(\lambda \right)&&&\\ &&&&0&&\\ &&&&&\ddots &\\ &&&&&&0 \end{array}\right).$
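
For example (a standard illustration, not from the notes): for the $2×2$ $\lambda \text{-matrix}$ below, row and column operations over $K\left[\lambda \right]$ produce the diagonal form, and the invariant factors are ${E}_{1}\left(\lambda \right)=1$ and ${E}_{2}\left(\lambda \right)={\lambda }^{2}$ (up to unit multiples):
$\left(\begin{array}{cc}\lambda &1\\ 0&\lambda \end{array}\right) \longrightarrow \left(\begin{array}{cc}1&\lambda \\ \lambda &0\end{array}\right) \longrightarrow \left(\begin{array}{cc}1&\lambda \\ 0&-{\lambda }^{2}\end{array}\right) \longrightarrow \left(\begin{array}{cc}1&0\\ 0&-{\lambda }^{2}\end{array}\right),$
by swapping the two columns, then subtracting $\lambda$ times row 1 from row 2, then subtracting $\lambda$ times column 1 from column 2.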

Two $p×q$ $\lambda \text{-matrices}$ $A$ and $B$ are equivalent if and only if they have the same invariant factors; equivalently, if and only if they have the same elementary divisors.

Note that these proofs work for any Euclidean domain.

• The characteristic polynomial of $A$ is the polynomial $\mathrm{det}\left(A-t{I}_{n}\right)$.

Cayley-Hamilton Theorem. If ${\chi }_{A}\left(t\right)=\mathrm{det}\left(A-t{I}_{n}\right)$ is the characteristic polynomial of $A$ then ${\chi }_{A}\left(A\right)=0$.
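
For a $2×2$ matrix, $\mathrm{det}\left(A-t{I}_{2}\right)={t}^{2}-\mathrm{tr}\left(A\right)t+\mathrm{det}\left(A\right)$, so the theorem asserts ${A}^{2}-\mathrm{tr}\left(A\right)A+\mathrm{det}\left(A\right){I}_{2}=0$. A quick numeric check (illustration only, not a proof):

```python
def mat_mul(a, b):
    # product of two square matrices given as lists of rows
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
trA = A[0][0] + A[1][1]                       # tr(A) = 5
detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # det(A) = -2
A2 = mat_mul(A, A)
# evaluate A^2 - tr(A) A + det(A) I entrywise
p_of_A = [[A2[i][j] - trA * A[i][j] + detA * (1 if i == j else 0)
           for j in range(2)] for i in range(2)]
print(p_of_A)  # [[0, 0], [0, 0]]
```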

Note the proof of Theorem II, §10 of Hodge and Pedoe.
Theorem III (Hodge and Pedoe) gives the minimal polynomial.