Last update: 6 November 2012
An algebra is a vector space $A$ over $\mathbb{C}$ with a multiplication such that $A$ is a ring with identity and such that for all $a_1, a_2 \in A$ and $c \in \mathbb{C},$
$$\begin{array}{cc}(ca_1)a_2 = a_1(ca_2) = c(a_1a_2)\text{.} & \text{(1.1)}\end{array}$$ More precisely, an algebra is a vector space over $\mathbb{C}$ with a multiplication that is associative, distributive, has an identity, and satisfies (1.1). Suppose that $a_1, a_2, \dots, a_n$ is a basis of $A$ and that $c_{ij}^k$ are constants in $\mathbb{C}$ such that
$$\begin{array}{cc}a_ia_j = \sum_{k=1}^n c_{ij}^k a_k\text{.} & \text{(1.2)}\end{array}$$ It follows from (1.1) and the distributive property that the equations (1.2) for $1 \le i,j \le n$ completely determine the multiplication in $A$. The $c_{ij}^k$ are called structure constants. The center of an algebra $A$ is the subalgebra
$$Z(A) = \{ b \in A \mid ab = ba \text{ for all } a \in A \}\text{.}$$ A nonzero element $p \in A$ such that $p^2 = p$ is called an idempotent. Two idempotents $p_1, p_2 \in A$ are orthogonal if $p_1p_2 = p_2p_1 = 0$. A minimal idempotent is an idempotent $p \in A$ that cannot be written as a sum $p = p_1 + p_2$ of orthogonal idempotents $p_1, p_2 \in A$.
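To make (1.2) concrete, here is a minimal sketch (an added illustration, not from the text) computing products in the group algebra $\mathbb{C}[\mathbb{Z}/2\mathbb{Z}]$ from its structure constants alone, with basis $a_1 = 1$, $a_2 = g$, $g^2 = 1$; integer coefficients stand in for $\mathbb{C}$:

```python
# Structure constants of C[Z/2Z] with basis a1 = 1, a2 = g (g*g = 1).
# c[(i, j)] maps k to the coefficient of a_k in a_i * a_j.
c = {
    (0, 0): {0: 1},  # 1 * 1 = 1
    (0, 1): {1: 1},  # 1 * g = g
    (1, 0): {1: 1},  # g * 1 = g
    (1, 1): {0: 1},  # g * g = 1
}

def multiply(x, y):
    """Multiply x = (x0, x1) and y = (y0, y1), coefficients on the basis
    (1, g), using only the structure constants, as in equation (1.2)."""
    out = [0, 0]
    for i in range(2):
        for j in range(2):
            for k, coeff in c[(i, j)].items():
                out[k] += x[i] * y[j] * coeff
    return tuple(out)

# (1 + g)(1 - g) = 1 - g + g - g^2 = 0
print(multiply((1, 1), (1, -1)))
```

The product $(1+g)(1-g) = 1 - g^2 = 0$ shows in passing that this group algebra has zero divisors.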
For each positive integer $d$ we denote the algebra of $d \times d$ matrices with entries from $\mathbb{C}$ and ordinary matrix multiplication by $M_d(\mathbb{C})$. We denote the $d \times d$ identity matrix in $M_d(\mathbb{C})$ by $I_d$. For a general algebra $A$, $M_d(A)$ denotes the algebra of $d \times d$ matrices with entries in $A$. We denote the algebra of $n \times n$ matrices of the form
$$\left(\begin{array}{cccc} a & 0 & \dots & 0 \\ 0 & a & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a \end{array}\right), \qquad a \in A,$$ by $I_n(A)$. Note that $I_n(A) \cong A$ as algebras. The trace $\text{tr}(a)$ of a matrix $a = \|a_{ij}\|$ is the sum of the diagonal entries of $a$: $\text{tr}(a) = \sum_i a_{ii}$.
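For example (a standard fact, added here for illustration), the matrix units of $M_2(\mathbb{C})$ give orthogonal minimal idempotents summing to the identity:

```latex
E_{11} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
E_{22} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \qquad
E_{11}^2 = E_{11}, \quad E_{22}^2 = E_{22}, \quad
E_{11}E_{22} = E_{22}E_{11} = 0, \quad E_{11} + E_{22} = I_2.
```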
An algebra homomorphism from an algebra $A$ to an algebra $B$ is a $\mathbb{C}$-linear map $f: A \to B$ such that for all $a_1, a_2 \in A,$
$$\begin{array}{cc}\begin{array}{c} f(1) = 1, \\ f(a_1a_2) = f(a_1)f(a_2)\text{.} \end{array} & \text{(1.3)}\end{array}$$ A representation of an algebra $A$ is an algebra homomorphism
$$V: A \longrightarrow M_d(\mathbb{C})\text{.}$$ The dimension of the representation $V$ is $d$. The image $V(A)$ of the representation $V$ is a finite dimensional algebra of $d \times d$ matrices which we call the algebra of the representation $V$. It is a subalgebra of $M_d(\mathbb{C})$. A faithful representation is a representation which is injective. In this case the algebra $V(A)$ is called a faithful realization of $A$, and $A \cong V(A)$. The character of the representation $V$ of $A$ is the function $\chi_V: A \to \mathbb{C}$ given by
$$\begin{array}{cc}\chi_V(a) = \text{tr}(V(a))\text{.} & \text{(1.4)}\end{array}$$ An anti-representation of an algebra $A$ is a $\mathbb{C}$-linear map $V': A \to M_d(\mathbb{C})$ such that for all $a_1, a_2 \in A,$
$$\begin{array}{c} V'(1) = I_d, \\ V'(a_1a_2) = V'(a_2)V'(a_1)\text{.} \end{array}$$ As before, the dimension of the anti-representation is $d$, and the image $V'(A)$ of the anti-representation is an algebra of matrices called the algebra of the anti-representation.
The group algebra $\mathbb{C}G$ of a group $G$ is the algebra of formal finite linear combinations of elements of $G$, where the multiplication is given by the linear extension of the multiplication in $G$. The elements of $G$ constitute a basis of $\mathbb{C}G$. A representation of the group $G$ is a representation of its group algebra.
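As an added illustration (not part of the text), the left regular representation of $\mathbb{C}[\mathbb{Z}/2\mathbb{Z}]$ can be written down and checked against (1.3) directly; integer coefficients stand in for $\mathbb{C}$:

```python
# The regular representation of C[Z/2Z]: the element x*1 + y*g acts on
# the basis (1, g) by left multiplication, giving the matrix
#   V(x, y) = [[x, y], [y, x]].
def V(x, y):
    return ((x, y), (y, x))

def matmul(A, B):
    """Product of two 2x2 matrices stored as tuples of rows."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

one, g = V(1, 0), V(0, 1)
assert one == ((1, 0), (0, 1))   # V(1) is the identity matrix, per (1.3)
assert matmul(g, g) == one       # the relation g^2 = 1 is preserved
# V is multiplicative:
# (x1 + y1 g)(x2 + y2 g) = (x1x2 + y1y2) + (x1y2 + y1x2) g
assert matmul(V(2, 3), V(5, 7)) == V(2*5 + 3*7, 2*7 + 3*5)
```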
Let $A$ be an algebra. An $A$-module is a vector space $V$ with an $A$-action $A \times V \to V$ such that for all $a, a_1, a_2 \in A$, $v, v_1, v_2 \in V$, and $c_1, c_2 \in \mathbb{C},$
$$\begin{array}{cc}\begin{array}{ccc} 1v & = & v, \\ a_1(a_2v) & = & (a_1a_2)v, \\ (a_1 + a_2)v & = & a_1v + a_2v, \\ a(c_1v_1 + c_2v_2) & = & c_1(av_1) + c_2(av_2)\text{.} \end{array} & \text{(1.5)}\end{array}$$ An $A$-module homomorphism is a $\mathbb{C}$-linear map $f: V \to V'$ between $A$-modules $V$ and $V'$ such that for all $a \in A$ and $v \in V,$
$$\begin{array}{cc}f(av) = af(v)\text{.} & \text{(1.6)}\end{array}$$ An $A$-module isomorphism is a bijective $A$-module homomorphism.
By the last condition in (1.5), the action of $a \in A$ on $V$ is a linear transformation $V(a)$ of $V$. If we specify a basis $B$ of $V$ then the linear transformation $V(a)$ can be written as a $d \times d$ matrix, where $\dim V = d$. In this way we associate to every element of $A$ a $d \times d$ matrix. This gives a representation of $A$, which we shall also denote by $V$.
Conversely, if $T$ is a $d$-dimensional representation of $A$ and $V$ is a $d$-dimensional vector space with basis $B$, then we can define the action of an element $a \in A$ to be the action of the linear transformation on $V$ determined by the matrix $T(a)$, so that for all $v \in V,$
$$av = T(a)v\text{.}$$ In this way $V$ becomes an $A$-module. Thus the notion of an $A$-module is equivalent to the notion of a representation. When we work with the $A$-module we are focusing on the vector space, and when we work with the representation we are focusing on the linear transformations (matrices).
Let $V$ be an $A$-module with basis $B$, let $B'$ be another basis of $V$, and denote the change of basis matrix by $P$. Let $a \in A$ and let $V(a)$, $V'(a)$ be the matrices, with respect to the bases $B$ and $B'$ respectively, of the linear transformation of $V$ induced by $a$. Then by elementary linear algebra,
$$\begin{array}{cc}V'(a) = PV(a)P^{-1}\text{.} & \text{(1.7)}\end{array}$$ This leads us to the following definition. Two $d$-dimensional representations $V$ and $V'$ of an algebra $A$ are equivalent if there exists an invertible $d \times d$ matrix $P$ such that (1.7) holds for all $a \in A$. Isomorphic modules define equivalent representations.
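A quick numerical sketch of (1.7); the matrices $V(g)$ and $P$ below are illustrative choices, not from the text. Conjugating a representing matrix by an invertible $P$ preserves both the defining relations and the trace, so equivalent representations have equal characters:

```python
# Equivalence of representations as in (1.7): conjugate the matrix of g
# (g^2 = 1, with V(g) chosen diagonal) by a change of basis matrix P.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Vg = [[1, 0], [0, -1]]      # g acts with eigenvalues +1 and -1
P = [[1, 1], [0, 1]]
Pinv = [[1, -1], [0, 1]]    # inverse of P, so P * Pinv = I

Vg_new = matmul(matmul(P, Vg), Pinv)   # V'(g) = P V(g) P^{-1}

I2 = [[1, 0], [0, 1]]
assert matmul(Vg_new, Vg_new) == I2    # the relation g^2 = 1 survives

trace = lambda M: sum(M[i][i] for i in range(len(M)))
assert trace(Vg_new) == trace(Vg)      # characters agree, as in (1.4)
```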
The direct sum $V_1 \oplus V_2$ of two $A$-modules $V_1$ and $V_2$ is the $A$-module of all pairs $(v_1, v_2)$, $v_1 \in V_1$ and $v_2 \in V_2$, with the $A$-action given by
$$a(v_1, v_2) = (av_1, av_2),$$ for all $a \in A$. The direct sum $V_1 \oplus V_2$ of two representations $V_1$ and $V_2$ of $A$ is the representation $V$ of $A$ given by
$$\begin{array}{cc}V(a) = \left(\begin{array}{cc} V_1(a) & 0 \\ 0 & V_2(a) \end{array}\right)\text{.} & \text{(1.8)}\end{array}$$ Direct sums of $n > 2$ representations or $A$-modules are defined analogously. We denote $V \oplus V \oplus \dots \oplus V$, with $n$ summands, by $V^{\oplus n}$. Note that the algebra of the representation $V^{\oplus n}$, namely $V^{\oplus n}(A)$, is $I_n(V(A))$.
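The block-diagonal construction (1.8) can be sketched as follows (an illustrative Python helper, not from the text); multiplication of block-diagonal matrices happens blockwise, which is why the direct sum of two representations is again a representation:

```python
# Direct sum of two matrices as the block-diagonal matrix of (1.8).
def direct_sum(A, B):
    n, m = len(A), len(B)
    top = [list(row) + [0] * m for row in A]
    bot = [[0] * n + list(row) for row in B]
    return top + bot

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A1, A2 = [[2]], [[3]]       # 1x1 blocks, standing in for V1(a), V1(b)
B1 = [[0, 1], [1, 0]]       # 2x2 blocks, standing in for V2(a), V2(b)
B2 = [[1, 1], [0, 1]]

# (V1 + V2)(a) (V1 + V2)(b) = (V1(a)V1(b)) + (V2(a)V2(b)), blockwise
lhs = matmul(direct_sum(A1, B1), direct_sum(A2, B2))
rhs = direct_sum(matmul(A1, A2), matmul(B1, B2))
assert lhs == rhs
```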
An $A$-invariant subspace of an $A$-module $V$ is a subspace $V'$ of $V$ such that
$$AV' = \{av' \mid a \in A,\ v' \in V'\} \subseteq V'\text{.}$$ An $A$-invariant subspace of $V$ is just a submodule of $V$. Note that the intersection $V' \cap V''$ of any two invariant subspaces $V'$, $V''$ of $V$ is also an invariant subspace of $V$.
An $A$-module with no submodules other than $0$ and itself is a simple module. An irreducible representation is a representation that is not equivalent to a representation of the form
$$\begin{array}{cc}V(a) = \left(\begin{array}{cc} V'(a) & * \\ 0 & * \end{array}\right), & \text{(1.9)}\end{array}$$ where $V'$ is also a representation of $A$. If $V'$, $V''$ are invariant subspaces of a representation $V$ and $V'$ is irreducible, then $V' \cap V''$ is either equal to $0$ or to $V'$. A completely decomposable representation is a representation that is equivalent to a direct sum of irreducible representations. An algebra $A$ is called completely decomposable if every representation of $A$ is completely decomposable.
The centralizer of an algebra $A$ of $d \times d$ matrices is the algebra $\overline{A}$ of $d \times d$ matrices $\overline{a}$ such that for all matrices $a \in A,$
$$\begin{array}{cc}\overline{a}a = a\overline{a}\text{.} & \text{(1.10)}\end{array}$$ The centralizer of a representation $V$ of an algebra $A$ is the algebra $\overline{V(A)}$.
1. Let $A$ be an algebra of $d \times d$ matrices. Since every matrix in $A$ commutes with every matrix in the centralizer $\overline{A}$,
$$A \subseteq \overline{\overline{A}}\text{.}$$ Also,
$$\begin{array}{c}\overline{I_n(A)} = M_n(\overline{A}) \quad \text{and} \\ \overline{M_n(A)} = I_n(\overline{A})\text{.}\end{array}$$ Hence,
$$\overline{\overline{I_n(A)}} = I_n(\overline{\overline{A}})\text{.}$$
2. Schur's lemma. Let $W_1$ and $W_2$ be irreducible representations of $A$ of dimensions $d_1$ and $d_2$. If $B$ is a $d_1 \times d_2$ matrix such that
$$W_1(a)B = BW_2(a), \qquad \text{for all } a \in A,$$ then either $B = 0$, or $d_1 = d_2$ and $B$ is invertible. Furthermore, if $W_1 = W_2$ then $B = cI_{d_1}$ for some $c \in \mathbb{C}$.
Proof.
$B$ determines a linear transformation $B: W_1 \to W_2$. Since $W_1(a)B = BW_2(a)$ for all $a \in A$ we have that
$$B(aw_1) = Baw_1 = aBw_1 = aB(w_1),$$ for all $a \in A$ and $w_1 \in W_1$. Thus $B$ is an $A$-module homomorphism. $\ker B$ and $\text{im}\, B$ are submodules of $W_1$ and $W_2$ respectively, and are therefore either $0$ or equal to $W_1$ or $W_2$, respectively. If $\ker B = W_1$ or $\text{im}\, B = 0$ then $B = 0$. In the remaining case $B$ is a bijection, and thus an isomorphism between $W_1$ and $W_2$. In this case $d_1 = d_2$, so the matrix $B$ is square and invertible.
Now suppose that $W_1 = W_2$ and let $c$ be an eigenvalue of $B$. Then the matrix $cI_{d_1} - B$ satisfies $W_1(a)(cI_{d_1} - B) = (cI_{d_1} - B)W_1(a)$ for all $a \in A$. The argument in the preceding paragraph shows that $cI_{d_1} - B$ is either invertible or $0$. But since $c$ is an eigenvalue of $B$, $\det(cI_{d_1} - B) = 0$. Thus $cI_{d_1} - B = 0$. $\square$
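A toy check of Schur's lemma using the two one-dimensional irreducible representations of $\mathbb{Z}/2\mathbb{Z}$ (an added illustration; the scan over small integers merely samples possible values of a $1 \times 1$ intertwiner):

```python
# The two irreducible one-dimensional representations of Z/2Z:
# the trivial one W1(g) = 1 and the sign representation W2(g) = -1.
# A 1x1 intertwiner b must satisfy W1(g) * b == b * W2(g).

# Between the two nonisomorphic irreducibles: b = -b forces b = 0.
intertwiners = [b for b in range(-3, 4) if 1 * b == b * (-1)]
assert intertwiners == [0]

# From W2 to itself: (-1) * b == b * (-1) holds for every b, and every
# such b is a scalar multiple of the 1x1 identity, matching B = c I.
self_intertwiners = [b for b in range(-3, 4) if (-1) * b == b * (-1)]
assert self_intertwiners == list(range(-3, 4))
```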
3. Suppose that $V$ is a completely decomposable representation of an algebra $A$ and that $V \cong \oplus_\lambda W_\lambda^{\oplus m_\lambda}$, where the $W_\lambda$ are nonisomorphic irreducible representations of $A$. Schur's lemma shows that the $A$-homomorphisms from $W_\lambda$ to $V$ form a vector space
$$\text{Hom}_A(W_\lambda, V) \cong \mathbb{C}^{\oplus m_\lambda}\text{.}$$ The multiplicity of the irreducible representation $W_\lambda$ in $V$ is
$$m_\lambda = \dim \text{Hom}_A(W_\lambda, V)\text{.}$$
4. Suppose that $V$ is a completely decomposable representation of an algebra $A$, that $V \cong \oplus_\lambda W_\lambda^{\oplus m_\lambda}$ where the $W_\lambda$ are nonisomorphic irreducible representations of $A$, and let $\dim W_\lambda = d_\lambda$. Then
$$V(A) \cong \oplus_\lambda W_\lambda^{\oplus m_\lambda}(A) \cong \oplus_\lambda I_{m_\lambda}(W_\lambda(A)) \cong \oplus_\lambda W_\lambda(A)\text{.}$$ If we view elements of $\oplus_\lambda I_{m_\lambda}(W_\lambda(A))$ as block diagonal matrices with $m_\lambda$ blocks of size $d_\lambda \times d_\lambda$ for each $\lambda$, then by using Ex. 1 and Schur's lemma we get that
$$\begin{array}{ccc}\overline{V(A)} \cong \oplus_\lambda \overline{I_{m_\lambda}(W_\lambda(A))} & = & \oplus_\lambda M_{m_\lambda}(\overline{W_\lambda(A)}) \\ & = & \oplus_\lambda M_{m_\lambda}(I_{d_\lambda}(\mathbb{C}))\text{.}\end{array}$$
5. Let $V$ be an $A$-module and let $p$ be an idempotent of $A$. Then $pV$ is a subspace of $V$ and the action of $p$ on $V$ is a projection from $V$ to $pV$. If $p_1, p_2 \in A$ are orthogonal idempotents of $A$ then $p_1V$ and $p_2V$ intersect only in $0$, since if $p_1v = p_2v'$ for some $v, v' \in V$ then $p_1v = p_1p_1v = p_1p_2v' = 0$. In particular, if $p_1 + p_2 = 1$ then $V = p_1V \oplus p_2V$.
6. Let $p$ be an idempotent in $A$ and suppose that for every $a \in A$, $pap = k_ap$ for some constant $k_a \in \mathbb{C}$. If $p$ is not minimal then $p = p_1 + p_2$, where $p_1, p_2 \in A$ are idempotents such that $p_1p_2 = p_2p_1 = 0$. Since $pp_1 = p_1p = p_1$, we have $p_1 = pp_1p = kp$ for some constant $k \in \mathbb{C}$. This implies that $p_1 = p_1p_1 = k^2p^2 = k^2p$, so $kp = k^2p$ and either $k = 0$ or $k = 1$. If $k = 0$ then $p_1 = 0$, and if $k = 1$ then $p_1 = p$ and hence $p_2 = 0$; both contradict the decomposition. So $p$ is minimal.
7. Let $A$ be a finite dimensional algebra and suppose that $z \in A$ is an idempotent of $A$. If $z$ is not minimal then $z = p_1 + p_2$ where $p_1$ and $p_2$ are orthogonal idempotents of $A$. If any idempotent in this sum is not minimal we can decompose it further into a sum of orthogonal idempotents. We continue this process until we have decomposed $z$ as a sum of minimal orthogonal idempotents. At any particular stage in this process $z$ is expressed as a sum of orthogonal idempotents, $z = \sum_i p_i$, so $zA = \sum_i p_iA$. None of the spaces $p_iA$ is $0$, since $p_i = p_i \cdot 1 \in p_iA$, and the spaces $p_iA$ intersect pairwise only in $0$. Thus, since $zA$ is finite dimensional, it takes only a finite number of steps to decompose $z$ into minimal idempotents. A partition of unity is a decomposition of $1$ into minimal orthogonal idempotents.
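A worked instance of a partition of unity (an added illustration): in $\mathbb{C}[\mathbb{Z}/2\mathbb{Z}]$, written with coefficients $(x, y)$ on the basis $(1, g)$, the elements $p_1 = \frac{1}{2}(1+g)$ and $p_2 = \frac{1}{2}(1-g)$ are orthogonal minimal idempotents with $p_1 + p_2 = 1$:

```python
from fractions import Fraction as F

# In C[Z/2Z] (coefficients (x, y) on the basis (1, g), with g*g = 1),
# p1 = (1+g)/2 and p2 = (1-g)/2 form a partition of unity; they are
# minimal by Ex. 6, since the algebra is commutative and p_i a p_i is
# always a scalar multiple of p_i.
def mult(a, b):
    x1, y1 = a
    x2, y2 = b
    return (x1 * x2 + y1 * y2, x1 * y2 + y1 * x2)

half = F(1, 2)
p1 = (half, half)
p2 = (half, -half)

assert mult(p1, p1) == p1                         # p1 is idempotent
assert mult(p2, p2) == p2                         # p2 is idempotent
assert mult(p1, p2) == (0, 0) == mult(p2, p1)     # orthogonal
assert (p1[0] + p2[0], p1[1] + p2[1]) == (1, 0)   # p1 + p2 = 1
```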
This is an excerpt from the unpublished first chapter of Arun Ram's dissertation entitled Representation Theory, written July 4, 1990.