## Representations

Last update: 6 November 2012

An algebra is a vector space $A$ over $ℂ$ with a multiplication such that $A$ is a ring with identity and such that for all ${a}_{1},{a}_{2}\in A$ and $c\in ℂ,$

$$(ca_1)a_2 = a_1(ca_2) = c(a_1a_2). \qquad (1.1)$$

More precisely, an algebra is a vector space over $ℂ$ with a multiplication that is associative, distributive, has an identity, and satisfies (1.1). Suppose that ${a}_{1},{a}_{2},\dots ,{a}_{n}$ is a basis of $A$ and that ${c}_{ij}^{k}$ are constants in $ℂ$ such that

$$a_ia_j = \sum_{k=1}^n c_{ij}^k\, a_k. \qquad (1.2)$$

It follows from (1.1) and the distributive property that the equations (1.2) for $1\le i,j\le n$ completely determine the multiplication in $A\text{.}$ The ${c}_{ij}^{k}$ are called structure constants. The center of an algebra $A$ is the subalgebra

$$Z(A) = \{\, b\in A \mid ab = ba \text{ for all } a\in A \,\}.$$
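The computation of structure constants can be made concrete. The sketch below (an illustration, not from the original text) uses numpy and takes the four matrix units $E_{11}, E_{12}, E_{21}, E_{22}$ as a basis of $M_2(ℂ)$; since these are orthonormal for the pairing $\langle x,y\rangle = \text{tr}(x^Ty)$, each $c_{ij}^k$ can be read off with a trace.

```python
import numpy as np

# Basis of M_2(C): the matrix units E_11, E_12, E_21, E_22,
# where E_ij has a single 1 in row i, column j.
basis = []
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2))
        E[i, j] = 1.0
        basis.append(E)

# Structure constants (1.2):  a_i a_j = sum_k c_ijk a_k.
# The matrix units are orthonormal for <x, y> = tr(x^T y),
# so c_ijk = tr((a_i a_j)^T a_k).
n = len(basis)
c = np.zeros((n, n, n))
for i in range(n):
    for j in range(n):
        prod = basis[i] @ basis[j]
        for k in range(n):
            c[i, j, k] = np.trace(prod.T @ basis[k])

# The structure constants determine the multiplication completely:
for i in range(n):
    for j in range(n):
        recon = sum(c[i, j, k] * basis[k] for k in range(n))
        assert np.allclose(recon, basis[i] @ basis[j])
```

Here, for example, the rule $E_{ij}E_{kl}=\delta_{jk}E_{il}$ appears as $c_{01}^{1}=1$ (0-based indexing): $E_{11}E_{12}=E_{12}$.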

A nonzero element $p\in A$ such that $pp=p$ is called an idempotent. Two idempotents ${p}_{1},{p}_{2}\in A$ are orthogonal if ${p}_{1}{p}_{2}={p}_{2}{p}_{1}=0\text{.}$ A minimal idempotent is an idempotent $p\in A$ that cannot be written as a sum $p={p}_{1}+{p}_{2}$ of orthogonal idempotents ${p}_{1},{p}_{2}\in A\text{.}$
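A minimal numerical sketch of these definitions, using numpy and the (illustrative) algebra of $2\times 2$ diagonal matrices:

```python
import numpy as np

# In the algebra of 2x2 diagonal matrices, p1 = diag(1,0) and
# p2 = diag(0,1) are orthogonal idempotents: each satisfies pp = p
# and their products vanish.  Their sum I_2 = p1 + p2 exhibits the
# identity as a non-minimal idempotent.
p1 = np.diag([1.0, 0.0])
p2 = np.diag([0.0, 1.0])

assert np.allclose(p1 @ p1, p1)                 # p1 is idempotent
assert np.allclose(p2 @ p2, p2)                 # p2 is idempotent
assert np.allclose(p1 @ p2, np.zeros((2, 2)))   # p1, p2 orthogonal
assert np.allclose(p2 @ p1, np.zeros((2, 2)))
assert np.allclose(p1 + p2, np.eye(2))          # I_2 = p1 + p2
```

Within the diagonal algebra each $p_i$ is minimal, since the only idempotents available are diagonal 0–1 matrices.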

For each positive integer $d$ we denote the algebra of $d×d$ matrices with entries from $ℂ$ and ordinary matrix multiplication by ${M}_{d}\left(ℂ\right)\text{.}$ We denote the $d×d$ identity matrix in ${M}_{d}\left(ℂ\right)$ by ${I}_{d}\text{.}$ For a general algebra $A,$ ${M}_{d}\left(A\right)$ denotes $d×d$ matrices with entries in $A\text{.}$ We denote the algebra of matrices of the form

$$\begin{pmatrix} a & 0 & \dots & 0 \\ 0 & a & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & a \end{pmatrix}, \qquad a\in A,$$

by $I_n(A)$. Note that $I_n(A)\cong A$ as algebras. The trace, $\text{tr}(a),$ of a matrix $a=\|a_{ij}\|$ is the sum of the diagonal entries of $a,$ $\text{tr}(a)=\sum_i a_{ii}$.

An algebra homomorphism from an algebra $A$ into an algebra $B$ is a $ℂ$-linear map $f: A\to B$ such that for all $a_1,a_2\in A,$

$$f(1)=1, \qquad f(a_1a_2)=f(a_1)f(a_2). \qquad (1.3)$$

A representation of an algebra $A$ is an algebra homomorphism

$$V: A \longrightarrow M_d(ℂ).$$

The dimension of the representation $V$ is $d$. The image $V(A)$ of the representation $V$ is a finite dimensional algebra of $d\times d$ matrices which we call the algebra of the representation $V$. It is a subalgebra of $M_d(ℂ)$. A faithful representation is a representation which is injective. In this case the algebra $V(A)$ is called a faithful realization of $A$ and $A\cong V(A)$. The character of the representation $V$ of $A$ is the function $\chi_V: A\to ℂ$ given by

$$\chi_V(a) = \text{tr}(V(a)). \qquad (1.4)$$
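As a small worked example (assumed, not from the text): let $A$ be the two-dimensional algebra with basis $\{1,g\}$ and relation $g^2=1$. Sending $g$ to the swap matrix defines a 2-dimensional representation, and its character can be computed with numpy.

```python
import numpy as np

# A 2-dimensional representation V of the algebra with basis {1, g},
# g^2 = 1:  V(1) = I_2 and V(g) = the swap matrix.
V1 = np.eye(2)
Vg = np.array([[0.0, 1.0], [1.0, 0.0]])

# Homomorphism property (1.3): V(g)V(g) = V(g^2) = V(1).
assert np.allclose(Vg @ Vg, V1)

# Character values (1.4): chi_V(a) = tr(V(a)).
chi_1 = np.trace(V1)
chi_g = np.trace(Vg)
assert chi_1 == 2.0 and chi_g == 0.0
```

This representation is faithful: $V(x\cdot 1+y\cdot g)=xI_2+yV(g)$ is the zero matrix only when $x=y=0$.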

An anti-representation of an algebra $A$ is a $ℂ\text{-linear}$ map ${V}^{\prime }:\phantom{\rule{0.2em}{0ex}}A\to {M}_{d}\left(ℂ\right)$ such that for all ${a}_{1},{a}_{2}\in A,$

$$V'(1) = I_d, \qquad V'(a_1a_2) = V'(a_2)V'(a_1).$$

As before the dimension of the anti-representation is $d$ and the image, ${V}^{\prime }\left(A\right),$ of the anti-representation is an algebra of matrices called the algebra of the anti-representation.

The group algebra $ℂG$ of a group $G$ is the algebra of formal finite linear combinations of elements of $G$ where the multiplication is given by the linear extension of the multiplication in $G\text{.}$ The elements of $G$ constitute a basis of $ℂG\text{.}$ A representation of the group $G$ is a representation of its group algebra.
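Multiplication in a group algebra is just convolution of coefficient functions. A sketch (illustrative, for the cyclic group $ℤ/3$ written additively; the helper name `mult` is hypothetical):

```python
# Elements of C[Z/n] as dicts {group element: coefficient}; the
# product is the linear extension of the group multiplication
# (here addition mod n), i.e. a convolution of coefficients.
def mult(x, y, n=3):
    z = {}
    for g, cg in x.items():
        for h, ch in y.items():
            k = (g + h) % n
            z[k] = z.get(k, 0) + cg * ch
    return z

# (1 + 2g)(3g^2) = 3g^2 + 6g^3 = 6 + 3g^2  in C[Z/3]:
assert mult({0: 1, 1: 2}, {2: 3}) == {2: 3, 0: 6}
```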

Let $A$ be an algebra. An $A\text{-module}$ is a vector space $V$ with an $A$ action $A×V\to V$ such that for all $a,{a}_{1},{a}_{2}\in A,$ $v,{v}_{1},{v}_{2}\in V,$ and ${c}_{1},{c}_{2}\in ℂ,$

$$\begin{array}{rll} 1v &=& v, \\ a_1(a_2v) &=& (a_1a_2)v, \\ (a_1+a_2)v &=& a_1v+a_2v, \\ a(c_1v_1+c_2v_2) &=& c_1(av_1)+c_2(av_2). \end{array} \qquad (1.5)$$

An $A\text{-module}$ homomorphism is a $ℂ\text{-linear}$ map $f:\phantom{\rule{0.2em}{0ex}}V\to {V}^{\prime }$ between $A\text{-modules}$ $V$ and ${V}^{\prime }$ such that for all $a\in A$ and $v\in V,$

$$f(av) = af(v). \qquad (1.6)$$

An $A\text{-module}$ isomorphism is a bijective $A\text{-module}$ homomorphism.

By the last condition of (1.5), the action of $a\in A$ on $V$ is a linear transformation $V(a)$ of $V$. If we specify a basis $B$ of $V$ then the linear transformation $V(a)$ can be written as a $d\times d$ matrix, where $\dim V = d$. In this way we associate to every element of $A$ a $d\times d$ matrix. This gives a representation of $A$ which we shall also denote by $V$.

Conversely, if $T$ is a $d$ dimensional representation of $A$ and $V$ is a $d$ dimensional vector space with basis $B$ then we can define the action of an element $a$ in $A$ by the action of the linear transformation on $V$ determined by the matrix $T\left(a\right)$ so that for all $v\in V,$

$av=T(a)v.$

In this way $V$ becomes an $A$-module. Thus the notion of an $A$-module is equivalent to the notion of a representation. When we speak of the $A$-module we are focusing on the vector space; when we speak of the representation we are focusing on the linear transformations (matrices).

Let $V$ be an $A\text{-module}$ with basis $B$ and let ${B}^{\prime }$ be another basis of $V$ and denote the change of basis matrix by $P\text{.}$ Let $a\in A$ and let $V\left(a\right),$ ${V}^{\prime }\left(a\right)$ be the matrices, with respect to the bases $B$ and ${B}^{\prime }$ respectively, of the linear transformation on $V$ induced by $a\text{.}$ Then by elementary linear algebra we have that

$$V'(a) = PV(a)P^{-1}. \qquad (1.7)$$

This leads us to the following definition. Two $d$ dimensional representations $V$ and ${V}^{\prime }$ of an algebra $A$ are equivalent if there exists an invertible $d×d$ matrix $P$ such that (1.7) holds for all $a\in A\text{.}$ Isomorphic modules define equivalent representations.
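A numerical illustration of (1.7), with numpy (the matrices are an assumed example, not from the text): conjugating the swap matrix by the change of basis to its eigenbasis produces an equivalent, visibly decomposed representation.

```python
import numpy as np

# V(g) = swap matrix, for the algebra with basis {1, g}, g^2 = 1.
Vg = np.array([[0.0, 1.0], [1.0, 0.0]])

# Change of basis (1.7) to the eigenbasis (1,1), (1,-1):
P = np.array([[1.0, 1.0], [1.0, -1.0]])
Vg_new = P @ Vg @ np.linalg.inv(P)

# In the new basis the representation is diagonal -- a direct sum of
# two 1-dimensional representations (eigenvalues +1 and -1).
assert np.allclose(Vg_new, np.diag([1.0, -1.0]))

# Equivalent representations have the same character:
assert np.isclose(np.trace(Vg_new), np.trace(Vg))
```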

The direct sum $V_1\oplus V_2$ of two $A$-modules $V_1$ and $V_2$ is the $A$-module of all pairs $(v_1,v_2),$ $v_1\in V_1$ and $v_2\in V_2,$ with the $A$ action given by

$$a(v_1,v_2) = (av_1, av_2),$$

for all $a\in A\text{.}$ The direct sum ${V}_{1}\oplus {V}_{2}$ of two representations ${V}_{1}$ and ${V}_{2}$ of $A$ is the representation $V$ of $A$ given by

$$V(a) = \begin{pmatrix} V_1(a) & 0 \\ 0 & V_2(a) \end{pmatrix}. \qquad (1.8)$$

Direct sums of $n>2$ representations or $A\text{-modules}$ are defined analogously. We denote $V\oplus V\oplus \dots \oplus V,$ $n$ factors, by ${V}^{\oplus n}\text{.}$ Note that the algebra of the representation ${V}^{\oplus n},$ ${V}^{\oplus n}\left(A\right),$ is ${I}_{n}\left(V\left(A\right)\right)\text{.}$
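A short sketch of (1.8) in numpy (the helper name `direct_sum` is hypothetical, not from the text):

```python
import numpy as np

def direct_sum(m1, m2):
    """Block-diagonal matrix diag(m1, m2), as in (1.8)."""
    d1, d2 = m1.shape[0], m2.shape[0]
    out = np.zeros((d1 + d2, d1 + d2))
    out[:d1, :d1] = m1
    out[d1:, d1:] = m2
    return out

# Two 1-dimensional representations evaluated at one element g:
triv_g = np.array([[1.0]])    # trivial: g -> (1)
sign_g = np.array([[-1.0]])   # sign:    g -> (-1)
sum_g = direct_sum(triv_g, sign_g)

assert np.allclose(sum_g, np.diag([1.0, -1.0]))
# Characters add under direct sum:
assert np.trace(sum_g) == np.trace(triv_g) + np.trace(sign_g)
```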

An $A\text{-invariant}$ subspace of an $A\text{-module}$ $V$ is a subspace ${V}^{\prime }$ of $V$ such that

$$AV' = \{\, av' \mid a\in A,\ v'\in V' \,\} \subseteq V'.$$

An $A\text{-invariant}$ subspace of $V$ is just a submodule of $V\text{.}$ Note that the intersection ${V}^{\prime }\cap {V}^{\prime \prime }$ of any two invariant subspaces ${V}^{\prime },$ ${V}^{\prime \prime }$ of $V$ is also an invariant subspace of $V\text{.}$

An $A$-module with no submodules other than 0 and itself is a simple module. An irreducible representation is a representation that is not equivalent to a representation of the form

$$V(a) = \begin{pmatrix} V'(a) & * \\ 0 & * \end{pmatrix}, \qquad (1.9)$$

where $V'$ is also a representation of $A$. If $V',$ $V''$ are invariant subspaces of a representation $V$ and $V'$ is irreducible then $V'\cap V''$ is either equal to 0 or to $V'$. A completely decomposable representation is a representation that is equivalent to a direct sum of irreducible representations. An algebra $A$ is called completely decomposable if every representation of $A$ is completely decomposable.

The centralizer of an algebra $A$ of $d\times d$ matrices is the algebra $\overline{A}$ of $d\times d$ matrices $\overline{a}$ such that for all matrices $a\in A,$

$$\overline{a}a = a\overline{a}. \qquad (1.10)$$

The centralizer of a representation $V$ of an algebra $A$ is the algebra $\overline{V(A)}$.

### Examples

1. Let $A$ be an algebra of $d\times d$ matrices. Since every matrix in $A$ commutes with every matrix in $\overline{A},$

$$A \subseteq \overline{\overline{A}}.$$

Also,

$$\overline{I_n(A)} = M_n\left(\overline{A}\right) \quad\text{and}\quad \overline{M_n(A)} = I_n\left(\overline{A}\right).$$

Hence,

$$\overline{\overline{I_n(A)}} = I_n\left(\overline{\overline{A}}\right).$$

2. Schur's lemma. Let ${W}_{1}$ and ${W}_{2}$ be irreducible representations of $A$ of dimensions ${d}_{1}$ and ${d}_{2}\text{.}$ If $B$ is a ${d}_{1}×{d}_{2}$ matrix such that

$$W_1(a)B = BW_2(a) \quad\text{for all } a\in A,$$

then either

1. ${W}_{1}\ncong {W}_{2}$ and $B=0,$ or
2. ${W}_{1}\cong {W}_{2}$ and if ${W}_{1}={W}_{2}$ then $B=c{I}_{{d}_{1}}$ for some $c\in ℂ\text{.}$

 Proof. $B$ determines a linear transformation $B: W_2\to W_1$ (acting on column vectors). Since $W_1(a)B = BW_2(a)$ for all $a\in A$ we have $B(aw_2) = BW_2(a)w_2 = W_1(a)Bw_2 = aB(w_2)$ for all $a\in A$ and $w_2\in W_2$. Thus $B$ is an $A$-module homomorphism. It follows that $\ker B$ and $\text{im}\,B$ are submodules of $W_2$ and $W_1$ respectively, and are therefore either 0 or equal to $W_2$ or $W_1$ respectively. If $\ker B = W_2$ or $\text{im}\,B = 0$ then $B=0$. In the remaining case $B$ is a bijection, and thus an isomorphism between $W_1$ and $W_2$; in this case $d_1=d_2,$ so the matrix $B$ is square and invertible. Now suppose that $W_1=W_2$ and let $c$ be an eigenvalue of $B$ (one exists since $ℂ$ is algebraically closed). Then $W_1(a)(cI_{d_1}-B) = (cI_{d_1}-B)W_1(a)$ for all $a\in A$. The argument above shows that $cI_{d_1}-B$ is either invertible or 0. But $\det(cI_{d_1}-B)=0$ since $c$ is an eigenvalue of $B,$ so $cI_{d_1}-B$ is not invertible. Thus $cI_{d_1}-B=0$. $\square$
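Schur's lemma can also be checked numerically. The sketch below (with numpy; the generating matrices are an assumed 2-dimensional irreducible representation, in fact a standard representation of the symmetric group $S_3$) computes the dimension of the space of matrices $B$ commuting with a set of representation matrices, by vectorizing $W(g)B - BW(g) = 0$ into a linear system and counting near-zero singular values.

```python
import numpy as np

# Assumed generators of a 2-dimensional irreducible representation:
# a has order 3, b has order 2 (a standard representation of S_3).
a = np.array([[0.0, -1.0], [1.0, -1.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(np.linalg.matrix_power(a, 3), np.eye(2))
assert np.allclose(b @ b, np.eye(2))

def commutant_dim(mats):
    # Dimension of {B : W B = B W for each matrix W in mats}.
    # Vectorizing column-major, vec(WB - BW) =
    # (I (x) W - W^T (x) I) vec(B), so we count the nullspace
    # dimension of the stacked system via its singular values.
    d = mats[0].shape[0]
    rows = [np.kron(np.eye(d), W) - np.kron(W.T, np.eye(d)) for W in mats]
    M = np.vstack(rows)
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s < 1e-10))

# Irreducible: Schur's lemma forces B = cI, a 1-dimensional space.
assert commutant_dim([a, b]) == 1
# Reducible (everything acting as I_2): every 2x2 matrix commutes.
assert commutant_dim([np.eye(2)]) == 4
```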

3. Suppose that $V$ is a completely decomposable representation of an algebra $A$ and that $V\cong \oplus_\lambda W_\lambda^{\oplus m_\lambda}$ where the $W_\lambda$ are nonisomorphic irreducible representations of $A$. Schur's lemma shows that the $A$-homomorphisms from $W_\lambda$ to $V$ form a vector space $\text{Hom}_A(W_\lambda,V)\cong ℂ^{\oplus m_\lambda}$. The multiplicity of the irreducible representation $W_\lambda$ in $V$ is $m_\lambda = \dim \text{Hom}_A(W_\lambda,V)$.

4. Suppose that $V$ is a completely decomposable representation of an algebra $A$, that $V\cong \oplus_\lambda W_\lambda^{\oplus m_\lambda}$ where the $W_\lambda$ are nonisomorphic irreducible representations of $A$, and let $\dim W_\lambda = d_\lambda$. Then $V(A)\cong \left(\oplus_\lambda W_\lambda^{\oplus m_\lambda}\right)(A)\cong \oplus_\lambda I_{m_\lambda}(W_\lambda(A))\cong \oplus_\lambda W_\lambda(A)$. If we view elements of $\oplus_\lambda I_{m_\lambda}(W_\lambda(A))$ as block diagonal matrices with $m_\lambda$ blocks of size $d_\lambda\times d_\lambda$ for each $\lambda$, then by using Ex 1 and Schur's lemma we get that the centralizer is $\overline{V(A)}\cong \oplus_\lambda \overline{I_{m_\lambda}(W_\lambda(A))} = \oplus_\lambda M_{m_\lambda}\left(\overline{W_\lambda(A)}\right) = \oplus_\lambda M_{m_\lambda}(I_{d_\lambda}(ℂ))$.

5. Let $V$ be an $A$-module and let $p$ be an idempotent of $A$. Then $pV$ is a subspace of $V$ and the action of $p$ on $V$ is a projection from $V$ onto $pV$. If $p_1,p_2\in A$ are orthogonal idempotents of $A$ then $p_1V$ and $p_2V$ intersect trivially, since if $p_1v=p_2v'$ for some $v,v'\in V$ then $p_1v=p_1p_1v=p_1p_2v'=0$. So the sum $p_1V+p_2V$ is direct, $p_1V\oplus p_2V$.

6. Let $p$ be an idempotent in $A$ and suppose that for every $a\in A$, $pap=k_ap$ for some constant $k_a\in ℂ$. If $p$ is not minimal then $p=p_1+p_2,$ where $p_1,p_2\in A$ are idempotents such that $p_1p_2=p_2p_1=0$. Then $pp_1=(p_1+p_2)p_1=p_1$ and similarly $p_1p=p_1,$ so $p_1=pp_1p=kp$ for some constant $k\in ℂ$. This implies that $p_1=p_1p_1=(kp)p_1=kp_1,$ giving that either $k=1$ or $p_1=0$. Since $p_1$ is a nonzero idempotent, $k=1$ and $p_1=kp=p,$ forcing $p_2=0,$ a contradiction. So $p$ is minimal.

7. Let $A$ be a finite dimensional algebra and suppose that $z\in A$ is an idempotent. If $z$ is not minimal then $z=p_1+p_2$ where $p_1$ and $p_2$ are orthogonal idempotents of $A$. If any idempotent in this sum is not minimal we can decompose it into a sum of orthogonal idempotents. We continue this process until we have decomposed $z$ as a sum of minimal orthogonal idempotents. At any particular stage in this process $z$ is expressed as a sum of orthogonal idempotents, $z=\sum_i p_i$. So $zA=\sum_i p_iA$. None of the spaces $p_iA$ is 0, since $p_i=p_i\cdot 1\in p_iA$, and the spaces $p_iA$ are mutually orthogonal, so the sum $\sum_i p_iA$ is direct. Thus, since $zA$ is finite dimensional, the number of summands is bounded by $\dim zA$ and it takes only a finite number of steps to decompose $z$ into minimal idempotents. A partition of unity is a decomposition of 1 into minimal orthogonal idempotents.
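A partition of unity can be exhibited concretely. The sketch below (numpy; the algebra of $3\times 3$ diagonal matrices is an illustrative choice) decomposes the identity into three minimal orthogonal idempotents.

```python
import numpy as np

# In the algebra of 3x3 diagonal matrices, the idempotents
# e_i = diag(0, ..., 1, ..., 0) are minimal and mutually orthogonal,
# and I_3 = e_1 + e_2 + e_3 is a partition of unity.
es = [np.diag([1.0 if j == i else 0.0 for j in range(3)])
      for i in range(3)]

for i in range(3):
    assert np.allclose(es[i] @ es[i], es[i])            # idempotent
    for j in range(3):
        if j != i:
            assert np.allclose(es[i] @ es[j], np.zeros((3, 3)))

assert np.allclose(sum(es), np.eye(3))                  # sums to 1
```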

## Notes and References

This is an excerpt from the unpublished first chapter of Arun Ram's dissertation entitled Representation Theory, written July 4, 1990.