## Lectures in Representation Theory

Last update: 19 August 2013

## Lecture 6

Continued proof.

Let $e={\sum }_{{g}_{i}\in G}{g}_{i}{g}_{i}^{*}\text{;}$ we wish to show that $e=1\text{.}$

Let $a\in A$ be arbitrary, and denote the coefficient of $g_i$ in any element $a$ by $a|_{g_i}$. Then
$$\text{tr}(ae) = \sum_{g_i\in G} \text{tr}\left(a g_i g_i^{*}\right) = \sum_{g_i\in G} \left\langle a g_i, g_i^{*}\right\rangle = \sum_{g_i\in G} \left(a g_i\right)\big|_{g_i} = \text{tr}(a)$$
using the dual basis property. Hence $\text{tr}\left(a\left(1-e\right)\right)=0$ for all $a\in A,$ and the nondegeneracy of $\text{tr}$ implies $1-e=0,$ that is $e=1\text{.}$
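These identities are easy to check numerically in the group-algebra case. The sketch below is an illustration, not part of the lecture's argument: it takes $G=\mathbb{Z}/3$ (a hypothetical small example), realizes each element as its left regular representation matrix, uses the regular trace, and assumes the dual basis $g^{*}=g^{-1}/|G|$.

```python
import numpy as np

# Illustration for G = Z/3: realize each group element a as its left regular
# representation matrix L[a], take tr to be the regular trace, and assume the
# dual basis g* = g^{-1}/|G| for the trace form.
n = 3
L = [np.array([[1.0 if (a + j) % n == i else 0.0 for j in range(n)]
               for i in range(n)]) for a in range(n)]

# dual basis property: tr(g h*) = delta_{g,h}
for a in range(n):
    for b in range(n):
        expected = 1.0 if a == b else 0.0
        assert np.isclose(np.trace(L[a] @ L[(-b) % n]) / n, expected)

# e = sum_g g g* equals 1, as in the proof above
e = sum(L[a] @ L[(-a) % n] for a in range(n)) / n
assert np.allclose(e, np.eye(n))
```

The same check goes through for any finite group, since $\text{tr}(g h^{-1})$ under the regular trace is $|G|$ exactly when $g=h$.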

Hence ${P}_{1}:V⟶{V}_{1}$ is a projection.

Furthermore, ${V}_{2}=\left(I-{P}_{1}\right)V$ is an $A\text{-module.}$ Indeed, by the lemma from last time, $A$ commutes with ${P}_{1},$ hence with $I-{P}_{1}\text{.}$ Therefore, for any $a\in A$ and $v\in V,$ $a·\left(I-{P}_{1}\right)v=\left(I-{P}_{1}\right)a·v\in {V}_{2},$ hence $I-{P}_{1}$ is an $A\text{-module}$ homomorphism mapping $V$ onto ${V}_{2}\text{.}$

Finally, we assert that $V={V}_{1}\oplus {V}_{2}\text{.}$ Any $v\in V$ may be written $v={P}_{1}v+\left(I-{P}_{1}\right)v,$ hence $V={V}_{1}+{V}_{2}\text{.}$ To show the sum is direct, note first that ${P}_{1}^{2}={P}_{1}$ since ${P}_{1}$ fixes ${V}_{1}=\text{im} {P}_{1}$ pointwise. Suppose $x\in {V}_{1}\cap {V}_{2},$ so we may write $x=\left(I-{P}_{1}\right)v$ for some $v\in V\text{.}$ Then ${P}_{1}x=x$ as $x\in {V}_{1},$ and hence
$$x = P_1 x = P_1\left(I-P_1\right)v = P_1 v - P_1^{2} v = 0,$$
which completes the proof of $\left(1\right)⇒\left(2\right)\text{.}$
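The decomposition can be seen concretely in a minimal numerical sketch, assuming the group-algebra setting with $g^{*}=g^{-1}/|G|$ (so the averaged map $P_1 = \frac{1}{|G|}\sum_g \rho(g)\,P\,\rho(g)^{-1}$ plays the role of the projection above). Here $G=\mathbb{Z}/2$ acts on $\mathbb{C}^2$ by swapping coordinates, and $V_1=\text{span}\{(1,1)\}$; these choices are illustrative, not from the lecture.

```python
import numpy as np

# G = Z/2 acting on C^2 by swapping coordinates; V1 = span{(1,1)} is a submodule
S = np.array([[0.0, 1.0], [1.0, 0.0]])   # rho of the nontrivial element
group = [np.eye(2), S]

P = np.array([[1.0, 0.0], [1.0, 0.0]])   # an arbitrary linear projection onto V1
# average P over the group to obtain a module-map projection P1 onto V1
P1 = sum(g @ P @ np.linalg.inv(g) for g in group) / len(group)

assert np.allclose(P1 @ P1, P1)                          # P1^2 = P1
assert all(np.allclose(g @ P1, P1 @ g) for g in group)   # P1 commutes with A
v = np.array([3.0, -1.0])
assert np.allclose(P1 @ v + (np.eye(2) - P1) @ v, v)     # V = V1 + V2
assert np.allclose(S @ (np.eye(2) - P1) @ v,
                   (np.eye(2) - P1) @ (S @ v))           # V2 is a module
```

Note that the unaveraged $P$ does not commute with the group action; averaging is what makes $P_1$ a module homomorphism.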

Next, we prove $\left(1\right)⇒\left(3\right)\text{.}$ By the previous case, we may assume that all finite dimensional modules are completely decomposable.

Let $V=\stackrel{\to }{A}$ be the hideously denoted left regular representation of $A\text{.}$ Since ${\text{dim}}_{ℂ} A<\infty ,$ we may write $V\cong {\oplus }_{\lambda \in \stackrel{ˆ}{V}}{\left({W}^{\lambda }\right)}^{\oplus {m}_{\lambda }},$ where $\stackrel{ˆ}{V}$ is some (finite) index set.

Note that the representation $V:A⟶{M}_{d}\left(ℂ\right)$ is injective (sometimes called faithful) as $V\left(a\right)·\stackrel{\to }{1}=\stackrel{\to }{a}$ for all $a\in A\text{.}$ Thus $A\cong V\left(A\right)\cong {\oplus }_{\lambda \in \stackrel{ˆ}{V}}{\left({W}^{\lambda }\left(A\right)\right)}^{\oplus {m}_{\lambda }}$ as algebras. The numbers ${m}_{\lambda }$ are called the multiplicities of the irreducible representations ${W}^{\lambda }$ in $V\left(A\right)\cong A\text{.}$
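For an abelian group all irreducibles are one-dimensional, which makes the decomposition of the regular representation easy to exhibit explicitly. The sketch below (a hypothetical small example, not from the lecture) takes $G=\mathbb{Z}/4$ and uses the DFT matrix, a standard choice that simultaneously diagonalizes the shift matrices of the regular representation.

```python
import numpy as np

# Regular representation of Z/4: V(a) sends the basis vector e_j to e_{j+a}
n = 4
V = [np.array([[1.0 if (j + a) % n == i else 0.0 for j in range(n)]
               for i in range(n)]) for a in range(n)]

# faithfulness: V(a) applied to e_0 recovers the basis vector e_a
for a in range(n):
    assert np.allclose(V[a][:, 0], np.eye(n)[:, a])

# the DFT matrix simultaneously diagonalizes every V(a): for an abelian group
# all irreducibles are 1-dimensional, each occurring with multiplicity 1
w = np.exp(2j * np.pi / n)
F = np.array([[w ** (i * k) for k in range(n)] for i in range(n)]) / np.sqrt(n)
Finv = np.conj(F).T
for a in range(n):
    D = Finv @ V[a] @ F
    assert np.allclose(D, np.diag(np.diag(D)))
```

Here every $m_\lambda = 1$ and $d_\lambda = 1$, consistent with $|G| = \sum_\lambda d_\lambda^{2}$.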

For convenience, write $W={\oplus }_{\lambda \in \stackrel{ˆ}{V}}{\left({W}^{\lambda }\left(A\right)\right)}^{\oplus {m}_{\lambda }}\text{.}$ Note that an arbitrary element of $W$ is of the form
$$W(a)=\begin{pmatrix} W^{\lambda}(a) & & & & & 0\\ & \ddots & & & & \\ & & W^{\lambda}(a) & & & \\ & & & W^{\mu}(a) & & \\ & & & & \ddots & \\ 0 & & & & & W^{\nu}(a) \end{pmatrix}$$
for some $a\in A,$ where there are ${m}_{\gamma }$ occurrences of each ${W}^{\gamma }\left(a\right)$ in the block diagonal decomposition of $W\left(a\right)\text{.}$ The product in the algebra $W$ is componentwise, hence the map $W⟶{\oplus }_{\lambda \in \stackrel{ˆ}{V}}{W}^{\lambda }\left(A\right)$ given by deleting duplicate copies of irreducibles, i.e. $W\left(a\right)↦{\oplus }_{\lambda \in \stackrel{ˆ}{V}}{W}^{\lambda }\left(a\right),$ is a surjective algebra homomorphism.

It is not hard to see that these algebras are isomorphic. Let $\left\{{g}_{1}^{\lambda },{g}_{2}^{\lambda },\dots ,{g}_{{n}_{\lambda }}^{\lambda }\right\}$ be a basis for ${W}^{\lambda }\left(A\right),$ where ${n}_{\lambda }={\text{dim}}_{ℂ} {W}^{\lambda }\left(A\right),$ and let ${g}_{i,j}^{\lambda }\in W$ be the element corresponding to ${g}_{i}^{\lambda }$ in the $j\text{th}$ copy of ${W}^{\lambda }\left(A\right)$ in $W\text{.}$ The set
$$\left\{\, h_i^{\lambda} = \sum_{j=1}^{m_{\lambda}} g_{i,j}^{\lambda} \;\middle|\; 1\le i\le n_{\lambda},\ \lambda\in\stackrel{ˆ}{V} \,\right\}$$
is a basis for $W,$ since an element of $W$ must have identical matrices in all blocks indexed by a given $\lambda \text{.}$ Hence $W$ has the same dimension as ${\oplus }_{\lambda \in \stackrel{ˆ}{V}}{W}^{\lambda }\left(A\right)\text{;}$ indeed, the homomorphism above sends ${h}_{i}^{\lambda }$ to ${g}_{i}^{\lambda }$ (following the same indexing scheme) and is clearly invertible.

Setting $\stackrel{ˆ}{A}=\stackrel{ˆ}{V},$ we have $A\cong V\left(A\right)\cong {\oplus }_{\lambda \in \stackrel{ˆ}{A}}{W}^{\lambda }\left(A\right)\text{.}$ It remains to show that ${W}^{\lambda }\left(A\right)\cong {M}_{{d}_{\lambda }}\left(ℂ\right)\text{.}$ This follows from the next extremely useful lemma:

Lemma 1.49 (Schur) Suppose $V$ and $W$ are irreducible representations of $A$ of dimensions ${d}_{1}$ and ${d}_{2},$ respectively. If $B$ is a ${d}_{1}×{d}_{2}$ matrix such that $V\left(a\right)B=BW\left(a\right)$ for all $a\in A,$ then either $B=0$ or $B$ is invertible, so that $V$ is equivalent to $W\text{;}$ moreover, if $V=W$ as matrix representations, then $B=cI$ for some $c\in ℂ\text{.}$

 Proof. Let $V={ℂ}^{{d}_{1}}$ and $W={ℂ}^{{d}_{2}}$ represent the corresponding $A\text{-modules}$ of these representations, and assume $B\ne 0\text{.}$ Then $B$ represents a linear transformation $W⟶V$ given by $w↦Bw\text{.}$ The condition $V\left(a\right)B=BW\left(a\right)$ becomes $B\left(a·w\right)=a·B\left(w\right),$ i.e. $B$ is an $A\text{-module}$ homomorphism. Since $B\ne 0,$ the simplicity of $W$ forces $\text{ker} B=0,$ and the simplicity of $V$ forces $\text{im} B=V\text{.}$ Therefore $B$ is an isomorphism of modules, hence ${d}_{1}={d}_{2}$ and the original representations are equivalent. Now suppose $V=W\text{.}$ Observe that for all $c\in ℂ,$ $V\left(a\right)\left(B-cI\right)=\left(B-cI\right)V\left(a\right),$ so by the above $B-cI$ is either zero or an isomorphism. Let $c$ be an eigenvalue of $B$ (this requires that our field be algebraically closed); then $B-cI$ is not invertible, hence $B-cI=0,$ i.e. $B=cI\text{.}$ $\square$

We have ${W}^{\lambda }\left(A\right)\subseteq {M}_{{d}_{\lambda }}\left(ℂ\right)\text{;}$ to show that ${W}^{\lambda }\left(A\right)={M}_{{d}_{\lambda }}\left(ℂ\right),$ we will show that ${W}^{\lambda }\left(A\right)$ contains the basis of matrix units. Note that ${\sum }_{g\in G}{W}^{\lambda }\left({g}^{*}\right){E}_{i,m}{W}^{\lambda }\left(g\right)$ commutes with ${W}^{\lambda }\left(A\right)$ by the lemma of last time. By Schur’s lemma (with $V=W={W}^{\lambda }\text{),}$ this element must be of the form $c{I}_{{d}_{\lambda }}$ for some $c\in ℂ$ (possibly zero). We calculate its trace, using the trace property:

$$\text{tr}\left(\sum_{g\in G} W^{\lambda}\left(g^{*}\right) E_{i,m}\, W^{\lambda}(g)\right) = \text{tr}\left(\sum_{g\in G} E_{i,m}\, W^{\lambda}(g)\, W^{\lambda}\left(g^{*}\right)\right) = \text{tr}\left(E_{i,m}\, W^{\lambda}\Big(\underbrace{\sum_{g\in G} g g^{*}}_{1}\Big)\right) = \text{tr}\left(E_{i,m}\right) = \delta_{i,m}.$$

On the other hand, $\text{tr}\left(c{I}_{{d}_{\lambda }}\right)=c{d}_{\lambda },$ so $c=0$ if $i\ne m$ and $c=1/{d}_{\lambda }$ if $i=m\text{.}$
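This trace computation can be verified numerically. The sketch below assumes the group-algebra dual basis $g^{*}=g^{-1}/|G|$ and uses the 2-dimensional irreducible representation of $S_3$, realized as the dihedral group of the triangle (a standard realization, chosen here only as a check and not taken from the lecture): averaging a matrix unit over the group gives $cI$ with $c=\delta_{i,m}/d_\lambda$.

```python
import numpy as np

# 2-dimensional irreducible representation of S_3 as the symmetries of the
# triangle: a rotation r by 120 degrees and a reflection s generate all 6
# elements of the group
co, si = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[co, -si], [si, co]])
s = np.array([[1.0, 0.0], [0.0, -1.0]])
G = [np.eye(2), r, r @ r, s, r @ s, r @ r @ s]
d = 2

# with g* = g^{-1}/|G|, the averaged matrix unit equals c I, c = delta_{i,m}/d
for i in range(d):
    for m in range(d):
        E = np.zeros((d, d))
        E[i, m] = 1.0
        avg = sum(np.linalg.inv(g) @ E @ g for g in G) / len(G)
        c = (1.0 if i == m else 0.0) / d
        assert np.allclose(avg, c * np.eye(d))
```

The scalar value is forced by the trace argument above: conjugation and averaging preserve the trace, so the scalar must be $\text{tr}(E_{i,m})/d_\lambda$.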

Continued in next lecture.

## Notes and References

This is a copy of lectures in Representation Theory given by Arun Ram, compiled by Tom Halverson, Rob Leduc and Mark McKinzie.