# Examples in representation theory

## Basic theory

1. Let $A$ be an algebra of $d\times d$ matrices. Since all matrices in $A$ commute with all elements of $\overline{A},$ $A\subseteq\overline{\overline{A}}.$ Also, $\overline{I_n(A)}=M_n(\overline{A})$ and $\overline{M_n(A)}=I_n(\overline{A}).$ Hence $\overline{\overline{I_n(A)}}=\overline{M_n(\overline{A})}=I_n(\overline{\overline{A}}).$
2. Schur's Lemma. Let $W_1$ and $W_2$ be irreducible representations of $A$ of dimensions $d_1$ and $d_2.$ If $B$ is a $d_1\times d_2$ matrix such that $W_1(a)B=BW_2(a)$ for all $a\in A,$ then either (a) $W_1\ncong W_2$ and $B=0$, or (b) $W_1\cong W_2$, and if $W_1=W_2$ then $B=cI_{d_1}$ for some $c\in\mathbb{C}.$
Proof. $B$ determines a linear transformation $B\colon W_1\to W_2$. Since $Ba=aB$ for all $a\in A$ we have that $B(aw_1)=(Ba)w_1=(aB)w_1=a(Bw_1),$ for all $a\in A$ and $w_1\in W_1.$ Thus $B$ is an $A$-module homomorphism. $\ker B$ and $\operatorname{im}B$ are submodules of $W_1$ and $W_2$ respectively, and are therefore equal either to $0$ or to $W_1$ and $W_2$ respectively. If $\ker B=W_1$ or $\operatorname{im}B=0$ then $B=0$. In the remaining case $B$ is a bijection, and thus an isomorphism between $W_1$ and $W_2.$ In this case we have that $d_1=d_2,$ so the matrix $B$ is square and invertible. Now suppose that $W_1=W_2$ and let $c$ be an eigenvalue of $B.$ Then the matrix $cI_{d_1}-B$ satisfies $W_1(a)(cI_{d_1}-B)=(cI_{d_1}-B)W_1(a)$ for all $a\in A.$ The preceding argument shows that $cI_{d_1}-B$ is either invertible or $0.$ But since $c$ is an eigenvalue of $B$, $\det(cI_{d_1}-B)=0.$ Thus $cI_{d_1}-B=0.$ $\square$
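As a quick machine check (my own illustration, not part of the original argument, using the 2-dimensional irreducible representation of $S_3$ as an assumed example), one can solve for all matrices $B$ commuting with a set of generators and watch Schur's lemma force $B$ to be scalar:

```python
import numpy as np

# Generators of the 2-dimensional irreducible representation W of S_3:
# a 3-cycle acts as rotation by 120 degrees, a transposition as a reflection.
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])          # W(3-cycle)
t = np.array([[1.0, 0.0], [0.0, -1.0]])  # W(transposition)

# W(g)B = BW(g) is linear in the entries of B; with row-major vec(B),
# vec(gB - Bg) = (kron(g, I) - kron(I, g^T)) vec(B).
I2 = np.eye(2)
M = np.vstack([np.kron(g, I2) - np.kron(I2, g.T) for g in (r, t)])

# The nullspace of M is the space of intertwiners B.
_, sing, Vt = np.linalg.svd(M)
intertwiners = Vt[sing < 1e-10]          # basis of the nullspace
B = intertwiners[0].reshape(2, 2)        # should be a multiple of the identity
```

The nullspace is one dimensional and spanned by the identity, matching case (b); for a reducible representation it would be larger.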

3. Suppose that $V$ is a completely decomposable representation of an algebra $A$ and that $V\cong\oplus_\lambda W_\lambda^{\oplus m_\lambda}$ where the $W_\lambda$ are nonisomorphic irreducible representations of $A.$ Schur's lemma shows that the $A$-homomorphisms from $W_\lambda$ to $V$ form a vector space $\mathrm{Hom}_A(W_\lambda,V)\cong\mathbb{C}^{\oplus m_\lambda}.$ The multiplicity of the irreducible representation $W_\lambda$ in $V$ is $m_\lambda=\dim\mathrm{Hom}_A(W_\lambda,V).$
4. Suppose that $V$ is a completely decomposable representation of an algebra $A$ and that $V\cong\oplus_\lambda W_\lambda^{\oplus m_\lambda}$ where the $W_\lambda$ are nonisomorphic irreducible representations of $A$, and let $\dim W_\lambda=d_\lambda.$ Then $V(A)\cong\Big(\oplus_\lambda W_\lambda^{\oplus m_\lambda}\Big)(A)\cong\oplus_\lambda I_{m_\lambda}(W_\lambda(A))\cong\oplus_\lambda W_\lambda(A).$ If we view elements of $\oplus_\lambda I_{m_\lambda}(W_\lambda(A))$ as block diagonal matrices with $m_\lambda$ blocks of size $d_\lambda\times d_\lambda$ for each $\lambda$, then by using Ex 1 and Schur's lemma we get that $\overline{V(A)}\cong\overline{\oplus_\lambda I_{m_\lambda}(W_\lambda(A))}=\oplus_\lambda M_{m_\lambda}\big(\overline{W_\lambda(A)}\big)=\oplus_\lambda M_{m_\lambda}(\mathbb{C}I_{d_\lambda}).$
5. Let $V$ be an $A$-module and let $p$ be an idempotent of $A.$ Then $pV$ is a subspace of $V$ and the action of $p$ on $V$ is a projection from $V$ onto $pV.$ If $p_1,p_2\in A$ are orthogonal idempotents of $A$ then the subspaces $p_1V$ and $p_2V$ of $V$ intersect trivially, since if $p_1v=p_2v'$ for some $v,v'\in V$ then $p_1v=p_1p_1v=p_1p_2v'=0.$ So the sum $p_1V+p_2V=p_1V\oplus p_2V$ is direct, and if $p_1+p_2=1$ then $V=p_1V\oplus p_2V.$
6. Let $p$ be an idempotent in $A$ and suppose that for every $a\in A$, $pap=kp$ for some constant $k\in\mathbb{C}$ (depending on $a$). If $p$ is not minimal then $p=p_1+p_2,$ where $p_1,p_2\in A$ are nonzero idempotents such that $p_1p_2=p_2p_1=0.$ Then $p_1=pp_1p=kp$ for some constant $k\in\mathbb{C}.$ This implies that $p_1=p_1p_1=(kp)p_1=kp_1,$ giving that either $k=1$ or $p_1=0.$ If $k=1$ then $p_1=kp=p$, so $p_2=0.$ Either way the decomposition is trivial, so $p$ is minimal.
7. Let $A$ be a finite dimensional algebra and suppose that $z\in A$ is an idempotent of $A.$ If $z$ is not minimal then $z=p_1+p_2$ where $p_1$ and $p_2$ are orthogonal idempotents of $A.$ If any idempotent in this sum is not minimal we can decompose it into a sum of orthogonal idempotents. We continue this process until we have decomposed $z$ as a sum of minimal orthogonal idempotents. At any particular stage in this process $z$ is expressed as a sum of orthogonal idempotents, $z=\sum_i p_i.$ So $zA=\sum_i p_iA.$ None of the spaces $p_iA$ is $0$, since $p_i=p_i\cdot 1\in p_iA$, and the spaces $p_iA$ are all mutually orthogonal. Thus, since $zA$ is finite dimensional, it takes only a finite number of steps to decompose $z$ into minimal idempotents. A partition of unity is a decomposition of $1$ into minimal orthogonal idempotents.
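For a concrete instance (my own illustration, taking the group algebra $\mathbb{C}(\mathbb{Z}/3\mathbb{Z})$ realised inside its regular representation), the three minimal idempotents cut out by the characters give a partition of unity:

```python
import numpy as np

# g is the image of a generator of Z/3Z in the regular representation
# (the cyclic shift matrix); w is a primitive cube root of unity.
g = np.roll(np.eye(3), 1, axis=0)
w = np.exp(2j * np.pi / 3)

# Minimal idempotents p_j = (1/3) sum_k w^{-jk} g^k, one per character of Z/3Z.
p = [sum(w ** (-j * k) * np.linalg.matrix_power(g, k) for k in range(3)) / 3
     for j in range(3)]
```

Each $p_j$ is idempotent, distinct $p_i,p_j$ are orthogonal, and $p_0+p_1+p_2=1$.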

## Finite dimensional algebras

1. Let $\mathcal{A}=\{a_i\}$ and $\mathcal{B}=\{b_i\}$ be two bases of $A$ and let $\mathcal{A}^*=\{a_i^*\}$ and $\mathcal{B}^*=\{b_i^*\}$ be the associated dual bases with respect to a nondegenerate trace $\vec{t}$ on $A.$ Then $b_i=\sum_j s_{ij}a_j \quad\text{and}\quad b_i^*=\sum_j t_{ij}a_j^*,$ for some constants $s_{ij}$ and $t_{ij}.$ Then $\delta_{ij}=\vec{t}(b_ib_j^*)=\vec{t}\Big(\sum_k s_{ik}a_k\sum_l t_{jl}a_l^*\Big)=\sum_{k,l}s_{ik}t_{jl}\,\vec{t}(a_ka_l^*)=\sum_{k,l}s_{ik}t_{jl}\delta_{kl}=\sum_k s_{ik}t_{jk}.$ In matrix notation this says that the matrices $S=\|s_{ij}\|$ and $T=\|t_{ij}\|$ satisfy $ST^t=I,$ and hence also $S^tT=I.$ Then, in the setting of Proposition 2.6, $\sum_i V_1(b_i)CV_2(b_i^*)=\sum_i\Big(\sum_j s_{ij}V_1(a_j)\Big)C\Big(\sum_k t_{ik}V_2(a_k^*)\Big)=\sum_{j,k}\Big(\sum_i s_{ij}t_{ik}\Big)V_1(a_j)CV_2(a_k^*)=\sum_{j,k}\delta_{jk}V_1(a_j)CV_2(a_k^*)=\sum_j V_1(a_j)CV_2(a_j^*).$ This shows that the matrix $[C]$ of Proposition 2.6 is independent of the choice of basis.
2. Let $A$ be the algebra of elements of the form $c_1+c_2e$, $c_1,c_2\in\mathbb{C},$ where $e^2=0.$ $A$ is commutative, and $\vec{t}$ defined by $\vec{t}(c_1+c_2e)=c_1+c_2$ is a nondegenerate trace on $A.$ The regular representation $\vec{A}$ of $A$ is not completely decomposable: the subspace $\mathbb{C}\vec{e}\subseteq\vec{A}$ is invariant but no complementary subspace is. The trace of the regular representation is given explicitly by $\mathrm{tr}(1)=2$ and $\mathrm{tr}(e)=0,$ and $\mathrm{tr}$ is degenerate. There is no matrix representation of $A$ whose trace is given by $\vec{t}.$
3. Suppose $G$ is a finite group and that $A=\mathbb{C}G$ is its group algebra. The group elements $g\in G$ form a basis of $A.$ So, using 2.7, the trace of the regular representation can be expressed in the form $\mathrm{tr}(a)=\sum_{g\in G}(ag)\big|_g=\sum_{g\in G}a\big|_1=|G|\,a\big|_1,$ where $1$ denotes the identity in $G$ and $a|_g$ denotes the coefficient of $g$ in $a.$ Since $\mathrm{tr}(g^{-1}g)=|G|\ne 0$ for each $g\in G,$ $\mathrm{tr}$ is nondegenerate. If we set $\vec{t}(a)=a|_1$ then $\vec{t}$ is a trace on $A$ and $\{g^{-1}\}_{g\in G}$ is the dual basis to the basis $\{g\}_{g\in G}$ with respect to this trace.
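These identities are easy to verify by machine; the following sketch (mine, taking $G=S_3$ as an assumed example) builds the regular representation from permutations and checks both the trace formula and the dual basis property:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))                       # S_3 as tuples
mult = lambda p, q: tuple(p[q[i]] for i in range(3))   # (p.q)(i) = p(q(i))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
e = (0, 1, 2)                                          # identity of S_3
idx = {h: i for i, h in enumerate(G)}

def rho(g):            # left-regular representation matrix of g
    m = np.zeros((6, 6))
    for h in G:
        m[idx[mult(g, h)], idx[h]] = 1.0
    return m

# tr(a) = |G| * (coefficient of the identity in a), on basis elements:
traces = {g: np.trace(rho(g)) for g in G}

# t(a) = a|_1 pairs the basis {g} with the dual basis {g^{-1}}:
# the coefficient of the identity in g h^{-1} is delta_{g,h}.
pairing = {(g, h): 1.0 if mult(g, inv(h)) == e else 0.0 for g in G for h in G}
```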
4. Let $\vec{t}$ be the trace of a faithful realisation $\phi$ of an algebra $A$ (ie for each $a\in A,$ $\vec{t}(a)$ is given by the standard trace of $\phi(a)$, where $\phi$ is an injective homomorphism $\phi\colon A\to M_d(\mathbb{C})$). Let $\sqrt{A}=\{a\in A\ \mid\ \vec{t}(ab)=0\quad\text{for all}\quad b\in A\}.$ $\sqrt{A}$ is an ideal of $A.$ Let $a\in\sqrt{A}.$ Then $\vec{t}(a^{k-1}a)=\vec{t}(a^k)=0$ for all $k.$ If $\lambda_1,\dots,\lambda_d$ are the eigenvalues of $\phi(a)$ then $\vec{t}(a^k)=\lambda_1^k+\lambda_2^k+\dots+\lambda_d^k=p_k(\lambda)=0$ for all $k>0$, where $p_k$ denotes the $k$-th power symmetric function [Mac]. Since the power symmetric functions generate the ring of symmetric functions, the elementary symmetric functions satisfy $e_k(\lambda)=0$ for $k>0$ ([Mac] p17, 2.14). Since the characteristic polynomial of $\phi(a)$ can be written in the form $\mathrm{char}_{\phi(a)}(t)=t^d-e_1(\lambda)t^{d-1}+e_2(\lambda)t^{d-2}-\dots\pm e_d(\lambda),$ we get that $\mathrm{char}_{\phi(a)}(t)=t^d.$ But then the Cayley–Hamilton theorem implies that $\phi(a)^d=0.$ Since $\phi$ is injective we have that $a^d=0.$ So $a$ is nilpotent. Let $J$ be an ideal of nilpotent elements and suppose that $a\in J.$ For every element $b\in A$, $ba\in J$ and $ba$ is nilpotent. This implies that $\phi(ba)$ is nilpotent. By noting that a nilpotent matrix has all diagonal entries zero in Jordan form, we see that $\vec{t}(ba)=0.$ Thus $a\in\sqrt{A}.$ So $\sqrt{A}$ can be defined as the largest ideal of nilpotent elements.

Furthermore, since the regular representation of $A$ is always faithful, $\sqrt{A}$ is equal to the set $\{a\in A\ \mid\ \mathrm{tr}(ab)=0\quad\text{for all}\quad b\in A\},$ where $\mathrm{tr}$ is the trace of the regular representation of $A.$
5. Let $\mathcal{A}$ be the basis and $\vec{t}$ the trace of a faithful realisation of an algebra $A$ as in Ex 4, and let $G(\mathcal{A})$ be the Gram matrix with respect to the basis $\mathcal{A}$ and the trace $\vec{t}$ as given by 2.2 and 2.3. If $\mathcal{B}$ is another basis of $A$ then $G(\mathcal{B})=P^tG(\mathcal{A})P,$ where $P$ is the change of basis matrix from $\mathcal{A}$ to $\mathcal{B}.$ So the rank of the Gram matrix is independent of the choice of the basis $\mathcal{A}.$

Choose a basis $\{a_1,a_2,\dots,a_k\}$ of $\sqrt{A}$ ($\sqrt{A}$ as defined in Ex 4) and extend this basis to a basis $\{a_1,a_2,\dots,a_k,b_1,\dots,b_s\}$ of $A.$ The Gram matrix with respect to this basis is of the form $\begin{pmatrix}0&0\\0&G(B)\end{pmatrix},$ where $G(B)$ denotes the Gram matrix on $\{b_1,b_2,\dots,b_s\}.$ So the rank of the Gram matrix is certainly less than or equal to $s$.

Suppose that the rows of $G(B)$ are linearly dependent. Then for some constants $c_1,c_2,\dots,c_s,$ not all zero, $c_1\vec{t}(b_1b_i)+c_2\vec{t}(b_2b_i)+\dots+c_s\vec{t}(b_sb_i)=0$ for all $1\le i\le s.$ So $\vec{t}\Big(\Big(\sum_j c_jb_j\Big)b_i\Big)=0,\quad\text{for all } i.$ This implies that $\sum_j c_jb_j\in\sqrt{A},$ contradicting the construction of the $b_j.$ So the rows of $G(B)$ are linearly independent.

Thus the rank of the Gram matrix is $s$; equivalently, the corank of the Gram matrix of $A$ is equal to the dimension of the radical $\sqrt{A}.$ Thus the trace $\mathrm{tr}$ of the regular representation of $A$ is nondegenerate if and only if $\sqrt{A}=(0).$
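For the algebra of Ex 2 this can be checked directly; the sketch below (my own, using the regular representation of $A=\mathbb{C}[e]/(e^2)$ on the basis $\{1,e\}$) finds that the Gram matrix has corank 1, the dimension of $\sqrt{A}=\mathbb{C}e$:

```python
import numpy as np

# Regular representation of A = C[e]/(e^2) on the basis {1, e}:
# 1 acts as the identity, and e.1 = e, e.e = 0.
one = np.eye(2)
eps = np.array([[0.0, 0.0], [1.0, 0.0]])
basis = [one, eps]

# Gram matrix of the regular-representation trace: G_ij = tr(b_i b_j).
gram = np.array([[np.trace(a @ b) for b in basis] for a in basis])
corank = len(basis) - np.linalg.matrix_rank(gram)
```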

6. Let $W$ be an irreducible representation of an arbitrary algebra $A$ and let $d=\dim W.$ Denote $W(A)$ by $A_W.$ Note that $W$ is also an irreducible representation of $A_W$ ($W(a)=a$ for all $a\in A_W$).

We show that $\mathrm{tr}$ is nondegenerate on $A_W,$ ie that if $a\in A_W,$ $a\ne 0$, then there exists $b\in A_W$ such that $\mathrm{tr}(ba)\ne 0.$ Since $a$ is a nonzero matrix there exists some $w\in W$ such that $aw\ne 0.$ Now $Aaw\subseteq W$ is an $A$-invariant subspace of $W$, and it is not $0$ since $aw\ne 0.$ Thus $Aaw=W.$ So there exists some $b\in A_W$ such that $baw=w.$ Then $(ba)^kw=w$ for all $k,$ so $ba$ is not nilpotent. If $\mathrm{tr}(b'a)=0$ for every $b'\in A_W$ then $a\in\sqrt{A_W}$ (Ex 4), so $ba$ would be nilpotent, a contradiction. So $\mathrm{tr}$ is nondegenerate on $A_W.$ This means that $A_W=\oplus_\lambda M_{d_\lambda}(\mathbb{C})$ for some $d_\lambda.$ But since by Schur's lemma $\overline{A_W}=\mathbb{C}I_d,$ where $d=\dim W,$ we see that $W(A)=A_W=M_d(\mathbb{C}).$
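This can be seen numerically; the following check (mine, again using the 2-dimensional irreducible representation of $S_3$ as an assumed example) confirms that the six matrices $W(g)$ span all of $M_2(\mathbb{C})$:

```python
import numpy as np

# The 2-dimensional irreducible representation of S_3, generated by a
# rotation (3-cycle) and a reflection (transposition).
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
r = np.array([[c, -s], [s, c]])
t = np.array([[1.0, 0.0], [0.0, -1.0]])

images = [np.eye(2), r, r @ r, t, t @ r, t @ r @ r]   # W(g) for the six g in S_3
span_dim = np.linalg.matrix_rank(np.array([m.flatten() for m in images]))
```

Since $\dim M_2(\mathbb{C})=4$, a span of dimension 4 means $W(A)=M_2(\mathbb{C})$.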

7. Let $A$ be a finite dimensional algebra and let $\vec{A}$ denote the regular representation of $A.$ The set $\vec{A}$ is the same as the set $A,$ but we distinguish elements of $\vec{A}$ by writing $\vec{a}$ for $a\in A.$

A linear transformation $B$ of $\vec{A}$ is in the centraliser of $\vec{A}$ if for every element $a\in A$ and $\vec{x}\in\vec{A},$ $B(a\vec{x})=aB\vec{x}.$ Let $B\vec{1}=\vec{b}.$ Then $B\vec{a}=B(a\vec{1})=aB\vec{1}=a\vec{b}=\overrightarrow{ab}.$ So $B$ acts on $\vec{a}\in\vec{A}$ by right multiplication by $b.$ Conversely, it is easy to see that the action of right multiplication commutes with the action of left multiplication, since $(a\vec{x})b=a(\vec{x}b),$ for all $a,b\in A$ and $\vec{x}\in\vec{A}.$ So the centraliser algebra of the regular representation is the algebra of matrices determined by the action of right multiplication by elements of $A.$
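A direct computation (my illustration, with $G=S_3$ as an assumed example and $A=\mathbb{C}G$) confirms both directions: right multiplications commute with the left-regular action, and they exhaust its centraliser, since the commutant has dimension $|G|=\sum_\lambda d_\lambda^2=6$:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))
mult = lambda p, q: tuple(p[q[i]] for i in range(3))
idx = {h: i for i, h in enumerate(G)}

def L(g):              # left multiplication by g on CG
    m = np.zeros((6, 6))
    for h in G:
        m[idx[mult(g, h)], idx[h]] = 1.0
    return m

def R(g):              # right multiplication by g on CG
    m = np.zeros((6, 6))
    for h in G:
        m[idx[mult(h, g)], idx[h]] = 1.0
    return m

# A matrix commuting with L(g) for generators g of S_3 commutes with every
# L(a); the commutant dimension is 36 minus the rank of this linear system.
gens = [(1, 0, 2), (1, 2, 0)]
I6 = np.eye(6)
M = np.vstack([np.kron(L(g), I6) - np.kron(I6, L(g).T) for g in gens])
commutant_dim = 36 - np.linalg.matrix_rank(M)
```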

## Matrix units and characters

1. If $A$ is commutative and semisimple then all irreducible representations of $A$ are one dimensional: $A\cong\oplus_\lambda M_{d_\lambda}(\mathbb{C})$ can only be commutative if every $d_\lambda=1.$ This is not necessarily true for algebras over fields which are not algebraically closed (since Schur's lemma takes a different form).
2. If $R$ is a ring with identity and $M_n(R)$ denotes the $n\times n$ matrices with entries in $R,$ then the ideals of $M_n(R)$ are of the form $M_n(I)$ where $I$ is an ideal of $R.$
3. If $V$ is a vector space over $\mathbb{C}$ and $V^*$ is the space of $\mathbb{C}$-valued linear functions on $V$ then $\dim V^*=\dim V.$ If $B$ is a basis of $V$ then the functions $\delta_b$, $b\in B,$ determined by $\delta_b(b_i)=\delta_{bb_i}$ for $b_i\in B,$ form a basis of $V^*.$ If $A$ is a semisimple algebra isomorphic to $\oplus_{\lambda\in\tilde{A}}M_{d_\lambda}(\mathbb{C}),$ $\tilde{A}$ an index set for the irreducible representations $W_\lambda$ of $A,$ then $\dim A=\sum_{\lambda\in\tilde{A}}d_\lambda^2,$ and the functions $W^\lambda_{ij}$ ($W^\lambda_{ij}(a)$ the $(i,j)$-th entry of the matrix $W_\lambda(a)$, $a\in A$) on $A$ form a basis of $A^*.$ The $W^\lambda_{ij}$ are simply the functions $\delta_{e^\lambda_{ij}}$ for an appropriate set of matrix units $\{e^\lambda_{ij}\}$ of $A.$ This shows that the coordinate functions of the irreducible representations are linearly independent. Since $\chi^\lambda=\sum_i W^\lambda_{ii},$ the irreducible characters are also linearly independent.
4. Let $A$ be a semisimple algebra. Virtual characters are elements of the vector space $R(A)$ consisting of the $\mathbb{C}$-linear span of the irreducible characters of $A.$ We know that there is a one-to-one correspondence between the minimal central idempotents of $A$ and the irreducible characters of $A.$ Since the minimal central idempotents of $A$ form a basis of the center $Z(A)$ of $A,$ we can define a vector space isomorphism $\phi\colon Z(A)\to R(A)$ by setting $\phi(z_\lambda)=\chi^\lambda$ for each $\lambda\in\tilde{A}$ and extending linearly to all of $Z(A).$

Given a nondegenerate trace $\vec{t}$ on $A$ with trace vector $(t_\lambda)$ it is more natural to define $\phi$ by setting $\phi(z_\lambda/t_\lambda)=\chi^\lambda.$ Then, for $z\in Z(A),$ $\phi(z)(a)=\vec{t}(za),$ since $\phi(z_\mu/t_\mu)(a)=\vec{t}\big((z_\mu/t_\mu)a\big)=\frac{1}{t_\mu}\vec{t}(z_\mu a)=\frac{1}{t_\mu}\,t_\mu\chi^\mu(a)=\chi^\mu(a).$

5. If $A$ is a semisimple algebra isomorphic to $\oplus_{\lambda\in\tilde{A}}M_{d_\lambda}(\mathbb{C}),$ $\tilde{A}$ an index set for the irreducible representations $W_\lambda$ of $A,$ then the right regular representation decomposes as $\vec{A}\cong\oplus_{\lambda\in\tilde{A}}W_\lambda^{\oplus d_\lambda}.$ If matrix units $e^\lambda_{ij}$ are given by (3.7) then $\mathrm{tr}(e^\lambda_{ii})=\sum_{\mu\in\tilde{A}}d_\mu\chi^\mu(e^\lambda_{ii})=d_\lambda.$ So the trace of the regular representation of $A,$ $\mathrm{tr},$ is given by the trace vector $\vec{t}=(t_\lambda),$ where $t_\lambda=d_\lambda$ for each $\lambda\in\tilde{A}.$
6. Let $A$ be a semisimple algebra and let $B^*=\{g^*\}$ be the dual basis to a basis $B=\{g\}$ of $A$ with respect to the trace of the regular representation of $A.$ We can define an inner product on the space $R(A)$ of virtual characters (Ex 4) of $A$ by $\langle\chi,\chi'\rangle=\sum_{g\in B}\chi(g)\chi'(g^*).$ The irreducible characters of $A$ are orthonormal with respect to this inner product. Note that if $\chi,\chi'$ are characters of representations $V$ and $V'$ respectively, then, by Ex 4 and Theorem 3.9, $\langle\chi,\chi'\rangle=\dim\mathrm{Hom}_A(V,V').$ If $\chi^\lambda$ is the character of the irreducible representation $W_\lambda$ of $A$ then $\langle\chi^\lambda,\chi\rangle$ gives the multiplicity of $W_\lambda$ in the representation $V$, as in Section 1, Ex 3.
7. Let $A$ be a semisimple algebra and $\vec{t}=(t_\lambda)$ a nondegenerate trace on $A.$ Let $B$ be a basis of $A$ and for each $g\in B$ let $g^*$ denote the element of the dual basis to $B$ with respect to the trace $\vec{t},$ so that $\vec{t}(gg^*)=1.$ For each $a\in A$ define $[a]=\sum_{g\in B}gag^*.$ By Section 2, Ex 1, the element $[a]$ is independent of the choice of the basis $B.$ By using a set of matrix units $e^\lambda_{ij}$ of $A$ we get $[a]=\sum_{i,j,\lambda}\frac{1}{t_\lambda}e^\lambda_{ij}ae^\lambda_{ji}=\sum_{i,j,\lambda}\frac{1}{t_\lambda}a^\lambda_{jj}e^\lambda_{ii}=\sum_\lambda\frac{1}{t_\lambda}\Big(\sum_j a^\lambda_{jj}\Big)\Big(\sum_i e^\lambda_{ii}\Big)=\sum_\lambda\frac{1}{t_\lambda}\chi^\lambda(a)\,z_\lambda.$ So $\chi^\lambda([a])=\frac{d_\lambda}{t_\lambda}\chi^\lambda(a).$ By 3.9, $\sum_{g\in B}\frac{t_\mu^2}{d_\mu}\chi^\mu(g^*)[g]=\sum_\lambda\sum_{g\in B}\frac{t_\mu^2}{d_\mu}\frac{1}{t_\lambda}\chi^\lambda(g)\chi^\mu(g^*)\,z_\lambda=\sum_\lambda\delta_{\lambda\mu}z_\lambda=z_\mu.$ Thus the $[g]$, $g\in B,$ span the center of $A.$
8. Let $G$ be a finite group and let $A=\mathbb{C}G.$ Let $\vec{t}$ be the trace on $A$ given by $\vec{t}(a)=a|_1,$ where $1$ is the identity in $G.$ By Ex 5 and Section 2, Ex 3, the trace vector of $\vec{t}$ is given by $t_\lambda=\frac{d_\lambda}{|G|},$ where $d_\lambda$ is the dimension of the irreducible representation of $G$ corresponding to $\lambda.$

If $h\in G,$ then the element $[h]=\sum_{g\in G}ghg^*=\sum_{g\in G}ghg^{-1}$ is a multiple of the sum of the elements of $G$ that are conjugate to $h.$ Let $\Lambda$ be an index set for the conjugacy classes of $G$ and for each $\lambda\in\Lambda$ let $C_\lambda$ denote the sum of the elements in the conjugacy class indexed by $\lambda.$ The $C_\lambda$ are linearly independent elements of $\mathbb{C}G.$ Furthermore, by Ex 7 they span the center of $\mathbb{C}G.$ Thus $\Lambda$ must also be an index set for the irreducible representations of $G.$ So we see that the irreducible representations of the group algebra of a finite group are indexed by the conjugacy classes.
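The class sums and their centrality can be checked mechanically; this sketch (mine, for $G=S_3$) computes the conjugacy classes and verifies that each class sum commutes with the whole left-regular representation:

```python
import numpy as np
from itertools import permutations

G = list(permutations(range(3)))
mult = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))
idx = {h: i for i, h in enumerate(G)}

def L(g):              # left-regular representation
    m = np.zeros((6, 6))
    for h in G:
        m[idx[mult(g, h)], idx[h]] = 1.0
    return m

# Conjugacy classes of S_3 and the corresponding class sums C_lambda.
classes, seen = [], set()
for h in G:
    if h not in seen:
        cls = {mult(mult(g, h), inv(g)) for g in G}
        classes.append(sorted(cls))
        seen |= cls
class_sums = [sum(L(h) for h in cls) for cls in classes]
```

There are three classes (sizes 1, 3, 2), matching the three irreducible representations of $S_3$.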

9. Let $G$ be a finite group and let $C_\lambda$ denote the conjugacy classes of $G.$ Note that since $\mathrm{tr}\big(V(hgh^{-1})\big)=\mathrm{tr}\big(V(h)V(g)V(h)^{-1}\big)=\mathrm{tr}\big(V(g)\big)$ for any representation $V$ of $G$ and all $g,h\in G,$ characters of $G$ are constant on conjugacy classes. Using Theorem 3.8, $|G|\,\delta_{\lambda\mu}=\sum_g\chi^\lambda(g)\chi^\mu(g^{-1})=\sum_\rho\sum_{g\in C_\rho}\chi^\lambda(g)\chi^\mu(g^{-1})=\sum_\rho|C_\rho|\,\chi^\lambda(\rho)\chi^\mu(\rho'),$ where $\rho'$ is such that $C_{\rho'}$ is the conjugacy class which contains the inverses of the elements in $C_\rho.$ Define matrices $\Xi=\|\Xi_{\lambda\rho}\|$ and $\Xi'=\|\Xi'_{\lambda\rho}\|$ by $\Xi_{\lambda\rho}=\chi^\lambda(\rho)$ and $\Xi'_{\lambda\rho}=|C_\rho|\,\chi^\lambda(\rho').$ By Ex 8 these matrices are square. In matrix notation the above is $\Xi\,\Xi'^t=|G|\,I,$ but then we also have that $\Xi'^t\,\Xi=|G|\,I,$ or equivalently that $\sum_\lambda\chi^\lambda(\rho')\chi^\lambda(\tau)=\frac{|G|}{|C_\rho|}\,\delta_{\rho\tau}.$
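As a worked instance (mine), the character table of $S_3$, with classes of sizes 1, 3, 2 (identity, transpositions, 3-cycles; every element is conjugate to its inverse, so $\rho'=\rho$), satisfies both orthogonality relations:

```python
import numpy as np

sizes = np.array([1, 3, 2])          # class sizes |C_rho| for S_3
Xi = np.array([[1,  1,  1],          # trivial character
               [1, -1,  1],          # sign character
               [2,  0, -1]])         # character of the 2-dimensional irreducible
order = sizes.sum()                  # |G| = 6

# Row orthogonality: sum_rho |C_rho| chi^lambda(rho) chi^mu(rho) = |G| delta.
rows = Xi @ np.diag(sizes) @ Xi.T
# Column orthogonality: sum_lambda chi^lambda(rho) chi^lambda(tau)
#                       = (|G|/|C_rho|) delta_{rho,tau}.
cols = Xi.T @ Xi
```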
10. This example gives a generalisation of the preceding example. Let $A$ be a semisimple algebra and suppose that $B$ is a basis of $A$ and that there is a partition of $B$ into classes such that if $b$ and $b'\in B$ are in the same class then for every $\lambda\in\tilde{A}$, $\chi^\lambda(b)=\chi^\lambda(b').$ The fact that the characters are linearly independent implies that the number of classes must be the same as the number of irreducible characters $\chi^\lambda.$ Thus we can index the classes of $B$ by the elements of $\tilde{A}.$ Assume that we have fixed such a correspondence and denote the classes of $B$ by $C_\lambda$, $\lambda\in\tilde{A}.$

Let $\vec{t}$ be a nondegenerate trace on $A$ and let $G$ be the Gram matrix with respect to the basis $B$ and the trace $\vec{t}.$ If $g\in B,$ let $g^*$ denote the element of the dual basis to $B$, with respect to the trace $\vec{t}$, such that $\vec{t}(gg^*)=1.$ Let $G^{-1}=C=\|c_{gg'}\|$ and recall that $g^*=\sum_{g'\in B}c_{gg'}g'.$ Then $\frac{d_\lambda}{t_\lambda}\delta_{\lambda\mu}=\sum_{g\in B}\chi^\lambda(g)\chi^\mu(g^*)=\sum_{g\in B}\chi^\lambda(g)\chi^\mu\Big(\sum_{g'\in B}c_{gg'}g'\Big)=\sum_{g,g'\in B}\chi^\lambda(g)\,c_{gg'}\,\chi^\mu(g').$ Collecting $g,g'\in B$ by class gives $\frac{d_\lambda}{t_\lambda}\delta_{\lambda\mu}=\sum_{\rho,\tau}\chi^\lambda(\rho)\Big(\sum_{g\in C_\rho,\,g'\in C_\tau}c_{gg'}\Big)\chi^\mu(\tau),$ where $\chi^\lambda(\rho)$ denotes the value of the character $\chi^\lambda$ at elements of the class $C_\rho.$ Now define a matrix $\hat{C}=\|\hat{c}_{\rho\tau}\|$ with entries $\hat{c}_{\rho\tau}=\sum_{g\in C_\rho,\,g'\in C_\tau}c_{gg'},$ and let $\Xi=\|\Xi_{\lambda\rho}\|$ and $\Xi'=\|\Xi'_{\lambda\rho}\|$ be matrices given by $\Xi_{\lambda\rho}=\chi^\lambda(\rho)$ and $\Xi'_{\lambda\rho}=\frac{t_\lambda}{d_\lambda}\chi^\lambda(\rho).$ Note that all of these matrices are square. Then the above gives that $I=\Xi\,\hat{C}\,\Xi'^t.$ So $I=\hat{C}\,\Xi'^t\,\Xi,$ or equivalently $\delta_{\rho\tau}=\sum_{\sigma,\lambda}\hat{c}_{\rho\sigma}\frac{t_\lambda}{d_\lambda}\chi^\lambda(\sigma)\chi^\lambda(\tau)=\sum_\lambda\frac{t_\lambda}{d_\lambda}\Big(\sum_{g\in C_\rho}\sum_{g'\in B}c_{gg'}\chi^\lambda(g')\Big)\chi^\lambda(\tau)=\sum_{g\in C_\rho}\sum_\lambda\frac{t_\lambda}{d_\lambda}\chi^\lambda(g^*)\chi^\lambda(\tau).$

## Double centraliser nonsense

1. Let $G$ be a group and let $V$ and $W$ be two representations of $G.$ Define an action of $G$ on the vector space $V\otimes W$ by $g(v\otimes w)=(gv)\otimes(gw),$ for all $g\in G$, $v\in V$ and $w\in W$ (see also Section 5, Ex 4). In matrix form, the representation $V\otimes_d W$ is given by setting $(V\otimes_d W)(g)=V(g)\otimes W(g),$ for each $g\in G.$ Note, however, that if we extend this action to an action of $A=\mathbb{C}G$ on $V\otimes W,$ then for a general $a\in A,$ $a(v\otimes w)$ is not equal to $(av)\otimes(aw)$ and $(V\otimes_d W)(a)$ is not equal to $V(a)\otimes W(a).$
2. Theorem 4.6 gives that there is a one-to-one correspondence between minimal central idempotents $z^C_\lambda$ of $C$ and characters $\chi^\lambda_A$ of irreducible representations of $A$ appearing in the decomposition of $V.$ Let $\chi^\lambda_C$ be the irreducible characters of $C$ and for each $\lambda$ set $d^C_\lambda=\chi^\lambda_C(1),$ so that the $d^C_\lambda$ are the dimensions of the irreducible representations of $C.$ The Frobenius map is the map $F\colon Z(C)\to R(A),\qquad \frac{1}{d^C_\lambda}z^C_\lambda\mapsto\chi^\lambda_A.$ Let $t\colon C\otimes A\to\mathbb{C}$ be the trace of the action of $C\otimes A$ on the representation $V.$ By taking traces on each side of the isomorphism in Theorem 4.11 we have that $t(q\otimes a)=\sum_\lambda\chi^\lambda_C(q)\chi^\lambda_A(a).$ Let $\vec{t}_C=(t^C_\lambda)$ be a nondegenerate trace on $C$, let $B$ be a basis of $C$ and for each $g\in B$ let $g^*$ be the element of the dual basis to $B$ with respect to the trace $\vec{t}_C$ such that $\vec{t}_C(gg^*)=1.$ Then, for any $z\in Z(C),$ the center of $C,$ $F(z)=\sum_{g\in B}\vec{t}_C(zg^*)\,t(g,\cdot),$ since, using 3.8 and 3.9, $F\Big(\frac{z^C_\mu}{d^C_\mu}\Big)=\sum_g\frac{1}{d^C_\mu}\vec{t}_C(z^C_\mu g^*)\,t(g,\cdot)=\sum_g\frac{t^C_\mu}{d^C_\mu}\chi^\mu_C(g^*)\,t(g,\cdot)=\sum_g\frac{t^C_\mu}{d^C_\mu}\chi^\mu_C(g^*)\sum_\lambda\chi^\lambda_C(g)\chi^\lambda_A(\cdot)=\sum_\lambda\frac{t^C_\mu}{d^C_\mu}\,\delta_{\mu\lambda}\,\frac{d^C_\lambda}{t^C_\lambda}\,\chi^\lambda_A(\cdot)=\chi^\mu_A(\cdot).$

If we apply the inverse $F^{-1}$ of the Frobenius map to (4.13) we get $F^{-1}(t(q,\cdot))=\sum_\lambda\chi^\lambda_C(q)\frac{z^C_\lambda}{d^C_\lambda}.$ Formula 3.13 shows that $F^{-1}(t(q,\cdot))=\sum_\lambda\frac{t^C_\lambda}{d^C_\lambda}z^C_\lambda[q].$ In the case that $\vec{t}_C$ is the trace of the regular representation, $\sum_\lambda\frac{t^C_\lambda}{d^C_\lambda}z^C_\lambda=1$ and $F^{-1}(t(q,\cdot))=[q].$

## Centralisers

1. Let $A,B$ and $C$ be vector spaces. A map $f\colon A\times B\to C$ is bilinear if $f(a_1+a_2,b)=f(a_1,b)+f(a_2,b),\qquad f(a,b_1+b_2)=f(a,b_1)+f(a,b_2),\qquad f(\alpha a,b)=f(a,\alpha b)=\alpha f(a,b),$ for all $a,a_1,a_2\in A$, $b,b_1,b_2\in B$, $\alpha\in\mathbb{C}.$
2. The tensor product is given by a vector space $A\otimes B$ and a map $i\colon A\times B\to A\otimes B$ such that for every bilinear map $f\colon A\times B\to C$ there exists a linear map $\bar{f}\colon A\otimes B\to C$ making the diagram commute, ie $f=\bar{f}\circ i.$

One constructs the tensor product $A\otimes B$ as the vector space spanned by elements $a\otimes b$, $a\in A$, $b\in B,$ with relations $(a_1+a_2)\otimes b=a_1\otimes b+a_2\otimes b,\qquad a\otimes(b_1+b_2)=a\otimes b_1+a\otimes b_2,\qquad (\alpha a)\otimes b=a\otimes(\alpha b)=\alpha(a\otimes b),$ for all $a,a_1,a_2\in A$, $b,b_1,b_2\in B$ and $\alpha\in\mathbb{C}.$ The map $i\colon A\times B\to A\otimes B$ is given by $i(a,b)=a\otimes b.$ Using the above universal mapping property one gets easily that the tensor product is unique in the sense that any two tensor products of $A$ and $B$ are isomorphic.

If $R$ is an algebra, $A$ is a right $R$-module (a vector space that affords an antirepresentation of $R$) and $B$ is a left $R$-module, then one forms the vector space $A\otimes_R B$ as above except that we require a bilinear map $f\colon A\times B\to C$ to satisfy the additional condition $f(ar,b)=f(a,rb)$ for all $r\in R.$ The tensor product $A\otimes_R B$ is then constructed as the vector space spanned by elements $a\otimes b$, $a\in A$, $b\in B,$ with the relations above and the additional relation $ar\otimes b=a\otimes rb,$ for all $r\in R.$

3. Let $A\subseteq B$ be semisimple algebras such that $A$ is a subalgebra of $B.$ Let $\tilde{A}$ and $\tilde{B}$ be index sets for the irreducible representations of $A$ and $B$ respectively, and suppose that $\{f^\mu_{ij}\},$ $\mu\in\tilde{A},$ is a complete set of matrix units of $A.$

[Bt] There exists a complete set of matrix units $\{e^\lambda_{rs}\},$ $\lambda\in\tilde{B},$ of $B$ that is a refinement of the $f^\mu_{ij}$ in the sense that for each $\mu\in\tilde{A}$ and each $i$, $f^\mu_{ii}=\sum e^\lambda_{rr},$ for some set of $e^\lambda_{rr}$.

Proof. Suppose that $B\cong\oplus_{\lambda\in\tilde{B}}M_{d_\lambda}(\mathbb{C})$. Let $z^B_\lambda$ be the minimal central idempotent of $B$ such that $I_\lambda=Bz^B_\lambda$ is the minimal ideal corresponding to the $\lambda$ block of matrices in $\oplus_\lambda M_{d_\lambda}(\mathbb{C}).$ For each $\mu\in\tilde{A}$ and each $i$ decompose $f^\mu_{ii}$ into minimal orthogonal idempotents of $B$ (Section 1, Ex 7), $f^\mu_{ii}=\sum p_j.$ Label each $p_j$ appearing in this sum by the element $\lambda\in\tilde{B}$ which indexes the minimal ideal $I_\lambda=Bp_jB$ of $B$. Then $1=\sum_{\mu,i}f^\mu_{ii}=\sum_{\lambda\in\tilde{B}}\sum_{j=1}^{d_\lambda}p^\lambda_j.$ Now $B=1\cdot B\cdot 1=\sum_{\lambda,\mu\in\tilde{B}}\ \sum_{1\le i\le d_\lambda,\,1\le j\le d_\mu}p^\lambda_i B p^\mu_j.$ If $\lambda\ne\mu$ then the space $p^\lambda_i B p^\mu_j=p^\lambda_i B(z^B_\mu p^\mu_j)=p^\lambda_i z^B_\mu B p^\mu_j=0$ for all $i,j.$ Since $p^\lambda_i=p^\lambda_i\cdot 1\cdot p^\lambda_i\in p^\lambda_i I_\lambda p^\lambda_i$ and $p^\lambda_i B p^\lambda_j\,p^\lambda_j B p^\lambda_i=p^\lambda_i I_\lambda p^\lambda_i\ne 0$, we know that $p^\lambda_i B p^\lambda_j$ is not zero for any $1\le i,j\le d_\lambda.$ Furthermore, since the dimension of $B$ is $\sum_\lambda d_\lambda^2,$ each of the spaces $p^\lambda_i B p^\lambda_j$ is one dimensional. For each $p^\lambda_i$ define $e^\lambda_{ii}=p^\lambda_i.$ For each $\lambda$ and each $1\le i<j\le d_\lambda$ let $e^\lambda_{ij}$ be some nonzero element of $p^\lambda_i B p^\lambda_j.$ Then choose $e^\lambda_{ji}\in p^\lambda_j B p^\lambda_i$ such that $e^\lambda_{ij}e^\lambda_{ji}=e^\lambda_{ii}.$ This defines a complete set of matrix units of $B.$ $\square$

4. Let $G$ be a finite group and let $H$ be a subgroup of $G.$ Let $R=\{g_i\}$ be a set of representatives for the left cosets $gH$ of $H$ in $G.$ The action of $G$ on the cosets of $H$ in $G$ by left multiplication defines a representation $\pi_H$ of $G.$ This representation is a permutation representation of $G.$ Let $g\in G.$ The entries $\pi_H(g)_{i'i}$ of the matrix $\pi_H(g)$ are given by $\pi_H(g)_{i'i}=\delta_{i'k},$ where $k$ is such that $gg_i\in g_kH.$

Let $V$ be a representation of $H.$ Let $B=\left\{{v}_{j}\right\}$ be a basis of $V.$ Then the elements $g\otimes {v}_{j}$ where $g\in G,{v}_{j}\in B$ span $ℂG{\otimes }_{ℂH}V.$ The fourth relation in 5.1 gives that the set $\left\{{g}_{i}\otimes {v}_{j}\right\},{g}_{i}\in R,{v}_{j}\in B$ forms a basis of $ℂG{\otimes }_{ℂH}V.$

Let $g\in G$ and suppose that $gg_i=g_kh,$ where $h\in H$ and $g_k\in R.$ Then $g(g_i\otimes v_j)=g_kh\otimes v_j=g_k\otimes hv_j=\sum_{j'}g_k\otimes v_{j'}\,V(h)_{j'j}=\sum_{i',j'}g_{i'}\otimes v_{j'}\,V(h)_{j'j}\,\delta_{i'k}=\sum_{i',j'}g_{i'}\otimes v_{j'}\,V(h)_{j'j}\,\pi_H(g)_{i'i}.$ Then $\chi_{V\uparrow_H^G}(g)=\sum_{g_i\in R,\,v_j\in B}\big(g(g_i\otimes v_j)\big)\Big|_{g_i\otimes v_j}=\sum_{g_i:\ gg_i\in g_iH}\ \sum_{v_j\in B}V(g_i^{-1}gg_i)_{jj}.$

Since characters are constant on conjugacy classes we have that $\chi_{V\uparrow_H^G}(g)=\frac{1}{|H|}\sum_{h\in H}\ \sum_{g_i:\ h^{-1}g_i^{-1}gg_ih\in H}\chi_V(h^{-1}g_i^{-1}gg_ih)=\frac{1}{|H|}\sum_{\substack{x\in G\\ x^{-1}gx\in H\cap C_g}}\chi_V(x^{-1}gx),$ where $C_g$ denotes the conjugacy class of $g$ (every $x\in G$ is uniquely $x=g_ih$ with $g_i\in R$, $h\in H$, and any $x^{-1}gx$ lying in $H$ automatically lies in $H\cap C_g$). This is an alternate proof of Theorem 5.8 for the special case of inducing from a subgroup $H$ of a group $G$ to the group $G.$
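The formula can be exercised concretely; the sketch below (my own choice of example: inducing the trivial character of $H=A_3$ up to $G=S_3$) reproduces $\chi_{V\uparrow_H^G}=\chi_{\mathrm{triv}}+\chi_{\mathrm{sgn}}$:

```python
from fractions import Fraction
from itertools import permutations

G = list(permutations(range(3)))
mult = lambda p, q: tuple(p[q[i]] for i in range(3))
inv = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

def sign(p):                         # sign = (-1)^(number of inversions)
    return (-1) ** sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3))

H = [g for g in G if sign(g) == 1]   # A_3, the alternating subgroup

# Induced character with chi_V trivial on H:
# chi(g) = (1/|H|) * |{x in G : x^{-1} g x in H}|.
def induced_trivial(g):
    hits = sum(1 for x in G if mult(mult(inv(x), g), x) in H)
    return Fraction(hits, len(H))
```

On the identity, a transposition and a 3-cycle the induced character takes the values 2, 0, 2, which is exactly the sum of the trivial and sign characters.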

5. Define $ℂG{\otimes }_{d}ℂG$ to be the subalgebra of the algebra $ℂG\otimes ℂG$ consisting of the span of the elements $g\otimes g$, $g\in G.$ Then $ℂG\cong ℂG{\otimes }_{d}ℂG$ as algebras.

Let $V_1$ and $V_2$ be representations of $G.$ Then the restriction of the $\mathbb{C}G\otimes\mathbb{C}G$ representation $V=V_1\otimes V_2$ to the algebra $\mathbb{C}G\otimes_d\mathbb{C}G$ is the Kronecker product (Section 4, Ex 1) $V_1\otimes_d V_2=(V_1\otimes V_2)\big\downarrow^{\mathbb{C}G\otimes\mathbb{C}G}_{\mathbb{C}G\otimes_d\mathbb{C}G}$ of $V_1$ and $V_2.$ Since $\mathbb{C}G\cong\mathbb{C}G\otimes_d\mathbb{C}G$ we can view $V_1\otimes_d V_2$ as a representation of $G.$

Let $V_\lambda$ and $V_\mu$ be irreducible representations of $G$ such that $V_\lambda\otimes V_\mu$ appears as an irreducible component of the $\mathbb{C}G\otimes\mathbb{C}G$ representation $V_1\otimes V_2.$ The decomposition of the Kronecker product $V_\lambda\otimes_d V_\mu=(V_\lambda\otimes V_\mu)\big\downarrow^{\mathbb{C}G\otimes\mathbb{C}G}_{\mathbb{C}G\otimes_d\mathbb{C}G}\cong\oplus_\nu g^\nu_{\lambda\mu}V_\nu$ into irreducible representations $V_\nu$ of $G$ is given by the branching rule for $\mathbb{C}G\otimes\mathbb{C}G\supset\mathbb{C}G\otimes_d\mathbb{C}G.$ Let $C_1$ and $C_2$ be the centralisers of the representations $V_1$ and $V_2$ respectively, and let $C$ be the centraliser of the $\mathbb{C}G\otimes\mathbb{C}G$ representation $V=V_1\otimes V_2.$ Applying Theorem 5.9 to $V$, with $A=\mathbb{C}G\otimes\mathbb{C}G$ and $B=\mathbb{C}G\otimes_d\mathbb{C}G\cong\mathbb{C}G,$ shows that the $g^\nu_{\lambda\mu}$ are also given by the branching rule for $C_1\otimes C_2\subset C.$
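For $G=S_3$ the coefficients $g^\nu_{\lambda\mu}$ can be read off from the character table; the sketch below (mine) computes them as inner products of characters and recovers, for example, the decomposition of the square of the 2-dimensional representation:

```python
import numpy as np

sizes = np.array([1, 3, 2])              # class sizes of S_3
Xi = np.array([[1,  1,  1],              # trivial
               [1, -1,  1],              # sign
               [2,  0, -1]])             # 2-dimensional

# g^nu_{lambda,mu} = <chi^lambda chi^mu, chi^nu>, computed classwise.
def kron_mult(lam, mu, nu):
    prod = Xi[lam] * Xi[mu]              # character of V_lam (x)_d V_mu
    return int((sizes * prod * Xi[nu]).sum()) // int(sizes.sum())
```

In particular the Kronecker square of the 2-dimensional representation decomposes as trivial plus sign plus the 2-dimensional representation itself, each with multiplicity one.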