Lectures in Representation Theory

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au

Last update: 28 August 2013

Lecture 22

Recall our setting from last time. Suppose $A$ and $B$ are algebras and $V$ is a completely decomposable $A$-module, with decomposition $V \cong \bigoplus_{\lambda \in \hat{V}} m_\lambda V^\lambda$. Suppose that $V$ is also a module for $B$ and that $V(B) = \operatorname{End}_A(V)$, where $V(B)$ denotes the image of $B$ in $\operatorname{End}(V)$; that is, $B$ acts as the full centralizer of the action of $A$ on $V$.

We showed that $B$ then has certain irreducible modules $W^\lambda$ (called $C^\lambda$ last time), each of dimension $m_\lambda$, for $\lambda \in \hat{V}$. The module $V$ is a completely decomposable $B \otimes A$-module, with decomposition into irreducible $B \otimes A$-modules $$V \cong \bigoplus_{\lambda \in \hat{V}} W^\lambda \otimes V^\lambda.$$

Corollary 3.22 Let $\chi_B^\lambda$ be the character of $W^\lambda$, and let $\chi_A^\lambda$ be the character of $V^\lambda$. Then for $b \in B$ and $a \in A$, $$\operatorname{tr}_V(ba) = \sum_{\lambda \in \hat{V}} \chi_B^\lambda(b)\, \chi_A^\lambda(a).$$

Proof.

Let $\{w_i^\lambda\}_{1 \le i \le m_\lambda}$ be a basis for each $W^\lambda$ and similarly choose bases $\{v_j^\lambda\}_{1 \le j \le d_\lambda}$ for the $V^\lambda$. Then by the above decomposition, the set $\{ w_i^\lambda \otimes v_j^\lambda \mid 1 \le i \le m_\lambda,\ 1 \le j \le d_\lambda,\ \lambda \in \hat{V} \}$ forms a basis for $V$. Thus
$$\begin{aligned}
\operatorname{tr}_V(ba) &= \sum_{\lambda \in \hat{V}} \sum_{i=1}^{m_\lambda} \sum_{j=1}^{d_\lambda} \big( (ba)(w_i^\lambda \otimes v_j^\lambda) \big)\big|_{w_i^\lambda \otimes v_j^\lambda} \\
&= \sum_{\lambda \in \hat{V}} \sum_{i=1}^{m_\lambda} \sum_{j=1}^{d_\lambda} \big( (b w_i^\lambda) \otimes (a v_j^\lambda) \big)\big|_{w_i^\lambda \otimes v_j^\lambda} \\
&= \sum_{\lambda \in \hat{V}} \sum_{i=1}^{m_\lambda} \sum_{j=1}^{d_\lambda} (b w_i^\lambda)\big|_{w_i^\lambda}\, (a v_j^\lambda)\big|_{v_j^\lambda} \\
&= \sum_{\lambda \in \hat{V}} \Big( \sum_{i=1}^{m_\lambda} (b w_i^\lambda)\big|_{w_i^\lambda} \Big) \Big( \sum_{j=1}^{d_\lambda} (a v_j^\lambda)\big|_{v_j^\lambda} \Big) \\
&= \sum_{\lambda \in \hat{V}} \chi_B^\lambda(b)\, \chi_A^\lambda(a),
\end{aligned}$$
since $\chi_B^\lambda = \operatorname{tr}_{W^\lambda}$ and $\chi_A^\lambda = \operatorname{tr}_{V^\lambda}$. (Here $x|_w$ denotes the coefficient of the basis vector $w$ in $x$.)
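As a minimal numerical illustration of the corollary (a sketch assuming a single $\lambda$ with $W^\lambda = \mathbb{C}^2$ and $V^\lambda = \mathbb{C}^3$, so that $b$ and $a$ act on the two tensor factors), the following Python/numpy lines check that the trace of $b \otimes a$ on $W^\lambda \otimes V^\lambda$ factors as $\chi_B^\lambda(b)\,\chi_A^\lambda(a)$:

import numpy as np

# Corollary 3.22 in the simplest case: V = W ⊗ V' with one lambda, so
# tr_V(ba) should equal tr(b) tr(a).
rng = np.random.default_rng(0)
b = rng.standard_normal((2, 2))     # action of b on W = C^2
a = rng.standard_normal((3, 3))     # action of a on V' = C^3
ba = np.kron(b, a)                  # action of b ⊗ a on W ⊗ V'
print(np.allclose(np.trace(ba), np.trace(b) * np.trace(a)))   # True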

Consider the regular representation $L_A: A \to M_d(\mathbb{C})$ ($d = \dim A$) of an algebra $A$, defined by $a \mapsto L_a$, the transformation given by left multiplication by $a$. Recall that we often denoted this action by $a_1 \cdot \vec{a_2} = \overrightarrow{a_1 a_2}$, using the vector notation to distinguish elements of the vector space $A$ from transformations of $A$.

We showed that $\operatorname{End}_A(A) = R_A$; that is, the transformations which commute with all left multiplications are precisely the right multiplications by elements of $A$. Let $B = A^{\mathrm{op}}$, the opposite algebra of $A$. The algebra $A^{\mathrm{op}}$ has $A$ as its underlying vector space, but with multiplication defined by $a_1 \circ a_2 = a_2 a_1$, where the multiplication on the right-hand side is the usual multiplication in $A$.

The algebra $B = A^{\mathrm{op}}$ acts on $\vec{A}$ by right multiplication: for $b \in B$ and $\vec{a} \in \vec{A}$, $$b \vec{a} = \overrightarrow{ab}.$$

Problem 3.23 Verify that this defines a module action of $A^{\mathrm{op}}$ on $\vec{A}$.
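A quick numerical sanity check of this (a sketch, using $A = M_2(\mathbb{R})$ as an assumed stand-in for a general algebra): the module axiom $(b_1 \circ b_2)\vec{a} = b_1(b_2\vec{a})$ for the right-multiplication action comes down to associativity in $A$.

import numpy as np

# Problem 3.23 sketch: B = A^op acts on A by b . a = ab.  The axiom
# (b1 ∘ b2) . a = b1 . (b2 . a), with b1 ∘ b2 = b2 b1 in A, is associativity.
rng = np.random.default_rng(1)
a, b1, b2 = (rng.standard_normal((2, 2)) for _ in range(3))
lhs = a @ (b2 @ b1)        # (b1 ∘ b2) . a = a (b2 b1)
rhs = (a @ b2) @ b1        # b1 . (b2 . a) = (a b2) b1
print(np.allclose(lhs, rhs))   # True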

Then $B$ acts as $R_A = \operatorname{End}_A(A)$, as in the statement of the theorem. If $A$ is completely decomposable as an $A$-module (i.e. if $A$ is a semisimple algebra), then as a $B \otimes A$-module $$A \cong \bigoplus_{\lambda \in \hat{A}} B^\lambda \otimes A^\lambda,$$ where the $B^\lambda$ and $A^\lambda$ are the irreducible $B$-modules and $A$-modules respectively. Note that in this case each irreducible $A$-module appears in the decomposition of $V = A$.

We knew this already, however. If $A$ is semisimple, then $A \cong \bigoplus_\lambda M_{d_\lambda}(\mathbb{C})$, and the irreducible (left) $A$-modules are the column spaces, of dimension $d_\lambda$, one for each simple component. Taking transposes, we see that the irreducible $A^{\mathrm{op}}$-modules are the row spaces of the simple components (note that they have the same dimensions as the corresponding $A$-modules). Then $M_{d_\lambda}(\mathbb{C}) \cong B^\lambda \otimes A^\lambda$ and we have the above decomposition.

We do get one useful piece of information from this situation, however. Continuing with $A$ semisimple, suppose $A^\lambda$ has character $\chi_A^\lambda$, and denote the representation $A^\lambda: A \to M_{d_\lambda}(\mathbb{C})$ by $a \mapsto A^\lambda(a)$. Then the irreducible representations of the opposite algebra, $B^\lambda: B \to M_{d_\lambda}(\mathbb{C})$, are defined by $B^\lambda(b) = \big(A^\lambda(b)\big)^t$. Thus
$$\chi_B^\lambda(b) = \operatorname{tr}\big(B^\lambda(b)\big) = \operatorname{tr}\big(A^\lambda(b)^t\big) = \operatorname{tr}\big(A^\lambda(b)\big) = \chi_A^\lambda(b).$$

Applying the corollary, $$\operatorname{tr}_A(ba) = \sum_{\lambda \in \hat{A}} \chi_B^\lambda(b)\, \chi_A^\lambda(a) = \sum_{\lambda \in \hat{A}} \chi_A^\lambda(b)\, \chi_A^\lambda(a).$$

We next evaluate this trace in another way. Choose a basis $\{g_i\}_{1 \le i \le \dim A}$ of $A$ and a nondegenerate trace $t = (t_\lambda)$ on $A$. Recall that $\langle b, a \rangle = t(ba)$ is then a nondegenerate symmetric bilinear form on $A$, so we may choose a dual basis $\{g_j^*\}$ with respect to this form.

We defined $[a] = \sum_{i=1}^{\dim A} g_i a g_i^*$. Evaluating the trace above,
$$\operatorname{tr}_A(ba) = \sum_i \big( (ba) \vec{g_i} \big)\big|_{g_i} = \sum_i \big( \overrightarrow{a g_i b} \big)\big|_{g_i} = \sum_i \langle a g_i b, g_i^* \rangle = \sum_i t(a g_i b g_i^*) = t\Big( a \sum_i g_i b g_i^* \Big) = \langle a, [b] \rangle.$$
We have shown previously that the map $a \mapsto [a]$ is self-adjoint with respect to this form, hence $\operatorname{tr}_A(ba) = \langle a, [b] \rangle = \langle [a], b \rangle$. Combining these two calculations, we obtain:

Proposition 3.24 Let $A$ be a semisimple algebra with irreducible representations $A^\lambda$ indexed by $\lambda \in \hat{A}$, and denote the character of $A^\lambda$ by $\chi^\lambda$. Then $$\sum_{\lambda \in \hat{A}} \chi^\lambda(b)\, \chi^\lambda(a) = \langle [a], b \rangle = \langle a, [b] \rangle.$$
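As a tiny concrete instance of the proposition (an assumed toy case, not from the lecture): take $A = M_2(\mathbb{C})$ with $t$ the ordinary trace. There is one irreducible representation, its character is the trace, the matrix units $E_{ij}$ and $E_{ji}$ are dual bases for $\langle b, a \rangle = \operatorname{tr}(ba)$, and $[a] = \sum_{i,j} E_{ij} a E_{ji} = \operatorname{tr}(a) I$, so both sides equal $\operatorname{tr}(b)\operatorname{tr}(a)$. The Python/numpy sketch below (real matrices for simplicity) confirms this numerically.

import numpy as np

# Proposition 3.24 for A = M_2 with t = ordinary trace: basis E_ij, dual basis E_ji.
rng = np.random.default_rng(6)
a = rng.standard_normal((2, 2))
b = rng.standard_normal((2, 2))

E = {}
for i in range(2):
    for j in range(2):
        M = np.zeros((2, 2))
        M[i, j] = 1.0
        E[i, j] = M

bracket_a = sum(E[i, j] @ a @ E[j, i] for i in range(2) for j in range(2))   # [a]
lhs = np.trace(b) * np.trace(a)      # sum over lambda of chi(b) chi(a) (one lambda)
rhs = np.trace(bracket_a @ b)        # <[a], b> = t([a] b)
print(np.allclose(lhs, rhs))         # True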

Example 3.25 Let $G$ be a finite group, and let $A = \mathbb{C}G$ be the group algebra over $\mathbb{C}$. By Maschke's theorem, $A$ is semisimple. By definition, the elements of the group $G$ form a basis for $A$. We have shown that the linear function $t: A \to \mathbb{C}$ defined by $t(a) = a|_1$ (the coefficient of the identity in $a$) is a nondegenerate trace on $A$, and that the dual basis with respect to the associated bilinear form is $\{g^{-1} \mid g \in G\}$. Recall also that if $h \in G$ then $$[h] = \sum_{g \in G} g h g^{-1} = s_h \sum_{g \in \mathcal{C}_h} g,$$ where $s_h$ is the order of the stabilizer of $h$ under conjugation and $\mathcal{C}_h$ is the conjugacy class of $h$ (so $s_h = |G|/|\mathcal{C}_h|$). Applying the proposition in this context yields:

Proposition 3.26 (Second Orthogonality Condition) Let $G$ be a finite group. If $g, h \in G$, then $$\sum_{\lambda \in \hat{A}} \chi^\lambda(g)\, \chi^\lambda(h^{-1}) = \langle [g], h^{-1} \rangle = \begin{cases} s_g & \text{if } h \in \mathcal{C}_g, \\ 0 & \text{otherwise.} \end{cases}$$
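For example, the proposition can be checked directly against the character table of $S_3$ (the table and class sizes are standard facts, hard-coded below; in $S_3$ every element is conjugate to its inverse, so $\chi^\lambda(h^{-1}) = \chi^\lambda(h)$). A short Python sketch:

# Second orthogonality for G = S_3.  Rows of the character table: trivial, sign,
# and the 2-dimensional character; columns: classes of e, transpositions, 3-cycles.
char_table = [
    [1,  1,  1],
    [1, -1,  1],
    [2,  0, -1],
]
class_sizes = [1, 3, 2]
order_G = 6
for g in range(3):
    for h in range(3):
        total = sum(row[g] * row[h] for row in char_table)
        expected = order_G // class_sizes[g] if g == h else 0   # s_g = |G| / |C_g|
        assert total == expected
print("second orthogonality verified for S_3")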

3.1 Schur-Weyl Duality

Let $v_1, v_2, \ldots, v_n$ be non-commuting variables and let $V$ be the vector space with basis $\{v_i\}$. Let $V^{\otimes m}$ be the vector space with basis $\{ v_{i_1} v_{i_2} \cdots v_{i_m} \mid 1 \le i_j \le n \}$. Then $\mathcal{S}_m$ acts on $V^{\otimes m}$ by permuting the subscripts of the basis vectors: $$\sigma \cdot v_{i_1} v_{i_2} \cdots v_{i_m} = v_{i_{\sigma(1)}} v_{i_{\sigma(2)}} \cdots v_{i_{\sigma(m)}}.$$

What is $\operatorname{End}_{\mathbb{C}\mathcal{S}_m}(V^{\otimes m})$? We know that $\mathbb{C}\mathcal{S}_m$ is semisimple by Maschke's theorem, so $V^{\otimes m}$ is completely decomposable. To use the theorems of the last few classes, we would like to find an algebra $B$ acting on $V^{\otimes m}$ such that $V^{\otimes m}(B) = \operatorname{End}_{\mathbb{C}\mathcal{S}_m}(V^{\otimes m})$, where $V^{\otimes m}(B)$ denotes the image of $B$ in $\operatorname{End}(V^{\otimes m})$. Then $$V^{\otimes m} \cong \bigoplus_{\substack{\lambda \vdash m \\ \ell(\lambda) \le n}} S^\lambda \otimes V^\lambda,$$ where the $S^\lambda$ are the irreducible $\mathcal{S}_m$-modules and the $V^\lambda$ are the irreducible modules for $B$.

Of course, there is a natural action of the general linear group $GL(n)$, the group of all $n \times n$ invertible matrices, on $V$, given by $$g \cdot v_i = \sum_{j=1}^{n} v_j\, g_{j,i}, \qquad \text{where } g = (g_{i,j}).$$ We may extend this action to $V^{\otimes m}$ as the linear extension of $$g \cdot v_{i_1} v_{i_2} \cdots v_{i_m} = (g v_{i_1})(g v_{i_2}) \cdots (g v_{i_m}).$$ This is a group representation, so we may consider the corresponding representation of the group algebra $\mathbb{C}GL(n)$. A word of caution is necessary: since this is a group algebra, the invertible matrices form a basis, and the addition is formal addition, which differs from the usual addition of matrices.

It is easy to see that $V^{\otimes m}(\mathbb{C}GL(n)) \subseteq \operatorname{End}_{\mathbb{C}\mathcal{S}_m}(V^{\otimes m})$; that is, the action of $GL(n)$ commutes with the action of $\mathcal{S}_m$ on $V^{\otimes m}$ (we shall prove this below). That the reverse inclusion holds is as surprising as it is beautiful.

Theorem 3.27 (Schur-Weyl Duality) With the above notation, $$V^{\otimes m}(\mathbb{C}GL(n)) = \operatorname{End}_{\mathbb{C}\mathcal{S}_m}(V^{\otimes m}).$$ This is equivalent to the classical Fundamental Theorem of Invariant Theory.

Lemma 3.28 $V^{\otimes m}(\mathbb{C}GL(n)) \subseteq \operatorname{End}_{\mathbb{C}\mathcal{S}_m}(V^{\otimes m})$.

Proof.

Let $g = (g_{i,j}) \in GL(n)$ and suppose $w = v_{i_1} v_{i_2} \cdots v_{i_m}$ is a basis word in $V^{\otimes m}$. Then
$$g \cdot w = (g v_{i_1}) \cdots (g v_{i_m}) = \sum_{1 \le j_1, j_2, \ldots, j_m \le n} v_{j_1} v_{j_2} \cdots v_{j_m}\, g_{j_1,i_1} g_{j_2,i_2} \cdots g_{j_m,i_m}.$$
Then
$$g\sigma \cdot w = g \cdot v_{i_{\sigma(1)}} \cdots v_{i_{\sigma(m)}} = \sum_{1 \le j_1, \ldots, j_m \le n} v_{j_1} v_{j_2} \cdots v_{j_m}\, g_{j_1,i_{\sigma(1)}} g_{j_2,i_{\sigma(2)}} \cdots g_{j_m,i_{\sigma(m)}}.$$
On the other hand,
$$\begin{aligned}
\sigma g \cdot w &= \sigma \cdot \sum_{1 \le j_1, \ldots, j_m \le n} v_{j_1} v_{j_2} \cdots v_{j_m}\, g_{j_1,i_1} g_{j_2,i_2} \cdots g_{j_m,i_m} \\
&= \sigma \cdot \sum_{1 \le j_1, \ldots, j_m \le n} v_{j_{\sigma^{-1}(1)}} v_{j_{\sigma^{-1}(2)}} \cdots v_{j_{\sigma^{-1}(m)}}\, g_{j_{\sigma^{-1}(1)},i_1} g_{j_{\sigma^{-1}(2)},i_2} \cdots g_{j_{\sigma^{-1}(m)},i_m},
\end{aligned}$$
after replacing $j_k$ with $j_{\sigma^{-1}(k)}$. Acting by $\sigma$, we obtain
$$\begin{aligned}
\sigma g \cdot w &= \sum_{1 \le j_1, \ldots, j_m \le n} v_{j_1} v_{j_2} \cdots v_{j_m}\, g_{j_{\sigma^{-1}(1)},i_1} g_{j_{\sigma^{-1}(2)},i_2} \cdots g_{j_{\sigma^{-1}(m)},i_m} \\
&= \sum_{1 \le j_1, \ldots, j_m \le n} v_{j_1} v_{j_2} \cdots v_{j_m}\, g_{j_1,i_{\sigma(1)}} g_{j_2,i_{\sigma(2)}} \cdots g_{j_m,i_{\sigma(m)}} \\
&= g\sigma \cdot w.
\end{aligned}$$
Hence $V^{\otimes m}(\mathbb{C}GL(n)) \subseteq \operatorname{End}_{\mathbb{C}\mathcal{S}_m}(V^{\otimes m})$.
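The content of the lemma can also be checked numerically. The sketch below (Python/numpy, with assumed small parameters $n = 3$, $m = 3$) realizes $g$ on $V^{\otimes m}$ as the $m$-fold Kronecker power $g^{\otimes m}$ and each $\sigma \in \mathcal{S}_m$ as a permutation of tensor positions, and verifies that the two matrices commute.

import itertools
import numpy as np

# The action of g on V^{⊗m} is the m-fold Kronecker power of g; sigma permutes the
# positions of basis words.  The two actions commute.
n, m = 3, 3
rng = np.random.default_rng(2)
g = rng.standard_normal((n, n))          # a generic (almost surely invertible) matrix
G = g
for _ in range(m - 1):
    G = np.kron(G, g)                    # matrix of g acting on V^{⊗m}

words = list(itertools.product(range(n), repeat=m))   # basis words v_{i_1}...v_{i_m}
index = {w: k for k, w in enumerate(words)}

def perm_matrix(sigma):
    # v_{i_1}...v_{i_m} -> v_{i_sigma(1)}...v_{i_sigma(m)}  (sigma is 0-indexed here)
    P = np.zeros((n**m, n**m))
    for w in words:
        P[index[tuple(w[sigma[k]] for k in range(m))], index[w]] = 1.0
    return P

for sigma in itertools.permutations(range(m)):
    assert np.allclose(G @ perm_matrix(sigma), perm_matrix(sigma) @ G)
print("g^{⊗m} commutes with all permutations of tensor positions")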

The remaining direction of the theorem will take some work. Although we have not yet proven that $GL(n)$ generates the full centralizer of $\mathcal{S}_m$ on $V^{\otimes m}$, we do know that the actions of $GL(n)$ and $\mathcal{S}_m$ commute; hence we may view $V^{\otimes m}$ as a $\mathbb{C}GL(n) \otimes \mathbb{C}\mathcal{S}_m$-bimodule, with action given by $$g \otimes \sigma \cdot v_{i_1} \cdots v_{i_m} = g\sigma(v_{i_1} \cdots v_{i_m}).$$

Let us compute the trace of $g\sigma$ on $V^{\otimes m}$:
$$\operatorname{tr}(g\sigma) = \sum_{1 \le i_1, i_2, \ldots, i_m \le n} \big(g\sigma(v_{i_1} v_{i_2} \cdots v_{i_m})\big)\big|_{v_{i_1} \cdots v_{i_m}}.$$
For $\pi \in \mathcal{S}_m$ and $S \in GL(n)$, we have
$$\begin{aligned}
\operatorname{tr}(S g S^{-1}\, \pi\sigma\pi^{-1}) &= \operatorname{tr}(S g\, \pi\sigma\pi^{-1} S^{-1}) && \text{(actions commute)} \\
&= \operatorname{tr}(S^{-1} S g\, \pi\sigma\pi^{-1}) && \text{(trace property)} \\
&= \operatorname{tr}(\pi g \sigma \pi^{-1}) && \text{(actions commute)} \\
&= \operatorname{tr}(g\sigma\, \pi^{-1}\pi) && \text{(trace property)} \\
&= \operatorname{tr}(g\sigma).
\end{aligned}$$
Thus we may choose to calculate using convenient conjugates of $g$ and $\sigma$. For $\sigma \in \mathcal{S}_m$, there exists $\pi \in \mathcal{S}_m$ such that $\pi\sigma\pi^{-1} = \gamma_\mu$, where $\mu$ is the cycle type of $\sigma$ and $\gamma_\mu$ is a fixed product of disjoint cycles of lengths $\mu_1, \mu_2, \ldots$. Moreover, there exists $S \in GL(n)$ such that $A = S g S^{-1}$ is in Jordan canonical form. In particular, the eigenvalues $x_1, \ldots, x_n$ lie along the diagonal. These are nonzero, since $g$ is invertible.

We have $\operatorname{tr}(\sigma g) = \operatorname{tr}(\gamma_\mu A)$.

Homework Problem 3.29 Let $\mu = (\mu_1, \mu_2, \ldots, \mu_k) \vdash m$. Then $$\operatorname{tr}(\gamma_\mu A) = \prod_{i=1}^{k} \operatorname{tr}(\gamma_{\mu_i} A),$$ where the $i$th factor on the right is a trace on $V^{\otimes \mu_i}$.

So we need only compute $\operatorname{tr}(\gamma_r A)$ on $V^{\otimes r}$, with $A$ in Jordan canonical form. Write $A = (a_{i,j})$, so that $a_{i,i} = x_i$ and $a_{i,j} = 0$ for $j > i$. Then
$$\begin{aligned}
\operatorname{tr}(\gamma_r A) &= \sum_{1 \le i_1, i_2, \ldots, i_r \le n} \big(\gamma_r A(v_{i_1} v_{i_2} \cdots v_{i_r})\big)\big|_{v_{i_1} v_{i_2} \cdots v_{i_r}} \\
&= \sum_{1 \le i_1, \ldots, i_r \le n} \Big(\gamma_r \sum_{1 \le j_1, \ldots, j_r \le n} v_{j_1} \cdots v_{j_r}\, a_{j_1,i_1} \cdots a_{j_r,i_r}\Big)\Big|_{v_{i_1} \cdots v_{i_r}} \\
&= \sum_{i,j} v_{j_r} v_{j_1} v_{j_2} \cdots v_{j_{r-1}}\, a_{j_1,i_1} a_{j_2,i_2} \cdots a_{j_r,i_r}\Big|_{v_{i_1} \cdots v_{i_r}} \\
&= \sum_{i} a_{i_2,i_1} a_{i_3,i_2} \cdots a_{i_r,i_{r-1}} a_{i_1,i_r}.
\end{aligned}$$
The terms in the last summation are zero unless $i_1 \ge i_r \ge i_{r-1} \ge \cdots \ge i_3 \ge i_2 \ge i_1$, which forces $i_1 = i_2 = \cdots = i_r$. Hence $$\operatorname{tr}(\gamma_r A) = \sum_{i=1}^{n} a_{i,i}^{\,r} = \sum_{i=1}^{n} x_i^{\,r}$$ is a symmetric function in the eigenvalues $x_i$ (the $r$th power symmetric function $p_r(x_1, \ldots, x_n)$). To summarize:

Proposition 3.30 If $\sigma \in \mathcal{S}_m$ has cycle type $\mu = (\mu_1, \ldots, \mu_k)$ and $g \in GL(n)$ has eigenvalues $x_1, x_2, \ldots, x_n$, then $$\operatorname{tr}(\sigma g) = p_\mu(x_1, \ldots, x_n), \qquad \text{where } p_\mu = p_{\mu_1} p_{\mu_2} \cdots p_{\mu_k}.$$
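The key computation above can be confirmed numerically. The Python/numpy sketch below (assumed values $r = 3$, $n = 3$) checks that the trace of a full $r$-cycle composed with the action of $A$ on $V^{\otimes r}$ equals the power sum $p_r(x_1, \ldots, x_n)$ of the eigenvalues of $A$.

import itertools
import numpy as np

# tr(gamma_r A) on V^{⊗r} equals the r-th power sum of the eigenvalues of A.
r, n = 3, 3
rng = np.random.default_rng(3)
A = rng.standard_normal((n, n))
eigenvalues = np.linalg.eigvals(A)

A_tensor = A
for _ in range(r - 1):
    A_tensor = np.kron(A_tensor, A)              # A acting on V^{⊗r}

words = list(itertools.product(range(n), repeat=r))
index = {w: k for k, w in enumerate(words)}
cycle = tuple(list(range(1, r)) + [0])           # a full r-cycle on tensor positions
P = np.zeros((n**r, n**r))
for w in words:
    P[index[tuple(w[cycle[k]] for k in range(r))], index[w]] = 1.0

print(np.allclose(np.trace(P @ A_tensor), np.sum(eigenvalues**r)))   # True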

Recall that we can write $$p_\mu(x_1, \ldots, x_n) = \sum_{\substack{\lambda \vdash m \\ \ell(\lambda) \le n}} \chi^\lambda(\mu)\, s_\lambda(x_1, \ldots, x_n),$$ where $s_\lambda$ is the Schur function and $\chi^\lambda$ is the character of $\mathcal{S}_m$ labeled by $\lambda$.
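For $m = 2$ in two variables this expansion can be checked by hand, or with the short sympy sketch below (the explicit formulas $s_{(2)} = x^2 + xy + y^2$ and $s_{(1,1)} = xy$, and the character table of $S_2$, are standard facts hard-coded here): $p_{(2)} = s_{(2)} - s_{(1,1)}$ and $p_{(1,1)} = s_{(2)} + s_{(1,1)}$.

import sympy as sp

# p_mu = sum over lambda of chi^lambda(mu) s_lambda, checked for m = 2, n = 2.
x, y = sp.symbols('x y')
s = {(2,): x**2 + x*y + y**2, (1, 1): x*y}
chi = {((2,), (2,)): 1, ((2,), (1, 1)): 1,          # trivial character
       ((1, 1), (2,)): -1, ((1, 1), (1, 1)): 1}     # sign character; keys are (lam, mu)
p = {(2,): x**2 + y**2, (1, 1): (x + y)**2}          # p_2 and p_1 * p_1

for mu in [(2,), (1, 1)]:
    rhs = sum(chi[(lam, mu)] * s[lam] for lam in [(2,), (1, 1)])
    assert sp.expand(p[mu] - rhs) == 0
print("p_mu expansion verified for m = 2")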

Aside. Note that not every representation of $GL(n)$ is given by matrices whose entries are rational functions of the entries of $g$. For example, let $V: GL(n) \to M_2(\mathbb{C})$ be the representation defined by $$V(g) = \begin{pmatrix} 1 & \log|\det(g)| \\ 0 & 1 \end{pmatrix},$$ whose character has the constant value $2$.

Representations of $GL(n)$ in which the entries of the matrix $V(g)$ are rational functions of the entries of $g$ are called rational representations. If the entries of $V(g)$ are, in fact, polynomial functions of the entries of $g$, the representation $V$ is said to be a polynomial representation. Since $\operatorname{tr}(\cdot)$ is a polynomial in the entries of $V(g)$, one can equally refer to rational or polynomial characters. Of course, the polynomial representations are a subset of the rational representations.

The proof of the fundamental theorem via symmetric functions, as suggested last time, can be found in paper 59 of Schur’s collected works. The proof we offer here is due to Curtis and Reiner [CR62].

Theorem 3.31 (Fundamental Theorem of Invariant Theory) $$V^{\otimes m}(\mathbb{C}[GL_n]) = \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m}).$$

Proof.

We know that $V^{\otimes m}(\mathbb{C}[GL_n]) \subseteq \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$, so we show the reverse inclusion. Let $c \in \operatorname{End}(V^{\otimes m})$. Then the action of $c$ on a word $v_{i_1} v_{i_2} \cdots v_{i_m} \in V^{\otimes m}$ is
$$c \cdot v_{i_1} v_{i_2} \cdots v_{i_m} = \sum_{1 \le j_1, \ldots, j_m \le n} v_{j_1} v_{j_2} \cdots v_{j_m}\, c_{j,i},$$
where $j$ denotes the sequence $(j_1, \ldots, j_m)$, $i$ denotes the sequence $(i_1, \ldots, i_m)$, and $c_{j,i} \in \mathbb{C}$. If $c \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$, then for any $\sigma \in S_m$ we have $c \cdot v_{i_1} v_{i_2} \cdots v_{i_m} = \sigma^{-1} c \sigma \cdot v_{i_1} v_{i_2} \cdots v_{i_m}$. Therefore,
$$\begin{aligned}
\sum_j v_{j_1} \cdots v_{j_m}\, c_{j,i} &= c \cdot v_{i_1} v_{i_2} \cdots v_{i_m} \\
&= \sigma^{-1} c \sigma \cdot v_{i_1} v_{i_2} \cdots v_{i_m} \\
&= \sigma^{-1} c \cdot v_{i_{\sigma(1)}} v_{i_{\sigma(2)}} \cdots v_{i_{\sigma(m)}} \\
&= \sigma^{-1} \sum_j v_{j_1} \cdots v_{j_m}\, c_{j,\sigma(i)} \\
&= \sum_j v_{j_{\sigma^{-1}(1)}} \cdots v_{j_{\sigma^{-1}(m)}}\, c_{j,\sigma(i)} \\
&= \sum_j v_{j_1} \cdots v_{j_m}\, c_{\sigma(j),\sigma(i)},
\end{aligned}$$
where $\sigma(i)$ denotes the sequence $(i_{\sigma(1)}, \ldots, i_{\sigma(m)})$ and similarly for $\sigma(j)$. Thus we conclude that
$$c \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m}) \quad \text{if and only if} \quad c_{j,i} = c_{\sigma(j),\sigma(i)} \text{ for all sequences } i, j \text{ and all } \sigma \in S_m.$$

Let $\Omega$ be the set of pairs $(j,i)$ of sequences of length $m$, which we view as two-line arrays $$(j,i) = \begin{pmatrix} j_1 & \cdots & j_m \\ i_1 & \cdots & i_m \end{pmatrix},$$ satisfying

(1) $1 \le i_k \le n$ for all $k$,
(2) $1 \le j_1 \le j_2 \le \cdots \le j_m \le n$,
(3) if $j_k = j_{k+1}$ then $i_k \le i_{k+1}$.
If $c \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$, then $c$ depends only on the values $c_{j,i}$ for pairs $(j,i) \in \Omega$, since every pair of sequences lies in the $S_m$-orbit of exactly one element of $\Omega$.

We define a basis of $\operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$ as follows. For each pair $(j,i) \in \Omega$, let $c_{j,i} \in \operatorname{End}(V^{\otimes m})$ be the transformation whose coefficients are $1$ on the $S_m$-orbit of $(j,i)$ and $0$ elsewhere; explicitly,
$$c_{j,i} \cdot v_{k_1} \cdots v_{k_m} = \sum_{\substack{(j',i') = (\sigma(j),\,\sigma(i)) \text{ for some } \sigma \in S_m \\ i' = (k_1, \ldots, k_m)}} v_{j'_1} \cdots v_{j'_m}$$
if $(k_1, \ldots, k_m) = \sigma(i)$ for some $\sigma \in S_m$, and $c_{j,i} \cdot v_{k_1} \cdots v_{k_m} = 0$ if $(k_1, \ldots, k_m) \ne \sigma(i)$ for every $\sigma \in S_m$. Then $c_{j,i} \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$, and every element $p \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$ can be written as $p = \sum_{(j,i) \in \Omega} p_{j,i}\, c_{j,i}$ with $p_{j,i} \in \mathbb{C}$.

Homework Problem 3.32 Show that $\dim \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m}) = \binom{n^2 + m - 1}{m}$.
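One way to see the count (a hint, not a full solution): the elements of $\Omega$ are exactly the multisets of $m$ columns $\binom{j_k}{i_k}$ drawn from the $n^2$ possible columns, listed in a fixed order, and there are $\binom{n^2 + m - 1}{m}$ such multisets. The Python sketch below (small $n$ and $m$ assumed) confirms the count.

import math
from itertools import combinations_with_replacement, product

# |Omega| = number of multisets of size m from the n^2 possible columns (j, i).
for n, m in [(2, 2), (2, 3), (3, 2), (3, 3)]:
    columns = list(product(range(1, n + 1), repeat=2))          # possible columns (j_k, i_k)
    omega = list(combinations_with_replacement(columns, m))      # one ordered listing per multiset
    assert len(omega) == math.comb(n**2 + m - 1, m)
print("|Omega| = C(n^2 + m - 1, m) for the tested (n, m)")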

In particular, if $g \in GL_n$ and $g$ acts on $V^{\otimes m}$ as the transformation $g^{\otimes m}$, then
$$g^{\otimes m}(v_{i_1} \cdots v_{i_m}) = \sum_{j_1, \ldots, j_m} v_{j_1} \cdots v_{j_m}\, (g_{j_1,i_1} \cdots g_{j_m,i_m}),$$
and $g^{\otimes m} \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$, so
$$g^{\otimes m} = \sum_{(j,i) \in \Omega} g_{j,i}\, c_{j,i} = \sum_{(j,i) \in \Omega} g_{j_1,i_1} \cdots g_{j_m,i_m}\, c_{j,i}.$$
Therefore,
$$V^{\otimes m}(\mathbb{C}[GL_n]) = \operatorname{span}\{ g^{\otimes m} \mid g \in GL_n \} \subseteq \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m}).$$
Now define an inner product on $\operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$ by
$$\langle c_{j,i}, c_{j',i'} \rangle = \begin{cases} 1 & \text{if } j = j' \text{ and } i = i', \\ 0 & \text{otherwise,} \end{cases}$$
for all $(j,i), (j',i') \in \Omega$. This makes the basis $\{c_{j,i}\}$ orthonormal with respect to $\langle \cdot, \cdot \rangle$. Now consider $p \in V^{\otimes m}(\mathbb{C}GL_n)^{\perp}$, where
$$V^{\otimes m}(\mathbb{C}GL_n)^{\perp} = \big\{ p \in \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m}) \;\big|\; \langle p, g^{\otimes m} \rangle = 0 \text{ for all } g \in GL_n \big\}.$$
Writing $p$ and $g^{\otimes m}$ in terms of the basis $\{c_{j,i}\}$ as
$$p = \sum_{(j,i) \in \Omega} p_{j,i}\, c_{j,i}, \qquad g^{\otimes m} = \sum_{(j,i) \in \Omega} g_{j_1,i_1} \cdots g_{j_m,i_m}\, c_{j,i},$$
we have
$$0 = \langle p, g^{\otimes m} \rangle = \Big\langle \sum_{(j,i) \in \Omega} p_{j,i}\, c_{j,i},\ \sum_{(j,i) \in \Omega} g_{j_1,i_1} \cdots g_{j_m,i_m}\, c_{j,i} \Big\rangle = \sum_{(j,i) \in \Omega} p_{j,i}\, g_{j_1,i_1} \cdots g_{j_m,i_m}.$$
Thus $p \in V^{\otimes m}(\mathbb{C}GL_n)^{\perp}$ if and only if
$$\sum_{(j,i) \in \Omega} p_{j,i}\, g_{j_1,i_1} \cdots g_{j_m,i_m} = 0$$
for all $g = (g_{i,j}) \in GL_n$; in other words, for all choices of $g_{i,j}$ for which
$$\det(g) = \sum_{\tau \in S_n} \varepsilon(\tau)\, g_{1,\tau(1)} \cdots g_{n,\tau(n)} \ne 0.$$
Let $x_{1,1}, x_{1,2}, \ldots, x_{n,n}$ be commuting variables, and define the polynomials
$$P(x_{i,j}) = \sum_{(j,i) \in \Omega} p_{j,i}\, x_{j_1,i_1} \cdots x_{j_m,i_m} \in \mathbb{C}[x_{1,1}, \ldots, x_{n,n}]$$
and
$$\det(x_{i,j}) = \sum_{\tau \in S_n} \varepsilon(\tau)\, x_{1,\tau(1)} \cdots x_{n,\tau(n)} \in \mathbb{C}[x_{1,1}, \ldots, x_{n,n}].$$
Then the product $P(x_{i,j}) \det(x_{i,j})$ is zero when evaluated at every point $(\alpha_{i,j})$ of $\mathbb{C}^{n^2}$: if $\det(\alpha_{i,j}) \ne 0$ then $P(\alpha_{i,j}) = 0$ by the above, and otherwise the determinant factor vanishes. Since a nonzero polynomial over $\mathbb{C}$ cannot vanish at every point of $\mathbb{C}^{n^2}$, it follows that $P(x_{i,j}) \det(x_{i,j}) = 0$. But $\det(x_{i,j}) \ne 0$ in $\mathbb{C}[x_{1,1}, \ldots, x_{n,n}]$, and $\mathbb{C}[x_{1,1}, \ldots, x_{n,n}]$ is an integral domain, so $P(x_{i,j}) = 0$. Moreover, for $(j,i) \in \Omega$ the monomials $x_{j_1,i_1} \cdots x_{j_m,i_m}$ are distinct, since we ordered them. Thus $p_{j,i} = 0$ for all $(j,i) \in \Omega$, implying that $p = 0$. Hence $V^{\otimes m}(\mathbb{C}GL_n)^{\perp} = 0$, and therefore $V^{\otimes m}(\mathbb{C}GL_n) = \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$.
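As a numerical sanity check of the theorem (a sketch with assumed parameters $n = 2$, $m = 3$): the span of the matrices $g^{\otimes m}$ for randomly chosen $g$ should have dimension $\binom{n^2 + m - 1}{m} = 20$, matching $\dim \operatorname{End}_{\mathbb{C}[S_m]}(V^{\otimes m})$ from Problem 3.32.

import math
import numpy as np

# dim span{ g^{⊗m} : g } should equal C(n^2 + m - 1, m) = dim End_{C[S_m]}(V^{⊗m}).
n, m = 2, 3
expected_dim = math.comb(n**2 + m - 1, m)
rng = np.random.default_rng(4)

samples = []
for _ in range(expected_dim + 10):               # more samples than the expected rank
    g = rng.standard_normal((n, n))              # almost surely invertible
    G = g
    for _ in range(m - 1):
        G = np.kron(G, g)                        # g^{⊗m}
    samples.append(G.reshape(-1))                # flatten to a vector in End(V^{⊗m})

rank = np.linalg.matrix_rank(np.array(samples))
print(rank, expected_dim, rank == expected_dim)  # 20 20 True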

Now let $A = \mathbb{C}[S_m]$ and $B = \mathbb{C}[GL_n]$. Then $V^{\otimes m}$ is a module for both $A$ and $B$, with the property that $V^{\otimes m}(B) = \operatorname{End}_A(V^{\otimes m})$ (Theorem 3.31). Moreover, $V^{\otimes m}$ is completely decomposable as an $A$-module (by Maschke's theorem), so $B \otimes A$ acts on $V^{\otimes m}$, and we have the decomposition
$$V^{\otimes m} \cong \bigoplus_{\substack{\lambda \vdash m \\ \ell(\lambda) \le n}} S^\lambda \otimes V^\lambda,$$
where $S^\lambda$ is an irreducible $\mathbb{C}[S_m]$-module and $V^\lambda$ is an irreducible $\mathbb{C}[GL_n]$-module. We showed that if $\sigma \in S_m$ has cycle type $\mu$ and $g \in GL_n$ has eigenvalues $x_1, \ldots, x_n$, then
$$\operatorname{tr}(\sigma g) = p_\mu(x_1, \ldots, x_n) = \sum_{\substack{\lambda \vdash m \\ \ell(\lambda) \le n}} \chi^\lambda(\mu)\, s_\lambda(x_1, \ldots, x_n),$$
where $\chi^\lambda(\mu)$ is the irreducible character of $S_m$ evaluated on the conjugacy class labeled by $\mu$ and $s_\lambda$ is the Schur function corresponding to $\lambda$. It follows that:

Corollary 3.33

(1) $GL_n$ has irreducible representations, namely the $V^\lambda$ appearing in $V^{\otimes m}$, indexed by the partitions $\lambda \vdash m$ with $\ell(\lambda) \le n$.
(2) The character $\eta^\lambda(g)$ of the $GL_n$-representation indexed by $\lambda$ is given by $\eta^\lambda(g) = s_\lambda(x_1, \ldots, x_n)$, where $x_1, \ldots, x_n$ are the eigenvalues of $g$.
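For instance, when $n = 2$ and $m = 2$ the decomposition of $V^{\otimes 2}$ is into the symmetric and antisymmetric squares of $V$, with $GL_2$-characters $s_{(2)}(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2$ and $s_{(1,1)}(x_1, x_2) = x_1 x_2$. The Python/numpy sketch below (an assumed toy case, using the standard trace formulas for $\mathrm{Sym}^2$ and $\Lambda^2$) checks Corollary 3.33(2) numerically.

import numpy as np

# Characters of Sym^2(V) and Lambda^2(V) for a random g in GL_2, compared with the
# Schur functions s_(2) and s_(1,1) evaluated at the eigenvalues of g.
rng = np.random.default_rng(5)
g = rng.standard_normal((2, 2))
x1, x2 = np.linalg.eigvals(g)

char_sym = (np.trace(g)**2 + np.trace(g @ g)) / 2   # trace of g on Sym^2(V)
char_alt = (np.trace(g)**2 - np.trace(g @ g)) / 2   # trace of g on Lambda^2(V)

print(np.allclose(char_sym, x1**2 + x1*x2 + x2**2))   # True
print(np.allclose(char_alt, x1*x2))                    # True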

Notes and References

This is a copy of lectures in Representation Theory given by Arun Ram, compiled by Tom Halverson, Rob Leduc and Mark McKinzie.
