Lectures in Representation Theory

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia

Last update: 20 August 2013

Lecture 11

Definition 2.16 For an integer $r>0$ we define the power symmetric function $p_r(x)$ by $p_r(x) = p_r(x_1, x_2, \dots, x_n) = x_1^r + x_2^r + \cdots + x_n^r$, and we extend the definition to sequences $\mu = (\mu_1, \mu_2, \dots, \mu_k)$ of positive integers by $p_\mu(x) = p_{\mu_1}(x)\, p_{\mu_2}(x) \cdots p_{\mu_k}(x)$.
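As a quick numerical sanity check, the power sums can be evaluated at a sample point; the following Python sketch (the function names are ours, not notation from the lectures) computes $p_r$ and $p_\mu$.

```python
def p_r(xs, r):
    # p_r(x_1, ..., x_n) = x_1^r + x_2^r + ... + x_n^r
    return sum(x**r for x in xs)

def p_mu(xs, mu):
    # p_mu = p_{mu_1} * p_{mu_2} * ... * p_{mu_k}
    result = 1
    for r in mu:
        result *= p_r(xs, r)
    return result

xs = (1, 2, 3)
print(p_r(xs, 2))        # 1 + 4 + 9 = 14
print(p_mu(xs, (2, 1)))  # p_2 * p_1 = 14 * 6 = 84
```

Note that permuting the entries of `xs` leaves both values unchanged, in line with the invariance $w\,p_\mu(x) = p_\mu(x)$ discussed below.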

As an immediate corollary of 2.14 and 2.15, we have

Corollary 2.17 If $\sigma \in S_m$ has cycle type $\mu \vdash m$, then $\mathrm{wt}(\sigma) = p_\mu(x)$.

2.2 Symmetric Functions

Let $W = S_n$ (we use $W$ for Weyl group), and define an action of $W$ on polynomials in $\mathbb{C}[x_1, x_2, \dots, x_n]$ in the following way. Let $w \in W$ act on a monomial $x_{i_1} x_{i_2} \cdots x_{i_n} \in \mathbb{C}[x_1, x_2, \dots, x_n]$ by $$w\, x_{i_1} x_{i_2} \cdots x_{i_n} = x_{i_{w(1)}} x_{i_{w(2)}} \cdots x_{i_{w(n)}},$$ and extend the action linearly to all of $\mathbb{C}[x_1, x_2, \dots, x_n]$. Notice that the action has the property that $$w\, x_{i_1}^{\lambda_1} x_{i_2}^{\lambda_2} \cdots x_{i_n}^{\lambda_n} = x_{i_{w(1)}}^{\lambda_1} x_{i_{w(2)}}^{\lambda_2} \cdots x_{i_{w(n)}}^{\lambda_n} = x_{i_1}^{\lambda_{w^{-1}(1)}} x_{i_2}^{\lambda_{w^{-1}(2)}} \cdots x_{i_n}^{\lambda_{w^{-1}(n)}}.$$ Moreover, for all $w \in W$ we have $w\, p_r(x) = p_r(x)$ and $w\, p_\mu(x) = p_\mu(x)$.

Definition 2.18 A polynomial $f(x) \in \mathbb{C}[x_1, x_2, \dots, x_n]$ is a symmetric polynomial if it satisfies $w f(x) = f(x)$ for all $w \in W$, and a polynomial $g(x) \in \mathbb{C}[x_1, x_2, \dots, x_n]$ is an alternating polynomial, or a skew-symmetric polynomial, if it satisfies $w g(x) = \varepsilon(w)\, g(x)$ for all $w \in W$.

Let $\lambda = (\lambda_1, \dots, \lambda_n)$ be a sequence with each $\lambda_i \in \mathbb{Z}_{\geq 0}$, and let $x^\lambda$ denote the monomial $x_1^{\lambda_1} x_2^{\lambda_2} \cdots x_n^{\lambda_n}$. We construct a symmetric function by “symmetrizing” $x^\lambda$. Let $\mathrm{Re}(\lambda)$ denote the set of all sequences in $\mathbb{Z}_{\geq 0}^n$ that are rearrangements of the sequence $\lambda$. Note that $\mathrm{Re}(\lambda)$ is the $W$-orbit $W\lambda$ of $\lambda$ in $\mathbb{Z}_{\geq 0}^n$. Then define the symmetric function $m_\lambda(x)$ by $$m_\lambda(x) = \sum_{\mu \in \mathrm{Re}(\lambda)} x^\mu = \sum_{\mu \in W\lambda} x^\mu.$$ Notice that if $\nu \in \mathrm{Re}(\lambda)$, then $m_\nu(x) = m_\lambda(x)$. Moreover, there is a unique $\nu \in \mathrm{Re}(\lambda)$ such that $\nu_1 \geq \nu_2 \geq \cdots \geq \nu_n \geq 0$. The polynomials $$\left\{\, m_\nu(x) \ \middle|\ \nu = (\nu_1, \nu_2, \dots, \nu_n),\ \nu_1 \geq \nu_2 \geq \cdots \geq \nu_n \geq 0 \,\right\}$$ are called the monomial symmetric polynomials.
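To make the orbit sum concrete, here is a small Python sketch (the helper name is ours) that computes $m_\lambda$ at a numerical point by summing $x^\mu$ over the distinct rearrangements $\mathrm{Re}(\lambda)$.

```python
from itertools import permutations

def m_lam(xs, lam):
    # m_lambda(x) = sum of x^mu over the distinct rearrangements mu of lambda
    total = 0
    for mu in set(permutations(lam)):   # Re(lambda) = the W-orbit of lambda
        term = 1
        for x, e in zip(xs, mu):
            term *= x**e
        total += term
    return total

xs = (2, 3, 5)
# m_(2,1,0)(x1,x2,x3) = sum over i != j of x_i^2 x_j
print(m_lam(xs, (2, 1, 0)))          # 220
print(m_lam((5, 3, 2), (2, 1, 0)))   # 220: unchanged under permuting the variables
```

Using `set` on the permutations is what implements "rearrangements" rather than "permutations": when $\lambda$ has repeated entries, each monomial is counted once.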

The symmetric polynomials in $\mathbb{C}[x_1, \dots, x_n]$ form a $\mathbb{C}$-vector space which we denote by $\Lambda_n$. If $f(x)$ is a symmetric polynomial, let $c_\nu$ denote the coefficient of $x^\nu$ in $f(x)$. Then, since $f(x)$ is symmetric, if $\lambda \in \mathrm{Re}(\nu)$, then $c_\nu$ is also the coefficient of $x^\lambda$ in $f(x)$. Therefore, $$f(x) = \sum_{\nu_1 \geq \nu_2 \geq \cdots \geq \nu_n} c_\nu\, m_\nu(x),$$ and the monomial symmetric polynomials form a basis of $\Lambda_n$.

Analogously, the alternating polynomials in $\mathbb{C}[x_1, \dots, x_n]$ form a $\mathbb{C}$-vector space which we denote by $A_n$. To find a basis for $A_n$, we anti-symmetrize the monomial $x^\lambda$. That is, we define $$a_\lambda(x) = \sum_{w \in W} \varepsilon(w)\, w x^\lambda,$$ where, as before, $\varepsilon(w)$ is the sign of $w$. If $v \in W$, then $$v\, a_\lambda(x) = \sum_{w \in W} \varepsilon(w)\, vw x^\lambda = \varepsilon(v) \sum_{vw \in W} \varepsilon(vw)\, vw x^\lambda = \varepsilon(v)\, a_\lambda(x),$$ and, therefore, $a_\lambda$ is alternating.

Lemma 2.19 Let $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_n)$ with $\lambda_i \in \mathbb{Z}_{\geq 0}$, and suppose that $\lambda_i = \lambda_j$ for some $i \neq j$. Then $a_\lambda(x) = 0$.


Proof. Let $t_{ij} \in W$ be the transposition that switches $i$ and $j$. Since $\lambda_i = \lambda_j$, we have $t_{ij} x^\lambda = x^\lambda$, and therefore $$a_\lambda(x) = \sum_{w \in W} \varepsilon(w)\, w\, x_1^{\lambda_1} \cdots x_i^{\lambda_i} \cdots x_j^{\lambda_j} \cdots x_n^{\lambda_n} = \sum_{w \in W} \varepsilon(w)\, w t_{ij} x^\lambda = \varepsilon(t_{ij}) \sum_{w t_{ij} \in W} \varepsilon(w t_{ij})\, w t_{ij} x^\lambda = \varepsilon(t_{ij})\, a_\lambda(x) = -a_\lambda(x),$$ and, therefore, $a_\lambda(x) = 0$.
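The antisymmetrization, its alternating property, and Lemma 2.19 can all be checked numerically; the following Python sketch (helper names are ours) evaluates $a_\lambda$ at a sample point by brute force over $S_n$.

```python
from itertools import permutations

def sign(w):
    # epsilon(w): +1 or -1 according to the parity of the inversions of w
    inv = sum(1 for a in range(len(w)) for b in range(a + 1, len(w)) if w[a] > w[b])
    return -1 if inv % 2 else 1

def a_lam(xs, lam):
    # a_lambda(x) = sum over w in S_n of eps(w) * (w x^lambda),
    # where w x^lambda = x_{w(1)}^{lambda_1} ... x_{w(n)}^{lambda_n}
    n = len(xs)
    total = 0
    for w in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= xs[w[i]] ** lam[i]
        total += sign(w) * term
    return total

print(a_lam((2, 3, 5), (2, 1, 0)))   # -6
print(a_lam((3, 2, 5), (2, 1, 0)))   # 6: swapping two variables flips the sign
print(a_lam((2, 3, 5), (2, 2, 0)))   # 0: repeated exponents, as in Lemma 2.19
```

For $\lambda = (2,1,0) = \delta$ with $n = 3$, the value $-6$ agrees with $(x_1-x_2)(x_1-x_3)(x_2-x_3)$ at $(2,3,5)$, anticipating Theorem 2.20.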

If $\nu \in \mathrm{Re}(\lambda)$, then $a_\nu(x) = \pm a_\lambda(x)$, so, as in the case of the symmetric polynomials, we see that $$\left\{\, a_\nu(x) \ \middle|\ \nu = (\nu_1, \nu_2, \dots, \nu_n),\ \nu_1 > \nu_2 > \cdots > \nu_n \geq 0 \,\right\}$$ forms a basis of $A_n$.

Our next goal is to show that the alternating polynomials and the symmetric polynomials are essentially the same. In fact, we will describe a bijection between $\Lambda_n$ and $A_n$. To do this we let $$\delta = (n-1, n-2, \dots, 2, 1, 0).$$ Then if $\lambda \in \mathbb{Z}_{\geq 0}^n$ with $\lambda_1 > \lambda_2 > \cdots > \lambda_n \geq 0$, and $$\mu = \lambda - \delta = \left( \lambda_1 - (n-1),\ \lambda_2 - (n-2),\ \dots,\ \lambda_{n-1} - 1,\ \lambda_n \right),$$ we have $\mu_1 \geq \mu_2 \geq \cdots \geq \mu_n \geq 0$.

For example, suppose that $n = 8$, so $\delta = (7,6,5,4,3,2,1,0)$, and take $\lambda = (13,10,9,8,3,2,1,0)$. Then $\mu = \lambda - \delta = (6,4,4,4,0,0,0,0)$. We can picture this as rows of boxes with a vertical wall. [Diagram: the rows of boxes for $\lambda$, with a wall separating the staircase $\delta$ from $\mu$.] The sequence $\delta$ is pictured to the left of the wall, the sequence $\mu$ is pictured to the right of the wall, and the sequence $\lambda$ is the entire picture. In this way we get a bijection between the index sets of the symmetric polynomials and the alternating polynomials.
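The example is just componentwise arithmetic; the following Python lines (illustrative only) reproduce it and confirm the claimed monotonicity of $\mu$.

```python
n = 8
delta = tuple(range(n - 1, -1, -1))    # (7, 6, 5, 4, 3, 2, 1, 0)
lam = (13, 10, 9, 8, 3, 2, 1, 0)       # strictly decreasing
mu = tuple(l - d for l, d in zip(lam, delta))
print(mu)                              # (6, 4, 4, 4, 0, 0, 0, 0)
# mu = lam - delta is weakly decreasing and nonnegative, as claimed
assert all(mu[k] >= mu[k + 1] for k in range(n - 1)) and mu[-1] >= 0
```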

Now we define a map between $\Lambda_n$ and $A_n$. To this end, we note that $$a_\lambda(x) = \sum_{w \in S_n} \varepsilon(w)\, w\, x_1^{\lambda_1} \cdots x_n^{\lambda_n} = \det\left(x_i^{\lambda_j}\right),$$ where by $\left(x_i^{\lambda_j}\right)$ we mean the $n \times n$ matrix whose $(i,j)$-entry is $x_i^{\lambda_j}$. For $\lambda = \delta$ this is the Vandermonde determinant, and it satisfies the following

Theorem 2.20 [Weyl’s Denominator Formula] $$a_\delta(x) = \det\left(x_i^{n-j}\right) = \prod_{1 \leq i < j \leq n} (x_i - x_j).$$


Proof. We proceed in several steps.

Step 1. $\prod_{i<j} (x_i - x_j)$ divides $a_\lambda(x)$ for all $\lambda_1 > \lambda_2 > \cdots > \lambda_n$.


Fix a pair $i < j$ and let $\alpha \in \mathbb{C}$. Applying the evaluation map that sends both $x_i$ and $x_j$ to $\alpha$ makes the $i$th and $j$th rows of the matrix $\left(x_i^{\lambda_j}\right)$ identical, so $a_\lambda(x) = \det\left(x_i^{\lambda_j}\right) = 0$ under this evaluation. This holds for all $\alpha$, so $a_\lambda(x)$ is divisible by $(x_i - x_j)$. The argument holds for any pair $i < j$, and since the factors $(x_i - x_j)$ are pairwise relatively prime, the product $\prod_{i<j} (x_i - x_j)$ divides $a_\lambda(x)$.

Step 2. The polynomial $\prod_{i<j} (x_i - x_j)$ is alternating.


Write the product $\prod_{i<j} (x_i - x_j)$ as follows: $$\begin{array}{l} (x_1-x_2)(x_1-x_3)\cdots(x_1-x_i)(x_1-x_{i+1})\cdots(x_1-x_{n-1})(x_1-x_n)\ \cdot \\ (x_2-x_3)(x_2-x_4)\cdots(x_2-x_i)(x_2-x_{i+1})(x_2-x_{i+2})\cdots(x_2-x_n)\ \cdot \\ \quad \vdots \\ (x_i-x_{i+1})(x_i-x_{i+2})(x_i-x_{i+3})\cdots(x_i-x_{n-1})(x_i-x_n)\ \cdot \\ (x_{i+1}-x_{i+2})(x_{i+1}-x_{i+3})(x_{i+1}-x_{i+4})\cdots(x_{i+1}-x_n)\ \cdot \\ \quad \vdots \\ (x_{n-1}-x_n). \end{array}$$ (This is a trick of Littlewood.) Then consider the action of the simple transposition $s_i$ on this product. We see that $s_i$ preserves all rows except the $i$th and the $(i+1)$st. Moreover, the second factor of the $i$th row is exchanged with the first factor of the $(i+1)$st row, the third factor of the $i$th row is exchanged with the second factor of the $(i+1)$st row, and so on; the first factor $(x_i - x_{i+1})$ of the $i$th row is sent to itself with a sign change. Therefore, $$s_i \cdot \prod_{i<j} (x_i - x_j) = -\prod_{i<j} (x_i - x_j) = \varepsilon(s_i) \prod_{i<j} (x_i - x_j).$$ Since the simple transpositions generate $S_n$ and $\varepsilon$ is multiplicative, the result holds for the entire symmetric group $S_n$.
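Pending the remaining step, the statement of Theorem 2.20 can at least be checked numerically; this Python sketch (helper names ours) compares the antisymmetrized sum $a_\delta$ with the product $\prod_{i<j}(x_i - x_j)$ at an integer point.

```python
from itertools import permutations
from math import prod

def sign(w):
    # epsilon(w) via the parity of the number of inversions
    inv = sum(1 for a in range(len(w)) for b in range(a + 1, len(w)) if w[a] > w[b])
    return -1 if inv % 2 else 1

def a_delta(xs):
    # a_delta(x) = sum over w of eps(w) * x_{w(1)}^{n-1} x_{w(2)}^{n-2} ... x_{w(n)}^0
    n = len(xs)
    return sum(sign(w) * prod(xs[w[i]] ** (n - 1 - i) for i in range(n))
               for w in permutations(range(n)))

xs = (2, 3, 5, 7)
lhs = a_delta(xs)
rhs = prod(xs[i] - xs[j] for i in range(len(xs)) for j in range(i + 1, len(xs)))
print(lhs, rhs)   # 240 240
```

The brute-force sum over $S_n$ has $n!$ terms, so this is only a check at small $n$, not an efficient way to evaluate the determinant.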

Continued next lecture.

Notes and References

This is a copy of lectures in Representation Theory given by Arun Ram, compiled by Tom Halverson, Rob Leduc and Mark McKinzie.
