Root systems

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia

Last update: 27 March 2012

This is a typed version of I.G. Macdonald's lecture notes from lectures at the University of California San Diego from January to March of 1991.


Let $V$ be a real vector space of finite dimension $n>0$, and let $\langle x,y\rangle$ be a positive definite symmetric inner product on $V$. So we have $\langle x,x\rangle>0$ for all $x\neq 0$ in $V$, and we write $|x| = \langle x,x\rangle^{1/2}$ for the length of $x$. A linear transformation $f:V\to V$ is an isometry if it is length preserving: $|f(x)| = |x|$ for all $x\in V$. Equivalently, $\langle f(x),f(y)\rangle = \langle x,y\rangle$ for all $x,y\in V$.

Example. $V=\mathbb{R}^n$ with the standard inner product: if $x=(x_1,\ldots,x_n)$, $y=(y_1,\ldots,y_n)$ then $\langle x,y\rangle = \sum_{i=1}^n x_iy_i$. In fact this is essentially the only example: given $V$ as above we can construct an orthonormal basis of $V$, i.e. a basis $v_1,\ldots,v_n$ such that $\langle v_i,v_j\rangle = \delta_{ij}$ $(1\le i,j\le n)$, and then if $x = \sum_{i=1}^n x_iv_i$, $y = \sum_{i=1}^n y_iv_i$, we have $\langle x,y\rangle = \sum x_iy_i$.

If $x,y\in V$ are such that $\langle x,y\rangle = 0$ we say that $x,y$ are perpendicular (or orthogonal) and write $x\perp y$. More generally, if $x,y\neq 0$ the angle $\theta\in[0,\pi]$ between the vectors $x,y$ is given by $\cos\theta = \langle x,y\rangle/|x||y|$. One other piece of notation: if $x\in V$, $x\neq 0$, we shall write $x^\vee = 2x/|x|^2$ (you'll see why in a moment). We have $|x^\vee| = 2/|x|$, $\langle x,x^\vee\rangle = 2$, $(x^\vee)^\vee = 2x^\vee/|x^\vee|^2 = x$, $(cx)^\vee = c^{-1}x^\vee$ $(c\in\mathbb{R}$, $c\neq 0)$.
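These identities are easy to check numerically. A minimal Python sketch (the helper names `inner`, `length`, `covec` are ours, not from the notes):

```python
import math

# Standard inner product on R^n, the length |x|, and x^vee = 2x/|x|^2.
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def length(x):
    return math.sqrt(inner(x, x))

def covec(x):
    c = 2 / inner(x, x)          # x^vee = (2/|x|^2) x
    return tuple(c * a for a in x)

# |x^vee| = 2/|x| and <x, x^vee> = 2, as claimed:
x = (3.0, 4.0)
assert math.isclose(length(covec(x)), 2 / length(x))
assert math.isclose(inner(x, covec(x)), 2.0)
```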

Reflections in V

Let $\alpha\in V$, $\alpha\neq 0$, and let $s_\alpha: V\to V$ be the orthogonal reflection in the hyperplane $H_\alpha = \{x\in V \mid \langle x,\alpha\rangle = 0\}$ perpendicular to $\alpha$. Clearly $|s_\alpha(x)| = |x|$ for all $x\in V$, i.e. $s_\alpha$ is an isometry.

  1. $s_\alpha(x) = x - \langle x,\alpha^\vee\rangle\alpha$ $(= x - \langle x,\alpha\rangle\alpha^\vee)$.
  2. $s_\alpha(x)^\vee = s_\alpha(x^\vee) = x^\vee - \langle x^\vee,\alpha\rangle\alpha^\vee$.
  3. $s_\alpha^2 = 1$.
  4. $\langle s_\alpha x,y\rangle = \langle x,s_\alpha y\rangle$.
  5. Let $f:V\to V$ be an isometry. Then $s_{f(\alpha)} = f s_\alpha f^{-1}$.

  1. We have $x - s_\alpha x = \lambda\alpha$ for some $\lambda\in\mathbb{R}$, so that $\langle x - s_\alpha x,\alpha\rangle = \lambda|\alpha|^2$. (IGM 1) On the other hand, $\langle x + s_\alpha x,\alpha\rangle = 0$ (IGM 2) because $\frac12(x + s_\alpha x)\in H_\alpha$. Adding (IGM 1) and (IGM 2) we get $2\langle x,\alpha\rangle = \lambda|\alpha|^2$, i.e. $\lambda = 2\langle x,\alpha\rangle/|\alpha|^2 = \langle x,\alpha^\vee\rangle$, which proves (i).
  2. $s_\alpha(x)^\vee = 2s_\alpha(x)/|s_\alpha(x)|^2 = 2s_\alpha(x)/|x|^2$ (because $s_\alpha$ is an isometry) $= s_\alpha(x^\vee) = x^\vee - \langle x^\vee,\alpha\rangle\alpha^\vee$ by (i).
  3. Is obvious from the definition.
  4. $\langle s_\alpha x,y\rangle = \langle x,y\rangle - \langle x,\alpha^\vee\rangle\langle y,\alpha\rangle = \langle x,y\rangle - 2\langle x,\alpha\rangle\langle y,\alpha\rangle/|\alpha|^2$ is symmetrical in $x$ and $y$, hence equal to $\langle s_\alpha y,x\rangle = \langle x,s_\alpha y\rangle$.
  5. Calculate, using (i): $f s_\alpha f^{-1}(x) = f\left(f^{-1}x - \frac{2\langle f^{-1}x,\alpha\rangle}{|\alpha|^2}\alpha\right) = x - \frac{2\langle x,f\alpha\rangle}{|f\alpha|^2}f\alpha = s_{f\alpha}(x)$.
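Properties (iii) and (iv) can be verified numerically. A small Python sketch (the names `inner` and `reflect` are ours):

```python
import math

# s_alpha(x) = x - <x, alpha^vee> alpha, the reflection in H_alpha.
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(alpha, x):
    c = 2 * inner(x, alpha) / inner(alpha, alpha)
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

alpha, x, y = (1.0, -1.0), (3.0, 5.0), (2.0, 7.0)

# (iii) s_alpha is an involution: s_alpha^2 = 1
assert reflect(alpha, reflect(alpha, x)) == x

# (iv) s_alpha is self-adjoint: <s_alpha x, y> = <x, s_alpha y>
assert math.isclose(inner(reflect(alpha, x), y), inner(x, reflect(alpha, y)))
```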

Root systems

A root system in $V$ is a non-empty subset $R$ of $V-\{0\}$ satisfying the following two axioms:

  1. (R1) For all $\alpha,\beta\in R$, $\langle\alpha,\beta^\vee\rangle\in\mathbb{Z}$ (integrality).
  2. (R2) For all $\alpha,\beta\in R$, $s_\alpha(\beta)\in R$ (symmetry).

  1. Since $s_\alpha(\alpha) = -\alpha$ it follows from (R2) that $\alpha\in R \Rightarrow -\alpha\in R$. Suppose on the other hand $\alpha\in R$ and $\beta = c\alpha\in R$ $(c\in\mathbb{R}$, $c\neq 0)$. Then $\langle\beta,\alpha^\vee\rangle = c\langle\alpha,\alpha^\vee\rangle = 2c$, so that $2c\in\mathbb{Z}$; and $\langle\alpha,\beta^\vee\rangle = c^{-1}\langle\alpha,\alpha^\vee\rangle = 2c^{-1}$, so that $2c^{-1}\in\mathbb{Z}$. So the only possibilities for $c$ are $c = \pm\frac12$, $\pm 1$, $\pm 2$. If $\alpha\in R$, $c\alpha\in R \Rightarrow c = \pm 1$, we say that $R$ is reduced. But there are non-reduced root systems as well (examples in a moment).
  2. I don't demand that $R$ spans $V$. The dimension of the subspace of $V$ spanned by $R$ is called the rank of $R$. It is therefore the maximum number of linearly independent elements of $R$.
  3. $R_1\subseteq V_1$, $R_2\subseteq V_2$: $R = R_1\cup R_2\subseteq V_1\oplus V_2$ (orthogonal direct sum). $R$ is reducible if it splits up in this way (decomposable would be a better word).


  1. rank $R=1$: only two possibilities, $R=\{\pm\alpha\}$ ($A_1$) and $R=\{\pm\alpha,\pm 2\alpha\}$ ($BC_1$). The first is reduced, the second isn't.
  2. rank $R=2$, $R$ reduced. Draw pictures of $A_1\times A_1$, $A_2$, $B_2$, $G_2$. The first of these is reducible, the others are irreducible.
  3. rank $R=2$, $R$ non-reduced: draw picture of $BC_2$.
  4. $V=\mathbb{R}^n$, standard basis $e_1,\ldots,e_n$ ($\langle e_i,e_j\rangle = \delta_{ij}$). $A_{n-1} = \{e_i-e_j \mid 1\le i,j\le n,\ i\neq j\}$ (rank $n-1$), $B_n = \{\pm e_i\pm e_j\ (1\le i<j\le n),\ \pm e_i\ (1\le i\le n)\}$, $C_n = \{\pm e_i\pm e_j\ (1\le i<j\le n),\ \pm 2e_i\ (1\le i\le n)\}$, $D_n = \{\pm e_i\pm e_j\ (1\le i<j\le n)\}$, $BC_n = B_n\cup C_n$ (not reduced). (Exercise: check (R1), (R2) in each case.)

    In fact, as we shall see later, this is almost a complete list of the irreducible root systems: apart from $A_n$ $(n\ge 1)$, $B_n$ $(n\ge 2)$, $C_n$ $(n\ge 3)$, $D_n$ $(n\ge 4)$, $BC_n$ $(n\ge 1)$ there are just 5 others: $E_6$, $E_7$, $E_8$, $F_4$, $G_2$ (the last of which we have already met).
  5. If $R$ is a root system, so is $R^\vee = \{\alpha^\vee \mid \alpha\in R\}$ (the dual root system). In the examples above, $A_{n-1}$, $BC_n$, and $D_n$ are self-dual; $B_n$ and $C_n$ are duals of each other.
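The exercise in (4) can be carried out by machine for a small case; here is a Python sketch for $B_2$, using exact rational arithmetic (the names `pairing`, `reflect` are ours):

```python
from fractions import Fraction
from itertools import product

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def pairing(beta, alpha):          # <beta, alpha^vee> = 2<beta,alpha>/<alpha,alpha>
    return Fraction(2 * inner(beta, alpha), inner(alpha, alpha))

def reflect(alpha, beta):          # s_alpha(beta) = beta - <beta, alpha^vee> alpha
    c = pairing(beta, alpha)
    return tuple(b - c * a for b, a in zip(beta, alpha))

# B_2 = {±e_i ± e_j (i<j), ±e_i} in R^2
B2 = {(1, 1), (1, -1), (-1, 1), (-1, -1), (1, 0), (-1, 0), (0, 1), (0, -1)}

# (R1): every pairing is an integer; (R2): B2 is stable under every s_alpha
for alpha, beta in product(B2, repeat=2):
    assert pairing(beta, alpha).denominator == 1
    assert reflect(alpha, beta) in B2
```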

I want now to start drawing some consequences from the integrality axiom (R1), which as we shall see restricts the possibilities very drastically. So let $R$ be a root system and $\alpha,\beta\in R$. Let $\theta = \theta_{\alpha\beta}\in[0,\pi]$ be the angle between the vectors $\alpha,\beta$, so that $\cos\theta = \langle\alpha,\beta\rangle/|\alpha||\beta|$ and hence $4\cos^2\theta = \frac{4\langle\alpha,\beta\rangle^2}{|\alpha|^2|\beta|^2} = \langle\alpha,\beta^\vee\rangle\langle\beta,\alpha^\vee\rangle$ (IGM 3), so that $|\langle\alpha,\beta^\vee\rangle\langle\beta,\alpha^\vee\rangle|\le 4$.

Let $\alpha,\beta\in R$ be linearly independent and assume $\langle\alpha,\beta\rangle\neq 0$. Then

  1. $\langle\alpha,\beta^\vee\rangle\langle\beta,\alpha^\vee\rangle = 1, 2$ or $3$.
  2. If $|\alpha|\le|\beta|$ then $\langle\alpha,\beta^\vee\rangle = \pm 1$ and $|\beta|^2/|\alpha|^2 = 1, 2$ or $3$.

  1. Follows from (IGM 3), since $\cos^2\theta\neq 0,1$.
  2. $\frac{|\beta|^2}{|\alpha|^2} = \frac{\langle\beta,\alpha^\vee\rangle}{\langle\alpha,\beta^\vee\rangle}$ (since $\langle\alpha,\beta\rangle\neq 0$). Hence $|\langle\alpha,\beta^\vee\rangle|\le|\langle\beta,\alpha^\vee\rangle|$ and it follows from (i) that $|\langle\alpha,\beta^\vee\rangle| = 1$, $|\langle\beta,\alpha^\vee\rangle| = 1, 2$ or $3$.

From the relation (IGM 3) we have $\cos^2\theta = 0, \frac14, \frac12, \frac34, 1$, giving $\cos\theta = 0, \pm\frac12, \pm\frac{1}{\sqrt2}, \pm\frac{\sqrt3}{2}, \pm 1$. So the possible values of $\theta = \theta_{\alpha\beta}$ are $\frac\pi2, \frac\pi3, \frac{2\pi}3, \frac\pi4, \frac{3\pi}4, \frac\pi6, \frac{5\pi}6, 0, \pi$, or collectively $\theta_{\alpha\beta} = \frac{r\pi}{12}$ where $0\le r\le 12$ and $r$ is not prime to 12 (i.e. $r\neq 1,5,7,11$).
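The list of possible angles can be recovered mechanically from $4\cos^2\theta\in\{0,1,2,3,4\}$; a short Python sketch:

```python
import math

# cos(theta) = s*sqrt(n)/2 for n in {0,...,4}, s = ±1, gives the nine angles
angles = sorted({round(math.degrees(math.acos(s * math.sqrt(n) / 2)), 1)
                 for n in range(5) for s in (1, -1)})

# i.e. r*pi/12 with 0 <= r <= 12 and r not prime to 12
assert angles == [0.0, 30.0, 45.0, 60.0, 90.0, 120.0, 135.0, 150.0, 180.0]
```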

$R$ is finite.

$R$ spans a subspace $V'$ of $V$, so we can choose $\alpha_1,\ldots,\alpha_r\in R$ forming a basis of $V'$. Let $v_1,\ldots,v_r$ be the dual basis of $V'$, defined by $\langle\alpha_i^\vee,v_j\rangle = \delta_{ij}$ $(1\le i,j\le r)$. Let $\beta\in R$; then $\beta\in V'$, say $\beta = \sum_{i=1}^r m_iv_i$, and the coefficients $m_i$ are given by $m_i = \langle\alpha_i^\vee,\beta\rangle$. So each $m_i$ is an integer and $|m_i|\le 4$. So only finitely many possibilities.

Weyl group

Let $W = \langle s_\alpha \mid \alpha\in R\rangle$ be the group of isometries of $V$ generated by the reflections $s_\alpha$, $\alpha\in R$. By (R2) each $w\in W$ permutes the elements of $R$, i.e. we have a homomorphism $W\to\mathrm{Sym}(R)$ (IGM 4) of $W$ into the group of permutations of $R$. As in Proposition 3.4, let $V'$ be the subspace of $V$ spanned by $R$ and let $V'' = (V')^\perp$ be the orthogonal complement of $V'$, so that $V = V'\oplus V''$. Each $s_\alpha$ $(\alpha\in R)$ fixes $V''$ pointwise (because $V''\subseteq H_\alpha$), hence each $w\in W$ fixes $V''$ pointwise.

Suppose $w\in W$ gives rise to the identity permutation under the homomorphism (IGM 4), i.e. $w(\alpha) = \alpha$ for all $\alpha\in R$. Then $w$ fixes $V'$ pointwise (because $R$ spans $V'$) as well as $V''$, i.e. $w = 1_V$. So $W$ embeds in $\mathrm{Sym}(R)$, which is a finite group by Proposition 3.4. Hence $W$ is a finite group, called the Weyl group of $R$: notation $W = W(R)$.


  1. Weyl groups of types $A,B,C,D$ (symmetric group, hyperoctahedral group, etc.).
  2. $W(R^\vee) = W(R)$.

Let $\alpha,\beta\in R$.

  1. If $\langle\alpha,\beta\rangle > 0$ (i.e. if $\theta_{\alpha\beta}$ is acute) then $\beta-\alpha\in R\cup\{0\}$.
  2. If $\langle\alpha,\beta\rangle < 0$ (i.e. if $\theta_{\alpha\beta}$ is obtuse) then $\beta+\alpha\in R\cup\{0\}$.

(ii) comes from (i) by replacing $\alpha$ by $-\alpha$, so it is enough to prove (i). First of all, if $\beta = c\alpha$ $(c\in\mathbb{R})$ then (as we have seen) $c = \frac12$, $1$ or $2$, so that $\beta-\alpha = (c-1)\alpha$ is $-\frac12\alpha = -\beta$ if $c=\frac12$; $0$ if $c=1$; $\alpha$ if $c=2$. So we may assume $\alpha,\beta$ linearly independent, and then by Proposition 3.3(i) the positive integers $\langle\beta,\alpha^\vee\rangle$, $\langle\alpha,\beta^\vee\rangle$ have product at most 3, so either $\langle\beta,\alpha^\vee\rangle = 1$ or $\langle\alpha,\beta^\vee\rangle = 1$. If $\langle\beta,\alpha^\vee\rangle = 1$ then by Proposition 2.1 we have $s_\alpha(\beta) = \beta - \langle\beta,\alpha^\vee\rangle\alpha = \beta-\alpha\in R$; if $\langle\alpha,\beta^\vee\rangle = 1$ then $s_\beta(\alpha) = \alpha-\beta\in R$ and hence also $\beta-\alpha = -(\alpha-\beta)\in R$.

Strings of roots

Let $\alpha,\beta\in R$ be linearly independent and let $I = \{i\in\mathbb{Z} \mid \beta+i\alpha\in R\}$.

  1. $I$ is an interval $[-p,q]\cap\mathbb{Z}$, where $p,q\ge 0$.
  2. $p-q = \langle\beta,\alpha^\vee\rangle$.

  1. Certainly $0\in I$. Let $-p$ (resp. $q$) be the smallest (resp. largest) element of $I$. Suppose $I\neq[-p,q]\cap\mathbb{Z}$. Then there exist $r,s\in I$ such that $s>r+1$, $r+1\notin I$, $s-1\notin I$. So we have $\beta+r\alpha\in R$, $(\beta+r\alpha)+\alpha\notin R$, hence $\langle\beta+r\alpha,\alpha\rangle\ge 0$; also $\beta+s\alpha\in R$, $(\beta+s\alpha)-\alpha\notin R$, hence $\langle\beta+s\alpha,\alpha\rangle\le 0$, both by Proposition 4.1. Subtract and we get $(r-s)|\alpha|^2\ge 0$, hence $r\ge s$, a contradiction. So $I = [-p,q]\cap\mathbb{Z}$.
  2. We have $s_\alpha(\beta+r\alpha) = \beta+r\alpha - \langle\beta+r\alpha,\alpha^\vee\rangle\alpha = \beta - (\langle\beta,\alpha^\vee\rangle + r)\alpha$. Hence $r\in I$ implies $-(\langle\beta,\alpha^\vee\rangle + r)\in I$.

    Take $r=q$: $-(\langle\beta,\alpha^\vee\rangle + q)\ge -p$, i.e. $\langle\beta,\alpha^\vee\rangle\le p-q$.

    Take $r=-p$: $-(\langle\beta,\alpha^\vee\rangle - p)\le q$, i.e. $\langle\beta,\alpha^\vee\rangle\ge p-q$.

The set of roots $\beta+i\alpha$ $(-p\le i\le q)$ is called the $\alpha$-string through $\beta$. It follows from Proposition 5.1 that a string of roots has at most 4 elements: take $q=0$, i.e. $\beta$ at the end of the string; then $p = \langle\beta,\alpha^\vee\rangle\le 3$ because $\alpha,\beta$ are linearly independent.
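The string through a root and Proposition 5.1 can be checked in a small example; a Python sketch in $B_2$ (our choice of example; the name `string` is ours):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

B2 = {(1, 1), (1, -1), (-1, 1), (-1, -1), (1, 0), (-1, 0), (0, 1), (0, -1)}

def string(alpha, beta, R):
    # I = {i : beta + i*alpha in R}; |i| <= 4 suffices since strings have <= 4 roots
    return sorted(i for i in range(-4, 5)
                  if tuple(b + i * a for b, a in zip(beta, alpha)) in R)

alpha, beta = (1, 0), (-1, 1)
I = string(alpha, beta, B2)
assert I == list(range(min(I), max(I) + 1))                    # I is an interval
p, q = -min(I), max(I)
assert p - q == 2 * inner(beta, alpha) // inner(alpha, alpha)  # p - q = <beta,alpha^vee>
```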

Bases of R

A basis of $R$ is a subset $B$ of $R$ such that

  1. (B1) $B$ is linearly independent.
  2. (B2) For each $\alpha\in R$ we have $\alpha = \sum_{\beta\in B} m_\beta\beta$, with coefficients $m_\beta\in\mathbb{Z}$ and either all $m_\beta\ge 0$ or all $m_\beta\le 0$.

From (B2) it follows that $B$ spans the subspace $V'$ of $V$ spanned by $R$, hence by (B1) $B$ is a basis of $V'$, and therefore $\mathrm{Card}(B) = \mathrm{rank}(R)$.

Examples. $A_{n-1}$: $B = \{e_i-e_{i+1},\ 1\le i\le n-1\}$. $B_n$: $B = \{e_i-e_{i+1},\ 1\le i\le n-1;\ e_n\}$. $C_n$: $B = \{e_i-e_{i+1},\ 1\le i\le n-1;\ 2e_n\}$. $D_n$: $B = \{e_i-e_{i+1},\ 1\le i\le n-1;\ e_{n-1}+e_n\}$. $G_2$: draw picture.

Defining something gives no guarantee that it exists. However, the following construction provides bases (in fact, all of them). Say $x\in V$ is regular if $\langle\alpha,x\rangle\neq 0$ for all $\alpha\in R$, i.e. if $x$ does not lie in any of the reflecting hyperplanes $H_\alpha$ $(\alpha\in R)$. Let $x$ be regular and let $R^+ = R_x^+ = \{\alpha\in R \mid \langle\alpha,x\rangle > 0\}$. Since $\alpha\in R$ implies $-\alpha\in R$ it follows that $R = R_x^+\cup(-R_x^+)$ (disjoint union). A root $\alpha\in R_x^+$ will be called (temporarily) decomposable if $\alpha = \beta+\gamma$ with $\beta,\gamma\in R_x^+$; otherwise indecomposable. Let $B_x$ be the set of indecomposable elements of $R_x^+$.

$B_x$ is a basis of $R$.

In several steps.
  1. Let $S = \sum_{\beta\in B_x}\mathbb{N}\beta$, the set of sums of elements of $B_x$ (with repetitions allowed). I claim that $R_x^+\subseteq S$. Suppose not, and choose $\alpha\in R_x^+$, $\alpha\notin S$ such that $\langle\alpha,x\rangle$ is as small as possible. Certainly $\alpha\notin B_x$ (because $B_x\subseteq S$), hence $\alpha$ is decomposable, say $\alpha = \beta+\gamma$ $(\beta,\gamma\in R_x^+)$. Hence $\langle\alpha,x\rangle = \langle\beta,x\rangle + \langle\gamma,x\rangle$; both $\langle\beta,x\rangle$ and $\langle\gamma,x\rangle$ are positive, hence less than $\langle\alpha,x\rangle$. It follows that $\beta\in S$ and $\gamma\in S$, hence (as $S$ is closed under addition) $\alpha\in S$, contradiction. Hence $R = R_x^+\cup(-R_x^+)\subseteq S\cup(-S)$ and so $B_x$ satisfies (B2).
  2. Let $\alpha,\beta\in B_x$, $\alpha\neq\beta$. Then $\langle\alpha,\beta\rangle\le 0$ (i.e. $\theta_{\alpha\beta}\ge\frac\pi2$).

    Suppose $\langle\alpha,\beta\rangle > 0$. By Proposition 4.1 we have $\beta-\alpha\in R$ and hence also $\alpha-\beta\in R$. So either $\beta-\alpha\in R_x^+$, in which case $\beta = \alpha+(\beta-\alpha)$ is decomposable; or $\alpha-\beta\in R_x^+$, in which case $\alpha = \beta+(\alpha-\beta)$ is decomposable. Contradiction in either case.
  3. $B_x$ is linearly independent.

    Suppose not; then there exists a linear dependence relation which we can write in the form $\sum_{\alpha\in B'} m_\alpha\alpha = \sum_{\beta\in B''} n_\beta\beta$ $(= \lambda$, say$)$ where $B'$, $B''$ are disjoint subsets of $B_x$ and the coefficients $m_\alpha$, $n_\beta$ are all $>0$. By (2) above we have $|\lambda|^2 = \sum_{\alpha,\beta} m_\alpha n_\beta\langle\alpha,\beta\rangle\le 0$ and hence $\lambda = 0$. Hence $0 = \langle\lambda,x\rangle = \sum_\alpha m_\alpha\langle\alpha,x\rangle = \sum_\beta n_\beta\langle\beta,x\rangle$. Since the $\langle\alpha,x\rangle$ and $\langle\beta,x\rangle$ are positive it follows that $B' = B'' = \emptyset$.

Conversely, all bases $B$ of $R$ are of the form $B_x$, where $x\in V$ is regular.

Let $B$ be any basis of $R$, and let $C = \{x\in V \mid \langle\alpha,x\rangle > 0$ for all $\alpha\in B\}$. Then $C\neq\emptyset$, every $x\in C$ is regular, and $B = B_x$ for all $x\in C$.

Let $B = \{\alpha_1,\ldots,\alpha_r\}$; $B$ is a basis of $V'$ (as remarked earlier), hence there exists a dual basis $\{v_1,\ldots,v_r\}$ of $V'$ such that $\langle\alpha_i,v_j\rangle = \delta_{ij}$. Let $x\in V'$, say $x = \sum_{i=1}^r x_iv_i$; then $\langle\alpha_i,x\rangle = x_i$, and hence $x\in C$ provided all the coefficients $x_i$ are $>0$. So $C$ is certainly not empty.

Now let $x\in C$, $\alpha\in R^+$: $\alpha = \sum_{i=1}^r m_i\alpha_i$ with all $m_i\ge 0$ (not all zero), hence $\langle\alpha,x\rangle = \sum_{i=1}^r m_i\langle\alpha_i,x\rangle > 0$ for all $\alpha\in R^+$, and likewise $\langle\alpha,x\rangle < 0$ for all $\alpha\in R^-$. So $x$ is regular and $R^+ = R_x^+$, whence $B = B_x$.

So the above construction provides all bases of R.
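The construction of $B_x$ from a regular vector $x$ is easily mechanized; a Python sketch in $B_2$ (the name `base` is ours):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

B2 = [(1, 1), (1, -1), (-1, 1), (-1, -1), (1, 0), (-1, 0), (0, 1), (0, -1)]

def base(x, R):
    # R_x^+ = roots positive on x; B_x = its indecomposable elements
    Rplus = [a for a in R if inner(a, x) > 0]
    sums = {tuple(b + c for b, c in zip(u, v)) for u in Rplus for v in Rplus}
    return sorted(a for a in Rplus if a not in sums)

x = (2, 1)                     # regular: <alpha, x> != 0 for every alpha in B2
assert base(x, B2) == [(0, 1), (1, -1)]   # the standard base {e2, e1 - e2} of B_2
```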

Let $B$ be a basis of $R$; $\alpha,\beta\in B$, $\alpha\neq\beta$. Then $\langle\alpha,\beta\rangle\le 0$ (i.e. $\theta_{\alpha\beta}\ge\frac\pi2$).

By Proposition 6.3, $B = B_x$ for some regular $x\in V$. Hence Proposition 6.4 follows from the proof of Proposition 6.2.

From now on it will be simpler (and will involve no loss of generality) to assume that $V' = V$, i.e. that $R$ spans $V$. Let $B = \{\alpha_1,\ldots,\alpha_r\}$ be a basis of $R$ (so $r = \mathrm{rank}(R)$) and let $R^+$ be the set of positive roots relative to $B$, $R^- = -R^+$ the set of negative roots. ($\alpha_1,\ldots,\alpha_r$ are also called a set of simple roots.)

Also assume $R$ reduced until further notice ($\alpha\in R \Rightarrow 2\alpha\notin R$).

Let $s_i = s_{\alpha_i}$ $(1\le i\le r)$ and set $\rho = \frac12\sum_{\alpha\in R^+}\alpha$ (half the sum of the positive roots).

  1. $s_i$ permutes the set $R^+-\{\alpha_i\}$.
  2. $s_i\rho = \rho-\alpha_i$.
  3. $\langle\rho,\alpha_i^\vee\rangle = 1$.

  1. Let $\beta\in R^+$, $\beta\neq\alpha_i$. Then by Proposition 2.1 $s_i\beta = \beta - \langle\beta,\alpha_i^\vee\rangle\alpha_i$. Now $\beta$ is of the form $\sum_{j=1}^r m_j\alpha_j$ with at least one coefficient $m_j$, $j\neq i$, positive (because $R$ is reduced, so $\beta$ is not a positive multiple of $\alpha_i$). Hence the coefficient of $\alpha_j$ in $s_i\beta$ is also positive, hence $s_i\beta\in R^+$; and $s_i\beta\neq\alpha_i$, since $s_i\beta = \alpha_i$ would give $\beta = s_i\alpha_i = -\alpha_i\notin R^+$.
  2. From (i) it follows that $s_i\rho = \frac12\sum_{\beta\in R^+,\,\beta\neq\alpha_i}\beta - \frac12\alpha_i = \rho-\alpha_i$.
  3. Follows from (ii), since $s_i\rho = \rho - \langle\rho,\alpha_i^\vee\rangle\alpha_i$.

As in Proposition 6.3, let $C = \{x\in V \mid \langle x,\alpha_i\rangle > 0\ (1\le i\le r)\} = \{x\in V \mid \langle x,\alpha\rangle > 0$ for all $\alpha\in R^+\}$. $C$ is the Weyl chamber associated with the basis $B = \{\alpha_1,\ldots,\alpha_r\}$. It is the intersection of $r$ open half-spaces in $V$ ($\dim V = r$ now), so it is an open simplicial cone. Relative to the dual basis $\{v_1,\ldots,v_r\}$ it is the positive octant.

From Proposition 6.5(iii) it follows that $\rho\in C$.

Let $x\in V$ be regular. Then there exists $w\in W$ such that $wx\in C$.

Choose $w\in W$ such that $\langle wx,\rho\rangle$ is as large as possible. Then for $1\le i\le r$ we have $\langle wx,\rho\rangle\ge\langle s_iwx,\rho\rangle = \langle wx,s_i\rho\rangle = \langle wx,\rho-\alpha_i\rangle$ (by Proposition 6.5) $= \langle wx,\rho\rangle - \langle wx,\alpha_i\rangle$, so that $\langle wx,\alpha_i\rangle\ge 0$; but also $\langle wx,\alpha_i\rangle = \langle x,w^{-1}\alpha_i\rangle\neq 0$ (because $x$ is regular and $w^{-1}\alpha_i\in R$), hence $\langle wx,\alpha_i\rangle > 0$ for $1\le i\le r$, i.e. $wx\in C$.

Let $B'$ be another basis of $R$. Then $B' = wB$ for some $w\in W$.

By Proposition 6.3 we have $B' = B_x$ for some regular $x\in V$, and the corresponding set of positive roots is $R_x^+ = \{\alpha\in R \mid \langle\alpha,x\rangle > 0\}$. By Proposition 6.6 there exists $w\in W$ such that $wx\in C$, and therefore $\alpha\in R_x^+ \Leftrightarrow \langle\alpha,x\rangle > 0 \Leftrightarrow \langle w\alpha,wx\rangle > 0 \Leftrightarrow w\alpha\in R^+$, so that $R_x^+ = w^{-1}R^+$ and hence $B' = B_x = w^{-1}B$.

We shall show later that $B' = wB$ for exactly one $w\in W$.

  1. Let $\alpha\in R$; then $\alpha = w\alpha_i$ for some $w\in W$ and some $i$ (i.e. $R = WB$).
  2. $W$ is generated by $s_1,\ldots,s_r$ $(s_i = s_{\alpha_i})$.

  1. Let $W_0$ be the subgroup of $W$ generated by $s_1,\ldots,s_r$. We shall show that (i) holds for some $w\in W_0$. We may assume that $\alpha\in R^+$, for if $-\alpha = w\alpha_i$ then $\alpha = ws_i\alpha_i$.

    For $\alpha\in R^+$, say $\alpha = \sum_{i=1}^r m_i\alpha_i$, define the height of $\alpha$ to be $\mathrm{ht}(\alpha) = \sum_{i=1}^r m_i$, the sum of the coefficients. (So $\mathrm{ht}(\alpha) = 1 \Leftrightarrow \alpha\in B$.) We proceed by induction on $\mathrm{ht}(\alpha)$. We must have $\langle\alpha,\alpha_i\rangle > 0$ for some $i$, for otherwise we should have $|\alpha|^2 = \sum_{i=1}^r m_i\langle\alpha,\alpha_i\rangle\le 0$, which is impossible. Hence $s_i\alpha = \alpha - \langle\alpha,\alpha_i^\vee\rangle\alpha_i$ has height $\mathrm{ht}(s_i\alpha) = \mathrm{ht}(\alpha) - \langle\alpha,\alpha_i^\vee\rangle < \mathrm{ht}(\alpha)$, and hence by the inductive hypothesis $s_i\alpha = w\alpha_j$ for some $w\in W_0$, $\alpha_j\in B$. So $\alpha = s_iw\alpha_j$, and $s_iw\in W_0$.
  2. Enough to show $s_\alpha\in W_0$ for each $\alpha\in R$. But $\alpha = w\alpha_i$ with $w\in W_0$, hence $s_\alpha = ws_iw^{-1}$ (Proposition 2.1) $\in W_0$.

From Proposition 6.9, each $w\in W$ can be written in the form $w = s_{a_1}\cdots s_{a_p}$. If (for a given $w$) the number $p$ of factors is as small as possible, then $s_{a_1}\cdots s_{a_p}$ is called a reduced expression for $w$, and $p$ is the length of $w$, denoted by $\ell(w)$ (relative to the generators $s_1,\ldots,s_r$). Thus $\ell(1) = 0$; $\ell(w) = 1 \Leftrightarrow w = s_i$; $\ell(w) = \ell(w^{-1})$.

Let $w\in W$; then $\ell(w) > \ell(s_iw) \Leftrightarrow w^{-1}\alpha_i < 0$ (i.e. $\alpha_i\in wR^-$).

Suppose $w^{-1}\alpha_i < 0$. Let $w = t_1\cdots t_p$ be a reduced expression for $w$, where each $t_j$ is an $s_k$, say $t_j = s_{\beta_j}$, $\beta_j\in B$. Let $w_j = t_1\cdots t_j$ $(0\le j\le p)$, so that $w_0 = 1$ and $w_p = w$. So we have $w_0^{-1}\alpha_i = \alpha_i > 0$ and $w_p^{-1}\alpha_i = w^{-1}\alpha_i < 0$, hence there exists $j\in[1,p]$ such that $\beta = w_{j-1}^{-1}\alpha_i > 0$ and $w_j^{-1}\alpha_i < 0$. Now $w_j^{-1} = t_jw_{j-1}^{-1}$, so that we have $\beta > 0$, $t_j\beta < 0$, $t_j = s_{\beta_j}$, $\beta_j\in B$. By Proposition 6.5, $\beta\neq\beta_j \Rightarrow t_j\beta > 0$, so we must have $\beta = \beta_j$ and hence $\alpha_i = w_{j-1}\beta_j$, giving (Proposition 2.1) $s_i = w_{j-1}t_jw_{j-1}^{-1} = w_jw_{j-1}^{-1}$, or $w_j = s_iw_{j-1}$, and therefore $s_iw = s_iw_{j-1}t_jt_{j+1}\cdots t_p = w_jt_jt_{j+1}\cdots t_p = (t_1\cdots t_j)t_jt_{j+1}\cdots t_p = t_1\cdots t_{j-1}t_{j+1}\cdots t_p$, showing that $\ell(s_iw)\le p-1 < \ell(w)$. So we have proved that $w^{-1}\alpha_i < 0 \Rightarrow \ell(s_iw) < \ell(w)$. (IGM 5) Suppose now that $w^{-1}\alpha_i > 0$; then $(s_iw)^{-1}\alpha_i = w^{-1}s_i\alpha_i = -w^{-1}\alpha_i < 0$, hence (replacing $w$ by $s_iw$ in (IGM 5)) we have $\ell(w) < \ell(s_iw)$. This completes the proof.

Suppose $w_1,w_2\in W$, $w_1\neq w_2$. Then $w_1B\neq w_2B$.

We have to show that $B\neq w_1^{-1}w_2B$, i.e. $B\neq wB$ if $w\neq 1$. So let $w = s_i\cdots$ be a reduced expression for $w$, beginning with some $s_i$. Then $\ell(s_iw) < \ell(w)$, hence $w^{-1}\alpha_i < 0$, hence $w^{-1}\alpha_i\notin B$, i.e. $\alpha_i\notin wB$. So $B\neq wB$ as required.

Example. Since $B$ is a basis of $R$, so is $-B$. Positive roots relative to $B$ are negative roots relative to $-B$ and vice versa. By Proposition 6.11 we have $-B = w_0B$ for a unique $w_0\in W$. $w_0$ is called the longest element of $W$ (relative to the basis $B$). We have $w_0^2 = 1$, because $w_0^2B = w_0(-B) = B$.

For each $w\in W$ let $R(w) = \{\alpha\in R^+ \mid w^{-1}\alpha\in R^-\} = R^+\cap wR^-$.

Suppose that $\ell(w) > \ell(s_iw)$. Then $R(w) = s_iR(s_iw)\cup\{\alpha_i\}$.

We have $R(s_iw) = R^+\cap s_iwR^-$ and therefore $s_iR(s_iw) = s_iR^+\cap wR^-$. (IGM 6) Now by Proposition 6.5 $s_iR^+ = (R^+-\{\alpha_i\})\cup\{-\alpha_i\}$ (IGM 7) and by Proposition 6.10 $w^{-1}\alpha_i < 0$, i.e. $\alpha_i\in wR^-$ and therefore $-\alpha_i\notin wR^-$. Hence from (IGM 6) and (IGM 7) we deduce that $s_iR(s_iw) = (R^+-\{\alpha_i\})\cap wR^- = (R^+\cap wR^-) - \{\alpha_i\} = R(w) - \{\alpha_i\}$.

[Compare Schubert polynomials, Ch. I, esp. (1.2).]

Note that $\alpha_i\notin s_iR(s_iw)$ (otherwise we should have $-\alpha_i = s_i\alpha_i\in R(s_iw)\subseteq R^+$, impossible).

  1. Let $w = t_1\cdots t_p$ be a reduced expression, where $t_i = s_{\beta_i}$, $\beta_i\in B$. Then $R(w) = \{t_1\cdots t_{i-1}\beta_i \mid 1\le i\le p\}$ (IGM 8) and these $p$ roots are all distinct.
  2. $\ell(w) = \mathrm{Card}\,R(w)$.

  1. Since $t_1w = t_2\cdots t_p$ it follows that $\ell(w) = p > \ell(t_1w)$, hence by Proposition 6.12 $R(w) = \{\beta_1\}\cup t_1R(t_2\cdots t_p)$, from which (IGM 8) follows by induction on $p$ [SP, (1.7)]. Suppose $t_1\cdots t_{i-1}\beta_i = t_1\cdots t_{j-1}\beta_j$ where $i<j$. Then $\beta_i = t_i\cdots t_{j-1}\beta_j$ and therefore by Proposition 2.1 $t_i = s_{\beta_i} = (t_i\cdots t_{j-1})s_{\beta_j}(t_i\cdots t_{j-1})^{-1} = (t_i\cdots t_j)(t_i\cdots t_{j-1})^{-1}$, from which it follows that $t_i\cdots t_j = t_i\cdot t_i\cdots t_{j-1} = t_{i+1}\cdots t_{j-1}$ and hence that $w = t_1\cdots t_p = t_1\cdots\widehat{t_i}\cdots\widehat{t_j}\cdots t_p$, contradicting the assumption that $t_1\cdots t_p$ is reduced.
  2. Hence $\mathrm{Card}\,R(w) = p = \ell(w)$.

Example. $R(w_0) = R^+\cap w_0R^- = R^+$, hence $\ell(w_0) = \mathrm{Card}\,R^+$ = number of reflections in $W$.
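Proposition 6.13(ii) can be tested in a small case; a Python sketch in $W(B_2)$ (the matrix-free realization by its action on roots, and the names `act`, `length`, are ours):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(alpha, x):
    c = 2 * inner(x, alpha) // inner(alpha, alpha)   # exact: pairings are integers
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

simple = [(1, -1), (0, 1)]                  # base of B_2
Rplus = [(1, 0), (0, 1), (1, 1), (1, -1)]   # positive roots

def act(word, x):                # w = s_{i1} ... s_{ip} acting on x (rightmost first)
    for i in reversed(word):
        x = reflect(simple[i], x)
    return x

def length(word):                # Card R(w) = #{alpha in R+ : w^{-1} alpha < 0}
    inv = list(reversed(word))   # w^{-1} is the reversed word
    return sum(1 for a in Rplus if act(inv, a) not in Rplus)

# l(w) grows by one along the reduced word s1 s2 s1 s2 (longest element of W(B_2))
assert [length([0, 1, 0, 1][:k]) for k in range(5)] == [0, 1, 2, 3, 4]
```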

$\ell(w) = \ell(s_iw)+1 \Leftrightarrow w^{-1}\alpha_i < 0$, and $\ell(w) = \ell(s_iw)-1 \Leftrightarrow w^{-1}\alpha_i > 0$.

We have $w^{-1}\alpha_i < 0 \Rightarrow \ell(w) > \ell(s_iw)$ (Proposition 6.10) $\Rightarrow R(w) = s_iR(s_iw)\cup\{\alpha_i\}$ (Proposition 6.12) $\Rightarrow \ell(w) = \ell(s_iw)+1$ (Proposition 6.13). Replacing $w$ by $s_iw$: $w^{-1}\alpha_i > 0 \Rightarrow (s_iw)^{-1}\alpha_i = w^{-1}s_i\alpha_i = -w^{-1}\alpha_i < 0 \Rightarrow \ell(s_iw) = \ell(w)+1$.

(Exchange lemma) Let $w = t_1\cdots t_p = u_1\cdots u_p$ be two reduced expressions for $w$, where $t_i = s_{\beta_i}$, $u_i = s_{\gamma_i}$ with $\beta_i,\gamma_i\in B$. Then for some $i\in[1,p]$ we have $w = u_1t_1\cdots\widehat{t_i}\cdots t_p$ (i.e. we can exchange $u_1$ with one of the $t_i$) [SP, (1.8)].

By Proposition 6.13 we have $\gamma_1\in R(w)$, hence $\gamma_1 = t_1\cdots t_{i-1}\beta_i$ for some $i\in[1,p]$. Hence by Proposition 2.1 $u_1 = s_{\gamma_1} = (t_1\cdots t_{i-1})s_{\beta_i}(t_1\cdots t_{i-1})^{-1} = (t_1\cdots t_{i-1}t_i)(t_1\cdots t_{i-1})^{-1}$, and therefore $t_1\cdots t_i = u_1t_1\cdots t_{i-1}$, giving $w = t_1\cdots t_i\cdots t_p = u_1t_1\cdots t_{i-1}t_{i+1}\cdots t_p$.

We shall next deduce from this exchange lemma that the Weyl group $W$ is a Coxeter group (definition later). Consider two generators $s_i,s_j$ of $W$ $(i\neq j)$ and let $m_{ij}$ = order of $s_is_j$ in $W$ = order of $s_js_i$ in $W$ (because $s_js_i = (s_is_j)^{-1}$). Then we have $s_is_js_i\cdots = s_js_is_j\cdots$ (IGM 9) where there are $m_{ij}$ $(\ge 2)$ terms on either side.

Let $w\in W$, of length $\ell(w) = p$. A reduced word for $w$ is a sequence $\underline{t} = (t_1,\ldots,t_p)$ where each $t_i$ is one of the $s_j$, and $w = t_1\cdots t_p$. Let $S(w)$ denote the set of all reduced words for $w$. We make $S(w)$ into a graph as follows: let $u_{ij}$ denote the word $u_{ij} = (s_i,s_j,s_i,\ldots)$ of length $m_{ij}$ $(i\neq j)$. Suppose $\underline{t}\in S(w)$ contains $u_{ij}$ as a consecutive subword, and let $\underline{t}'$ be the word obtained from $\underline{t}$ by replacing that subword by $u_{ji}$; by (IGM 9) we have $\underline{t}'\in S(w)$, and we join $\underline{t}$, $\underline{t}'$ by an edge.

The graph $S(w)$ is connected.

Induction on $\ell(w)$. When $\ell(w) = 1$, $w = s_i$ and $S(w)$ has just one element.

Let $\underline{t} = (t_1,\ldots,t_p)$, $\underline{u} = (u_1,\ldots,u_p)\in S(w)$ $(p = \ell(w))$. We shall write $\underline{t}\sim\underline{u}$ if $\underline{t}$, $\underline{u}$ are in the same connected component of $S(w)$. The inductive hypothesis assures us that $\underline{t}\sim\underline{u}$ if either $t_1 = u_1$ or $t_p = u_p$. (IGM 10) For if $t_1 = u_1$ then $w' = t_1w$ has $\ell(w') = p-1$ and hence $(t_2,\ldots,t_p)\sim(u_2,\ldots,u_p)$ in $S(w')$; similarly if $t_p = u_p$. We want to prove that $\underline{t}\sim\underline{u}$. If $t_1 = u_1$ we are through, by (IGM 10). If $t_1\neq u_1$, then (exchange) there exists $i\in[1,p]$ such that $\underline{a} = (u_1,t_1,\ldots,\widehat{t_i},\ldots,t_p)\in S(w)$. Suppose $i\neq p$. Then $\underline{t}\sim\underline{a}\sim\underline{u}$ by (IGM 10), and therefore $\underline{t}\sim\underline{u}$.

Suppose $i = p$, so that $\underline{a} = (u_1,t_1,\ldots,t_{p-1})$. Let $m$ be the order of $t_1u_1$ in $W$. If $m = 2$ then $\underline{a}' = (t_1,u_1,t_2,\ldots,t_{p-1})\in S(w)$ and $\underline{t}\sim\underline{a}'\sim\underline{a}\sim\underline{u}$, so again $\underline{t}\sim\underline{u}$.

Suppose $i = p$ and $m > 2$. We have $\underline{a} = (u_1,t_1,\ldots,t_{p-1})\in S(w)$ and $\underline{t} = (t_1,t_2,\ldots,t_p)$, hence (exchange) there exists $i\in[1,p-1]$ such that $\underline{b} = (t_1,u_1,t_1,\ldots,\widehat{t_i},\ldots,t_{p-1})\in S(w)$. Suppose $i\neq p-1$. Then we have $\underline{t}\sim\underline{b}\sim\underline{a}\sim\underline{u}$ by (IGM 10), and hence $\underline{t}\sim\underline{u}$.

Suppose $i = p-1$ and $m = 3$. Then $\underline{b} = (t_1,u_1,t_1,t_2,\ldots,t_{p-2})$, and the braid move gives $\underline{b}' = (u_1,t_1,u_1,t_2,\ldots,t_{p-2})\in S(w)$; thus $\underline{t}\sim\underline{b}\sim\underline{b}'\sim\underline{u}$, so again we are through.

Suppose $i = p-1$ and $m > 3$. Then we have $\underline{b} = (t_1,u_1,t_1,t_2,\ldots,t_{p-2})\in S(w)$ and $\underline{u} = (u_1,u_2,\ldots,u_p)\in S(w)$, so by exchange there exists $i\in[1,p-2]$ such that $\underline{c} = (u_1,t_1,u_1,t_1,\ldots,\widehat{t_i},\ldots,t_{p-2})\in S(w)$. Suppose $i\neq p-2$. Then $\underline{t}\sim\underline{b}\sim\underline{c}\sim\underline{u}$ and again $\underline{t}\sim\underline{u}$.

Suppose $i = p-2$ and $m = 4$. Then $\underline{c} = (u_1,t_1,u_1,t_1,t_2,\ldots,t_{p-3})$, and the braid move gives $\underline{c}' = (t_1,u_1,t_1,u_1,t_2,\ldots,t_{p-3})\in S(w)$; thus $\underline{t}\sim\underline{c}'\sim\underline{c}\sim\underline{u}$, so again $\underline{t}\sim\underline{u}$.

Suppose $i = p-2$ and $m > 4$. Repeat the argument: eventually we shall get $\underline{t}\sim\underline{u}$, as required.

The generators $s_i$ $(1\le i\le r)$ and relations $s_i^2 = 1$, $(s_is_j)^{m_{ij}} = 1$ $(i\neq j)$ form a presentation of $W$.

What this means is the following: given a group $G$ and elements $g_i\in G$ $(1\le i\le r)$ satisfying $g_i^2 = 1$, $(g_ig_j)^{m_{ij}} = 1$ $(i\neq j)$, there exists a homomorphism $f:W\to G$ (necessarily unique) such that $f(s_i) = g_i$ $(1\le i\le r)$. Let $w\in W$ and let $(t_1,\ldots,t_p) = \underline{t}\in S(w)$. Since $w = t_1\cdots t_p$ we must have $f(w) = f(t_1)\cdots f(t_p) = F(\underline{t})$, say. So we have to show that $F(\underline{t}) = F(\underline{u})$ if $\underline{t},\underline{u}\in S(w)$. Now in $G$ we have $g_ig_jg_i\cdots = g_jg_ig_j\cdots$ ($m_{ij}$ terms on either side), i.e. $f(s_i)f(s_j)f(s_i)\cdots = f(s_j)f(s_i)f(s_j)\cdots$. Hence $F(\underline{t}) = F(\underline{u})$ if $\underline{t},\underline{u}$ are joined by an edge in $S(w)$. By Proposition 6.16 it follows that $F(\underline{t}) = F(\underline{u})$ for all $\underline{t},\underline{u}\in S(w)$, as required. So $f$ is well defined and it remains to check that it is a homomorphism.

Consider $f(s_iw)$: suppose first that $\ell(s_iw) = \ell(w)+1$. If $w = t_1\cdots t_p$ is a reduced expression, then $s_iw = s_it_1\cdots t_p$ is also reduced, hence $f(s_iw) = f(s_i)f(t_1)\cdots f(t_p) = f(s_i)f(w)$. If on the other hand $\ell(s_iw) = \ell(w)-1$ (Proposition 6.14), replace $w$ by $s_iw$: $f(w) = f(s_i)f(s_iw)$, and hence $f(s_iw) = f(s_i)^{-1}f(w) = f(s_i)f(w)$, since $f(s_i) = g_i = g_i^{-1}$.

So we have $f(s_iw) = f(s_i)f(w)$ (IGM 11) in all cases. Hence if $v\in W$, $v = u_1\cdots u_q$ reduced, $f(vw) = f(u_1u_2\cdots u_qw) = f(u_1)f(u_2\cdots u_qw) = \cdots = f(u_1)\cdots f(u_q)f(w) = f(v)f(w)$.
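The Coxeter relations can be verified concretely in a small case; a Python sketch for $W(B_2)$, realized as $2\times 2$ integer matrices (our realization, with $m_{12} = 4$):

```python
# W(B_2) = <s1, s2 | s1^2 = s2^2 = (s1 s2)^4 = 1>, of order 8.
s1 = ((0, 1), (1, 0))         # reflection in e1 - e2 (swap coordinates)
s2 = ((1, 0), (0, -1))        # reflection in e2 (negate second coordinate)
I = ((1, 0), (0, 1))

def mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def power(a, n):
    r = I
    for _ in range(n):
        r = mul(r, a)
    return r

# the defining relations hold in this realization
assert mul(s1, s1) == I and mul(s2, s2) == I and power(mul(s1, s2), 4) == I

# closing {1} under right multiplication by s1, s2 gives the whole group
W = {I}
while True:
    new = W | {mul(w, s) for w in W for s in (s1, s2)}
    if new == W:
        break
    W = new
assert len(W) == 8
```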

Weyl chamber

$R$, $B$ etc. as before. Recall that the Weyl chamber associated with $B$ is $C = \{x\in V \mid \langle x,\alpha_i\rangle > 0\ (1\le i\le r)\}$. It is an open simplicial cone and its closure in $V$ is $\overline{C} = \{x\in V \mid \langle x,\alpha_i\rangle\ge 0\ (1\le i\le r)\}$.

$\overline{C}$ is a fundamental domain for the action of $W$ on $V$ (i.e. every $W$-orbit in $V$ meets $\overline{C}$ in exactly one point).

  1. (cf. Proposition 6.6) Let $x\in V$, let $\rho = \frac12\sum_{\alpha>0}\alpha$, and choose $w\in W$ so that $\langle wx,\rho\rangle$ is as large as possible. Then for $i\in[1,r]$ we have $\langle wx,\rho\rangle\ge\langle s_iwx,\rho\rangle = \langle wx,s_i\rho\rangle = \langle wx,\rho-\alpha_i\rangle$ (Proposition 6.5) $= \langle wx,\rho\rangle - \langle wx,\alpha_i\rangle$, so that $\langle wx,\alpha_i\rangle\ge 0$ and hence $wx\in\overline{C}$. So each $W$-orbit meets $\overline{C}$.
  2. It remains to prove that if $x\in\overline{C}$ and $y = wx\in\overline{C}$ then $x = y$ (but it doesn't follow necessarily that $w = 1$). We proceed by induction on $\ell(w)$. If $\ell(w) = 0$ then $w = 1$, so $y = x$. If $\ell(w)\ge 1$ we can write $w = s_iw'$ with $\ell(w') = \ell(w)-1$ (take a reduced word for $w$ beginning with $s_i$). Then $w' = s_iw$, so that $\ell(w) = \ell(s_iw)+1$, hence $w^{-1}\alpha_i\in R^-$ by Proposition 6.14. It follows that $\langle\alpha_i,y\rangle = \langle\alpha_i,wx\rangle = \langle w^{-1}\alpha_i,x\rangle\le 0$ (because $x\in\overline{C}$), but also $\langle\alpha_i,y\rangle\ge 0$ (because $y\in\overline{C}$). Hence $\langle\alpha_i,y\rangle = 0$, i.e. $s_iy = y$, and therefore $w'x = s_iwx = s_iy = y$. By the induction hypothesis we conclude that $x = y$.

The set $V_{\mathrm{reg}} = V - \bigcup_\alpha H_\alpha$ is an open dense subset of $V$. By Proposition 6.6, $V_{\mathrm{reg}} = \bigcup_{w\in W} wC$, and $V = \overline{V_{\mathrm{reg}}} = \bigcup_{w\in W} w\overline{C}$, by taking closures.

It follows from Proposition 7.1 that $V_{\mathrm{reg}}$ is the disjoint union of the chambers $wC$ (i.e. they don't overlap). For if $x\in C$ and $y = wx\in C$ where $w\neq 1$, the proof of Proposition 7.1 shows that $\langle\alpha_i,y\rangle = 0$ for some $i$, which contradicts $y\in C$. So if $w\neq 1$ we have $C\cap w^{-1}C = \emptyset$.

Hence the chambers $wC$ $(w\in W)$ are the connected components of the topological space $V_{\mathrm{reg}}$: each $wC$ is a cone, hence convex, hence connected, and also open.

The basis B corresponding to C may be described as follows:

Let $\alpha\in R^+$. Then $\alpha\in B \Leftrightarrow \overline{C}\cap H_\alpha$ spans $H_\alpha$.

Let $\alpha\in R^+$, say $\alpha = \sum_{i=1}^r m_i\alpha_i$, and let $I = \{i \mid m_i\neq 0\}$. We have $x\in\overline{C}\cap H_\alpha \Leftrightarrow \langle\alpha_i,x\rangle\ge 0\ (1\le i\le r)$ and $\langle\alpha,x\rangle = \sum_{i\in I} m_i\langle\alpha_i,x\rangle = 0 \Leftrightarrow \langle\alpha_i,x\rangle\ge 0\ (1\le i\le r)$ and $\langle\alpha_i,x\rangle = 0\ (i\in I)$. It follows that $\overline{C}\cap H_\alpha\subseteq\bigcap_{i\in I} H_{\alpha_i}$, of dimension $r-|I|$. Hence $\overline{C}\cap H_\alpha$ spans $H_\alpha \Leftrightarrow |I| = 1 \Leftrightarrow \alpha\in B$.

As a corollary:

Let $B$ be a basis of $R$. Then $B^\vee = \{\alpha^\vee \mid \alpha\in B\}$ is a basis of $R^\vee$.

Follows from Proposition 7.1a, since $H_{\alpha^\vee} = H_\alpha$.

Let $(v_1,\ldots,v_r)$ be the basis of $V$ dual to $B = (\alpha_1,\ldots,\alpha_r)$: $\langle\alpha_i,v_j\rangle = \delta_{ij}$. If $x\in V$ we have $x = \sum_{i=1}^r\langle\alpha_i,x\rangle v_i$, so that $\overline{C}$ is the cone consisting of all nonnegative linear combinations of the dual basis vectors $v_i$.

The dual cone $\overline{C}{}^*$ consists of the nonnegative linear combinations of the $\alpha_i$, and we have $x\in\overline{C}{}^* \Leftrightarrow \langle x,v_i\rangle\ge 0\ (1\le i\le r)$. (Acute cone and obtuse cone: pictures for $A_2$, $B_2$, $G_2$.) We make use of $\overline{C}{}^*$ to define a partial order on $V$: if $x,y\in V$ then $x\ge y$ means that $x-y\in\overline{C}{}^*$, i.e. $x-y = \sum_{i=1}^r c_i\alpha_i$ with $c_i\in\mathbb{R}$, $c_i\ge 0$, or equivalently $\langle x-y,v_i\rangle\ge 0\ (1\le i\le r)$.

Example. Suppose $R$ is of type $A_{n-1}$, $\alpha_i = e_i-e_{i+1}$ $(1\le i\le n-1)$; $V\subseteq\mathbb{R}^n$ is the hyperplane perpendicular to $e = \frac1n(e_1+\cdots+e_n)$. We have $\langle e_i,e\rangle = \frac1n = \langle e,e\rangle$, so that $\overline{e}_i = e_i-e\in V$. The dual basis is $(v_1,\ldots,v_{n-1})$ where $v_i = \overline{e}_1+\cdots+\overline{e}_i = e_1+\cdots+e_i - ie = \frac1n(\underbrace{n-i,\ldots,n-i}_{i},\underbrace{-i,\ldots,-i}_{n-i})$.

Let $x = \sum_{i=1}^n x_ie_i$, $y = \sum_{i=1}^n y_ie_i\in V$. Then $\langle x,v_i\rangle = x_1+\cdots+x_i$, hence $x\ge y \Leftrightarrow x_1+\cdots+x_i\ge y_1+\cdots+y_i$ $(1\le i\le n-1)$ (note that $x_1+\cdots+x_n = y_1+\cdots+y_n = 0$): the dominance partial order.
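The dominance order via partial sums is a one-liner to implement; a Python sketch (the name `dominates` is ours):

```python
# x >= y in dominance order (type A_{n-1}): both sum to 0 and every
# partial sum x_1 + ... + x_i is at least y_1 + ... + y_i.
def dominates(x, y):
    assert sum(x) == sum(y) == 0
    sx = sy = 0
    for xi, yi in zip(x[:-1], y[:-1]):
        sx += xi
        sy += yi
        if sx < sy:
            return False
    return True

assert dominates((2, 0, -2), (1, 0, -1))
assert not dominates((1, 0, -1), (2, 0, -2))
```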

Let $x\in V$. Then the following are equivalent:

  1. $x\ge wx$ for all $w\in W$;
  2. $x\ge s_ix$ $(1\le i\le r)$;
  3. $x\in\overline{C}$.

(i) $\Rightarrow$ (ii): obvious.

(ii) $\Rightarrow$ (iii): We have $x - s_ix = \langle x,\alpha_i^\vee\rangle\alpha_i$ from Proposition 2.1, hence $x\ge s_ix$ means that $\langle x,\alpha_i^\vee\rangle\ge 0$, or equivalently $\langle x,\alpha_i\rangle\ge 0$ $(1\le i\le r)$, i.e. $x\in\overline{C}$.

(iii) $\Rightarrow$ (i): Let $x\in\overline{C}$, $w\in W$. Induction on $\ell(w)$. $\ell(w) = 0$ implies $w = 1$, OK. Suppose $\ell(w)\ge 1$. Then $w = w's_i$ for some $i\in[1,r]$ with $\ell(w') = \ell(w)-1$ (take a reduced expression for $w$ ending with $s_i$). We have $x - wx = (x - w'x) + w'(x - s_ix)$. Now $x - w'x\ge 0$ (induction hypothesis), and $w'(x - s_ix) = \langle x,\alpha_i^\vee\rangle w'\alpha_i$; by Proposition 6.14 (applied to $w^{-1} = s_iw'^{-1}$) we have $w'\alpha_i > 0$, hence $\langle x,\alpha_i^\vee\rangle w'\alpha_i\ge 0$. So $x - wx\ge 0$ as required.
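The equivalence suggests a simple algorithm for finding the representative of a $W$-orbit in $\overline{C}$: apply $s_i$ whenever $\langle x,\alpha_i\rangle < 0$. A Python sketch in $B_2$ (our example; the process terminates since $\langle x,\rho\rangle$ strictly increases at each step and the orbit is finite):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(alpha, x):
    c = 2 * inner(x, alpha) // inner(alpha, alpha)   # exact for these roots
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

simple = [(1, -1), (0, 1)]              # base of B_2

def dominant(x):
    while True:
        for a in simple:
            if inner(x, a) < 0:
                x = reflect(a, x)       # <s_i x, rho> > <x, rho>, so this terminates
                break
        else:
            return x                    # now <x, alpha_i> >= 0 for all i, i.e. x in C-bar

assert dominant((-3, 1)) == (3, 1)      # the orbit representative in the closed chamber
```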

Let $x\in V$ and let $R_x = \{\alpha\in R \mid \langle x,\alpha\rangle = 0\}$ and $W_x = \{w\in W \mid w(x) = x\}$, the isotropy group of $x$ in $W$. (So $R_x = \emptyset$ if and only if $x$ is regular.)

If $x\in V$ is not regular, then $R_x$ is a root system and $W_x$ is its Weyl group.

  1. Let $\alpha,\beta\in R_x$; then $\langle\alpha,\beta^\vee\rangle\in\mathbb{Z}$ and $x\in H_\alpha$, so that $\langle s_\alpha\beta,x\rangle = \langle\beta,s_\alpha x\rangle = \langle\beta,x\rangle = 0$, so that $s_\alpha\beta\in R_x$. So $R_x$ is a root system.
  2. Let $W_x' = \langle s_\alpha \mid \langle\alpha,x\rangle = 0\rangle$. Clearly $W_x'$ is a subgroup of $W_x$ and we have to show $W_x' = W_x$. If $y = ux$ $(u\in W)$ then $\langle\alpha,y\rangle = 0 \Leftrightarrow \langle u^{-1}\alpha,x\rangle = 0$, and hence $W_y'$ is generated by the $s_{u\alpha} = us_\alpha u^{-1}$ where $\alpha\in R_x$, so that $W_y' = uW_x'u^{-1}$, and likewise $W_y = uW_xu^{-1}$.

    Choose $u\in W$ such that $y = ux\in\overline{C}$. Enough to show $W_y' = W_y$. So let $w\in W_y$, i.e. $y = wy$. The proof of Proposition 7.5 shows that if $w\neq 1$ then $w = s_iw'$ with $\ell(w') < \ell(w)$ and $\langle\alpha_i,y\rangle = 0$, so that $w'y = s_iwy = s_iy = y$, i.e. $w'\in W_y$. By induction on $\ell(w)$ we may assume $w'\in W_y'$, and then $w = s_iw'\in W_y'$ (since $s_i = s_{\alpha_i}$ with $\langle\alpha_i,y\rangle = 0$).

(So the isotropy group of each $x\in V$ in $W$ is generated by the reflections it contains.)

If $x\in\overline{C}$, then $R_x = R_I$ with basis $B_I = \{\alpha_i \mid \langle x,\alpha_i\rangle = 0\}$.


I.G. Macdonald
Isaac Newton Institute for the Mathematical Sciences
20 Clarkson Road
Cambridge CB3 0EH U.K.

Version: October 30, 2001
