Root systems
Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au
Last update: 27 March 2012
Abstract.
This is a typed version of I.G. Macdonald's lecture notes from lectures at the University of California San Diego from January to March of 1991.
Introduction
Let V be a real vector space of finite dimension and let ⟨x,y⟩ be a positive definite symmetric inner product on V. So we have ⟨x,x⟩ > 0 for all x ≠ 0 in V, and we write
|x| = ⟨x,x⟩^(1/2)
for the length of x. A linear transformation f: V → V is an isometry if it is length preserving:
|f(x)| = |x|
for all x ∈ V. Equivalently,
⟨f(x),f(y)⟩ = ⟨x,y⟩
for all x,y ∈ V.
Example.
V = ℝ^r with the standard inner product:
⟨x,y⟩ = x1y1 + ⋯ + xryr.
In fact this is essentially the only example: given V as above we can construct an orthonormal basis (e1,...,er) of V, i.e. a basis such that
⟨ei,ej⟩ = δij,
and then if
x = ∑ xiei,  y = ∑ yiei,
we have
⟨x,y⟩ = ∑ xiyi.
If x,y are such that
⟨x,y⟩ = 0,
we say that x,y are perpendicular (or orthogonal) and write x ⊥ y. More generally, if x,y ≠ 0 the angle θ
between the vectors x,y is given by
cos θ = ⟨x,y⟩ / |x||y|.
One other piece of notation: if x ≠ 0 we shall write
x∨ = 2x/|x|²
(you'll see why in a moment). We have
⟨x∨,x⟩ = 2.
Reflections in V
Let α ∈ V, α ≠ 0, and let
sα: V → V
be the orthogonal reflection in the hyperplane
Hα = {x ∈ V | ⟨x,α⟩ = 0}
perpendicular to α. Clearly
|sα(x)| = |x|
for all x ∈ V,
i.e.
sα
is an
isometry.
- (i) sα(x) = x − ⟨α∨,x⟩α for all x ∈ V.
- (ii) sα² = 1.
- (iii) ⟨sαx,sαy⟩ = ⟨x,y⟩ for all x,y ∈ V.
- (iv) Let f be an isometry. Then f sα f⁻¹ = s_{f(α)}.

Proof.
- (i) We have x = cα + h for some c ∈ ℝ and h ∈ Hα, so that
sα(x) = −cα + h = x − 2cα. (IGM 1)
On the other hand,
⟨α∨,x⟩ = c⟨α∨,α⟩ = 2c, (IGM 2)
because ⟨α∨,h⟩ = 0. Combining (IGM 1) and (IGM 2) we get
sα(x) = x − ⟨α∨,x⟩α,
which proves (i).
- (ii) Is obvious from the definition.
- (iii) By (i),
⟨sαx,sαy⟩ = ⟨x,y⟩ − ⟨α∨,x⟩⟨α,y⟩ − ⟨α∨,y⟩⟨α,x⟩ + ⟨α∨,x⟩⟨α∨,y⟩⟨α,α⟩;
since ⟨α∨,x⟩⟨α,y⟩ = ½⟨α∨,x⟩⟨α∨,y⟩⟨α,α⟩ is symmetrical in x and y, the last three terms cancel, and the whole is equal to ⟨x,y⟩.
- (iv) Calculate, using (i): for x ∈ V,
f sα f⁻¹(x) = f(f⁻¹x − ⟨α∨,f⁻¹x⟩α) = x − ⟨f(α)∨,x⟩f(α) = s_{f(α)}(x),
since ⟨α∨,f⁻¹x⟩ = ⟨f(α)∨,x⟩ (f being an isometry, |f(α)| = |α|).
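As a quick numerical sanity check on these properties, here is a sketch (the vectors α and x are arbitrary illustrative choices, not taken from the text):

```python
# Reflection s_alpha(x) = x - <alpha_vee, x> alpha, with alpha_vee = 2*alpha/<alpha,alpha>.
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(alpha, x):
    """Orthogonal reflection of x in the hyperplane perpendicular to alpha."""
    t = 2 * inner(alpha, x) / inner(alpha, alpha)   # <alpha_vee, x>
    return tuple(xi - t * ai for xi, ai in zip(x, alpha))

alpha = (1.0, 2.0)   # arbitrary nonzero vector (illustrative)
x = (3.0, -1.0)

twice = reflect(alpha, reflect(alpha, x))   # s_alpha is an involution
image = reflect(alpha, x)                   # s_alpha preserves lengths
neg = reflect(alpha, alpha)                 # s_alpha(alpha) = -alpha
```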
Root systems
A root system in V is a non-empty subset R of V ∖ {0} satisfying the following two axioms:
- (R1) For all α,β ∈ R,
⟨α∨,β⟩ ∈ ℤ (integrality).
- (R2) For all α,β ∈ R,
sα(β) ∈ R (symmetry).
- Since sα(α) = −α, it follows from (R2) that α ∈ R implies −α ∈ R.
Suppose on the other hand that α ∈ R and cα ∈ R (c ∈ ℝ).
Then
⟨α∨,cα⟩ = 2c ∈ ℤ,
so that 2c ∈ ℤ, and
⟨(cα)∨,α⟩ = 2/c ∈ ℤ,
so that 2/c ∈ ℤ.
So the only possibilities for c are ±½, ±1, ±2.
- If the only multiples of α in R are ±α (for each α ∈ R),
we say that R is reduced. But there are non-reduced root systems as well (examples in a moment).
- I don't demand that R spans V. The dimension of the subspace U of V spanned by R is called the rank of R. It is therefore the maximum number of linearly independent elements of R.
- If R1, R2 are root systems in V1, V2 respectively, then R1 ∪ R2 is a root system in V1 ⊕ V2
(orthogonal direct sum). R is reducible if it splits up in this way (decomposable would be a better word).
Examples.
- Rank 1: only two possibilities,
{±α} and {±α,±2α}.
The first is reduced, the second isn't.
- Rank 2, reduced. Draw pictures of
A1 × A1, A2, B2, G2.
The first of these is reducible, the others are irreducible.
- Rank 2, non-reduced: draw picture of BC2.
- In ℝ^n with standard basis e1,...,en: the roots ei − ej (i ≠ j) form the system An−1; similarly Bn (±ei, ±ei ± ej), Cn (±2ei, ±ei ± ej), Dn (±ei ± ej, i ≠ j) and the non-reduced BCn (±ei, ±2ei, ±ei ± ej).
(Exercise: check (R1), (R2) in each case.)
In fact, as we shall see later, this is almost a complete list of the irreducible root systems: apart from An, Bn, Cn, Dn, BCn there are just 5 others:
E6, E7, E8, F4, G2
(the last of which we have already met).
- If R is a root system in V, so is
R∨ = {α∨ | α ∈ R}
(the dual root system). In the examples above,
An and Dn are self dual; Bn and Cn are duals of each other.
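The exercise can be partly automated; here is a sketch checking (R1) and (R2) for B2, whose roots are (±1,0), (0,±1), (±1,±1) (the helper names are mine, not the text's):

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def reflect(alpha, x):
    t = 2 * inner(alpha, x) / inner(alpha, alpha)
    return (x[0] - t * alpha[0], x[1] - t * alpha[1])

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def is_root_system(R):
    for a in R:
        for b in R:
            n = 2 * inner(a, b) / inner(a, a)   # <a_vee, b>
            if n != int(n):                     # (R1) integrality
                return False
            if reflect(a, b) not in R:          # (R2) symmetry
                return False
    return True

ok = is_root_system(B2)
broken = is_root_system(B2[:-1])   # dropping one root breaks (R2)
```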
I want now to start drawing some consequences from the integrality axiom (R1), which as we shall see restricts the possibilities very drastically. So let R be a root system and α,β ∈ R. Let θ
be the angle between the vectors α,β, so that
⟨α,β⟩ = |α||β| cos θ,
and hence
⟨α∨,β⟩ = 2|β| cos θ / |α|,  ⟨β∨,α⟩ = 2|α| cos θ / |β|,
so that
⟨α∨,β⟩⟨β∨,α⟩ = 4cos²θ. (IGM 3)
Let α,β ∈ R be linearly independent and assume |β| ≥ |α|. Then
- (i) ⟨α∨,β⟩⟨β∨,α⟩ = 0, 1, 2 or 3.
- (ii) If ⟨α,β⟩ ≠ 0 then
|β|²/|α|² = ⟨α∨,β⟩/⟨β∨,α⟩ = 1, 2 or 3,
and ⟨β∨,α⟩ = ±1.

Proof.
- (i) Follows from (IGM 3), since 0 ≤ 4cos²θ < 4 (α,β being linearly independent).
- (ii)
⟨α∨,β⟩/⟨β∨,α⟩ = |β|²/|α|²
(since ⟨α,β⟩ ≠ 0). Hence the two integers ⟨α∨,β⟩, ⟨β∨,α⟩ have the same sign and |⟨β∨,α⟩| ≤ |⟨α∨,β⟩|, and it follows from (i) that
⟨β∨,α⟩ = ±1 and |β|²/|α|² = 1, 2 or 3.
From the relation (IGM 3) we have
cos θ = 0, ±½, ±1/√2 or ±√3/2,
giving
θ = π/2; π/3 or 2π/3; π/4 or 3π/4; π/6 or 5π/6;
or collectively
θ = mπ/12,
where 1 ≤ m ≤ 11 and m is not prime to 12 (i.e. m = 2, 3, 4, 6, 8, 9, 10).
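These allowed values can be watched appearing in an example; a sketch using B2 again (an assumed running example, not from the text):

```python
import math

def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

products = set()   # values of <a_vee,b><b_vee,a> = 4 cos^2(theta)
twelfths = set()   # theta as a multiple of pi/12
for a in B2:
    for b in B2:
        if a[0] * b[1] - a[1] * b[0] == 0:
            continue   # skip linearly dependent pairs
        products.add(4 * inner(a, b) ** 2 // (inner(a, a) * inner(b, b)))
        theta = math.acos(inner(a, b) / math.sqrt(inner(a, a) * inner(b, b)))
        twelfths.add(round(theta * 12 / math.pi))
```

For B2 only the angles π/4, π/2, 3π/4 occur between independent roots, i.e. m = 3, 6, 9, all sharing a factor with 12.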
R is finite.

Proof.
R spans a subspace U of V, of dimension r say, so we can choose α1,...,αr ∈ R
forming a basis of U. Let
(y1,...,yr)
be the dual basis of U, defined by
⟨αi∨,yj⟩ = δij.
Let β ∈ R; then β ∈ U,
say
β = b1y1 + ⋯ + bryr,
and the coefficients are given by
bi = ⟨αi∨,β⟩.
So each bi is an integer, and by (IGM 3) each |bi| ≤ 4 (since bi⟨β∨,αi⟩ = 4cos²θ ≤ 4, the two factors being integers of the same sign).
So only finitely many possibilities for β.
Weyl group
Let
W = ⟨sα | α ∈ R⟩
be the group of isometries of V generated by the reflections sα (α ∈ R). By (R2) each w ∈ W permutes the elements of R, i.e. we have a homomorphism
W → Sym(R) (IGM 4)
of W into the group of permutations of R. As in Proposition 3.4, let U be the subspace of V spanned by R, and let
U⊥ = {x ∈ V | ⟨x,u⟩ = 0 for all u ∈ U}
be the orthogonal complement of U, so that V = U ⊕ U⊥.
Each
sα (α ∈ R)
fixes U⊥ pointwise (because U⊥ ⊆ Hα), hence each w ∈ W fixes U⊥ pointwise.
Suppose w ∈ W gives rise to the identity permutation under the homomorphism (IGM 4), i.e.
w(α) = α
for all α ∈ R. Then w fixes U pointwise (because R spans U) as well as U⊥, i.e. w = 1.
So W embeds in Sym(R), which is a finite group by Proposition 3.4. Hence W is a finite group, called the Weyl group of R; notation W(R).
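Since W embeds in the permutations of R, it can be generated mechanically by closing the set of reflections under composition; a sketch for B2, whose Weyl group has order 8 (the symmetry group of a square):

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def reflect(alpha, x):
    t = 2 * inner(alpha, x) / inner(alpha, alpha)
    return (x[0] - t * alpha[0], x[1] - t * alpha[1])

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def perm(alpha):
    # the permutation of the (indexed) roots induced by s_alpha
    return tuple(B2.index(reflect(alpha, b)) for b in B2)

identity = tuple(range(len(B2)))
gens = {perm(a) for a in B2}
W = {identity} | gens
frontier = set(W)
while frontier:                          # close under composition
    new = set()
    for p in frontier:
        for g in gens:
            q = tuple(p[i] for i in g)   # the composition p∘g
            if q not in W:
                new.add(q)
    W |= new
    frontier = new

order = len(W)
```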
Examples.
- Weyl groups of types An−1, Bn, ... (symmetric group, hyperoctahedral group, etc.).
Let α,β ∈ R.
- (i) If ⟨α,β⟩ > 0 (i.e. if the angle θ is acute) then α − β ∈ R or α = β.
- (ii) If ⟨α,β⟩ < 0 (i.e. if θ is obtuse) then α + β ∈ R or α = −β.

Proof.
(ii) comes from (i) by replacing β by −β, so it is enough to prove (i). First of all, if β = cα (c > 0) then (as we have seen)
c = ½, 1 or 2,
so that
α − β = ½α = β ∈ R, or α = β, or α − β = −α ∈ R.
So we may assume α,β linearly independent, and then by Proposition 3.3 (i) either
⟨α∨,β⟩ = 1 or ⟨β∨,α⟩ = 1
(both are positive integers here, since ⟨α,β⟩ > 0, and their product is at most 3). If
⟨α∨,β⟩ = 1,
say,
then by Proposition 2.1 we have
sα(β) = β − ⟨α∨,β⟩α = β − α ∈ R,
and hence also
α − β = −(β − α) ∈ R;
similarly if ⟨β∨,α⟩ = 1.
Strings of roots
Let α,β ∈ R be linearly independent and let
I = {k ∈ ℤ | β + kα ∈ R}.
- (i) I is an interval [−p,q] of ℤ, where p,q ≥ 0.
- (ii) p − q = ⟨α∨,β⟩.

Proof.
- (i) Certainly 0 ∈ I. Let −p (resp. q) be the smallest (resp. largest) element of I. Suppose I ≠ [−p,q] ∩ ℤ. Then there exist k,l ∈ I with k < l such that
β + (k+1)α ∉ R and β + (l−1)α ∉ R;
hence
⟨α,β+kα⟩ ≥ 0 and ⟨α,β+lα⟩ ≤ 0,
both by Proposition 4.1. Subtract, and we get
(k − l)⟨α,α⟩ ≥ 0,
hence k ≥ l, a contradiction. So I = [−p,q] ∩ ℤ.
- (ii) We have
sα(β + kα) = β − (⟨α∨,β⟩ + k)α.
Hence k ∈ I implies −⟨α∨,β⟩ − k ∈ I.
Take k = q:
−⟨α∨,β⟩ − q ≥ −p,
i.e.
⟨α∨,β⟩ ≤ p − q.
Take k = −p:
−⟨α∨,β⟩ + p ≤ q,
i.e.
⟨α∨,β⟩ ≥ p − q.
Hence ⟨α∨,β⟩ = p − q.
The set of roots β + kα (−p ≤ k ≤ q) is called the α-string through β. It follows from Proposition 5.1 that a string of roots has at most 4 elements: (take β at the end of the string, i.e. p = 0; then the string has
q + 1 = 1 − ⟨α∨,β⟩ ≤ 4
elements, because |⟨α∨,β⟩| ≤ 3 for α,β linearly independent.)
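For example (B2 again, with α = (1,0) and β = (−1,1), my own choice of vectors), the α-string through β can be computed directly:

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]

alpha, beta = (1, 0), (-1, 1)
# all k with beta + k*alpha a root (|k| <= 5 is more than enough here)
ks = [k for k in range(-5, 6)
      if (beta[0] + k * alpha[0], beta[1] + k * alpha[1]) in B2]

p, q = -min(ks), max(ks)
n = 2 * inner(alpha, beta) // inner(alpha, alpha)   # <alpha_vee, beta>
```

Here the string is (−1,1), (0,1), (1,1): an interval with p = 0, q = 2 and p − q = ⟨α∨,β⟩ = −2.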
Bases of R
A basis of R is a subset B = {α1,...,αr} of R such that
- (B1) B is linearly independent.
- (B2) For each α ∈ R we have
α = ∑ miαi
with coefficients mi and either all mi ≥ 0 or all mi ≤ 0.
From (B2) it follows that B spans the subspace U of V spanned by R, hence by (B1) B is a basis of U, and therefore Card B = rank R.
Examples.
Defining something gives no guarantee that it exists. However, the following construction provides bases (in fact, all of them). Say x ∈ V is regular if ⟨x,α⟩ ≠ 0 for all α ∈ R, i.e. if x does not lie in any of the reflecting hyperplanes Hα (α ∈ R). Let x be regular and let
R+ = R+(x) = {α ∈ R | ⟨x,α⟩ > 0}, R− = −R+.
Since ⟨x,α⟩ < 0 implies that ⟨x,−α⟩ > 0, it follows that
R = R+ ∪ R−
(disjoint union). A root α ∈ R+ will be called (temporarily) decomposable if α = β + γ with β,γ ∈ R+; otherwise indecomposable. Let B(x) be the set of indecomposable elements of R+.
B(x) is a basis of R.

Proof.
In several steps.
- (1) Let
S = {∑β∈B(x) mββ | mβ ∈ ℤ, mβ ≥ 0}.
I claim that R+ ⊆ S. Suppose not, and choose α ∈ R+, α ∉ S, such that ⟨x,α⟩ is as small as possible. Certainly α ∉ B(x) (because B(x) ⊆ S), hence α is decomposable, say
α = β + γ, β,γ ∈ R+.
Hence
⟨x,α⟩ = ⟨x,β⟩ + ⟨x,γ⟩;
both ⟨x,β⟩ and ⟨x,γ⟩ are positive, hence less than ⟨x,α⟩. It follows that β,γ ∈ S and hence (as S is closed under addition) α ∈ S, contradiction. Hence
R+ ⊆ S,
and so B(x) satisfies (B2).
- (2) Let β,γ ∈ B(x), β ≠ γ. Then
⟨β,γ⟩ ≤ 0
(i.e., the angle between β and γ is not acute).
Suppose
⟨β,γ⟩ > 0.
By Proposition 4.1 we have β − γ ∈ R, and hence also γ − β ∈ R. So either
β − γ ∈ R+,
in which case
β = (β − γ) + γ
is decomposable; or
γ − β ∈ R+,
in which case
γ = (γ − β) + β
is decomposable. Contradiction in either case.
- (3) B(x) is linearly independent.
Suppose not; then there exists a linear dependence relation, which we can write in the form
∑β∈S1 bββ = ∑γ∈S2 cγγ (= v, say),
where S1, S2 are disjoint subsets of B(x) and the coefficients bβ, cγ are all > 0. By (2) above we have
⟨v,v⟩ = ∑β,γ bβcγ⟨β,γ⟩ ≤ 0,
and hence v = 0. Hence
0 = ⟨x,v⟩ = ∑β∈S1 bβ⟨x,β⟩.
Since the bβ and the ⟨x,β⟩ are positive, it follows that S1 = ∅, and likewise S2 = ∅; so the relation was trivial.
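A sketch of the construction for B2, with the (hypothetically chosen) regular vector x = (2,1): the indecomposable elements of R+(x) come out as the usual simple roots of B2.

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
x = (2, 1)                                   # regular: <x,a> != 0 for every root

Rplus = [a for a in B2 if inner(x, a) > 0]   # R+(x)

def decomposable(a):
    # is a = b + c with b, c in R+(x)?  (equivalently a - b in R+(x) for some b)
    return any((a[0] - b[0], a[1] - b[1]) in Rplus for b in Rplus if b != a)

basis = [a for a in Rplus if not decomposable(a)]
```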
Conversely, all bases of R are of the form B(x) where x is regular:
Let B = {α1,...,αr} be any basis of R and let
C(B) = {x ∈ V | ⟨x,αi⟩ > 0 (1 ≤ i ≤ r)}.
Then every x ∈ C(B) is regular, and B(x) = B for all x ∈ C(B).

Proof.
(α1,...,αr)
is a basis of U (as remarked earlier), hence there exists a dual basis
(y1,...,yr)
of U, defined by
⟨αi,yj⟩ = δij.
Let α ∈ R, say
α = ∑ miαi
with the mi all of the same sign (B2); then
⟨y1 + ⋯ + yr, αi⟩ = 1 (1 ≤ i ≤ r),
and hence y1 + ⋯ + yr ∈ C(B). So C(B) is certainly not empty.
Now let x ∈ C(B) and α ∈ R as above,
with all mi ≥ 0 (not all zero); hence
⟨x,α⟩ = ∑ mi⟨x,αi⟩ > 0
for all such α, and likewise
⟨x,α⟩ < 0
for all α with all mi ≤ 0. So x is regular, and R+(x) is the set of roots positive relative to B,
whence B(x) = B (the indecomposable elements of R+(x) are precisely the αi).
So the above construction provides all bases of R.
Let B be a basis of R. Then ⟨α,β⟩ ≤ 0 for all α,β ∈ B with α ≠ β
(i.e., the angle between distinct elements of B is not acute).

Proof.
By Proposition 6.3, B = B(x) for some regular x ∈ V. Hence Proposition 6.4 follows from step (2) of the proof of Proposition 6.2.
From now on it will be simpler (and will involve no loss of generality) to assume that U = V, i.e. that R spans V. Let
B = {α1,...,αr}
be a basis of R (so r = dim V) and let R+ be the set of positive roots relative to B,
R− = −R+ the set of negative roots. (The αi are also called simple roots.)
Also assume R reduced until further notice.
Let
si = sαi (1 ≤ i ≤ r)
and set
ρ = ½ ∑α∈R+ α.
- (i) si permutes the set R+ − {αi}.
- (ii) siρ = ρ − αi.
- (iii) ⟨αi∨,ρ⟩ = 1.

Proof.
- (i) Let α ∈ R+, α ≠ αi. Then by Proposition 2.1
siα = α − ⟨αi∨,α⟩αi.
Now α is of the form
∑ mjαj
with at least one coefficient mj (j ≠ i) positive (because 2αi is not a root). Hence the coefficient of αj in siα is also positive, hence siα ∈ R+; and siα ≠ αi (otherwise α = siαi = −αi ∉ R+).
- (ii) From (i) it follows that si fixes ∑α∈R+−{αi} α, and siαi = −αi; hence siρ = ρ − αi.
- (iii) Follows from (ii), since
siρ = ρ − ⟨αi∨,ρ⟩αi.
As in Proposition 6.3, let
C = C(B) = {x ∈ V | ⟨x,αi⟩ > 0 (1 ≤ i ≤ r)}.
C is the Weyl chamber associated with the basis B. It is the intersection of r half-spaces in V (dim V = r now), so it is an open simplicial cone. Relative to the dual basis of B it is the positive octant.
From Proposition 6.5 (iii) it follows that ρ ∈ C.
Let x ∈ V be regular. Then there exists w ∈ W such that wx ∈ C.

Proof.
Choose w ∈ W such that
⟨wx,ρ⟩
is as large as possible. Then for 1 ≤ i ≤ r we have
⟨wx,ρ⟩ ≥ ⟨siwx,ρ⟩ = ⟨wx,siρ⟩ = ⟨wx,ρ⟩ − ⟨wx,αi⟩,
so that
⟨wx,αi⟩ ≥ 0;
but also
⟨wx,αi⟩ ≠ 0
(because wx is regular, x being regular and
w an isometry mapping R to R),
hence
⟨wx,αi⟩ > 0
for 1 ≤ i ≤ r,
i.e.,
wx ∈ C.
Let B′ be another basis of R. Then B′ = wB for some w ∈ W.

Proof.
By Proposition 6.3 we have B′ = B(x) for some regular x ∈ V.
By Proposition 6.6 there exists w ∈ W such that wx ∈ C, and therefore
B(wx) = B
(by Proposition 6.3, applied to the basis B),
so that
wB′ = wB(x) = B(wx) = B,
and hence
B′ = w⁻¹B.
- (i) Let α ∈ R; then α = wαi for some w ∈ W and some i (i.e., every root is in the W-orbit of a simple root).
- (ii) W is generated by s1,...,sr.

Proof.
- (i)
Let W′ be the subgroup of W generated by s1,...,sr. We shall show that (i) holds for some w ∈ W′. We may assume that α ∈ R+, for if −α = wαi then α = wsiαi.
For α ∈ R+, say
α = ∑ miαi,
define the height of α to be
ht(α) = ∑ mi,
the sum of the coefficients.
We proceed by induction on ht(α). If ht(α) = 1 then α = αi for some i, and we are done. If ht(α) > 1, we must have
⟨α,αi⟩ > 0
for some i, for otherwise we should have
⟨α,α⟩ = ∑ mi⟨α,αi⟩ ≤ 0,
which is impossible. Hence
siα = α − ⟨αi∨,α⟩αi ∈ R+ (Proposition 6.5, since α ≠ αi)
with ht(siα) < ht(α), and hence by the inductive hypothesis
siα = w′αj
for some w′ ∈ W′.
So α = siw′αj, and siw′ ∈ W′.
- (ii)
Enough to show sα ∈ W′ for each α ∈ R. But α = wαi with w ∈ W′, by the proof of (i); hence
sα = s_{wαi} = wsiw⁻¹ ∈ W′ (Proposition 2.1).
□
From Proposition 6.9, each w ∈ W can be written in the form
w = sa1⋯sap.
If (for a given w) the number p of factors is as small as possible, then sa1⋯sap is called a reduced expression for w, and p is the length of w, denoted by ℓ(w) (relative to the generators s1,...,sr). Thus
ℓ(1) = 0;  ℓ(w) = 1 ⇔ w = si for some i;  ℓ(w) = ℓ(w⁻¹).
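The length function can be computed by breadth-first search over products of the generators; a sketch for B2 in the permutation model used earlier, with (0,1) and (1,−1) as an assumed choice of simple roots:

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def reflect(alpha, x):
    t = 2 * inner(alpha, x) / inner(alpha, alpha)
    return (x[0] - t * alpha[0], x[1] - t * alpha[1])

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
simples = [(0, 1), (1, -1)]

def perm(alpha):
    # the permutation of the (indexed) roots induced by s_alpha
    return tuple(B2.index(reflect(alpha, b)) for b in B2)

gens = [perm(a) for a in simples]
identity = tuple(range(len(B2)))
length = {identity: 0}
frontier = [identity]
while frontier:                          # BFS: each layer adds one letter
    nxt = []
    for w in frontier:
        for g in gens:
            v = tuple(w[i] for i in g)   # w followed by a generator
            if v not in length:
                length[v] = length[w] + 1
                nxt.append(v)
    frontier = nxt

def inverse(w):
    inv = [0] * len(w)
    for i, j in enumerate(w):
        inv[j] = i
    return tuple(inv)
```

The 8 elements of W(B2) have lengths 0,1,1,2,2,3,3,4, and ℓ(w) = ℓ(w⁻¹) throughout.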
Let w ∈ W. Then
ℓ(w) > ℓ(siw) ⇔ w⁻¹αi < 0
(i.e., αi ∈ wR−).

Proof.
Suppose
w⁻¹αi < 0.
Let w = t1⋯tp be a reduced expression for w, where each ti is an sj, say
ti = sβi, βi ∈ B.
Let
wj = t1⋯tj (0 ≤ j ≤ p),
so that w0 = 1 and wp = w. So we have
w0⁻¹αi = αi > 0
and
wp⁻¹αi = w⁻¹αi < 0,
hence there exists
j ∈ [1,p]
such that
β = w_{j−1}⁻¹αi > 0,  wj⁻¹αi < 0.
Now
wj⁻¹ = tjw_{j−1}⁻¹,
so that we have
β > 0,  tjβ < 0,  tj = sβj,  βj ∈ B.
By Proposition 6.5,
β ≠ βj ⇒ tjβ > 0,
so we must have β = βj, and hence
αi = w_{j−1}β = w_{j−1}βj,
giving by Proposition 2.1
si = w_{j−1}tjw_{j−1}⁻¹ = wjw_{j−1}⁻¹,
or
wj = siw_{j−1},
and therefore
siw = (siw_{j−1})tjtj+1⋯tp = (t1⋯tj)tjtj+1⋯tp = t1⋯tj−1tj+1⋯tp,
showing that
ℓ(siw) ≤ p − 1 < ℓ(w).
So we have proved that
w⁻¹αi < 0 ⇒ ℓ(siw) < ℓ(w). (IGM 5)
Suppose now that
w⁻¹αi > 0;
then
(siw)⁻¹αi = w⁻¹siαi = −w⁻¹αi < 0,
hence (replacing w by siw in (IGM 5)) we have
ℓ(w) < ℓ(siw).
This completes the proof.
□
Suppose
w1,w2 ∈ W, w1 ≠ w2.
Then
w1B ≠ w2B.

Proof.
We have to show that
B ≠ w1⁻¹w2B,
i.e.
B ≠ wB
if w ≠ 1. So let w = si⋯ be a reduced expression for w. Then
ℓ(siw) < ℓ(w),
hence
w⁻¹αi < 0,
hence
w⁻¹αi ∉ B,
i.e. αi ∉ wB. So B ≠ wB, as required.
□
Example.
Since B is a basis of R, so is −B. Positive roots relative to B are negative roots relative to −B and vice versa. By Proposition 6.11 we have −B = w0B for a unique w0 ∈ W. w0 is called the longest element of W (relative to the basis B). We have w0² = 1, because
w0²B = w0(−B) = −w0B = B.
For each w ∈ W let
R(w) = {α ∈ R+ | w⁻¹α ∈ R−} = R+ ∩ wR−.
Suppose that ℓ(w) > ℓ(siw). Then
R(w) = siR(siw) ∪ {αi}.

Proof.
We have
R(siw) = R+ ∩ siwR−,
and therefore
siR(siw) = siR+ ∩ wR−. (IGM 6)
Now by Proposition 6.5
siR+ = (R+ − {αi}) ∪ {−αi}, (IGM 7)
and by Proposition 6.10
w⁻¹αi < 0,
i.e.
αi ∈ wR−,
and therefore
−αi ∉ wR−.
Hence from (IGM 6) and (IGM 7) we deduce that
siR(siw) = (R+ − {αi}) ∩ wR− = (R+ ∩ wR−) − {αi} = R(w) − {αi};
since also αi ∈ R+ ∩ wR− = R(w), the result follows.
□
[Compare Schubert polynomials, Ch. I, esp. (1.2).]
Note that
αi ∉ siR(siw)
(otherwise we should have
−αi = siαi ∈ R(siw) ⊆ R+,
impossible).
- (i) Let w = t1⋯tp be a reduced expression, where
ti = sβi, βi ∈ B.
Then
R(w) = {t1⋯ti−1βi | 1 ≤ i ≤ p} (IGM 8)
and these p roots are all distinct.
- (ii) ℓ(w) = Card R(w).

Proof.
- (i) Since t1w = t2⋯tp it follows that
ℓ(w) = p > ℓ(t1w),
hence by Proposition 6.12
R(w) = {β1} ∪ t1R(t2⋯tp),
from which (IGM 8) follows by induction on p [SP, (1.7)]. Suppose
t1⋯ti−1βi = t1⋯tj−1βj,
where i < j. Then
βi = ti⋯tj−1βj,
and therefore by Proposition 2.1
ti = sβi = (ti⋯tj−1)sβj(ti⋯tj−1)⁻¹ = (ti⋯tj−1tj)(ti⋯tj−1)⁻¹,
from which it follows that
ti⋯tj = ti·ti⋯tj−1 = ti+1⋯tj−1,
and hence that
w = t1⋯tp = t1⋯t̂i⋯t̂j⋯tp
(the factors ti and tj omitted), contradicting the assumption that t1⋯tp is reduced.
- (ii) Hence
Card R(w) = p = ℓ(w).
□
Example.
R(w0) = R+ ∩ w0R− = R+,
hence
ℓ(w0) = Card R+ = number of reflections in W.
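Both ℓ(w) = Card R(w) and ℓ(w0) = Card R+ can be checked numerically; a sketch for B2 in the same permutation model, with the same assumed simple roots:

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def reflect(alpha, x):
    t = 2 * inner(alpha, x) / inner(alpha, alpha)
    return (x[0] - t * alpha[0], x[1] - t * alpha[1])

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
simples = [(0, 1), (1, -1)]
Rplus = [(1, 0), (0, 1), (1, 1), (1, -1)]   # positive roots for this choice

def perm(alpha):
    return tuple(B2.index(reflect(alpha, b)) for b in B2)

gens = [perm(a) for a in simples]
identity = tuple(range(len(B2)))
length = {identity: 0}
frontier = [identity]
while frontier:                          # BFS lengths, as before
    nxt = []
    for w in frontier:
        for g in gens:
            v = tuple(w[i] for i in g)
            if v not in length:
                length[v] = length[w] + 1
                nxt.append(v)
    frontier = nxt

def inversions(w):
    # Card R(w) = #{a in R+ : w^{-1} a in R-}
    inv = [0] * len(w)
    for i, j in enumerate(w):
        inv[j] = i
    return sum(1 for a in Rplus if B2[inv[B2.index(a)]] not in Rplus)

w0 = max(length, key=length.get)         # the longest element
```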
ℓ(w) = ℓ(siw) + 1 ⇔ w⁻¹αi < 0,
ℓ(w) = ℓ(siw) − 1 ⇔ w⁻¹αi > 0.

Proof.
We have
w⁻¹αi < 0
⇒ ℓ(w) > ℓ(siw) (Proposition 6.10)
⇒ R(w) = siR(siw) ∪ {αi} (Proposition 6.12)
⇒ ℓ(w) = ℓ(siw) + 1 (Proposition 6.13).
Replace w by siw:
w⁻¹αi > 0
⇒ (siw)⁻¹αi = w⁻¹siαi = −w⁻¹αi < 0
⇒ ℓ(siw) = ℓ(w) + 1.
□
(Exchange lemma) Let
w = t1⋯tp = u1⋯up
be two reduced expressions for w, where
ti = sβi, ui = sγi
with
βi, γi ∈ B.
Then for some i ∈ [1,p] we have
w = u1t1⋯t̂i⋯tp
(i.e. we can exchange u1 with one of the ti) [SP, (1.8)].

Proof.
By Proposition 6.13 we have γ1 ∈ R(w), hence
γ1 = t1⋯ti−1βi
for some i ∈ [1,p]. Hence by Proposition 2.1
u1 = sγ1 = (t1⋯ti−1)sβi(t1⋯ti−1)⁻¹ = (t1⋯ti−1ti)(t1⋯ti−1)⁻¹,
and therefore
t1⋯ti = u1t1⋯ti−1,
giving
w = t1⋯ti⋯tp = u1t1⋯ti−1ti+1⋯tp.
□
We shall next deduce from this exchange lemma that the Weyl group W is a Coxeter group (definition later). Consider two generators si,sj of W (i ≠ j) and let
mij = order of sisj in W = order of sjsi in W
(because
sjsi = (sisj)⁻¹).
Then we have
sisjsi⋯ = sjsisj⋯ (IGM 9)
where there are mij (≥ 2) terms on either side.
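For B2 the two simple reflections satisfy m12 = 4; a sketch checking (IGM 9) in the permutation model, with the same assumed simple roots as before:

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def reflect(alpha, x):
    t = 2 * inner(alpha, x) / inner(alpha, alpha)
    return (x[0] - t * alpha[0], x[1] - t * alpha[1])

B2 = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
simples = [(0, 1), (1, -1)]

def perm(alpha):
    return tuple(B2.index(reflect(alpha, b)) for b in B2)

g1, g2 = perm(simples[0]), perm(simples[1])
identity = tuple(range(len(B2)))

def mult(p, q):
    return tuple(p[i] for i in q)   # composition p∘q

# order m of g1*g2
m, power = 0, identity
while True:
    power = mult(power, mult(g1, g2))
    m += 1
    if power == identity:
        break

# alternating products with m factors on each side
lhs, rhs = identity, identity
for k in range(m):
    lhs = mult(lhs, g1 if k % 2 == 0 else g2)
    rhs = mult(rhs, g2 if k % 2 == 0 else g1)
```

Here m = 4 and s1s2s1s2 = s2s1s2s1 (both equal the longest element).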
Let w ∈ W, of length ℓ(w) = p. A reduced word for w is a sequence
t_ = (t1,...,tp),
where each ti is one of the sj, and
w = t1⋯tp.
Let S(w) denote the set of all reduced words for w. We make S(w) into a graph as follows: for i ≠ j let uij denote the word
uij = (si,sj,si,...)
of length mij.
Suppose t_ ∈ S(w) contains uij as a (consecutive) subword, and let t′_ be the word obtained from t_ by replacing this subword by uji (so that t′_ ∈ S(w), by (IGM 9)); then we join
t_, t′_
by an edge.
The graph S(w) is connected.
Proof.
Induction on ℓ(w). When
ℓ(w) = 1,
w = si
and S(w) has just one element.
Let
t_ = (t1,...,tp), u_ = (u1,...,up) ∈ S(w) (p = ℓ(w)).
We shall write t_ ≡ u_ if t_, u_ are in the same connected component of S(w). The inductive hypothesis assures us that
t_ ≡ u_ if either t1 = u1 or tp = up. (IGM 10)
For if
w′ = t1w = u1w,
then
ℓ(w′) = p − 1,
and hence
(t2,...,tp) ≡ (u2,...,up)
(and similarly if tp = up).
We want to prove that t_ ≡ u_. If t1 = u1 we are through, by (IGM 10). If t1 ≠ u1, then (exchange) there exists i ∈ [1,p] such that
a_ = (u1,t1,...,t̂i,...,tp) ∈ S(w).
Suppose i ≠ p. Then
t_ ≡ a_ ≡ u_
by (IGM 10), and therefore
t_ ≡ u_.
Suppose i = p. Let m be the order of t1u1 in W. If m = 2 then
a′_ = (t1,u1,t2,...,tp−1) ∈ S(w)
and
t_ ≡ a′_ ≡ a_ ≡ u_,
so again t_ ≡ u_.
Suppose i = p and m > 2. We have
a_ = (u1,t1,...,tp−1) ∈ S(w), t_ = (t1,t2,...,tp),
hence (exchange) there exists i ∈ [1,p−1] such that
b_ = (t1,u1,t1,...,t̂i,...,tp−1) ∈ S(w).
Suppose i ≠ p−1. Then we have
t_ ≡ b_ ≡ a_ ≡ u_
by (IGM 10), and hence t_ ≡ u_.
Suppose i = p−1 and m = 3. Then
b′_ = (u1,t1,u1,t2,...,tp−2) ∈ S(w)
and
t_ ≡ b_ ≡ b′_ ≡ u_,
so again we are through.
Suppose i = p−1 and m > 3. Then we have
b_ = (t1,u1,t1,t2,...,tp−2) ∈ S(w), u_ = (u1,u2,...,up) ∈ S(w),
so by exchange there exists i ∈ [1,p−2] such that
c_ = (u1,t1,u1,t1,...,t̂i,...,tp−2) ∈ S(w).
Suppose i ≠ p−2. Then
t_ ≡ b_ ≡ c_ ≡ u_,
and again
t_ ≡ u_.
Suppose i = p−2 and m = 4. Then
c′_ = (t1,u1,t1,u1,t2,...) ∈ S(w)
(the first four letters of c_ reversed by the braid relation), and
t_ ≡ c′_ ≡ c_ ≡ u_,
so again t_ ≡ u_.
Suppose i = p−2 and m > 4. Repeat the argument: eventually we shall get
t_ ≡ u_,
as required.
□
The generators
si (1 ≤ i ≤ r)
and relations
si² = 1, (sisj)^mij = 1 (i ≠ j)
form a presentation of W.

Proof.
What this means is the following: given a group G and elements
gi ∈ G (1 ≤ i ≤ r)
satisfying
gi² = 1, (gigj)^mij = 1 (i ≠ j),
there exists a homomorphism f: W → G (necessarily unique) such that
f(si) = gi (1 ≤ i ≤ r).
Let w ∈ W and let
(t1,...,tp) = t_ ∈ S(w).
Since w = t1⋯tp we must have
f(w) = f(t1)⋯f(tp) = F(t_),
say.
So we have to show that
F(t_) = F(u_)
if t_,u_ ∈ S(w). Now in G we have
gigjgi⋯ = gjgigj⋯
(mij terms on either side), i.e.
f(si)f(sj)f(si)⋯ = f(sj)f(si)f(sj)⋯.
Hence
F(t_) = F(u_)
if
t_, u_
are joined by an edge in S(w). By Proposition 6.16 it follows that
F(t_) = F(u_)
for all
t_, u_ ∈ S(w),
as required. So f is well defined, and it remains to check that it is a homomorphism.
Consider f(siw): suppose first that
ℓ(siw) = ℓ(w) + 1.
If w = t1⋯tp is a reduced expression, then
siw = sit1⋯tp
is also reduced, hence
f(siw) = f(si)f(t1)⋯f(tp) = f(si)f(w).
If on the other hand
ℓ(siw) = ℓ(w) − 1,
then (by the first case, with w replaced by siw)
f(w) = f(si)f(siw),
and hence
f(siw) = f(si)⁻¹f(w) = f(si)f(w),
since
f(si) = gi = gi⁻¹.
So we have
f(siw) = f(si)f(w) (IGM 11)
in all cases. Hence if v ∈ W,
v = u1⋯uq
reduced, then
f(vw) = f(u1u2⋯uqw) = f(u1)f(u2⋯uqw) = ⋯ = f(u1)⋯f(uq)f(w) = f(v)f(w).
□
Weyl chamber
R, B etc. as before. Recall that the Weyl chamber associated with B is
C = {x ∈ V | ⟨x,αi⟩ > 0 (1 ≤ i ≤ r)}.
It is an open simplicial cone and its closure in V is
C_ = {x ∈ V | ⟨x,αi⟩ ≥ 0 (1 ≤ i ≤ r)}.
C_ is a fundamental domain for the action of W on V (i.e. every W-orbit in V meets C_ in exactly one point).

Proof.
- (cf. Proposition 6.6) Let x ∈ V, let
ρ = ½ ∑α>0 α,
and choose w ∈ W so that
⟨wx,ρ⟩
is as large as possible. Then for i ∈ [1,r] we have
⟨wx,ρ⟩ ≥ ⟨siwx,ρ⟩ = ⟨wx,siρ⟩ = ⟨wx,ρ−αi⟩ (Proposition 6.5) = ⟨wx,ρ⟩ − ⟨wx,αi⟩,
so that
⟨wx,αi⟩ ≥ 0,
and hence wx ∈ C_. So each W-orbit meets C_.
- It remains to prove that if x ∈ C_ and y = wx ∈ C_ then x = y (but it doesn't follow necessarily that w = 1). We proceed by induction on ℓ(w). If ℓ(w) = 0 then w = 1, so y = x. If ℓ(w) ≥ 1 we can write
w = siw′
with
ℓ(w′) = ℓ(w) − 1
(take a reduced word w = si⋯). Then
w′ = siw,
so that
ℓ(w) = ℓ(siw) + 1,
hence
w⁻¹αi ∈ R−
by Proposition 6.14. It follows that
⟨αi,y⟩ = ⟨αi,wx⟩ = ⟨w⁻¹αi,x⟩ ≤ 0
(because x ∈ C_),
but also
⟨αi,y⟩ ≥ 0
(because y ∈ C_). Hence
⟨αi,y⟩ = 0,
i.e.
siy = y,
and therefore
w′x = siwx = siy = y.
By the induction hypothesis we conclude that x = y.
□
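Part (i) of the proof is effectively an algorithm: while some ⟨x,αi⟩ < 0, apply si (each step increases ⟨x,ρ⟩, so the process terminates). A sketch for B2, with the same assumed simple roots as in the earlier examples:

```python
def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

def reflect(alpha, x):
    t = 2 * inner(alpha, x) / inner(alpha, alpha)
    return (x[0] - t * alpha[0], x[1] - t * alpha[1])

simples = [(0, 1), (1, -1)]

def dominant(x):
    # move x into the closed chamber C_ by simple reflections
    while True:
        for a in simples:
            if inner(x, a) < 0:
                x = reflect(a, x)
                break
        else:
            return x

y = dominant((-3.0, 1.0))
```

The representative y lies in C_ and has the same length as the starting vector, as it must, since each step is an isometry.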
The set
Vreg = V − ⋃α∈R Hα
is an open dense subset of V. By Lemma 6.15
Vreg = ⋃w∈W wC,
and
V = V̄reg = ⋃w∈W wC_,
by taking closures.
Hence the chambers wC (w ∈ W) are the connected components of the topological space Vreg: each wC is a cone, hence convex, hence connected, also open.
The basis B corresponding to C may be described as follows:
Let α ∈ R+. Then
α ∈ B ⇔ C_ ∩ Hα spans Hα.

Proof.
Let α ∈ R+, say
α = ∑i=1..r miαi.
Let
I = {i | mi ≠ 0}
(so the mi, i ∈ I, are all > 0). We have
x ∈ C_ ∩ Hα
⇔ ⟨αi,x⟩ ≥ 0 (1 ≤ i ≤ r) and ⟨α,x⟩ = ∑i∈I mi⟨αi,x⟩ = 0
⇔ ⟨αi,x⟩ ≥ 0 (1 ≤ i ≤ r) and ⟨αi,x⟩ = 0 (i ∈ I).
It follows that
C_ ∩ Hα ⊆ ⋂i∈I Hαi,
of dimension r − |I|. Hence
C_ ∩ Hα spans Hα ⇔ |I| = 1 ⇔ α ∈ B.
□
As a corollary:
Let B be a basis of R. Then B∨ is a basis of R∨.
Proof.
Follows from Proposition 7.1a, since Hα = Hα∨.
□
Let (v1,...,vr) be the basis of V dual to
B = (α1,...,αr):
⟨αi,vj⟩ = δij.
If x ∈ V we have
x = ∑i=1..r ⟨x,αi⟩vi,
so that C_ is the cone consisting of all nonnegative linear combinations of the dual basis vectors vi.
The dual cone C*_ consists of the nonnegative linear combinations of the αi, and we have
x ∈ C*_ ⇔ ⟨x,vi⟩ ≥ 0 (1 ≤ i ≤ r).
(acute cone and obtuse cone: pictures for A2, B2, G2). We make use of C*_ to define a partial order on V: if x,y ∈ V then x ≥ y means that
x − y ∈ C*_,
i.e.
x − y = ∑i=1..r ciαi
with
ci ∈ ℝ, ci ≥ 0,
or equivalently
⟨x−y,vi⟩ ≥ 0 (1 ≤ i ≤ r).
Example.
Suppose R is of type An−1,
αi = ei − ei+1 (1 ≤ i ≤ n−1);
V ⊆ ℝ^n
is the hyperplane perpendicular to
e = (1/n)(e1 + ⋯ + en).
We have
⟨ei,e⟩ = 1/n = ⟨e,e⟩,
so that
e′i = ei − e ∈ V.
The dual basis is
(v1,...,vn−1),
where
vi = e′1 + ⋯ + e′i = e1 + ⋯ + ei − ie = (1/n)(n−i,...,n−i, −i,...,−i)
(with n−i repeated i times and −i repeated n−i times).
Let
x = ∑i=1..n xiei, y = ∑i=1..n yiei ∈ V.
Then
⟨x,vi⟩ = x1 + ⋯ + xi,
hence
x ≥ y ⇔ x1 + ⋯ + xi ≥ y1 + ⋯ + yi (1 ≤ i ≤ n−1)
(note that
x1 + ⋯ + xn = y1 + ⋯ + yn = 0):
the dominance partial order.
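A sketch of the partial-sum test (on integer vectors with sum 0; the sample vectors are my own):

```python
def dominates(x, y):
    # x >= y in the dominance order on the hyperplane sum = 0
    assert sum(x) == sum(y) == 0
    sx = sy = 0
    for xi, yi in zip(x[:-1], y[:-1]):
        sx += xi
        sy += yi
        if sx < sy:
            return False
    return True

a = (3, 1, 0, -4)
b = (2, 1, 1, -4)
c = (2, 2, 1, -5)
```

Here a ≥ b, while a and c are incomparable: the dominance order is only a partial order.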
Let x ∈ V. Then the following are equivalent:
- (i) x ≥ wx, all w ∈ W;
- (ii) x ≥ six (1 ≤ i ≤ r);
- (iii) x ∈ C_.

Proof.
(i) ⇒ (ii): obvious.
(ii) ⇒ (iii): We have
x − six = ⟨αi∨,x⟩αi
from Proposition 2.1, hence x ≥ six means that
⟨αi∨,x⟩ ≥ 0,
or equivalently
⟨αi,x⟩ ≥ 0 (1 ≤ i ≤ r),
i.e. x ∈ C_.
(iii) ⇒ (i): Let x ∈ C_, w ∈ W. Induction on ℓ(w). ℓ(w) = 0 implies w = 1, OK. Suppose ℓ(w) ≥ 1. Then w = w′si for some i ∈ [1,r] and
ℓ(w′) = ℓ(w) − 1
(take a reduced expression for w ending with si). We have
x − wx = (x − w′x) + w′(x − six).
Now
x − w′x ≥ 0
(induction hypothesis), and
w′(x − six) = w(six − x) = −⟨αi∨,x⟩wαi;
by Proposition 6.14 (with w replaced by w⁻¹) we have
wαi < 0,
hence
−⟨αi∨,x⟩wαi ≥ 0.
So x − wx ≥ 0, as required.
□
Let x ∈ V and let
Rx = {α ∈ R | ⟨x,α⟩ = 0},
Wx = {w ∈ W | w(x) = x} = isotropy group of x in W.
(So Rx = ∅ if and only if x is regular.)
If x ∈ V is not regular, then Rx is a root system and Wx is its Weyl group.

Proof.
- Let α,β ∈ Rx; then
⟨α∨,β⟩ ∈ ℤ
and x ∈ Hα, so that
⟨sαβ,x⟩ = ⟨β,sαx⟩ = ⟨β,x⟩ = 0,
so that sαβ ∈ Rx. So Rx is a root system.
- Let
W′x = ⟨sα | ⟨α,x⟩ = 0⟩.
Clearly W′x is a subgroup of Wx, and we have to show that
W′x = Wx.
If y = ux (u ∈ W) then
⟨α,y⟩ = 0 ⇔ ⟨u⁻¹α,x⟩ = 0,
and hence W′y is generated by the
s_{uα} = usαu⁻¹,
where α ∈ Rx, so that
W′y = uW′xu⁻¹,
and likewise
Wy = uWxu⁻¹.
Choose u ∈ W such that
y = ux ∈ C_.
Enough to show W′y = Wy. So let w ∈ Wy, i.e. y = wy. The proof of Proposition 7.5 shows that if w ≠ 1 then
w = siw′
with
ℓ(w′) < ℓ(w)
and
siy = y,
so that
w′y = siwy = siy = y,
i.e. w′ ∈ Wy. By induction on ℓ(w) we may assume
w′ ∈ W′y;
and since siy = y means ⟨αi,y⟩ = 0, i.e. si ∈ W′y, we get
w = siw′ ∈ W′y.
□
(So the isotropy group of any x ∈ V is generated by the reflections it contains.)
If x ∈ C_, then Rx = RI with basis
BI = {αi | ⟨x,αi⟩ = 0},
where I = {i | ⟨x,αi⟩ = 0}.
References
I.G. Macdonald
Isaac Newton Institute for the Mathematical Sciences
20 Clarkson Road
Cambridge CB3 0EH U.K.
Version: October 30, 2001