## Root systems

Last update: 27 March 2012

Abstract.
This is a typed version of notes from I.G. Macdonald's lectures at the University of California, San Diego, January to March 1991.

## Introduction

Let $V$ be a real vector space of finite dimension $n>0,$ and let $⟨x,y⟩$ be a positive definite symmetric inner product on $V.$ So we have $⟨x,x⟩\ne 0$ for all $x\ne 0$ in $V,$ and we write $|x| = {⟨x,x⟩}^{1/2}$ for the length of $x.$ A linear transformation $f:V\to V$ is an isometry if it is length preserving: $|f\left(x\right)|=|x|$ for all $x\in V.$ Equivalently, $⟨fx,fy⟩=⟨x,y⟩$ for all $x,y\in V.$

Example. $V={ℝ}^{n}$ with the standard inner product. In fact this is essentially the only example: given $V$ as above we can construct an orthonormal basis of $V,$ i.e. a basis ${v}_{1},...,{v}_{n}$ such that $⟨{v}_{i},{v}_{j}⟩={\delta }_{ij},$ and then if $x=\sum _{i=1}^{n}{x}_{i}{v}_{i},$ $y=\sum _{i=1}^{n}{y}_{i}{v}_{i},$ we have $⟨x,y⟩=\sum {x}_{i}{y}_{i}.$

If $x,y\in V$ are such that $⟨x,y⟩=0$ we say that $x,y$ are perpendicular (or orthogonal) and write $x\perp y.$ More generally, if $x,y\ne 0$ the angle $\theta \left(\in \left[0,\pi \right]\right)$ between the vectors $x,y$ is given by $\mathrm{cos}\theta = \frac{⟨x,y⟩}{|x|\cdot |y|}.$ One other piece of notation: if $x\in V,$ $x\ne 0$ we shall write ${x}^{\vee } = \frac{2x}{{|x|}^{2}}$ (you'll see why in a moment). We have $⟨x,{x}^{\vee }⟩=2.$
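In coordinates (as in the Example above) these notions are immediate to compute. A minimal Python sketch — the helper names `inner`, `length`, `angle`, `coroot` are mine, not the text's — illustrating in particular that $⟨x,{x}^{\vee }⟩=2:$

```python
import math

def inner(x, y):
    # standard inner product on R^n
    return sum(a * b for a, b in zip(x, y))

def length(x):
    # |x| = <x,x>^(1/2)
    return math.sqrt(inner(x, x))

def angle(x, y):
    # theta in [0, pi] with cos(theta) = <x,y> / (|x| |y|)
    return math.acos(inner(x, y) / (length(x) * length(y)))

def coroot(x):
    # x^vee = 2x / |x|^2
    return tuple(2 * a / inner(x, x) for a in x)

# <x, x^vee> = 2 for every nonzero x
x = (3.0, 4.0)
assert abs(inner(x, coroot(x)) - 2.0) < 1e-12
# perpendicular vectors make an angle of pi/2
assert abs(angle((1.0, 0.0), (0.0, 1.0)) - math.pi / 2) < 1e-12
```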

## Reflections in $V$

Let $\alpha \in V,$ $\alpha \ne 0,$ and let ${s}_{\alpha }:V\to V$ be the orthogonal reflection in the hyperplane ${H}_{\alpha }$ perpendicular to $\alpha .$ Clearly $|{s}_{\alpha }\left(x\right)|=|x|$ for all $x\in V,$ i.e. ${s}_{\alpha }$ is an isometry.

1. ${s}_{\alpha }\left(x\right)=x-⟨x,{\alpha }^{\vee }⟩\alpha ;$ equivalently ${s}_{\alpha }{\left(x\right)}^{\vee }={s}_{\alpha }\left({x}^{\vee }\right)={x}^{\vee }-⟨{x}^{\vee },\alpha ⟩{\alpha }^{\vee }.$
2. ${s}_{\alpha }^{2}=1.$
3. $⟨{s}_{\alpha }x,y⟩=⟨x,{s}_{\alpha }y⟩.$
4. Let $f:V\to V$ be an isometry. Then ${s}_{f\left(\alpha \right)}=f{s}_{\alpha }{f}^{-1}.$

 Proof. (i) We have $x-{s}_{\alpha }x=\lambda \alpha$ for some $\lambda \in ℝ,$ so that $⟨x-{s}_{\alpha }x,\alpha ⟩ = \lambda {|\alpha |}^{2}.\phantom{\rule{2em}{0ex}}$ (IGM 1) On the other hand, $⟨x+{s}_{\alpha }x,\alpha ⟩ = 0\phantom{\rule{2em}{0ex}}$ (IGM 2) because $\frac{1}{2}\left(x+{s}_{\alpha }x\right)\in {H}_{\alpha }.$ Adding (IGM 1) and (IGM 2) we get $2⟨x,\alpha ⟩ = \lambda {|\alpha |}^{2},$ i.e. $\lambda =\frac{2⟨x,\alpha ⟩}{{|\alpha |}^{2}}=⟨x,{\alpha }^{\vee }⟩,$ which proves (i). (ii) is obvious from the definition. (iii) $⟨{s}_{\alpha }x,y⟩=⟨x,y⟩-⟨x,{\alpha }^{\vee }⟩⟨y,\alpha ⟩=⟨x,y⟩-\frac{2⟨x,\alpha ⟩⟨y,\alpha ⟩}{{|\alpha |}^{2}}$ is symmetrical in $x$ and $y,$ hence equal to $⟨{s}_{\alpha }y,x⟩=⟨x,{s}_{\alpha }y⟩.$ (iv) Calculate, using (i): $f{s}_{\alpha }{f}^{-1}\left(x\right) = f\left({f}^{-1}x - \frac{2⟨{f}^{-1}x,\alpha ⟩}{{|\alpha |}^{2}}\alpha \right) = x- \frac{2⟨x,f\alpha ⟩}{{|f\alpha |}^{2}}f\alpha = {s}_{f\alpha }\left(x\right).$ $\square$
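All four parts of the proposition are easy to check numerically. A sketch under assumed helper names (`inner`, `reflect` are mine):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(alpha, x):
    # s_alpha(x) = x - <x, alpha^vee> alpha
    c = 2 * inner(x, alpha) / inner(alpha, alpha)
    return tuple(xi - c * ai for xi, ai in zip(x, alpha))

alpha, x, y = (1.0, 1.0), (2.0, 5.0), (-1.0, 3.0)
sx = reflect(alpha, x)
# (ii) s_alpha^2 = 1
assert all(abs(a - b) < 1e-12 for a, b in zip(reflect(alpha, sx), x))
# s_alpha is an isometry
assert abs(inner(sx, sx) - inner(x, x)) < 1e-12
# (iii) <s_alpha x, y> = <x, s_alpha y>
assert abs(inner(sx, y) - inner(x, reflect(alpha, y))) < 1e-12
```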

## Root systems

A root system in $V$ is a non-empty subset $R$ of $V-\left\{0\right\}$ satisfying the following two axioms:

1. (R1) For all $\alpha ,\beta \in R,$ $⟨{\alpha }^{\vee },\beta ⟩\in ℤ\phantom{\rule{2em}{0ex}}$ (integrality).
2. (R2) For all $\alpha ,\beta \in R,$ ${s}_{\alpha }\left(\beta \right)\in R\phantom{\rule{2em}{0ex}}$ (symmetry).

1. Since ${s}_{\alpha }\left(\alpha \right)=-\alpha$ it follows from (R2) that $\alpha \in R ⇒ -\alpha \in R.$ Suppose on the other hand $\alpha \in R$ and $\beta =c\alpha \in R$ for some $c\in ℝ.$ Then $⟨{\alpha }^{\vee },\beta ⟩=c⟨{\alpha }^{\vee },\alpha ⟩=2c,$ so that $2c\in ℤ;$ and $⟨\alpha ,{\beta }^{\vee }⟩={c}^{-1}⟨\alpha ,{\alpha }^{\vee }⟩=2{c}^{-1},$ so that $2{c}^{-1}\in ℤ.$ So the only possibilities for $c$ are $\pm \frac{1}{2},\pm 1,\pm 2.$ If for each $\alpha \in R$ the only multiples of $\alpha$ in $R$ are $\pm \alpha ,$ we say that $R$ is reduced. But there are non-reduced root systems as well (examples in a moment).
2. I don't demand that $R$ spans $V.$ The dimension of the subspace of $V$ spanned by $R$ is called the rank of $R.$ It is therefore the maximum number of linearly independent elements of $R.$
3. Suppose $R={R}_{1}\cup {R}_{2}$ with ${R}_{1},{R}_{2}$ non-empty and ${R}_{1}\perp {R}_{2}$ (orthogonal direct sum). $R$ is reducible if it splits up in this way (decomposable would be a better word); otherwise $R$ is irreducible.

Examples.

1. If $\mathrm{rank}\left(R\right)=1$ there are only two possibilities: $R=\left\{\pm \alpha \right\}$ (type ${A}_{1}$) and $R=\left\{\pm \alpha ,\pm 2\alpha \right\}$ (type $B{C}_{1}$). The first is reduced, the second isn't.
2. $R$ reduced of rank 2: draw pictures of ${A}_{1}\times {A}_{1},$ ${A}_{2},$ ${B}_{2},$ ${G}_{2}.$ The first of these is reducible, the others are irreducible.
3. $R$ non-reduced: draw picture of $B{C}_{2}.$
4. $V={ℝ}^{n},$ standard basis ${e}_{1},...,{e}_{n}$ $\left(⟨{e}_{i},{e}_{j}⟩={\delta }_{ij}\right).$ The classical examples are ${A}_{n-1}=\left\{{e}_{i}-{e}_{j}:i\ne j\right\};$ ${B}_{n}=\left\{\pm {e}_{i}\right\}\cup \left\{\pm {e}_{i}\pm {e}_{j}:i<j\right\};$ ${C}_{n}=\left\{\pm 2{e}_{i}\right\}\cup \left\{\pm {e}_{i}\pm {e}_{j}:i<j\right\};$ ${D}_{n}=\left\{\pm {e}_{i}\pm {e}_{j}:i<j\right\};$ $B{C}_{n}={B}_{n}\cup {C}_{n}.$ (Exercise: check (R1), (R2) in each case.)

In fact, as we shall see later, this is almost a complete list of the irreducible root systems: apart from ${A}_{n}$ ($n\ge 1$), ${B}_{n}$ ($n\ge 2$), ${C}_{n}$ ($n\ge 3$), ${D}_{n}$ ($n\ge 4$), $B{C}_{n}$ ($n\ge 1$) there are just 5 others: ${E}_{6},$ ${E}_{7},$ ${E}_{8},$ ${F}_{4}$ and ${G}_{2}$ (the last of which we have already met).
5. If $R$ is a root system, so is ${R}^{\vee }=\left\{{\alpha }^{\vee }:\alpha \in R\right\}$ (the dual root system). In the examples above, ${A}_{n}$ and ${D}_{n}$ are self-dual; ${B}_{n}$ and ${C}_{n}$ are duals of each other.
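The exercise in example 4 can be automated. A sketch for ${B}_{2}$ in ${ℝ}^{2}$ (exact arithmetic via `fractions`; the function names are mine, not the text's):

```python
from fractions import Fraction
from itertools import product

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def pairing(a, b):
    # <a^vee, b> = 2<a,b> / <a,a>
    return Fraction(2 * inner(a, b), inner(a, a))

def reflect(a, x):
    c = pairing(a, x)
    return tuple(xi - c * ai for xi, ai in zip(x, a))

# B_2 = {±e_i} ∪ {±e_1 ± e_2}
B2 = {(1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (1, -1), (-1, 1), (-1, -1)}
# (R1): all pairings <a^vee, b> are integers
assert all(pairing(a, b).denominator == 1 for a, b in product(B2, B2))
# (R2): each reflection s_a maps B2 into itself
assert all(reflect(a, b) in B2 for a, b in product(B2, B2))
```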

I want now to start drawing some consequences from the integrality axiom (R1), which as we shall see restricts the possibilities very drastically. So let $R$ be a root system and $\alpha ,\beta \in R.$ Let $\theta ={\theta }_{\alpha \beta }\in \left[0,\pi \right]$ be the angle between the vectors $\alpha ,\beta ,$ so that $\mathrm{cos}\theta = \frac{⟨\alpha ,\beta ⟩}{|\alpha ||\beta |}$ and hence $4{\mathrm{cos}}^{2}\theta = \frac{4{⟨\alpha ,\beta ⟩}^{2}}{{|\alpha |}^{2}{|\beta |}^{2}} = ⟨\alpha ,{\beta }^{\vee }⟩⟨{\alpha }^{\vee },\beta ⟩\in ℤ\phantom{\rule{2em}{0ex}}$ (IGM 3) so that $|⟨{\alpha }^{\vee },\beta ⟩|\le 4.$
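The constraint $4{\mathrm{cos}}^{2}\theta \in ℤ$ already pins down the possible angles; a quick check (hypothetical snippet, not from the text):

```python
import math

# 4cos^2(theta) must be an integer n with 0 <= n <= 4,
# so cos(theta) = ±sqrt(n)/2
degrees = sorted({round(math.degrees(math.acos(s * math.sqrt(n) / 2)))
                  for n in range(5) for s in (1, -1)})
assert degrees == [0, 30, 45, 60, 90, 120, 135, 150, 180]
# these are exactly r*pi/12 with 0 <= r <= 12, r not prime to 12
allowed_r = [r for r in range(13) if math.gcd(r, 12) > 1]
assert degrees == [15 * r for r in allowed_r]
```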

Let $\alpha ,\beta \in R$ be linearly independent and assume $⟨\alpha ,\beta ⟩\ne 0.$ Then

1. $⟨\alpha ,{\beta }^{\vee }⟩⟨{\alpha }^{\vee },\beta ⟩=1,2$ or $3.$
2. If $|\alpha |\ge |\beta |$ then $⟨{\alpha }^{\vee },\beta ⟩=\pm 1$ and $\frac{{|\alpha |}^{2}}{{|\beta |}^{2}}=1,2$ or $3.$

 Proof. (i) follows from (IGM 3), since ${\mathrm{cos}}^{2}\theta \ne 0,1.$ (ii) We have $\frac{{|\alpha |}^{2}}{{|\beta |}^{2}}=\frac{⟨\alpha ,{\beta }^{\vee }⟩}{⟨{\alpha }^{\vee },\beta ⟩}$ (since $⟨\alpha ,\beta ⟩\ne 0$). Hence $|⟨{\alpha }^{\vee },\beta ⟩|\le |⟨\alpha ,{\beta }^{\vee }⟩|$ and it follows from (i) that $⟨{\alpha }^{\vee },\beta ⟩=\pm 1.$ $\square$

From the relation (IGM 3) we have $4{\mathrm{cos}}^{2}\theta =0,1,2,3$ or $4,$ giving $\mathrm{cos}\theta =0,\pm \frac{1}{2},\pm \frac{1}{\sqrt{2}},\pm \frac{\sqrt{3}}{2},\pm 1.$ So the possible values of $\theta ={\theta }_{\alpha \beta }$ are $0,\frac{\pi }{6},\frac{\pi }{4},\frac{\pi }{3},\frac{\pi }{2},\frac{2\pi }{3},\frac{3\pi }{4},\frac{5\pi }{6},\pi ,$ or collectively ${\theta }_{\alpha \beta } = \frac{r\pi }{12}$ where $0\le r\le 12$ and $r$ is not prime to 12 (i.e. $r\ne 1,5,7,11$).

$R$ is finite.

 Proof. $R$ spans a subspace ${V}^{\prime }$ of $V,$ so we can choose ${\alpha }_{1},...,{\alpha }_{r}\in R$ forming a basis of ${V}^{\prime }.$ Let ${v}_{1},...,{v}_{r}$ be the dual basis of ${V}^{\prime },$ defined by $⟨{\alpha }_{i},{v}_{j}⟩={\delta }_{ij}\phantom{\rule{.5em}{0ex}}\left(1\le i,j\le r\right).$ Let $\beta \in R;$ then ${\beta }^{\vee }\in {V}^{\prime },$ say ${\beta }^{\vee } = \sum _{i=1}^{r}{m}_{i}{v}_{i},$ and the coefficients ${m}_{i}$ are given by ${m}_{i} = ⟨{\alpha }_{i},{\beta }^{\vee }⟩.$ So each ${m}_{i}$ is an integer and $|{m}_{i}|\le 4.$ So only finitely many possibilities. $\square$

## Weyl group

Let $W$ be the group of isometries of $V$ generated by the reflections ${s}_{\alpha },$ $\alpha \in R.$ By (R2) each $w\in W$ permutes the elements of $R,$ i.e. we have a homomorphism $W\to \mathrm{Sym}\left(R\right)\phantom{\rule{2em}{0ex}}$ (IGM 4) of $W$ into the group of permutations of $R.$ As in Proposition 3.4, let ${V}^{\prime }$ be the subspace of $V$ spanned by $R$ and let ${V}^{\prime \prime }={\left({V}^{\prime }\right)}^{\perp }$ be the orthogonal complement of ${V}^{\prime },$ so that $V={V}^{\prime }\oplus {V}^{\prime \prime }.$ Each ${s}_{\alpha }$ $\left(\alpha \in R\right)$ fixes ${V}^{\prime \prime }$ pointwise (because ${V}^{\prime \prime }\subseteq {H}_{\alpha }$), hence each $w\in W$ fixes ${V}^{\prime \prime }$ pointwise.

Suppose $w\in W$ gives rise to the identity permutation under the homomorphism (IGM 4), i.e. $w\left(\alpha \right)=\alpha$ for all $\alpha \in R.$ Then $w$ fixes ${V}^{\prime }$ pointwise (because $R$ spans ${V}^{\prime }$) as well as ${V}^{\prime \prime },$ i.e. $w={1}_{V}.$ So $W$ embeds in $\mathrm{Sym}\left(R\right)$ which is a finite group by Proposition 3.4. Hence $W$ is a finite group called the Weyl group of $R:$ notation $W=W\left(R\right).$

Examples.

1. Weyl groups of types $A,B,C,D$ (symmetric group, hyperoctahedral group, etc.).
2. $W\left({R}^{\vee }\right)=W\left(R\right).$

Let $\alpha ,\beta \in R.$

1. If $⟨\alpha ,\beta ⟩>0$ (i.e. if ${\theta }_{\alpha \beta }$ is acute) then $\beta -\alpha \in R\cup \left\{0\right\}.$
2. If $⟨\alpha ,\beta ⟩<0$ (i.e. if ${\theta }_{\alpha \beta }$ is obtuse) then $\beta +\alpha \in R\cup \left\{0\right\}.$

 Proof. (ii) comes from (i) by replacing $\alpha$ by $-\alpha ,$ so it is enough to prove (i). First of all, if $\beta =c\alpha$ ($c\in ℝ$) then (as we have seen) $c\in \left\{\frac{1}{2},1,2\right\}$ (since $⟨\alpha ,\beta ⟩>0$), so that $\beta -\alpha \in \left\{-\beta ,0,\alpha \right\}\subseteq R\cup \left\{0\right\}.$ So we may assume $\alpha ,\beta$ linearly independent and then by Proposition 3.3 (i) either $⟨{\alpha }^{\vee },\beta ⟩=1$ or $⟨\alpha ,{\beta }^{\vee }⟩=1.$ If say $⟨{\alpha }^{\vee },\beta ⟩=1$ then by Proposition 2.1 we have ${s}_{\alpha }\left(\beta \right) = \beta - ⟨{\alpha }^{\vee },\beta ⟩\alpha = \beta -\alpha \in R;$ if instead $⟨\alpha ,{\beta }^{\vee }⟩=1$ then likewise ${s}_{\beta }\left(\alpha \right)=\alpha -\beta \in R,$ and hence also $\beta -\alpha =-\left(\alpha -\beta \right)\in R.$ $\square$

## Strings of roots

Let $\alpha ,\beta \in R$ be linearly independent and let $I=\left\{i\in ℤ:\beta +i\alpha \in R\right\}.$

1. $I$ is an interval $\left[-p,q\right]$ of $ℤ,$ where $p,q\ge 0.$
2. $p-q=⟨{\alpha }^{\vee },\beta ⟩.$

 Proof. (i) Certainly $0\in I.$ Let $-p$ (resp. $q$) be the smallest (resp. largest) element of $I.$ Suppose $I\ne \left[-p,q\right].$ Then there exist $r,s\in I$ with $r<s$ such that $r+1\notin I$ and $s-1\notin I.$ So we have $\beta +r\alpha \in R,$ $\left(\beta +r\alpha \right)+\alpha \notin R,$ hence $⟨\beta +r\alpha ,\alpha ⟩ \ge 0;$ also $\beta +s\alpha \in R,$ $\left(\beta +s\alpha \right)-\alpha \notin R,$ hence $⟨\beta +s\alpha ,\alpha ⟩ \le 0,$ both by Proposition 4.1. Subtract and we get $\left(r-s\right){|\alpha |}^{2}\ge 0,$ hence $r\ge s,$ a contradiction. So $I=\left[-p,q\right].$ (ii) We have ${s}_{\alpha }\left(\beta +r\alpha \right)=\beta +r\alpha -⟨\beta +r\alpha ,{\alpha }^{\vee }⟩\alpha =\beta -\left(⟨{\alpha }^{\vee },\beta ⟩+r\right)\alpha .$ Hence $r\in I$ implies $-\left(⟨{\alpha }^{\vee },\beta ⟩+r\right)\in I.$ Take $r=q:$ $⟨{\alpha }^{\vee },\beta ⟩+q\le p,$ i.e. $⟨{\alpha }^{\vee },\beta ⟩\le p-q.$ Take $r=-p:$ $p-⟨{\alpha }^{\vee },\beta ⟩\le q,$ i.e. $⟨{\alpha }^{\vee },\beta ⟩\ge p-q.$ $\square$

The set of roots $\beta +i\alpha$ ($-p\le i\le q$) is called the $\alpha -$string through $\beta$. It follows from Proposition 5.1 that a string of roots has at most 4 elements: take $q=0,$ i.e. $\beta$ at the end of the string; then $p=⟨{\alpha }^{\vee },\beta ⟩\le 3$ because $\alpha ,\beta$ are linearly independent, so the string has $p+1\le 4$ elements.
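A sketch checking Proposition 5.1 on ${B}_{2}$ (names mine, not the text's): the $\alpha -$string through $\beta$ is an interval $\left[-p,q\right]$ with $p-q=⟨{\alpha }^{\vee },\beta ⟩.$

```python
from fractions import Fraction

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def pairing(a, b):
    # <a^vee, b>
    return Fraction(2 * inner(a, b), inner(a, a))

B2 = {(1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (1, -1), (-1, 1), (-1, -1)}

def string(R, alpha, beta):
    # I = { i : beta + i*alpha in R }
    return [i for i in range(-5, 6)
            if tuple(b + i * a for b, a in zip(beta, alpha)) in R]

alpha, beta = (1, 0), (-1, 1)
I = string(B2, alpha, beta)
assert I == [0, 1, 2]                  # an interval with p = 0, q = 2
p, q = -min(I), max(I)
assert p - q == pairing(alpha, beta)   # p - q = <alpha^vee, beta> = -2
```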

## Bases of $R$

A basis of $R$ is a subset $B$ of $R$ such that

1. (B1) $B$ is linearly independent.
2. (B2) For each $\alpha \in R$ we have $\alpha = \sum _{\beta \in B}{m}_{\beta }\beta ,$ with coefficients ${m}_{\beta }\in ℤ$ and either all ${m}_{\beta }\ge 0$ or all ${m}_{\beta }\le 0.$

From (B2) it follows that $B$ spans the subspace ${V}^{\prime }$ of $V$ spanned by $R,$ hence by (B1) $B$ is a basis of ${V}^{\prime }$ and therefore $\mathrm{Card}\left(B\right)=\mathrm{rank}\left(R\right).$

Examples.

Defining something gives no guarantee that it exists. However, the following construction provides bases (in fact, all of them). Say $x\in V$ is regular if $⟨\alpha ,x⟩\ne 0$ for all $\alpha \in R,$ i.e. if $x$ does not lie in any of the reflecting hyperplanes ${H}_{\alpha }$ ($\alpha \in R$). Let $x$ be regular and let ${R}_{x}^{+}=\left\{\alpha \in R:⟨\alpha ,x⟩>0\right\}.$ Since $\alpha \in R$ implies that $-\alpha \in R$ it follows that $R={R}_{x}^{+}\cup \left(-{R}_{x}^{+}\right)$ (disjoint union). A root $\alpha \in {R}_{x}^{+}$ will be called (temporarily) decomposable if $\alpha =\beta +\gamma$ with $\beta ,\gamma \in {R}_{x}^{+};$ otherwise indecomposable. Let ${B}_{x}$ be the set of indecomposable elements of ${R}_{x}^{+}.$

${B}_{x}$ is a basis of $R.$

 Proof. In several steps. (1) Let $S=\sum _{\beta \in {B}_{x}}ℕ\beta .$ I claim that ${R}_{x}^{+}\subseteq S.$ Suppose not, and choose $\alpha \in {R}_{x}^{+},$ $\alpha \notin S$ such that $⟨\alpha ,x⟩$ is as small as possible. Certainly $\alpha \notin {B}_{x}$ (because ${B}_{x}\subseteq S$), hence $\alpha$ is decomposable, say $\alpha =\beta +\gamma$ with $\beta ,\gamma \in {R}_{x}^{+}.$ Hence $⟨\alpha ,x⟩ = ⟨\beta ,x⟩ + ⟨\gamma ,x⟩;$ both $⟨\beta ,x⟩$ and $⟨\gamma ,x⟩$ are positive, hence less than $⟨\alpha ,x⟩.$ It follows that $\beta \in S$ and $\gamma \in S,$ hence (as $S$ is closed under addition) $\alpha \in S,$ contradiction. Hence $R = {R}_{x}^{+} \cup \left(-{R}_{x}^{+}\right) \subseteq S\cup \left(-S\right)$ and so ${B}_{x}$ satisfies (B2). (2) Let $\alpha ,\beta \in {B}_{x},$ $\alpha \ne \beta .$ Then $⟨\alpha ,\beta ⟩\le 0$ (i.e., ${\theta }_{\alpha \beta }\ge \frac{\pi }{2}$). Suppose $⟨\alpha ,\beta ⟩>0.$ By Proposition 4.1 we have $\beta -\alpha \in R$ and hence also $\alpha -\beta \in R.$ So either $\beta -\alpha \in {R}_{x}^{+},$ in which case $\beta =\alpha +\left(\beta -\alpha \right)$ is decomposable; or $\alpha -\beta \in {R}_{x}^{+},$ in which case $\alpha =\beta +\left(\alpha -\beta \right)$ is decomposable. Contradiction in either case. (3) ${B}_{x}$ is linearly independent. Suppose not; then there exists a linear dependence relation which we can write in the form $\lambda =\sum _{\alpha \in {B}^{\prime }}{m}_{\alpha }\alpha =\sum _{\beta \in {B}^{\prime \prime }}{n}_{\beta }\beta ,$ where ${B}^{\prime },{B}^{\prime \prime }$ are disjoint subsets of ${B}_{x},$ and the coefficients ${m}_{\alpha },{n}_{\beta }$ are all $>0.$ By (2) above we have ${|\lambda |}^{2} = \sum _{\alpha ,\beta }{m}_{\alpha }{n}_{\beta }⟨\alpha ,\beta ⟩ \le 0$ and hence $\lambda =0.$ Hence $0=⟨\lambda ,x⟩ = \sum _{\alpha }{m}_{\alpha }⟨\alpha ,x⟩ = \sum _{\beta }{n}_{\beta }⟨\beta ,x⟩.$ Since $⟨\alpha ,x⟩$ and $⟨\beta ,x⟩$ are positive it follows that ${B}^{\prime }={B}^{\prime \prime }=\varnothing .$ $\square$
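The construction of ${B}_{x}$ is directly computable. A sketch for ${B}_{2}$ with the regular vector $x=\left(2,1\right)$ (assumed names, not from the text):

```python
def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

B2 = {(1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (1, -1), (-1, 1), (-1, -1)}

def base_from_regular(R, x):
    pos = {a for a in R if inner(a, x) > 0}          # R_x^+
    sums = {tuple(u + v for u, v in zip(b, c))       # all beta + gamma
            for b in pos for c in pos}
    return {a for a in pos if a not in sums}         # indecomposables

x = (2, 1)                       # regular: <alpha, x> != 0 for all alpha
Bx = base_from_regular(B2, x)
assert Bx == {(0, 1), (1, -1)}   # a simple system for B_2
```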

Conversely, all bases $B$ of $R$ are of the form ${B}_{x},$ where $x\in V$ is regular.

Let $B$ be any basis of $R,$ and let $C=\left\{x\in V:⟨\alpha ,x⟩>0\phantom{\rule{.5em}{0ex}}\text{for all}\phantom{\rule{.5em}{0ex}}\alpha \in B\right\}.$ Then $C\ne \varnothing ,$ every $x\in C$ is regular and $B={B}_{x}$ for all $x\in C.$

 Proof. Let $B=\left\{{\alpha }_{1},...,{\alpha }_{r}\right\};$ $B$ is a basis of ${V}^{\prime }$ (as remarked earlier), hence there exists a dual basis $\left\{{v}_{1},...,{v}_{r}\right\}$ of ${V}^{\prime }$ such that $⟨{\alpha }_{i},{v}_{j}⟩={\delta }_{ij}.$ Let $x\in {V}^{\prime },$ say $x=\sum _{i=1}^{r}{x}_{i}{v}_{i};$ then $⟨{\alpha }_{i},x⟩={x}_{i}$ and hence $x\in C$ provided all the coefficients ${x}_{i}$ are $>0.$ So $C$ is certainly not empty. Now let $x\in C,$ $\alpha \in {R}^{+}:$ $\alpha =\sum _{i=1}^{r}{m}_{i}{\alpha }_{i}$ with ${m}_{i}\ge 0,$ hence $⟨\alpha ,x⟩=\sum _{i=1}^{r}{m}_{i}⟨{\alpha }_{i},x⟩>0$ for all $\alpha \in {R}^{+},$ and likewise $⟨\alpha ,x⟩<0$ for all $\alpha \in {R}^{-}.$ So $x$ is regular and ${R}^{+}={R}_{x}^{+},$ whence $B={B}_{x}.$ $\square$

So the above construction provides all bases of $R.$

Let $B$ be a basis of $R$ and let $\alpha ,\beta \in B,$ $\alpha \ne \beta .$ Then $⟨\alpha ,\beta ⟩\le 0$ (i.e., ${\theta }_{\alpha \beta }\ge \frac{\pi }{2}$).

 Proof. By Proposition 6.3, $B={B}_{x}$ for some regular $x\in V.$ Hence Proposition 6.4 follows from the proof of Proposition 6.2. $\square$

From now on it will be simpler (and will involve no loss of generality) to assume that ${V}^{\prime }=V,$ i.e. that $R$ spans $V.$ Let $B = {α1,...,αr}$ be a basis of $R$ (so $r=\mathrm{rank}\left(R\right)$) and let ${R}^{+}$ be the set of positive roots relative to $B;$ ${R}^{-}=-{R}^{+}$ the set of negative roots. (${\alpha }_{1},...,{\alpha }_{r}$ also called a set of simple roots).

Also assume $R$ reduced until further notice $\left(\alpha \in R⇒2\alpha \notin R\right).$

Let ${s}_{i}={s}_{{\alpha }_{i}}$ $\left(1\le i\le r\right)$ and set $\rho = \frac{1}{2}\sum _{\alpha \in {R}^{+}}\alpha \phantom{\rule{2em}{0ex}}\text{(half the sum of the positive roots).}$

1. ${s}_{i}$ permutes the set ${R}^{+}-\left\{{\alpha }_{i}\right\}.$
2. ${s}_{i}\rho =\rho -{\alpha }_{i}.$
3. $⟨\rho ,{\alpha }_{i}^{\vee }⟩=1.$

 Proof. (i) Let $\beta \in {R}^{+},$ $\beta \ne {\alpha }_{i}.$ Then by Proposition 2.1 ${s}_{i}\beta = \beta - ⟨\beta ,{\alpha }_{i}^{\vee }⟩{\alpha }_{i}.$ Now $\beta$ is of the form $\sum _{j=1}^{r}{m}_{j}{\alpha }_{j}$ with at least one coefficient ${m}_{j},$ $j\ne i,$ positive (because $2{\alpha }_{i}$ is not a root). Hence the coefficient of ${\alpha }_{j}$ in ${s}_{i}\beta$ is also positive, hence ${s}_{i}\beta \in {R}^{+}.$ (ii) From (i) it follows that ${s}_{i}\rho = \frac{1}{2}\sum _{\beta \in {R}^{+},\phantom{\rule{.2em}{0ex}}\beta \ne {\alpha }_{i}}\beta -\frac{1}{2}{\alpha }_{i} = \rho -{\alpha }_{i}.$ (iii) Follows from (ii), since ${s}_{i}\rho = \rho - ⟨\rho ,{\alpha }_{i}^{\vee }⟩{\alpha }_{i}.$ $\square$

As in Proposition 6.3, let $C=\left\{x\in V:⟨{\alpha }_{i},x⟩>0,\phantom{\rule{.5em}{0ex}}1\le i\le r\right\};$ $C$ is the Weyl chamber associated with the basis $B=\left\{{\alpha }_{1},...,{\alpha }_{r}\right\}.$ It is the intersection of $r$ half spaces in $V$ ($\mathrm{dim}V=r$ now), so it is an open simplicial cone. Relative to the dual basis $\left\{{v}_{1},...,{v}_{r}\right\}$ it is the positive octant.

From Proposition 6.5 (iii) it follows that $\rho \in C.$
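Proposition 6.5 can be verified for ${B}_{2}$ with the simple roots $\left\{\left(0,1\right),\left(1,-1\right)\right\}$ (a self-contained sketch; the names are mine):

```python
from fractions import Fraction

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(a, x):
    c = Fraction(2 * inner(a, x), inner(a, a))
    return tuple(xi - c * ai for xi, ai in zip(x, a))

Rplus = {(1, 0), (0, 1), (1, 1), (1, -1)}      # positive roots of B_2
B = [(0, 1), (1, -1)]                          # simple roots
# rho = half the sum of the positive roots
rho = tuple(Fraction(sum(a[k] for a in Rplus), 2) for k in range(2))

for alpha in B:
    # (i) s_i permutes R^+ - {alpha_i}
    image = {reflect(alpha, b) for b in Rplus if b != alpha}
    assert image == {tuple(map(Fraction, b)) for b in Rplus if b != alpha}
    # (iii) <rho, alpha_i^vee> = 1
    assert Fraction(2 * inner(rho, alpha), inner(alpha, alpha)) == 1
```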

Let $x\in V$ be regular. Then there exists $w\in W$ such that $wx\in C.$

 Proof. Choose $w\in W$ such that $⟨wx,\rho ⟩$ is as large as possible. Then for $1\le i\le r$ we have $⟨wx,\rho ⟩ \ge ⟨{s}_{i}wx,\rho ⟩ = ⟨wx,{s}_{i}\rho ⟩ = ⟨wx,\rho -{\alpha }_{i}⟩ = ⟨wx,\rho ⟩ - ⟨wx,{\alpha }_{i}⟩$ (by Proposition 6.5), so that $⟨wx,{\alpha }_{i}⟩\ge 0;$ but also $⟨wx,{\alpha }_{i}⟩=⟨x,{w}^{-1}{\alpha }_{i}⟩\ne 0$ (because $x$ is regular and ${w}^{-1}{\alpha }_{i}\in R$), hence $⟨wx,{\alpha }_{i}⟩>0$ for $1\le i\le r,$ i.e., $wx\in C.$ $\square$

Let ${B}^{\prime }$ be another basis of $R.$ Then ${B}^{\prime }=wB$ for some $w\in W.$

 Proof. By Proposition 6.3 we have ${B}^{\prime }={B}_{x}$ for some regular $x\in V,$ and the corresponding set of positive roots is ${R}_{x}^{+}.$ By Proposition 6.6 there exists $w\in W$ such that $wx\in C$ and therefore $\alpha \in {R}_{x}^{+} ⇔ ⟨\alpha ,x⟩>0 ⇔ ⟨w\alpha ,wx⟩>0 ⇔ w\alpha \in {R}^{+},$ so that ${R}_{x}^{+}={w}^{-1}{R}^{+}$ and hence ${B}^{\prime }={B}_{x}={w}^{-1}B.$ $\square$

We shall show later that ${B}^{\prime }=wB$ for exactly one $w\in W.$

1. Let $\alpha \in R,$ then $\alpha =w{\alpha }_{i}$ for some $w\in W$ and some $i$ (i.e. $R=WB$).
2. $W$ is generated by ${s}_{1},...,{s}_{r}.$

 Proof. (i) Let ${W}_{0}$ be the subgroup of $W$ generated by ${s}_{1},...,{s}_{r}.$ We shall show that (i) holds for some $w\in {W}_{0}.$ We may assume that $\alpha \in {R}^{+},$ for if $-\alpha =w{\alpha }_{i}$ then $\alpha =w{s}_{i}{\alpha }_{i}.$ For $\alpha \in {R}^{+},$ say $\alpha = \sum _{i=1}^{r}{m}_{i}{\alpha }_{i},$ define the height of $\alpha$ to be $\mathrm{ht}\left(\alpha \right) = \sum _{i=1}^{r}{m}_{i},$ the sum of the coefficients. We proceed by induction on $\mathrm{ht}\left(\alpha \right).$ We must have $⟨\alpha ,{\alpha }_{i}⟩>0$ for some $i,$ for otherwise we should have ${|\alpha |}^{2} = \sum _{i=1}^{r}{m}_{i}⟨\alpha ,{\alpha }_{i}⟩ \le 0,$ which is impossible. Hence ${s}_{i}\alpha = \alpha - ⟨\alpha ,{\alpha }_{i}^{\vee }⟩{\alpha }_{i}$ has height $\mathrm{ht}\left({s}_{i}\alpha \right) = \mathrm{ht}\left(\alpha \right) - ⟨\alpha ,{\alpha }_{i}^{\vee }⟩<\mathrm{ht}\left(\alpha \right),$ and hence by the inductive hypothesis ${s}_{i}\alpha =w{\alpha }_{j}$ for some $w\in {W}_{0}$ and some $j.$ So $\alpha ={s}_{i}w{\alpha }_{j},$ and ${s}_{i}w\in {W}_{0}.$ (ii) Enough to show ${s}_{\alpha }\in {W}_{0}$ for each $\alpha \in R.$ But $\alpha =w{\alpha }_{i}$ with $w\in {W}_{0},$ hence ${s}_{\alpha }=w{s}_{i}{w}^{-1}\in {W}_{0}$ (Proposition 2.1). $\square$

From Proposition 6.9, each $w\in W$ can be written in the form $w = {s}_{{a}_{1}}\cdots {s}_{{a}_{p}}.$ If (for a given $w$) the number $p$ of factors is as small as possible, then ${s}_{{a}_{1}}\cdots {s}_{{a}_{p}}$ is called a reduced expression for $w,$ and $p$ is the length of $w,$ denoted by $\ell \left(w\right)$ (relative to the generators ${s}_{1},...,{s}_{r}$). Thus $\ell \left(1\right)=0,$ and $\ell \left(w\right)=1$ if and only if $w={s}_{i}$ for some $i.$

Let $w\in W.$ Then ${w}^{-1}{\alpha }_{i}<0$ if and only if $\ell \left({s}_{i}w\right)<\ell \left(w\right).$

 Proof. Suppose ${w}^{-1}{\alpha }_{i}<0.$ Let $w={t}_{1}\cdots {t}_{p}$ be a reduced expression for $w,$ where each ${t}_{k}$ is an ${s}_{j},$ say ${t}_{k}={s}_{{\beta }_{k}}$ with ${\beta }_{k}\in B.$ Let ${w}_{k}={t}_{1}\cdots {t}_{k}$ $\left(0\le k\le p\right),$ so that ${w}_{0}=1$ and ${w}_{p}=w.$ So we have ${w}_{0}^{-1}{\alpha }_{i}={\alpha }_{i}>0$ and ${w}_{p}^{-1}{\alpha }_{i}={w}^{-1}{\alpha }_{i}<0,$ hence there exists $j\in \left[1,p\right]$ such that $\beta = {w}_{j-1}^{-1}{\alpha }_{i} >0,\phantom{\rule{1em}{0ex}}{w}_{j}^{-1}{\alpha }_{i}<0.$ Now ${w}_{j}^{-1}={t}_{j}{w}_{j-1}^{-1},$ so that we have $\beta >0,\phantom{\rule{1em}{0ex}}{t}_{j}\beta <0,\phantom{\rule{1em}{0ex}}{t}_{j}={s}_{{\beta }_{j}},\phantom{\rule{1em}{0ex}}{\beta }_{j}\in B.$ By Proposition 6.5, ${t}_{j}$ permutes the positive roots other than ${\beta }_{j},$ so we must have $\beta ={\beta }_{j}$ and hence ${\alpha }_{i} = {w}_{j-1}\beta = {w}_{j-1}{\beta }_{j},$ giving (by Proposition 2.1) ${s}_{i} = {w}_{j-1}{t}_{j}{w}_{j-1}^{-1} = {w}_{j}{w}_{j-1}^{-1},$ or ${w}_{j} = {s}_{i}{w}_{j-1},$ and therefore ${s}_{i}w = \left({s}_{i}{w}_{j-1}\right){t}_{j}{t}_{j+1}\cdots {t}_{p} = \left({t}_{1}\cdots {t}_{j}\right){t}_{j}{t}_{j+1}\cdots {t}_{p} = {t}_{1}\cdots {t}_{j-1}{t}_{j+1}\cdots {t}_{p},$ showing that $\ell \left({s}_{i}w\right) \le p-1 < \ell \left(w\right).$ So we have proved that ${w}^{-1}{\alpha }_{i}<0⇒\ell \left({s}_{i}w\right)<\ell \left(w\right).\phantom{\rule{2em}{0ex}}$ (IGM 5) Suppose now that ${w}^{-1}{\alpha }_{i}>0;$ then ${\left({s}_{i}w\right)}^{-1}{\alpha }_{i}={w}^{-1}{s}_{i}{\alpha }_{i}=-{w}^{-1}{\alpha }_{i}<0,$ hence (replacing $w$ by ${s}_{i}w$ in (IGM 5)) we have $\ell \left(w\right)=\ell \left({s}_{i}{s}_{i}w\right)<\ell \left({s}_{i}w\right).$ This completes the proof. $\square$

Suppose ${w}_{1},{w}_{2}\in W$ and ${w}_{1}\ne {w}_{2}.$ Then ${w}_{1}B\ne {w}_{2}B.$

 Proof. We have to show that $B\ne {w}_{1}^{-1}{w}_{2}B,$ i.e. $B\ne wB$ if $w\ne 1.$ So let $w={s}_{i}\cdots$ be a reduced expression for $w.$ Then $\ell \left({s}_{i}w\right)<\ell \left(w\right),$ hence ${w}^{-1}{\alpha }_{i}<0,$ hence ${w}^{-1}{\alpha }_{i}\notin B,$ i.e. ${\alpha }_{i}\notin wB.$ So $B\ne wB$ as required. $\square$

Example. Since $B$ is a basis of $R,$ so is $-B.$ Positive roots relative to $B$ are negative roots relative to $-B$ and vice versa. By Proposition 6.11 we have $-B={w}_{0}B$ for a unique ${w}_{0}\in W.$ ${w}_{0}$ is called the longest element of $W$ (relative to the basis $B$). We have ${w}_{0}^{2}=1,$ because ${w}_{0}^{2}B={w}_{0}\left(-B\right)=B.$

For each $w\in W$ let $R\left(w\right)={R}^{+}\cap w{R}^{-}.$

Suppose that $\ell \left(w\right)>\ell \left({s}_{i}w\right).$ Then $R(w) = si R(siw) ∪ {αi}.$

 Proof. We have $R\left({s}_{i}w\right) = {R}^{+}\cap {s}_{i}w{R}^{-}$ and therefore ${s}_{i}R\left({s}_{i}w\right) = {s}_{i}{R}^{+} \cap w{R}^{-}.\phantom{\rule{2em}{0ex}}$ (IGM 6) Now by Proposition 6.5 ${s}_{i}{R}^{+} = \left({R}^{+}-\left\{{\alpha }_{i}\right\}\right) \cup \left\{-{\alpha }_{i}\right\}\phantom{\rule{2em}{0ex}}$ (IGM 7) and by Proposition 6.10 ${w}^{-1}{\alpha }_{i}<0,$ i.e. ${\alpha }_{i}\in w{R}^{-}$ and therefore $-{\alpha }_{i}\notin w{R}^{-}.$ Hence from (IGM 6) and (IGM 7) we deduce that ${s}_{i}R\left({s}_{i}w\right) = \left({R}^{+}-\left\{{\alpha }_{i}\right\}\right) \cap w{R}^{-} = \left({R}^{+}\cap w{R}^{-}\right) - \left\{{\alpha }_{i}\right\} = R\left(w\right) - \left\{{\alpha }_{i}\right\}.$ $\square$

[Compare Schubert polynomials, Ch. I, esp. (1.2).]

Note that ${\alpha }_{i}\notin {s}_{i}R\left({s}_{i}w\right)$ (otherwise we should have $-{\alpha }_{i}={s}_{i}{\alpha }_{i}\in R\left({s}_{i}w\right)\subseteq {R}^{+},$ impossible).

1. Let $w={t}_{1}\cdots {t}_{p}$ be a reduced expression, where ${t}_{k}={s}_{{\beta }_{k}}$ with ${\beta }_{k}\in B.$ Then $R\left(w\right) = \left\{{\beta }_{1},{t}_{1}{\beta }_{2},{t}_{1}{t}_{2}{\beta }_{3},...,{t}_{1}\cdots {t}_{p-1}{\beta }_{p}\right\}\phantom{\rule{2em}{0ex}}$ (IGM 8) and these $p$ roots are all distinct. In particular, $\ell \left(w\right)=\mathrm{Card}\phantom{\rule{.2em}{0ex}}R\left(w\right).$

 Proof. Since ${t}_{1}w={t}_{2}\cdots {t}_{p}$ it follows that $\ell \left(w\right)=p>\ell \left({t}_{1}w\right),$ hence by Proposition 6.12 $R\left(w\right) = \left\{{\beta }_{1}\right\} \cup {t}_{1}R\left({t}_{2}\cdots {t}_{p}\right),$ from which (IGM 8) follows by induction on $p$ [SP, (1.7)]. Suppose ${t}_{1}\cdots {t}_{i-1}{\beta }_{i} = {t}_{1}\cdots {t}_{j-1}{\beta }_{j}$ where $i<j.$ Then ${\beta }_{i} = {t}_{i}\cdots {t}_{j-1}{\beta }_{j}$ and therefore by Proposition 2.1 ${t}_{i} = {s}_{{\beta }_{i}} = {t}_{i}\cdots {t}_{j-1}{s}_{{\beta }_{j}}{\left({t}_{i}\cdots {t}_{j-1}\right)}^{-1} = {t}_{i}\cdots {t}_{j}{\left({t}_{i}\cdots {t}_{j-1}\right)}^{-1},$ from which it follows that ${t}_{i}\cdots {t}_{j} = {t}_{i}{t}_{i}\cdots {t}_{j-1} = {t}_{i+1}\cdots {t}_{j-1}$ and hence that $w = {t}_{1}\cdots {t}_{p} = {t}_{1}\cdots \stackrel{^}{{t}_{i}}\cdots \stackrel{^}{{t}_{j}}\cdots {t}_{p}$ (${t}_{i}$ and ${t}_{j}$ omitted), contradicting the assumption that ${t}_{1}\cdots {t}_{p}$ is reduced. Hence the $p$ roots are all distinct. $\square$

Example. $R\left({w}_{0}\right)={R}^{+}\cap {w}_{0}{R}^{-}={R}^{+},$ hence $\ell \left({w}_{0}\right)=\mathrm{Card}\left({R}^{+}\right).$

$\ell \left(w\right) = \ell \left({s}_{i}w\right)+1 ⇔ {w}^{-1}{\alpha }_{i} < 0,\phantom{\rule{2em}{0ex}}\ell \left(w\right) = \ell \left({s}_{i}w\right)-1 ⇔ {w}^{-1}{\alpha }_{i} > 0.$

 Proof. We have ${w}^{-1}{\alpha }_{i}<0 ⇒ \ell \left(w\right) > \ell \left({s}_{i}w\right)$ (Proposition 6.10) $⇒ R\left(w\right) = {s}_{i}R\left({s}_{i}w\right) \cup \left\{{\alpha }_{i}\right\}$ (Proposition 6.12) $⇒ \ell \left(w\right) = \ell \left({s}_{i}w\right)+1$ (Proposition 6.13). Replace $w$ by ${s}_{i}w:$ ${w}^{-1}{\alpha }_{i}>0 ⇒ {\left({s}_{i}w\right)}^{-1}{\alpha }_{i} = {w}^{-1}{s}_{i}{\alpha }_{i} = -{w}^{-1}{\alpha }_{i}<0 ⇒ \ell \left({s}_{i}w\right) = \ell \left(w\right)+1.$ $\square$

(Exchange lemma) Let $w={t}_{1}\cdots {t}_{p}={u}_{1}\cdots {u}_{p}$ be two reduced expressions for $w,$ where ${t}_{k}={s}_{{\beta }_{k}}$ and ${u}_{k}={s}_{{\gamma }_{k}}$ with ${\beta }_{k},{\gamma }_{k}\in B.$ Then for some $i\in \left[1,p\right]$ we have $w = {u}_{1}{t}_{1}\cdots \stackrel{^}{{t}_{i}}\cdots {t}_{p}$ (i.e. we can exchange ${u}_{1}$ with one of the ${t}_{i}$) [SP, (1.8)].

 Proof. By Proposition 6.13 we have ${\gamma }_{1}\in R\left(w\right),$ hence ${\gamma }_{1}={t}_{1}\cdots {t}_{i-1}{\beta }_{i}$ for some $i\in \left[1,p\right].$ Hence by Proposition 2.1 ${u}_{1} = {s}_{{\gamma }_{1}} = {t}_{1}\cdots {t}_{i-1}{s}_{{\beta }_{i}}{\left({t}_{1}\cdots {t}_{i-1}\right)}^{-1} = \left({t}_{1}\cdots {t}_{i-1}{t}_{i}\right){\left({t}_{1}\cdots {t}_{i-1}\right)}^{-1}$ and therefore ${t}_{1}\cdots {t}_{i} = {u}_{1}{t}_{1}\cdots {t}_{i-1},$ giving $w={t}_{1}\cdots {t}_{i}\cdots {t}_{p}={u}_{1}{t}_{1}\cdots {t}_{i-1}{t}_{i+1}\cdots {t}_{p}.$ $\square$

We shall next deduce from this exchange lemma that the Weyl group $W$ is a Coxeter group (definition later). Consider two generators ${s}_{i},{s}_{j}$ of $W$ ($i\ne j$) and let ${m}_{ij}$ be the order of ${s}_{i}{s}_{j}$ in $W,$ so that ${m}_{ij}={m}_{ji}$ (because ${s}_{j}{s}_{i}={\left({s}_{i}{s}_{j}\right)}^{-1}$). Then we have ${s}_{i}{s}_{j}{s}_{i}\cdots = {s}_{j}{s}_{i}{s}_{j}\cdots \phantom{\rule{2em}{0ex}}$ (IGM 9) where there are ${m}_{ij}$ ($\ge 2$) terms on either side.

Let $w\in W,$ of length $\ell \left(w\right)=p.$ A reduced word for $w$ is a sequence $\underset{_}{t}=\left({t}_{1},...,{t}_{p}\right)$ where each ${t}_{i}$ is one of the ${s}_{j},$ and $w={t}_{1}\cdots {t}_{p}.$ Let $S\left(w\right)$ denote the set of all reduced words for $w.$ We make $S\left(w\right)$ into a graph as follows: let ${u}_{ij}$ denote the word $\left({s}_{i},{s}_{j},{s}_{i},...\right)$ with ${m}_{ij}$ terms. Suppose $\underset{_}{t}\in S\left(w\right)$ contains ${u}_{ij}$ as a subword, and let $\underset{_}{{t}^{\prime }}$ be the word obtained from $\underset{_}{t}$ by replacing ${u}_{ij}$ by ${u}_{ji};$ then by (IGM 9) $\underset{_}{{t}^{\prime }}\in S\left(w\right),$ and we join $\underset{_}{t},\underset{_}{{t}^{\prime }}$ by an edge.

The graph $S\left(w\right)$ is connected.

 Proof. Induction on $\ell \left(w\right).$ When $\ell \left(w\right)\le 1,$ $S\left(w\right)$ has just one element. Let $\underset{_}{t}=\left({t}_{1},...,{t}_{p}\right)$ and $\underset{_}{u}=\left({u}_{1},...,{u}_{p}\right)$ be elements of $S\left(w\right).$ We shall write $\underset{_}{t}\equiv \underset{_}{u}$ if $\underset{_}{t},\underset{_}{u}$ are in the same connected component of $S\left(w\right).$ The inductive hypothesis assures us that ${t}_{1}={u}_{1}⇒\underset{_}{t}\equiv \underset{_}{u}.\phantom{\rule{2em}{0ex}}$ (IGM 10) For if ${w}^{\prime }={t}_{1}w={u}_{1}w$ then $\ell \left({w}^{\prime }\right)=p-1$ and hence $\left({t}_{2},...,{t}_{p}\right)\equiv \left({u}_{2},...,{u}_{p}\right).$ We want to prove that $\underset{_}{t}\equiv \underset{_}{u}.$ If ${t}_{1}={u}_{1}$ we are through, by (IGM 10). If ${t}_{1}\ne {u}_{1},$ then (exchange) there exists $i\in \left[1,p\right]$ such that $\underset{_}{a} = \left({u}_{1},{t}_{1},...,\stackrel{^}{{t}_{i}},...,{t}_{p}\right) \in S\left(w\right).$ Suppose $i\ne p.$ Then $\underset{_}{t} \equiv \underset{_}{a} \equiv \underset{_}{u}$ by (IGM 10), and therefore $\underset{_}{t}\equiv \underset{_}{u}.$ Suppose $i=p.$ Let $m$ be the order of ${t}_{1}{u}_{1}$ in $W.$ If $m=2$ then $\underset{_}{{a}^{\prime }} = \left({t}_{1},{u}_{1},{t}_{2},...,{t}_{p-1}\right) \in S\left(w\right)$ and $\underset{_}{t} \equiv \underset{_}{{a}^{\prime }} \equiv \underset{_}{a} \equiv \underset{_}{u},$ so again $\underset{_}{t}\equiv \underset{_}{u}.$ Suppose $i=p$ and $m>2.$ We have $\underset{_}{a} = \left({u}_{1},{t}_{1},...,{t}_{p-1}\right) \in S\left(w\right),\phantom{\rule{1em}{0ex}}\underset{_}{t} = \left({t}_{1},{t}_{2},...,{t}_{p}\right),$ hence (exchange) there exists $i\in \left[1,p-1\right]$ such that $\underset{_}{b} = \left({t}_{1},{u}_{1},{t}_{1},...,\stackrel{^}{{t}_{i}},...,{t}_{p-1}\right) \in S\left(w\right).$ Suppose $i\ne p-1.$ Then we have $\underset{_}{t} \equiv \underset{_}{b} \equiv \underset{_}{a} \equiv \underset{_}{u}$ by (IGM 10), and hence $\underset{_}{t}\equiv \underset{_}{u}.$ Suppose $i=p-1$ and $m=3.$ Then $\underset{_}{{b}^{\prime }} = \left({u}_{1},{t}_{1},{u}_{1},{t}_{2},...,{t}_{p-2}\right) \in S\left(w\right)$ and $\underset{_}{t}\equiv \underset{_}{b}\equiv \underset{_}{{b}^{\prime }}\equiv \underset{_}{u},$ so again we are through. 
Suppose $i=p-1$ and $m>3.$ Then we have $b_ = (t1,u1,t1,t2,...,tp-2) ∈ S(w), u_ = (u1,u2,...,up-1) ∈ S(w),$ so by exchange there exists $i\in \left[1,p-2\right]$ such that $c_ = (u1,t1,u1,t1,...,ti^,...,tp-2) ∈ S(w).$ Suppose $i\ne p-2.$ Then $t_ ≡ b_ ≡ c_ ≡ u_$ and again $\underset{_}{t}\equiv \underset{_}{u}.$ Suppose $i=p-2$ and $m=4.$ Then $c′_ = (t1,u1,t1,u1,t2,...,tp-2) ∈ S(w)$ and $t_ ≡ c′_ ≡ c_ ≡ u_$ so again $\underset{_}{t}\equiv \underset{_}{u}.$ Suppose $i=p-2$ and $m>4.$ Repeat the argument: eventually we shall get $\underset{_}{t}\equiv \underset{_}{u},$ as required. $\square$

The generators and relations ${s}_{i}^{2}=1,\phantom{\rule{1em}{0ex}}{\left({s}_{i}{s}_{j}\right)}^{{m}_{ij}} = 1\phantom{\rule{.5em}{0ex}}\left(i\ne j\right)$ form a presentation of $W.$

 Proof. What this means is the following: given a group $G$ and elements ${g}_{1},...,{g}_{r}\in G$ satisfying ${g}_{i}^{2}=1$ and ${\left({g}_{i}{g}_{j}\right)}^{{m}_{ij}}=1$ $\left(i\ne j\right),$ there exists a homomorphism $f:W\to G$ (necessarily unique) such that $f\left({s}_{i}\right) = {g}_{i}\phantom{\rule{.5em}{0ex}}\left(1\le i\le r\right).$ Let $w\in W$ and let $\left({t}_{1},...,{t}_{p}\right)=\underset{_}{t}\in S\left(w\right).$ Since $w={t}_{1}\cdots {t}_{p}$ we must have $f\left(w\right) = f\left({t}_{1}\right)\cdots f\left({t}_{p}\right) = F\left(\underset{_}{t}\right)$ say. So we have to show that $F\left(\underset{_}{t}\right)=F\left(\underset{_}{u}\right)$ if $\underset{_}{t},\underset{_}{u}\in S\left(w\right).$ Now in $G$ we have ${g}_{i}{g}_{j}{g}_{i}\cdots = {g}_{j}{g}_{i}{g}_{j}\cdots$ (${m}_{ij}$ terms on either side), i.e. $f\left({s}_{i}\right)f\left({s}_{j}\right)f\left({s}_{i}\right)\cdots = f\left({s}_{j}\right)f\left({s}_{i}\right)f\left({s}_{j}\right)\cdots .$ Hence $F\left(\underset{_}{t}\right)=F\left(\underset{_}{u}\right)$ if $\underset{_}{t},\underset{_}{u}$ are joined by an edge in $S\left(w\right).$ By Proposition 6.16 it follows that $F\left(\underset{_}{t}\right)=F\left(\underset{_}{u}\right)$ for all $\underset{_}{t},\underset{_}{u}\in S\left(w\right),$ as required. So $f$ is well defined and it remains to check that it is a homomorphism. Consider $f\left({s}_{i}w\right):$ suppose first that $\ell \left({s}_{i}w\right)=\ell \left(w\right)+1.$ If $w={t}_{1}\cdots {t}_{p}$ is a reduced expression, then ${s}_{i}w={s}_{i}{t}_{1}\cdots {t}_{p}$ is also reduced, hence $f\left({s}_{i}w\right) = f\left({s}_{i}\right)f\left({t}_{1}\right)\cdots f\left({t}_{p}\right) = f\left({s}_{i}\right)f\left(w\right).$ If on the other hand $\ell \left({s}_{i}w\right)=\ell \left(w\right)-1$ (Proposition 6.14), replace $w$ by ${s}_{i}w$ in the previous case: $f\left(w\right) = f\left({s}_{i}\right)f\left({s}_{i}w\right)$ and hence $f\left({s}_{i}w\right) = f{\left({s}_{i}\right)}^{-1}f\left(w\right) = f\left({s}_{i}\right)f\left(w\right)$ since $f\left({s}_{i}\right)={g}_{i}={g}_{i}^{-1}.$ So we have $f\left({s}_{i}w\right) = f\left({s}_{i}\right)f\left(w\right)\phantom{\rule{2em}{0ex}}$ (IGM 11) in all cases. Hence if $v\in W,$ $v={u}_{1}\cdots {u}_{q}$ reduced, $f\left(vw\right) = f\left({u}_{1}{u}_{2}\cdots {u}_{q}w\right) = f\left({u}_{1}\right)f\left({u}_{2}\cdots {u}_{q}w\right) = \cdots = f\left({u}_{1}\right)\cdots f\left({u}_{q}\right)f\left(w\right) = f\left(v\right)f\left(w\right).$ $\square$
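For ${B}_{2}$ one can compute the order ${m}_{12}$ of ${s}_{1}{s}_{2}$ directly and confirm the braid relation (IGM 9); a sketch with assumed names (not from the text):

```python
from fractions import Fraction

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(a, x):
    c = Fraction(2 * inner(a, x), inner(a, a))
    return tuple(xi - c * ai for xi, ai in zip(x, a))

def apply_word(word, x):
    # apply s_{a_1} ... s_{a_k} to x (rightmost factor acts first)
    for a in reversed(word):
        x = reflect(a, x)
    return x

s1, s2 = (0, 1), (1, -1)          # simple roots of B_2
basis = [(1, 0), (0, 1)]

def order(word):
    # order of the product of the reflections, acting on V
    k, y = 1, [apply_word(word, e) for e in basis]
    while y != basis:
        y = [apply_word(word, v) for v in y]
        k += 1
    return k

assert order([s1, s2]) == 4       # m_12 = 4 for B_2
# braid relation: s1 s2 s1 s2 = s2 s1 s2 s1 (m_12 = 4 factors each side)
for e in basis:
    assert apply_word([s1, s2, s1, s2], e) == apply_word([s2, s1, s2, s1], e)
```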

## Weyl chamber

Let $R,$ $B=\left\{{\alpha }_{1},...,{\alpha }_{r}\right\},$ ${R}^{+}$ etc. be as before. Recall that the Weyl chamber associated with $B$ is $C=\left\{x\in V:⟨{\alpha }_{i},x⟩>0,\phantom{\rule{.5em}{0ex}}1\le i\le r\right\}.$ It is an open simplicial cone and its closure in $V$ is $\stackrel{_}{C}=\left\{x\in V:⟨{\alpha }_{i},x⟩\ge 0,\phantom{\rule{.5em}{0ex}}1\le i\le r\right\}.$

$\stackrel{_}{C}$ is a fundamental domain for the action of $W$ on $V$ (i.e. every $W-$orbit in $V$ meets $\stackrel{_}{C}$ in exactly one point.)

 Proof. (cf. Proposition 6.6) Let $x\in V,$ let $\rho =\frac{1}{2}\sum _{\alpha >0}\alpha ,$ and choose $w\in W$ so that $⟨wx,\rho ⟩$ is as large as possible. Then for $i\in \left[1,r\right]$ we have $⟨wx,\rho ⟩ \ge ⟨{s}_{i}wx,\rho ⟩ = ⟨wx,{s}_{i}\rho ⟩ = ⟨wx,\rho -{\alpha }_{i}⟩ = ⟨wx,\rho ⟩ - ⟨wx,{\alpha }_{i}⟩$ (Proposition 6.5), so that $⟨wx,{\alpha }_{i}⟩\ge 0$ and hence $wx\in \stackrel{_}{C}.$ So each $W-$orbit meets $\stackrel{_}{C}.$ Remains to prove that if $x\in \stackrel{_}{C}$ and $y=wx\in \stackrel{_}{C}$ then $x=y$ (but it doesn't follow necessarily that $w=1$). We proceed by induction on $\ell \left(w\right).$ If $\ell \left(w\right)=0$ then $w=1,$ so $y=x.$ If $\ell \left(w\right)\ge 1$ we can write $w={s}_{i}w\prime$ with $\ell \left(w\prime \right)=\ell \left(w\right)-1$ (take a reduced word $w={s}_{i}\cdots$). Then $w\prime ={s}_{i}w,$ so that $\ell \left(w\right)=\ell \left({s}_{i}w\right)+1,$ hence ${w}^{-1}{\alpha }_{i}\in {R}^{-}$ by Proposition 6.14. It follows that $⟨{\alpha }_{i},y⟩ = ⟨{\alpha }_{i},wx⟩ = ⟨{w}^{-1}{\alpha }_{i},x⟩ \le 0$ (because $x\in \stackrel{_}{C}$), but also $⟨{\alpha }_{i},y⟩\ge 0$ (because $y\in \stackrel{_}{C}$). Hence $⟨{\alpha }_{i},y⟩=0,$ i.e. ${s}_{i}y=y$ and therefore $w\prime x={s}_{i}wx={s}_{i}y=y.$ By the induction hypothesis we conclude that $x=y.$ $\square$
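The proof suggests a concrete algorithm: while some $⟨{\alpha }_{i},x⟩<0,$ reflect in that wall; each step increases $⟨x,\rho ⟩,$ so the loop terminates with the image in $\stackrel{_}{C}.$ A sketch for ${B}_{2}$ (assumed names, not from the text):

```python
from fractions import Fraction

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def reflect(a, x):
    c = Fraction(2 * inner(a, x), inner(a, a))
    return tuple(xi - c * ai for xi, ai in zip(x, a))

def to_chamber(x, simples):
    # reflect in a violated wall until x lies in the closed chamber
    x = tuple(map(Fraction, x))
    moved = True
    while moved:
        moved = False
        for a in simples:
            if inner(a, x) < 0:
                x = reflect(a, x)
                moved = True
    return x

simples = [(0, 1), (1, -1)]       # simple roots of B_2
y = to_chamber((-2, 1), simples)
assert y == (2, 1)                # W-orbit representative in the chamber
assert all(inner(a, y) >= 0 for a in simples)
assert inner(y, y) == inner((-2, 1), (-2, 1))   # same length (isometries)
```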

The set ${V}_{\mathrm{reg}} = V-\bigcup _{\alpha \in R}{H}_{\alpha }$ is an open dense subset of $V.$ By Lemma 6.15 ${V}_{\mathrm{reg}} = \bigcup _{w\in W}wC\phantom{\rule{1em}{0ex}}\text{and}\phantom{\rule{1em}{0ex}}V=\stackrel{_}{{V}_{\mathrm{reg}}} = \bigcup _{w\in W}w\stackrel{_}{C},$ by taking closures.

It follows from Proposition 7.1 that ${V}_{\mathrm{reg}}$ is the disjoint union of the chambers $wC$ (i.e. they don't overlap). For if $y\in C\cap {w}^{-1}C$ where $w\ne 1,$ then $y$ and $wy$ both lie in $\stackrel{_}{C},$ and the proof of Proposition 7.1 shows that $⟨{\alpha }_{i},y⟩=0$ for some $i,$ which contradicts $y\in C.$ So if $w\ne 1$ we have $C\cap {w}^{-1}C=\varnothing .$

Hence the chambers $wC$ $\left(w\in W\right)$ are the connected components of the topological space ${V}_{\mathrm{reg}}:$ each $wC$ is a cone, hence convex, hence connected, also open.

The basis $B$ corresponding to $C$ may be described as follows:

Let $\alpha \in {R}^{+}.$ Then $\stackrel{_}{C}\cap {H}_{\alpha }$ spans ${H}_{\alpha }$ if and only if $\alpha \in B.$

 Proof. Let $\alpha \in {R}^{+},$ say $\alpha =\sum _{i=1}^{r}{m}_{i}{\alpha }_{i},$ and let $I=\left\{i:{m}_{i}>0\right\}.$ For $x\in \stackrel{_}{C}$ we have $⟨\alpha ,x⟩=\sum _{i\in I}{m}_{i}⟨{\alpha }_{i},x⟩=0$ if and only if $⟨{\alpha }_{i},x⟩=0$ for all $i\in I.$ It follows that $\stackrel{_}{C}\cap {H}_{\alpha }\subseteq \bigcap _{i\in I}{H}_{{\alpha }_{i}},$ of dimension $r-|I|.$ Hence $\stackrel{_}{C}\cap {H}_{\alpha }$ spans ${H}_{\alpha }$ (of dimension $r-1$) if and only if $|I|=1,$ i.e. if and only if $\alpha ={\alpha }_{i}$ for some $i$ (since $R$ is reduced), i.e. if and only if $\alpha \in B.$ $\square$

As a corollary:

Let $B$ be a basis of $R.$ Then ${B}^{\vee }$ is a basis of ${R}^{\vee }.$

 Proof. Follows from Proposition 7.1a, since ${H}_{\alpha }={H}_{{\alpha }^{\vee }}.$ $\square$

Let $\left({v}_{1},...,{v}_{r}\right)$ be the basis of $V$ dual to $B=\left({\alpha }_{1},...,{\alpha }_{r}\right):$ $⟨αi,vj⟩ = δij.$ If $x\in V$ we have $x = ∑i=1r ⟨x,αi⟩vi$ so that $\stackrel{_}{C}$ is the cone consisting of all nonnegative linear combinations of the dual basis vectors ${v}_{i}.$

The dual cone $\stackrel{_}{{C}^{*}}$ consists of the nonnegative linear combinations of the ${\alpha }_{i},$ and we have $\stackrel{_}{C}\subseteq \stackrel{_}{{C}^{*}}$ (acute cone and obtuse cone: pictures for ${A}_{2},{B}_{2},{G}_{2}$). We make use of $\stackrel{_}{{C}^{*}}$ to define a partial order on $V:$ if $x,y\in V$ then $x\ge y$ means that $x-y\in \stackrel{_}{{C}^{*}},$ i.e. $x-y$ is a nonnegative linear combination of ${\alpha }_{1},...,{\alpha }_{r},$ or equivalently $⟨x-y,{v}_{i}⟩\ge 0$ for $1\le i\le r.$

Example. Suppose $R$ is of type ${A}_{n-1},$ $V\subseteq {ℝ}^{n}$ is the hyperplane perpendicular to $e=\frac{1}{n}\left({e}_{1}+\cdots +{e}_{n}\right).$ We have $⟨{e}_{i},e⟩=\frac{1}{n}=⟨e,e⟩,$ so that ${e\prime }_{i}={e}_{i}-e\in V.$ The dual basis is $\left({v}_{1},...,{v}_{n-1}\right)$ where ${v}_{i} = {e\prime }_{1}+\cdots +{e\prime }_{i} = {e}_{1}+\cdots +{e}_{i}-ie = \frac{1}{n}\left(\underset{i}{\underbrace{n-i,...,n-i}},\underset{n-i}{\underbrace{-i,...,-i}}\right).$

Let $x=\left({x}_{1},...,{x}_{n}\right),$ $y=\left({y}_{1},...,{y}_{n}\right)\in V.$ Then $⟨x,{v}_{i}⟩={x}_{1}+\cdots +{x}_{i},$ hence $x\ge y$ if and only if ${x}_{1}+\cdots +{x}_{i}\ge {y}_{1}+\cdots +{y}_{i}$ for $1\le i\le n-1$ (note that ${x}_{1}+\cdots +{x}_{n}={y}_{1}+\cdots +{y}_{n}=0$): this is the dominance partial order.
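The dominance order on partial sums is then a one-line test (hypothetical helper, not from the text):

```python
def dominates(x, y):
    # x >= y in dominance: equal total sums and all partial sums compare
    assert sum(x) == sum(y)
    return all(sum(x[:i]) >= sum(y[:i]) for i in range(1, len(x)))

# e.g. (3,1,0) and (2,2,0): their difference (1,-1,0) is the
# simple root e_1 - e_2, a nonnegative combination of simple roots
assert dominates((3, 1, 0), (2, 2, 0))
assert not dominates((2, 2, 0), (3, 1, 0))
```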

Let $x\in V.$ Then the following are equivalent:

1. $x\ge wx,$ all $w\in W,$
2. $x\ge {s}_{i}x,$ $1\le i\le r,$
3. $x\in \stackrel{_}{C}.$

 Proof. (i) $⇒$ (ii): obvious. (ii) $⇒$ (iii): We have $x-{s}_{i}x=⟨{\alpha }_{i}^{\vee },x⟩{\alpha }_{i}$ from Proposition 2.1, hence $x\ge {s}_{i}x$ means that $⟨{\alpha }_{i}^{\vee },x⟩\ge 0,$ or equivalently $⟨{\alpha }_{i},x⟩\ge 0,$ i.e. $x\in \stackrel{_}{C}.$ (iii) $⇒$ (i): Let $w\in W.$ Induction on $\ell \left(w\right).$ $\ell \left(w\right)=0$ implies $w=1,$ OK. Suppose $\ell \left(w\right)\ge 1.$ Then $w=w\prime {s}_{i}$ for some $i\in \left[1,r\right]$ and $\ell \left(w\prime \right)=\ell \left(w\right)-1$ (take a reduced expression for $w$ ending with ${s}_{i}$). We have $x-wx = \left(x-w\prime x\right) + w\prime \left(x-{s}_{i}x\right).$ Now $x-w\prime x\ge 0$ (induction hypothesis), and $w\prime \left(x-{s}_{i}x\right) = w\left({s}_{i}x-x\right) = -⟨{\alpha }_{i}^{\vee },x⟩w{\alpha }_{i};$ by Proposition 6.14 (with $w$ replaced by ${w}^{-1}$) we have $w{\alpha }_{i}<0,$ hence $-⟨{\alpha }_{i}^{\vee },x⟩w{\alpha }_{i}\ge 0.$ So $x-wx\ge 0$ as required. $\square$

Let $x\in V$ and let ${R}_{x}=\left\{\alpha \in R:⟨\alpha ,x⟩=0\right\},$ ${W}_{x}=\left\{w\in W:wx=x\right\}.$ (So ${R}_{x}=\varnothing$ if and only if $x$ is regular.)

If $x\in V$ is not regular, then ${R}_{x}$ is a root system and ${W}_{x}$ is its Weyl group.

 Proof. Let $\alpha ,\beta \in {R}_{x};$ then $⟨{\alpha }^{\vee },\beta ⟩\in ℤ$ and $x\in {H}_{\alpha },$ so that $⟨{s}_{\alpha }\beta ,x⟩ = ⟨\beta ,{s}_{\alpha }x⟩ = ⟨\beta ,x⟩ = 0,$ so that ${s}_{\alpha }\beta \in {R}_{x}.$ So ${R}_{x}$ is a root system. Let ${W\prime }_{x}$ be the subgroup of $W$ generated by the ${s}_{\alpha },$ $\alpha \in {R}_{x}.$ Clearly ${W\prime }_{x}$ is a subgroup of ${W}_{x}$ and we have to show ${W\prime }_{x}={W}_{x}.$ If $y=ux$ ($u\in W$) then ${R}_{y}=u{R}_{x},$ and hence ${W\prime }_{y}$ is generated by the ${s}_{u\alpha }=u{s}_{\alpha }{u}^{-1}$ where $\alpha \in {R}_{x},$ so that ${W\prime }_{y}=u{W\prime }_{x}{u}^{-1}$ and likewise ${W}_{y}=u{W}_{x}{u}^{-1}.$ Choose $u\in W$ such that $y=ux\in \stackrel{_}{C}.$ Enough to show ${W\prime }_{y}={W}_{y}.$ So let $w\in {W}_{y},$ i.e. $y=wy.$ The proof of Proposition 7.5 shows that if $w\ne 1$ then $w={s}_{i}w\prime$ with $\ell \left(w\prime \right)<\ell \left(w\right)$ and $⟨{\alpha }_{i},y⟩=0,$ so that ${\alpha }_{i}\in {R}_{y},$ ${s}_{i}\in {W\prime }_{y},$ and $w\prime y = {s}_{i}wy = {s}_{i}y = y,$ i.e. $w\prime \in {W}_{y}.$ By induction on $\ell \left(w\right)$ we may assume $w\prime \in {W\prime }_{y}$ and then $w={s}_{i}w\prime \in {W\prime }_{y}.$ $\square$

(So the stabilizer in $W$ of any $x\in V$ is generated by the reflections it contains.)

If with basis