## Kac-Moody Lie Algebras, Chapter II: Root system and the Weyl group

Last update: 27 August 2012

Abstract.
This is a typed version of I.G. Macdonald's lecture notes on Kac-Moody Lie algebras from 1983.

## Introduction

In this chapter, $A$ is a Cartan matrix until further notice.

Recall that $Q=\sum _{i=1}^{n}ℤ{\alpha }_{i}$ is the lattice generated by $B$ in ${𝔥}^{*}$, and that $\alpha \in Q$ is a root of $𝔤\left(A\right)$ if $\alpha \ne 0$ and ${𝔤}_{\alpha }\ne 0$. The multiplicity of $\alpha$ is

${m}_{\alpha }=\text{dim}\phantom{\rule{0.2em}{0ex}}{𝔤}_{\alpha }<\infty$

(and may well be $>1$).

Let $R$ denote the set of roots. By (1.7), each root is either positive $\left(\alpha >0\right)$ or negative $\left(\alpha <0\right)$. Moreover the involution $\omega$ interchanges ${𝔤}_{\alpha }$ and ${𝔤}_{-\alpha }$, hence if $\alpha$ is a root, so is $-\alpha$ with the same multiplicity.

Let ${R}^{+}$ denote the set of positive roots. We have

${𝔫}_{+}=\underset{\alpha \in {R}^{+}}{\oplus }{𝔤}_{\alpha },\phantom{\rule{1em}{0ex}}{𝔫}_{-}=\underset{\alpha \in {R}^{+}}{\oplus }{𝔤}_{-\alpha }\text{.}$

(2.1)

1. $R={R}^{+}\cup \left(-{R}^{+}\right)$ (disjoint union)
2. If $\alpha \in R$ then $-\alpha \in R$, and ${m}_{\alpha }={m}_{-\alpha }$.
3. ${\alpha }_{i}\in R\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$ and ${m}_{{\alpha }_{i}}=1$.
4. If $r{\alpha }_{i}\in R\phantom{\rule{0.5em}{0ex}}\left(r\in ℤ\right)$ then $r=±1$.
5. If $\alpha \in {R}^{+},\phantom{\rule{0.2em}{0ex}}\alpha \notin B$, then $\alpha -{\alpha }_{i}\in {R}^{+}$ for some $i$.

 Proof. (i), (ii) done above; (iii) because ${𝔤}_{{\alpha }_{i}}=k{e}_{i}$; (iv) is clear. (v) Let $\left[{e}_{{i}_{1}}\dots {e}_{{i}_{r}}\right]$ be a nonzero element of ${𝔤}_{\alpha }$, so that $\alpha ={\alpha }_{{i}_{1}}+\dots +{\alpha }_{{i}_{r}}$ (and $r\ge 2$). Then $\left[{e}_{{i}_{2}}\dots {e}_{{i}_{r}}\right]\ne 0$ and lies in ${𝔤}_{\beta }$, where $\beta ={\alpha }_{{i}_{2}}+\dots +{\alpha }_{{i}_{r}}$. Thus $\beta \in {R}^{+}$ and $\beta =\alpha -{\alpha }_{{i}_{1}}$. $\square$
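
Property (v) of (2.1) lends itself to a machine check in a small example. The following sketch (Python; the root data of type ${A}_{2}$, i.e. $𝔤\left(A\right)\cong 𝔰{𝔩}_{3}$ with positive roots ${\alpha }_{1},{\alpha }_{2},{\alpha }_{1}+{\alpha }_{2}$, are hard-coded, and all identifiers are ours, not the text's) verifies that every non-simple positive root stays in ${R}^{+}$ after subtracting some simple root.

```python
# Sketch: property (2.1)(v) for the Cartan matrix of type A2, whose
# positive roots, in coordinates with respect to (alpha_1, alpha_2),
# are (1,0), (0,1) and (1,1).  (Hard-coded data; identifiers are ours.)
pos_roots = {(1, 0), (0, 1), (1, 1)}
simple = {(1, 0), (0, 1)}

def subtract_simple(alpha, i):
    """Coordinates of alpha - alpha_i."""
    a = list(alpha)
    a[i] -= 1
    return tuple(a)

# (2.1)(v): every non-simple positive root alpha has alpha - alpha_i in R+
for alpha in pos_roots - simple:
    assert any(subtract_simple(alpha, i) in pos_roots for i in range(2))
```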

## The dual root system

Recall from Chapter I that if $\left(𝔥,B,{B}^{\vee }\right)$ is a minimal realization of the matrix $A$, then $\left({𝔥}^{*},{B}^{\vee },B\right)$ is a minimal realization of the transposed matrix ${A}^{t}$. Let

${Q}^{\vee }=\sum _{i=1}^{n}ℤ{h}_{i}\subset 𝔥$

be the lattice in $𝔥$ spanned by ${h}_{1},\dots ,{h}_{n}$. Then ${Q}^{\vee }$ plays the role of $Q$ for the Lie algebra $𝔤\left({A}^{t}\right)$ (note that ${A}^{t}$ is also a Cartan matrix). The root system ${R}^{\vee }$ of $𝔤\left({A}^{t}\right)$ is called the dual of the root system $R$. The simple roots of ${R}^{\vee }$ are ${h}_{1},\dots ,{h}_{n}$, etc.

## The Weyl group W

For $1\le i\le n$ define ${w}_{i}:\phantom{\rule{0.2em}{0ex}}{𝔥}^{*}\to {𝔥}^{*}$ by

$\begin{array}{cc}{w}_{i}\left(\lambda \right)=\lambda -\lambda \left({h}_{i}\right){\alpha }_{i}& \left(1\right)\end{array}$

(2.2)

1. ${w}_{i}$ is an automorphism of the lattice $Q$.
2. ${w}_{i}\left({\alpha }_{i}\right)=-{\alpha }_{i};\phantom{\rule{0.5em}{0ex}}{w}_{i}^{2}=1;\phantom{\rule{0.5em}{0ex}}\text{det}\phantom{\rule{0.2em}{0ex}}\left({w}_{i}\right)=-1$.

 Proof. We have ${w}_{i}\left({\alpha }_{j}\right)={\alpha }_{j}-{\alpha }_{j}\left({h}_{i}\right){\alpha }_{i}={\alpha }_{j}-{a}_{ij}{\alpha }_{i}\in Q$. Hence ${w}_{i}Q\subset Q$. In particular, ${w}_{i}\left({\alpha }_{i}\right)={\alpha }_{i}-2{\alpha }_{i}=-{\alpha }_{i}$ (because ${a}_{ii}=2$). Next, ${w}_{i}^{2}\left(\lambda \right)={w}_{i}\left(\lambda \right)-\lambda \left({h}_{i}\right){w}_{i}\left({\alpha }_{i}\right)=\lambda -\lambda \left({h}_{i}\right){\alpha }_{i}+\lambda \left({h}_{i}\right){\alpha }_{i}=\lambda$. Finally, ${w}_{i}$ fixes pointwise the hyperplane $\left\{\lambda \in {𝔥}^{*}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\lambda \left({h}_{i}\right)=0\right\}$. Hence all but one of its eigenvalues are equal to 1, and the remaining eigenvalue is $-1$ (from above). Hence $\text{det}\phantom{\rule{0.2em}{0ex}}\left({w}_{i}\right)=-1$. $\square$
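
The formula ${w}_{i}\left({\alpha }_{j}\right)={\alpha }_{j}-{a}_{ij}{\alpha }_{i}$ from the proof realizes ${w}_{i}$ as an integer matrix on $Q$. A minimal sketch (Python, our notation; roots are written in coordinates relative to the basis $B$ of simple roots, and the type ${A}_{2}$ matrix is chosen only as a test case) checking ${w}_{i}^{2}=1$ and $\text{det}\left({w}_{i}\right)=-1$:

```python
# Sketch (our notation): the simple reflection w_i on the root lattice Q,
# in coordinates with respect to the simple roots, using alpha_j(h_i) = a_{ij}
# as in the proof above: w_i(alpha_j) = alpha_j - a_{ij} alpha_i.
def reflection_matrix(A, i):
    """Matrix of w_i acting on Q; columns are the images of the alpha_j."""
    n = len(A)
    M = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    for j in range(n):
        M[i][j] -= A[i][j]        # alpha_j  ->  alpha_j - a_{ij} alpha_i
    return M

def matmul(M, N):
    n = len(M)
    return [[sum(M[r][k] * N[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

A = [[2, -1], [-1, 2]]            # type A2, as a test case
for i in range(2):
    w = reflection_matrix(A, i)
    # w_i^2 = 1
    assert matmul(w, w) == [[1, 0], [0, 1]]
    # det(w_i) = -1  (2x2 determinant)
    assert w[0][0] * w[1][1] - w[0][1] * w[1][0] == -1
```

The same two checks go through verbatim for any Cartan matrix $A$, since they only use ${a}_{ii}=2$.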

Let $W$ denote the group of automorphisms of ${𝔥}^{*}$ generated by ${w}_{1},\dots ,{w}_{n}$. By (1.1) it depends only on the matrix $A:\phantom{\rule{0.2em}{0ex}}W=W\left(A\right)$ if we need to make the dependence explicit. It is called the Weyl group of $𝔤\left(A\right)$, or of $A$.

$W$ acts contragrediently on $𝔥$:

$\lambda \left(w·h\right)=\left({w}^{-1}\left(\lambda \right)\right)\left(h\right)\phantom{\rule{1em}{0ex}}\left(h\in 𝔥,\phantom{\rule{0.2em}{0ex}}\lambda \in {𝔥}^{*}\right)$

In particular we have

$\begin{array}{cc}{w}_{i}\left(h\right)=h-{\alpha }_{i}\left(h\right){h}_{i}& \left(2\right)\end{array}$

because

$\begin{array}{ccc}\lambda \left({w}_{i}h\right)& =& \left({w}_{i}\lambda \right)h\phantom{\rule{2em}{0ex}}\left(\text{since}\phantom{\rule{0.2em}{0ex}}{w}_{i}={w}_{i}^{-1}\phantom{\rule{0.2em}{0ex}}\text{by (2.2)}\right)\\ & =& \lambda \left(h\right)-\lambda \left({h}_{i}\right){\alpha }_{i}\left(h\right)\\ & =& \lambda \left(h-{\alpha }_{i}\left(h\right){h}_{i}\right)\end{array}$

for all $\lambda \in {𝔥}^{*}$.

From (1) and (2) it follows that $W$ acting on $𝔥$ is the Weyl group of ${A}^{t}$:

$W\left(A\right)\cong W\left({A}^{t}\right)$.

We want next to show that $W$ permutes the roots, i.e. that $wR=R$ for each $w\in W$. For this purpose we recall (1.19) that $\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i},\phantom{\rule{0.2em}{0ex}}\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}$ are locally nilpotent derivations of $𝔤\left(A\right)$, so that ${e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}},\phantom{\rule{0.2em}{0ex}}{e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}$ are automorphisms. Let us compute their effect on the generators of $𝔤\left(A\right)$:

If $h\in 𝔥$ we have $\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right)h=\left[{e}_{i},h\right]=-\left[h,{e}_{i}\right]=-{\alpha }_{i}\left(h\right){e}_{i}$, whence ${\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right)}^{2}h=0$ and therefore

$\begin{array}{cc}\left(\text{a}\right)& {e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}\left(h\right)=h-{\alpha }_{i}\left(h\right){e}_{i}\end{array}$

Likewise,

$\begin{array}{cc}\left({\text{a}}^{\prime }\right)& {e}^{-\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}\left(h\right)=h-{\alpha }_{i}\left(h\right){f}_{i}\end{array}$

Next, we have $\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right){f}_{i}={h}_{i},\phantom{\rule{0.2em}{0ex}}{\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right)}^{2}{f}_{i}=\left[{e}_{i},{h}_{i}\right]=-2{e}_{i},\phantom{\rule{0.2em}{0ex}}{\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right)}^{3}{f}_{i}=0$, whence

$\begin{array}{cc}\left(\text{b}\right)& {e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}\left({f}_{i}\right)={f}_{i}+{h}_{i}-{e}_{i}\end{array}$

and likewise

$\begin{array}{cc}\left({\text{b}}^{\prime }\right)& {e}^{-\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}\left({e}_{i}\right)={e}_{i}+{h}_{i}-{f}_{i}\text{.}\end{array}$

Now define

${\stackrel{\sim }{w}}_{i}={e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}{e}^{-\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}{e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}\phantom{\rule{2em}{0ex}}\left(1\le i\le n\right)$

(2.3) ${\stackrel{\sim }{w}}_{i}\left(h\right)={w}_{i}\left(h\right)$ for all $h\in 𝔥$ (which justifies the choice of notation)

 Proof. This is a calculation: $\begin{array}{ccc}{\stackrel{\sim }{w}}_{i}\left(h\right)& =& {e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}{e}^{-\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}\left(h-{\alpha }_{i}\left(h\right){e}_{i}\right)\phantom{\rule{2em}{0ex}}\text{by (a)}\\ & =& {e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}\left(h-{\alpha }_{i}\left(h\right){f}_{i}-{\alpha }_{i}\left(h\right)\left({e}_{i}+{h}_{i}-{f}_{i}\right)\right)\phantom{\rule{2em}{0ex}}\text{by (}{\text{a}}^{\prime }\text{), (}{\text{b}}^{\prime }\text{)}\\ & =& {e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}\left({w}_{i}h-{\alpha }_{i}\left(h\right){e}_{i}\right)\\ & =& {w}_{i}h-{\alpha }_{i}\left({w}_{i}h\right){e}_{i}-{\alpha }_{i}\left(h\right){e}_{i}\phantom{\rule{2em}{0ex}}\text{by (a) again}\\ & =& {w}_{i}h\text{,}\end{array}$ since ${\alpha }_{i}\left({w}_{i}h\right)=-{\alpha }_{i}\left(h\right)$. $\square$

(2.4) Let $\alpha \in R$, then ${\stackrel{\sim }{w}}_{i}{𝔤}_{\alpha }={𝔤}_{{w}_{i}\alpha }$.

 Proof. Let $x\in {𝔤}_{\alpha },\phantom{\rule{0.2em}{0ex}}h\in 𝔥$. Then we have $\begin{array}{ccc}\left[h,{\stackrel{\sim }{w}}_{i}x\right]& =& {\stackrel{\sim }{w}}_{i}\left[{\stackrel{\sim }{w}}_{i}^{-1}h,x\right]\phantom{\rule{2em}{0ex}}\text{because}\phantom{\rule{0.2em}{0ex}}{\stackrel{\sim }{w}}_{i}\phantom{\rule{0.2em}{0ex}}\text{is an automorphism}\\ & =& {\stackrel{\sim }{w}}_{i}\left[{w}_{i}h,x\right]\phantom{\rule{1em}{0ex}}\text{by (2.3)}\\ & =& \alpha \left({w}_{i}h\right){\stackrel{\sim }{w}}_{i}\left(x\right)\phantom{\rule{1em}{0ex}}\text{because}\phantom{\rule{0.2em}{0ex}}x\in {𝔤}_{\alpha }\phantom{\rule{0.2em}{0ex}}\text{(1.7)}\\ & =& \left({w}_{i}\alpha \right)\left(h\right){\stackrel{\sim }{w}}_{i}\left(x\right)\end{array}$ This calculation shows that ${\stackrel{\sim }{w}}_{i}\left(x\right)\in {𝔤}_{{w}_{i}\alpha }$, by (1.7) again, hence that ${\stackrel{\sim }{w}}_{i}{𝔤}_{\alpha }\subset {𝔤}_{{w}_{i}\alpha }$. It follows that ${𝔤}_{{w}_{i}\alpha }\ne 0$, i.e. that ${w}_{i}\alpha$ is a root; and then, replacing $\alpha$ by ${w}_{i}\alpha$ we have ${\stackrel{\sim }{w}}_{i}{𝔤}_{{w}_{i}\alpha }\subset {𝔤}_{\alpha }$ (because ${w}_{i}^{2}=1$). Hence $\text{dim}\phantom{\rule{0.2em}{0ex}}{𝔤}_{\alpha }\le \text{dim}\phantom{\rule{0.2em}{0ex}}{𝔤}_{{w}_{i}\alpha }\le \text{dim}\phantom{\rule{0.2em}{0ex}}{𝔤}_{\alpha }$ so we have equality throughout, and hence ${\stackrel{\sim }{w}}_{i}{𝔤}_{\alpha }={𝔤}_{{w}_{i}\alpha }$. $\square$

(2.5) If $\alpha \in R$ and $w\in W$, then $w\alpha \in R$ and ${m}_{w\alpha }={m}_{\alpha }$.

 Proof. Enough to prove this when $w$ is a generator ${w}_{i}$ of $W$, and then it follows from (2.4). $\square$

(2.6) If $\alpha \in {R}^{+},\phantom{\rule{0.2em}{0ex}}\alpha \ne {\alpha }_{i}$, then ${w}_{i}\alpha \in {R}^{+}$. Thus ${w}_{i}$ permutes the set ${R}^{+}\setminus \left\{{\alpha }_{i}\right\}$.

 Proof. Say $\alpha =\sum _{j=1}^{n}{m}_{j}{\alpha }_{j}$, so that the ${m}_{j}$ are $\ge 0$ and some ${m}_{j}$, $j\ne i$, is $>0$. Then ${w}_{i}\alpha =\alpha -\alpha \left({h}_{i}\right){\alpha }_{i}$ still has ${m}_{j}>0$ as coefficient of ${\alpha }_{j}$ (for this $j$), hence is a positive root. $\square$

(2.7) Let $w\in W$ be such that $w{\alpha }_{i}={\alpha }_{j}$. Then

1. $w{h}_{i}={h}_{j}$
2. $w{w}_{j}={w}_{j}w$.

 Proof. Say $w={w}_{{i}_{1}}\dots {w}_{{i}_{r}}$. Let $\phi ={\stackrel{\sim }{w}}_{{i}_{1}}\dots {\stackrel{\sim }{w}}_{{i}_{r}}\in \text{Aut}\phantom{\rule{0.2em}{0ex}}\left(𝔤\left(A\right)\right)$. By (2.3) we have $\phi \left(h\right)=w\left(h\right)$ for $h\in 𝔥$. We shall apply $\phi$ to the relation $\left[{e}_{i},{f}_{i}\right]={h}_{i}$. By (2.4), $\phi \left({e}_{i}\right)\in {𝔤}_{w{\alpha }_{i}}={𝔤}_{{\alpha }_{j}}$, hence $\phi \left({e}_{i}\right)=\lambda {e}_{j}\phantom{\rule{2em}{0ex}}\left(\text{some}\phantom{\rule{0.2em}{0ex}}\lambda \in k\right)$ Likewise, $\phi \left({f}_{i}\right)=\mu {f}_{j}\phantom{\rule{2em}{0ex}}\left(\text{some}\phantom{\rule{0.2em}{0ex}}\mu \in k\right)$ Hence $w{h}_{i}=\phi \left({h}_{i}\right)=\left[\phi {e}_{i},\phi {f}_{i}\right]=\left[\lambda {e}_{j},\mu {f}_{j}\right]=\lambda \mu {h}_{j}$ and it remains to see that $\lambda \mu =1$. But this is clear since $\left(w{\alpha }_{i}\right)\left(w{h}_{i}\right)={\alpha }_{i}\left({h}_{i}\right)=2$ and also $\left(w{\alpha }_{i}\right)\left(w{h}_{i}\right)={\alpha }_{j}\left(\lambda \mu {h}_{j}\right)=2\lambda \mu$. Let $h\in 𝔥$. Then $w{w}_{i}\left(h\right)=w\left(h-{\alpha }_{i}\left(h\right){h}_{i}\right)=wh-{\alpha }_{i}\left(h\right)w{h}_{i}$ and ${w}_{j}w\left(h\right)=wh-{\alpha }_{j}\left(wh\right){h}_{j}=wh-{\alpha }_{i}\left(h\right){h}_{j}$ so that (ii) follows from (i). $\square$

We shall next prove that $W$ is a Coxeter group, i.e. that it is generated by ${w}_{1},\dots ,{w}_{n}$ subject only to the relations of the form

${\left({w}_{i}{w}_{j}\right)}^{{m}_{ij}}=1\phantom{\rule{1em}{0ex}}\left(i\ne j\right)$

where ${m}_{ij}$ are positive integers (or $+\infty$). The proof will depend only on the last two propositions (2.6), (2.7).

Each element $w\in W$ can be written (in many ways) as a word in the generators ${w}_{i}$ (recall that ${w}_{i}={w}_{i}^{-1}$), say

$w={w}_{{i}_{1}}{w}_{{i}_{2}}\dots {w}_{{i}_{r}}\phantom{\rule{1em}{0ex}}\left(1\le {i}_{1},\dots ,{i}_{r}\le n\right)$

For a given $w$, the least value of $r$ is called the length $l\left(w\right)$ of $w$, and if $r=l\left(w\right)$, $\left({w}_{{i}_{1}},\dots ,{w}_{{i}_{r}}\right)$ is called a reduced word for $w$.

Observe that:

1. $\left({w}_{{i}_{p}},\dots ,{w}_{{i}_{q}}\right)$ is a reduced word, for all $p,q$ such that $1\le p\le q\le r$
2. $\left({w}_{{i}_{r}},\dots ,{w}_{{i}_{1}}\right)$ is a reduced word for ${w}^{-1}$ (so that $l\left({w}^{-1}\right)=l\left(w\right)$).
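
Lengths and reduced words can be computed mechanically: since $W$ acts faithfully on $Q$ (see the remark after (2.9)), one may enumerate $W$ as a set of integer matrices and read off $l\left(w\right)$ by breadth-first search over words in the generators. A sketch for type ${A}_{2}$ (our construction; for an infinite $W$ one would have to cap the search depth):

```python
# Sketch (our construction): enumerating W(A2) as matrices acting on Q and
# computing the length l(w) of each element by breadth-first search over
# words in the generators w_1, w_2.
A = [[2, -1], [-1, 2]]            # Cartan matrix of type A2

def refl(A, i):
    """Matrix of w_i on Q: w_i(alpha_j) = alpha_j - a_{ij} alpha_i."""
    n = len(A)
    M = [[int(r == c) for c in range(n)] for r in range(n)]
    for j in range(n):
        M[i][j] -= A[i][j]
    return tuple(tuple(row) for row in M)

def mul(M, N):
    n = len(M)
    return tuple(tuple(sum(M[r][k] * N[k][c] for k in range(n))
                       for c in range(n)) for r in range(n))

gens = [refl(A, i) for i in range(2)]
I = ((1, 0), (0, 1))

length = {I: 0}                   # l(1) = 0; each BFS level adds one generator
frontier = [I]
while frontier:
    nxt = []
    for w in frontier:
        for s in gens:
            ws = mul(s, w)
            if ws not in length:
                length[ws] = length[w] + 1
                nxt.append(ws)
    frontier = nxt

assert len(length) == 6           # W(A2) is isomorphic to S3
assert max(length.values()) == 3  # the longest element w_1 w_2 w_1
```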

(2.8) Let $\left({w}_{{i}_{1}},\dots ,{w}_{{i}_{r}}\right)$ be a reduced word for $w\in W$. Then the set $S\left(w\right)$ of positive roots $\alpha$ such that ${w}^{-1}\alpha$ is negative is

$S\left(w\right)=\left\{{\gamma }_{1},\dots ,{\gamma }_{r}\right\}$

where

${\gamma }_{p}={w}_{{i}_{1}}\dots {w}_{{i}_{p-1}}{\alpha }_{{i}_{p}}\phantom{\rule{2em}{0ex}}\left(1\le p\le r\right)$

(so that ${\gamma }_{1}={\alpha }_{{i}_{1}}$).

 Proof. By induction on $r=l\left(w\right)$. For $r=1$ this is (2.6). Assume $r>1$ and write (for convenience of notation) ${s}_{p}$ for ${w}_{{i}_{p}}\phantom{\rule{0.2em}{0ex}}\left(1\le p\le r\right),\phantom{\rule{0.2em}{0ex}}{\beta }_{p}={\alpha }_{{i}_{p}}$, so that ${\gamma }_{p}={s}_{1}\dots {s}_{p-1}{\beta }_{p}$. Since $\left({s}_{1},\dots ,{s}_{r-1}\right)$ is a reduced word, we have ${\gamma }_{p}>0$ for $1\le p\le r-1$, by the inductive hypothesis. Consider ${\gamma }_{r}={s}_{1}\dots {s}_{r-1}{\beta }_{r}={s}_{1}{\gamma }_{r}^{\prime }\phantom{\rule{0.2em}{0ex}}\text{say.}$ Since $\left({s}_{2},\dots ,{s}_{r}\right)$ is a reduced word, we have ${\gamma }_{r}^{\prime }>0$ by ind. hyp., hence by (2.6) either ${\gamma }_{r}>0$ or ${\gamma }_{r}^{\prime }={\beta }_{1}$. But in the latter case ${s}_{2}\dots {s}_{r-1}{\beta }_{r}={\beta }_{1}$, whence by (2.7) ${s}_{2}\dots {s}_{r-1}{s}_{r}={s}_{1}{s}_{2}\dots {s}_{r-1}$ and therefore $w={s}_{1}\dots {s}_{r}={s}_{2}\dots {s}_{r-1}$, contradiction. Hence ${\gamma }_{r}>0$. We have ${w}^{-1}{\gamma }_{p}={s}_{r}\dots {s}_{1}{s}_{1}\dots {s}_{p-1}{\beta }_{p}={s}_{r}\dots {s}_{p}{\beta }_{p}=-{s}_{r}\dots {s}_{p+1}{\beta }_{p}$. But ${s}_{r}\dots {s}_{p+1}{\beta }_{p}>0$, by (i) applied to ${w}^{-1}={s}_{r}\dots {s}_{1}$. Hence ${w}^{-1}{\gamma }_{p}<0$, i.e. ${\gamma }_{p}\in S\left(w\right)$. Conversely, suppose $\gamma >0$ and ${w}^{-1}\gamma <0$. Let ${w}^{\prime }={s}_{1}\dots {s}_{r-1}$. Then $w={w}^{\prime }{s}_{r}$, hence ${s}_{r}{w}^{\prime -1}\gamma <0$, whence by (2.6) either (a) ${w}^{\prime -1}\gamma <0$, in which case (ind. hyp.) $\gamma$ is one of ${\gamma }_{1},\dots ,{\gamma }_{r-1}$; or (b) ${w}^{\prime -1}\gamma ={\beta }_{r}$, in which case $\gamma ={w}^{\prime }{\beta }_{r}={\gamma }_{r}$. So $\gamma$ is one of ${\gamma }_{1},\dots ,{\gamma }_{r}$. $\square$
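
The description of $S\left(w\right)$ in (2.8) is easy to test by machine. The sketch below (Python, our coordinates as before) takes the reduced word $\left({w}_{1},{w}_{2}\right)$ in $W\left({A}_{2}\right)$, computes the ${\gamma }_{p}$, and compares them with the positive roots sent negative by ${w}^{-1}$:

```python
# Sketch (our coordinates): checking (2.8) for the reduced word (w_1, w_2)
# in W(A2).  Vectors are coordinates with respect to the simple roots.
A = [[2, -1], [-1, 2]]

def s(i, v):
    """w_i(v) = v - v(h_i) alpha_i, where v(h_i) = sum_j a_{ij} v_j."""
    coeff = sum(A[i][j] * v[j] for j in range(2))
    w = list(v)
    w[i] -= coeff
    return tuple(w)

pos_roots = [(1, 0), (0, 1), (1, 1)]   # R+ for A2
word = [0, 1]                          # w = w_1 w_2, a reduced word

# gamma_p = w_{i_1} ... w_{i_{p-1}} (alpha_{i_p})
gammas = []
for p, ip in enumerate(word):
    g = tuple(int(j == ip) for j in range(2))   # alpha_{i_p}
    for q in reversed(range(p)):
        g = s(word[q], g)
    gammas.append(g)

def w_inv(v):
    # w^{-1} = w_{i_r} ... w_{i_1}: apply the generators in word order
    for i in word:
        v = s(i, v)
    return v

# S(w) = positive roots sent negative by w^{-1}; a root is negative iff
# some (equivalently, every nonzero) coordinate is negative
S_w = {a for a in pos_roots if any(c < 0 for c in w_inv(a))}
assert set(gammas) == S_w == {(1, 0), (1, 1)}
```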

As a corollary of (2.8) we have

(2.9) If $w\in W$ is such that $w{\alpha }_{i}>0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$, then $w=1$.

 Proof. If $l\left(w\right)=r>0$, let $\left({w}_{{i}_{1}},\dots ,{w}_{{i}_{r}}\right)$ be a reduced word for ${w}^{-1}$; then ${\alpha }_{{i}_{1}}\in S\left({w}^{-1}\right)$ by (2.8), i.e. $w{\alpha }_{{i}_{1}}<0$, contradicting the hypothesis. Hence $r=0$ and therefore $w=1$. $\square$

In particular, it follows that $W$ acts faithfully as a group of permutations of $R$:

$W↪\text{Sym}\phantom{\rule{0.2em}{0ex}}\left(R\right)$.

(2.10)

1. $S\left(w\right)$ is a finite set of cardinality $l\left(w\right)$.
2. $l\left({w}_{j}w\right)=l\left(w\right)+1⇔{\alpha }_{j}\notin S\left(w\right)$,
$l\left({w}_{j}w\right)=l\left(w\right)-1⇔{\alpha }_{j}\in S\left(w\right)$.

 Proof. $S\left(w\right)$ is finite with $\le l\left(w\right)$ elements, by (2.8). It remains to show that ${\gamma }_{1},\dots ,{\gamma }_{r}$ (notation of (2.8)) are all distinct. Suppose then ${\gamma }_{p}={\gamma }_{q}$ for some pair $p<q$. Then ${s}_{1}\dots {s}_{p-1}{\beta }_{p}={s}_{1}\dots {s}_{q-1}{\beta }_{q}$ so that ${\beta }_{p}={s}_{p}\dots {s}_{q-1}{\beta }_{q}$ and hence by (2.7) ${s}_{p}\dots {s}_{q-1}{s}_{q}={s}_{p}·{s}_{p}\dots {s}_{q-1}={s}_{p+1}\dots {s}_{q-1}$, contradiction. Suppose $l\left({w}_{j}w\right)=l\left(w\right)+1$. If $w={s}_{1}\dots {s}_{r}$ as before, then ${w}_{j}{s}_{1}\dots {s}_{r}$ is reduced, hence by (2.8) ${w}^{-1}{w}_{j}{\alpha }_{j}<0$, i.e. ${w}^{-1}{\alpha }_{j}>0$, so that ${\alpha }_{j}\notin S\left(w\right)$. Suppose $l\left({w}_{j}w\right)=l\left(w\right)-1$. Put ${w}^{\prime }={w}_{j}w$, then by what we have just proved (with $w$ replaced by ${w}^{\prime }$) ${w}^{\prime -1}{\alpha }_{j}>0$, hence ${w}^{-1}{\alpha }_{j}<0$, i.e. ${\alpha }_{j}\in S\left(w\right)$. But these are the only two possibilities, because clearly $l\left({w}_{j}w\right)\le l\left(w\right)+1$ and hence (replacing $w$ by ${w}_{j}w$) $l\left(w\right)\le l\left({w}_{j}w\right)+1$. But also $\text{det}\phantom{\rule{0.2em}{0ex}}\left(w\right)={\left(-1\right)}^{l\left(w\right)}$, $\text{det}\phantom{\rule{0.2em}{0ex}}\left({w}_{j}w\right)=-\text{det}\phantom{\rule{0.2em}{0ex}}\left(w\right)$ so that $l\left({w}_{j}w\right)\ne l\left(w\right)$. Hence $\mid l\left({w}_{j}w\right)-l\left(w\right)\mid =1$. $\square$

(2.11) Exchange Lemma Let $w={w}_{{i}_{1}}\dots {w}_{{i}_{r}}$ be a reduced expression, and suppose that $l\left({w}_{j}w\right)<l\left(w\right)$. Then for some $p=1,\dots ,r$ we have

$w={w}_{j}{w}_{{i}_{1}}\dots \stackrel{\wedge }{{w}_{{i}_{p}}}\dots {w}_{{i}_{r}}$

i.e. we can "exchange" ${w}_{j}$ with some ${w}_{{i}_{p}}$.

 Proof. Induction on $r=l\left(w\right)$. If $r=1$ then $w={w}_{j}$ and there is nothing to prove. Assume that $r>1$, and let ${w}^{\prime }={w}_{{i}_{1}}\dots {w}_{{i}_{r-1}}$. If $l\left({w}_{j}{w}^{\prime }\right)<l\left({w}^{\prime }\right)$, apply the inductive hypothesis to ${w}^{\prime }$. If $l\left({w}_{j}{w}^{\prime }\right)>l\left({w}^{\prime }\right)$, then by (2.10) we have ${\alpha }_{j}\notin S\left({w}^{\prime }\right)$, i.e. ${w}^{\prime -1}{\alpha }_{j}>0$, and ${\alpha }_{j}\in S\left(w\right)$, i.e. ${w}_{{i}_{r}}{w}^{\prime -1}{\alpha }_{j}={w}^{-1}{\alpha }_{j}<0$. By (2.6) it follows that ${w}^{\prime -1}{\alpha }_{j}={\alpha }_{{i}_{r}}$, so that ${\alpha }_{j}={w}^{\prime }{\alpha }_{{i}_{r}}$, whence by (2.7) $w={w}^{\prime }{w}_{{i}_{r}}={w}_{j}{w}^{\prime }={w}_{j}{w}_{{i}_{1}}\dots {w}_{{i}_{r-1}}$. $\square$

(2.12) Theorem. $W=W\left(A\right)$ is a Coxeter group, generated by ${w}_{1},\dots ,{w}_{n}$ subject to the relations

${w}_{i}^{2}=1\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$
${\left({w}_{i}{w}_{j}\right)}^{{m}_{ij}}=1\phantom{\rule{0.5em}{0ex}}\left(i\ne j\right)$

where ${m}_{ij}\in \left[2,\infty \right]$ are given in terms of the Cartan matrix $A$ by the following table:

$\begin{array}{ccccccc}{a}_{ij}{a}_{ji}& 0& 1& 2& 3& \ge 4& \left(i\ne j\right)\\ {m}_{ij}& 2& 3& 4& 6& \infty \end{array}$

 Proof. The exchange lemma (2.11) implies that $W$ is a Coxeter group (proof in Bourbaki, LG + LA, Chapter IV). It remains to compute the order of ${w}_{i}{w}_{j}$ in $W$. For the moment, exclude the case ${a}_{ij}{a}_{ji}=4$. Let ${V}_{i}=\text{Ker}\phantom{\rule{0.2em}{0ex}}\left({\alpha }_{i}\right)\subset 𝔥$. Then $V={V}_{i}\cap {V}_{j}$ together with ${h}_{i}$ and ${h}_{j}$ span $𝔥$. For $h=\lambda {h}_{i}+\mu {h}_{j}\in V$ iff ${\alpha }_{i}\left(h\right)={\alpha }_{j}\left(h\right)=0$, i.e. iff $2\lambda +{a}_{ji}\mu =0$ ${a}_{ij}\lambda +2\mu =0$ and these equations have only the solution $\lambda =\mu =0$ (since we are assuming ${a}_{ij}{a}_{ji}\ne 4$). Now ${w}_{i}{w}_{j}$ fixes $V$ pointwise, and on the 2-plane $\pi$ spanned by ${h}_{i}$ and ${h}_{j}$ it acts as follows: $\begin{array}{ccc}{w}_{i}{w}_{j}\left({h}_{i}\right)& =& {w}_{i}\left({h}_{i}-{a}_{ij}{h}_{j}\right)=-{h}_{i}-{a}_{ij}\left({h}_{j}-{a}_{ji}{h}_{i}\right)\\ {w}_{i}{w}_{j}\left({h}_{j}\right)& =& -{w}_{i}{h}_{j}=-{h}_{j}+{a}_{ji}{h}_{i}\end{array}$ and therefore the matrix of ${w}_{i}{w}_{j}\mid \pi$ relative to the basis $\left({h}_{i},{h}_{j}\right)$ is $M=\left(\begin{array}{cc}-1+{a}_{ij}{a}_{ji}& -{a}_{ij}\\ {a}_{ji}& -1\end{array}\right)$ the eigenvalues of which are the roots of the quadratic equation ${\lambda }^{2}+\left(2-{a}_{ij}{a}_{ji}\right)\lambda +1=0$ so we have the following table, in which $\omega =\text{exp}\phantom{\rule{0.2em}{0ex}}\frac{2i\pi }{3}$: $\begin{array}{ccccccc}{a}_{ij}{a}_{ji}& 0& 1& 2& 3& 4& \ge 5\\ \text{eigenvalues}& -1,-1& \omega ,{\omega }^{2}& ±i& -\omega ,-{\omega }^{2}& 1,1& \text{not roots of unity}\\ {m}_{ij}& 2& 3& 4& 6\end{array}$ When ${a}_{ij}{a}_{ji}>4$ the eigenvalues $\lambda ,\stackrel{‾}{\lambda }$ satisfy $\lambda +\stackrel{‾}{\lambda }>2$, hence do not lie on the unit circle in $ℂ$, and therefore cannot be roots of unity; consequently ${w}_{i}{w}_{j}$ has infinite order in this case. 
Finally, when ${a}_{ij}{a}_{ji}=4$, the eigenvalues are both 1, but $M\ne {1}_{2}$ (since ${a}_{ij}\ne 0$); hence ${w}_{i}{w}_{j}\mid \pi$ is unipotent, with ${\left(M-{1}_{2}\right)}^{2}=0$, and therefore again of infinite order. $\square$

Remark. We may summarize the table in (2.12) as follows:

$\text{cos}\phantom{\rule{0.2em}{0ex}}\frac{\pi }{{m}_{ij}}=\text{min}\phantom{\rule{0.2em}{0ex}}\left(1,\frac{1}{2}\sqrt{{a}_{ij}{a}_{ji}}\right)\phantom{\rule{2em}{0ex}}\left(i\ne j\right)$

In particular, $W$ is "crystallographic" – it acts faithfully on the lattice $Q$ hence embeds in $\text{GL}\phantom{\rule{0.2em}{0ex}}\left(n,ℤ\right)$.
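
The table in (2.12) can be verified numerically: the order of ${w}_{i}{w}_{j}$ equals the order of the $2×2$ matrix $M$ from the proof. A sketch (Python; the function name and the cap on the order are ours):

```python
# Sketch (our construction): the order of w_i w_j on the plane spanned by
# h_i, h_j, computed from the 2x2 matrix M in the proof of (2.12).
def order_of_wiwj(aij, aji, cap=12):
    """Return the order of M = matrix of w_i w_j, or None if it exceeds cap."""
    M = ((-1 + aij * aji, -aij), (aji, -1))
    P, k = M, 1
    while k <= cap:
        if P == ((1, 0), (0, 1)):
            return k
        P = tuple(tuple(P[r][0] * M[0][c] + P[r][1] * M[1][c]
                        for c in range(2)) for r in range(2))
        k += 1
    return None

# the table in (2.12): a_{ij}a_{ji} = 0, 1, 2, 3 give m_{ij} = 2, 3, 4, 6
assert order_of_wiwj(0, 0) == 2
assert order_of_wiwj(-1, -1) == 3
assert order_of_wiwj(-1, -2) == 4
assert order_of_wiwj(-1, -3) == 6
assert order_of_wiwj(-2, -2) is None   # a_{ij}a_{ji} = 4: infinite order
```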

## The Tits cone

Let $\left(V,B,{B}^{\vee }\right)$ be a (not necessarily minimal) realization of the Cartan matrix $A$ over $ℝ$. The fundamental chamber is

$C=\left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\alpha }_{i}\left(x\right)\ge 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)\right\}$;

the transforms $wC$ of $C$ under the elements of the Weyl group are the chambers; and the union of all the chambers

$X=\bigcup _{w\in W}wC$

is called the Tits cone.

Define a partial ordering on $V$:

$x\ge y\phantom{\rule{0.5em}{0ex}}\text{iff}\phantom{\rule{0.5em}{0ex}}x-y=\sum _{i=1}^{n}{\lambda }_{i}{h}_{i}\phantom{\rule{0.5em}{0ex}}\text{with all}\phantom{\rule{0.5em}{0ex}}{\lambda }_{i}\ge 0$.

(2.13)

1. $C$ is a fundamental domain for the action of $W$ on $X$ (and hence $W$ acts simply transitively on the set of chambers).
2. Let $x\in C$ and let ${W}_{x}=\left\{w\in W\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}wx=x\right\}$ be the isotropy group of $x$ in $W$. Then ${W}_{x}$ is generated by the fundamental reflections ${w}_{i}$ it contains.
3. Let $x\in V$. Then $x\in C$ iff $x\ge wx$ for all $w\in W$.
4. Let $x\in V$. Then $x\in X$ iff $\alpha \left(x\right)\ge 0$ for almost all $\alpha \in {R}^{+}$ (i.e. for all but finitely many $\alpha \in {R}^{+}$). (Hence $X$ is a convex cone).

Proof.

(i) and (ii) will both follow from the

Lemma. Let $w\in W$, and let ${w}_{{i}_{1}}\dots {w}_{{i}_{r}}$ be a reduced word for $w$. If $x\in C$ and $wx\in C$, then ${w}_{{i}_{r}}x=x$.

 Proof. For we have, putting $y=wx$, $\begin{array}{cc}w{\alpha }_{{i}_{r}}\left(y\right)={\alpha }_{{i}_{r}}\left(x\right)\ge 0& \left(1\right)\end{array}$ since $x\in C$; but also $w{\alpha }_{{i}_{r}}=-{w}_{{i}_{1}}\dots {w}_{{i}_{r-1}}{\alpha }_{{i}_{r}}<0$ ((2.2), (2.8)), hence $\begin{array}{cc}w{\alpha }_{{i}_{r}}\left(y\right)\le 0& \left(2\right)\end{array}$ since $y\in C$. From (1) and (2) it follows that ${\alpha }_{{i}_{r}}\left(x\right)=0$, hence ${w}_{{i}_{r}}x=x$. $\square$

1. Clearly each $W$–orbit in $X$ meets $C$, by definition. If $x\in C$ and $y=wx\in C$ (with $w={w}_{{i}_{1}}\dots {w}_{{i}_{r}}$ as above) then $y={w}_{{i}_{1}}\dots {w}_{{i}_{r-1}}x$ by the lemma. By induction on $r=l\left(w\right)$ we conclude that $y=x$.
2. Let $w\in {W}_{x}$, with $w$ as above. Then by the lemma ${w}_{{i}_{r}}\in {W}_{x}$, and hence by induction on $r=l\left(w\right)$ we have ${w}_{{i}_{1}},\dots ,{w}_{{i}_{r}}\in {W}_{x}$.
3. Suppose $x\in C$. Induction on $l\left(w\right)$ (again). If $l\left(w\right)=1$ then $w={w}_{i}$, and $x-{w}_{i}x={\alpha }_{i}\left(x\right){h}_{i}\ge 0$. Now let $l\left(w\right)>1$, then we can write $w={w}^{\prime }{w}_{i}$ with $l\left({w}^{\prime }\right)=l\left(w\right)-1$, and we have

$\begin{array}{ccc}x-wx& =& \left(x-{w}^{\prime }x\right)+{w}^{\prime }\left(x-{w}_{i}x\right)\\ & =& \left(x-{w}^{\prime }x\right)+{\alpha }_{i}\left(x\right){w}^{\prime }{h}_{i}\text{.}\end{array}$

By (2.8) (applied to the dual root system) we have ${w}^{\prime }{h}_{i}\ge 0$, whence ${\alpha }_{i}\left(x\right){w}^{\prime }{h}_{i}\ge 0$; also $x-{w}^{\prime }x\ge 0$ by the inductive hypothesis, hence $x-wx\ge 0$ as desired.

Conversely, if $x\ge wx$ for all $w\in W$, then in particular $x\ge {w}_{i}x\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$, whence ${\alpha }_{i}\left(x\right){h}_{i}\ge 0$ and therefore ${\alpha }_{i}\left(x\right)\ge 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$, i.e. $x\in C$.

4. For each $x\in V$ let

$M\left(x\right)=\left\{\alpha \in {R}^{+}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\alpha \left(x\right)<0\right\}$

and let ${X}^{\prime }=\left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}M\left(x\right)\phantom{\rule{0.2em}{0ex}}\text{is finite}\right\}$. We have to prove that ${X}^{\prime }=X$.

Let $x\in V,\phantom{\rule{0.2em}{0ex}}w\in W$. If $\alpha \in M\left(wx\right)$, then $\alpha \in {R}^{+}$ and $\left({w}^{-1}\alpha \right)\left(x\right)=\alpha \left(wx\right)<0.$ Either ${w}^{-1}\alpha >0$, in which case ${w}^{-1}\alpha \in M\left(x\right)$, i.e. $\alpha \in wM\left(x\right)$; or else ${w}^{-1}\alpha <0$, i.e. $\alpha \in S\left(w\right)$ in the notation of (2.8). Thus

$M\left(wx\right)\subset wM\left(x\right)\cup S\left(w\right)$.

Now $S\left(w\right)$ is finite (2.10), hence $x\in {X}^{\prime }⇒M\left(x\right)$ finite $⇒M\left(wx\right)$ finite $⇒wx\in {X}^{\prime }$. Thus ${X}^{\prime }$ is $W$–stable; but clearly $C\subset {X}^{\prime }$, hence $X\subset {X}^{\prime }$.

Conversely, let $x\in {X}^{\prime }$. We shall show that $x\in X$ by induction on $r=\text{Card}\phantom{\rule{0.2em}{0ex}}M\left(x\right)$. If $r=0$ then $\alpha \left(x\right)\ge 0$ for all $\alpha \in {R}^{+}$, hence $x\in C\subset X$. If $r\ge 1$ we have ${\alpha }_{i}\left(x\right)<0$ for some index $i$, i.e. ${\alpha }_{i}\in M\left(x\right)$. But then

$\begin{array}{ccc}{w}_{i}\alpha \in M\left({w}_{i}x\right)& ⇔& {w}_{i}\alpha >0\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}\alpha \left(x\right)<0\\ & ⇔& \alpha \in {R}^{+},\phantom{\rule{0.2em}{0ex}}\alpha \ne {\alpha }_{i},\phantom{\rule{0.2em}{0ex}}\alpha \left(x\right)<0\phantom{\rule{1em}{0ex}}\text{by (2.6)}\\ & ⇔& \alpha \in M\left(x\right),\phantom{\rule{0.2em}{0ex}}\alpha \ne {\alpha }_{i}\end{array}$

from which it follows that $\mid M\left({w}_{i}x\right)\mid =\mid M\left(x\right)\mid -1$. By the inductive hypothesis, ${w}_{i}x\in X$, hence $x\in {w}_{i}X=X$.

$\square$
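
Part (iii) of (2.13) can be tested in a small case. The sketch below (Python, our coordinates: type ${A}_{2}$, with $x$ written in the basis $\left({h}_{1},{h}_{2}\right)$ of the minimal realization, so that ${\alpha }_{i}\left(x\right)=\sum _{j}{a}_{ji}{x}_{j}$) checks that a point of $C$ dominates its whole $W$-orbit in the ordering, and that a point outside $C$ does not:

```python
# Sketch (our coordinates): property (2.13)(iii) for type A2, with x written
# in the basis (h_1, h_2), alpha_i(x) = sum_j a_{ji} x_j, and the dual action
# w_i(x) = x - alpha_i(x) h_i.
A = [[2, -1], [-1, 2]]

def alpha(i, x):
    return sum(A[j][i] * x[j] for j in range(2))

def s(i, x):
    y = list(x)
    y[i] -= alpha(i, x)
    return tuple(y)

def weyl_orbit(x):
    """All wx for w in W(A2), by closing up under the generators."""
    orbit, frontier = {x}, [x]
    while frontier:
        nxt = []
        for z in frontier:
            for i in range(2):
                y = s(i, z)
                if y not in orbit:
                    orbit.add(y)
                    nxt.append(y)
        frontier = nxt
    return orbit

def geq(x, y):
    """x >= y iff x - y is a nonnegative combination of h_1, h_2."""
    return all(x[k] >= y[k] for k in range(2))

def in_C(x):
    return all(alpha(i, x) >= 0 for i in range(2))

x = (2, 1)                 # alpha_1(x) = 3, alpha_2(x) = 0, so x lies in C
assert in_C(x) and all(geq(x, y) for y in weyl_orbit(x))

z = (-1, 2)                # alpha_1(z) = -4 < 0, so z is not in C
assert not in_C(z) and not all(geq(z, y) for y in weyl_orbit(z))
```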

(2.14) The following conditions are equivalent:

1. $W$ is finite;
2. $X=V$;
3. $R$ is finite;
4. ${R}^{\vee }$ is finite.

 Proof. (i) $⇒$ (ii) Let $x\in V$. The orbit $Wx$ of $x$ is finite; let $y=wx$ be a maximal element of this orbit for the ordering on $V$. I claim that $y\in C$; for if not, then ${\alpha }_{i}\left(y\right)<0$ for some $i$, whence ${w}_{i}y=y-{\alpha }_{i}\left(y\right){h}_{i}>y$, impossible. Thus $wx\in C$ and therefore $x\in {w}^{-1}C\subset X$. Thus $X=V$. (ii) $⇒$ (iii) Choose $x\in V$ such that ${\alpha }_{i}\left(x\right)<0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$. Then $\alpha \left(x\right)<0$ for all $\alpha \in {R}^{+}$; but $x\in X$, hence $\alpha \left(x\right)\ge 0$ for almost all $\alpha \in {R}^{+}$. It follows that ${R}^{+}$ is finite, hence so is $R$. (iii) $⇒$ (i) because $W$ acts faithfully on $R$: $W\subset \text{Sym}\phantom{\rule{0.2em}{0ex}}\left(R\right)$ (remark following (2.9)). (i) $⇔$ (iv) because $R,{R}^{\vee }$ have the same Weyl group. $\square$

Let $J$ be any subset of the index set $\left\{1,2,\dots ,n\right\}$. Then, with ${B}_{J}=\left\{{\alpha }_{j}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}j\in J\right\}$ and ${B}_{J}^{\vee }=\left\{{h}_{j}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}j\in J\right\}$, the triple $\left(V,{B}_{J},{B}_{J}^{\vee }\right)$ is a realization of the principal submatrix ${A}_{J}={\left({a}_{ij}\right)}_{i,j\in J}$. Define ${C}_{J},{X}_{J},{W}_{J}$ in the obvious way:

$\begin{array}{ccc}{C}_{J}& :=& \left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\alpha }_{j}\left(x\right)\ge 0\phantom{\rule{0.2em}{0ex}}\text{for}\phantom{\rule{0.2em}{0ex}}j\in J\right\}\\ {W}_{J}& :=& ⟨{w}_{j}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}j\in J⟩,\\ {X}_{J}& :=& \bigcup _{w\in {W}_{J}}w{C}_{J}\text{.}\end{array}$

Also let

${V}^{J}:=\left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\alpha }_{j}\left(x\right)=0,\phantom{\rule{0.2em}{0ex}}\text{all}\phantom{\rule{0.2em}{0ex}}j\in J\right\}={V}^{{W}_{J}}$.

We have then

(2.15)

1. ${C}_{J}=C+{V}^{J}$;
2. ${X}_{J}=X+{V}^{J}$.

 Proof. Let $x\in {C}_{J}$. There exists $y\in C$ such that ${\alpha }_{j}\left(y\right)={\alpha }_{j}\left(x\right)$, all $j\in J$, so that $x-y\in {V}^{J}$; thus $x=y+\left(x-y\right)\in C+{V}^{J}$. The reverse inclusion is obvious, because ${V}^{J}$ and $C$ are both contained in ${C}_{J}$. If $w\in {W}_{J}$, then $w{C}_{J}=wC+w{V}^{J}\phantom{\rule{0.2em}{0ex}}\text{(by (i))}\phantom{\rule{0.2em}{0ex}}=wC+{V}^{J}\subset X+{V}^{J}$, hence certainly ${X}_{J}\subset X+{V}^{J}$. Conversely, let $x\in X,v\in {V}^{J}$; then $\alpha \left(x\right)\ge 0$ for almost all $\alpha \in {R}^{+}$ by (2.13)(iv), and $\beta \left(v\right)=0$ for all $\beta \in {R}_{J}^{+}$. Hence $\beta \left(x+v\right)\ge 0$ for almost all $\beta \in {R}_{J}^{+}$, hence by (2.13)(iv) again $x+v\in {X}_{J}$. $\square$

(2.16) Let $x\in X$. Then $x\in \stackrel{˚}{X}$ iff ${W}_{x}$ is finite. ($\stackrel{˚}{X}=$ interior of $X$)

 Proof. We may assume that $x\in C$, because ${W}_{wx}=w{W}_{x}{w}^{-1}\left(w\in W\right)$. Let $J=\left\{j\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\alpha }_{j}\left(x\right)=0\right\}$, then ${W}_{x}={W}_{J}$ by (2.13)(ii), and $x\in {V}^{J}$. Suppose $x\in \stackrel{˚}{X}$. Let $v\in V$, then $y=x+\lambda v\in X$ for some $\lambda >0$, hence $\lambda v=y-x\in X+{V}^{J}={X}_{J}$ by (2.15). It follows that ${X}_{J}=V$ and hence by (2.14) that ${W}_{x}={W}_{J}$ is finite. Conversely, suppose that ${W}_{x}$ is finite, so that by (2.14) ${X}_{J}=V$. Let $U=\left\{y\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\alpha }_{i}\left(wy\right)>0\phantom{\rule{0.2em}{0ex}}\text{for all}\phantom{\rule{0.2em}{0ex}}w\in {W}_{J}\phantom{\rule{0.2em}{0ex}}\text{and all}\phantom{\rule{0.2em}{0ex}}i\notin J\right\}$. Then $x\in U$, because ${\alpha }_{i}\left(wx\right)={\alpha }_{i}\left(x\right)>0$ (since $x\in C$ and $i\notin J$); $U$ is open, because it is a finite intersection of open half-spaces; and $U\subset X$. For if $y\in U$, then $y\in w{C}_{J}$ for some $w\in {W}_{J}$ (because ${X}_{J}=V$); then $z={w}^{-1}y\in U\cap {C}_{J}$, whence ${\alpha }_{i}\left(z\right)>0$ for $i\notin J$, and ${\alpha }_{j}\left(z\right)\ge 0$ for $j\in J$. Thus $z\in C$ and therefore $y=wz\in X$. Hence $x$ is an interior point of $X$. $\square$

## Classification of Cartan matrices

Let $V$ be a finite-dimensional real vector space. Recall that a non-empty subset $K$ of $V$ is a convex cone if $K$ is closed under addition and multiplication by non-negative scalars:

$x,y\in K⇒x+y\in K\phantom{\rule{1em}{0ex}};\phantom{\rule{1em}{0ex}}x\in K,\lambda \ge 0⇒\lambda x\in K$.

Examples

1. Any vector subspace of $V$ is a convex cone.
2. Let ${x}_{1},\dots ,{x}_{r}\in V$. Then the set $K$ of all linear combinations $\sum _{i=1}^{r}{\lambda }_{i}{x}_{i}$ with scalars ${\lambda }_{i}\ge 0$ is a closed convex cone, the cone generated by ${x}_{1},\dots ,{x}_{r}$.

Let $K$ be a closed convex cone in $V$, and let ${V}^{*}$ be the dual of $V$. The dual cone ${K}^{*}\subset {V}^{*}$ is defined by

${K}^{*}=\left\{\xi \in {V}^{*}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\xi \left(x\right)\ge 0\phantom{\rule{0.2em}{0ex}}\text{all}\phantom{\rule{0.2em}{0ex}}x\in K\right\}$

Clearly ${K}^{*}$ is a closed convex cone (it is an intersection of closed half-spaces). The basic fact we shall need is the

Duality theorem $K$ is the dual of ${K}^{*}$, i.e.

$K=\left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\xi \left(x\right)\ge 0\phantom{\rule{0.2em}{0ex}}\text{for all}\phantom{\rule{0.2em}{0ex}}\xi \in {K}^{*}\right\}$.

 Proof. Let $K^{\prime}$ denote the r.h.s. above. Clearly $x\in K\Rightarrow x\in K^{\prime}$. The nontrivial part is to prove that $x\notin K\Rightarrow x\notin K^{\prime}$, i.e. that if $x\notin K$ there exists $\xi\in K^{*}$ such that $\xi(x)<0$; or equivalently that if $x\notin K$ there exists a hyperplane $H=\text{Ker}(\xi)$ in $V$ which separates $x$ and $K$. Let $\langle y,z\rangle$ be a positive definite scalar product on $V$: write $\|y\|=\langle y,y\rangle^{1/2}$ and $d(y,z)=\|y-z\|$, the usual Euclidean metric. We shall show that there is a unique point $z\in K$ for which $d(x,z)$ is minimal, and that the hyperplane (through 0) perpendicular to $x-z$ separates $x$ from $K$. Let $S$ be the unit sphere in $V$. For each $t\in S$, let $\phi(t)$ denote the shortest distance from $x$ to the ray $\mathbb{R}^{+}t$, so that by Pythagoras $\phi(t)=\begin{cases}\sqrt{\|x\|^2-\langle x,t\rangle^2}&\text{if}\ \langle x,t\rangle\ge 0\\ \|x\|&\text{if}\ \langle x,t\rangle\le 0\end{cases}$ Clearly $\phi$ is a continuous function on $S$. Now $S\cap K$ is compact (because $S$ is compact and $K$ is closed), hence $\phi$ attains its lower bound on $S\cap K$, i.e. there exists $z\in K$ for which $d(x,z)$ is a minimum ($>0$, since $x\notin K$). Moreover this $z$ is unique, for if $d(x,z_1)$ and $d(x,z_2)$ are both minimal, and $z_1\ne z_2$, then $d(x,z_3)$ is strictly smaller, where $z_3=\frac{1}{2}(z_1+z_2)\in K$. Let $y=x-z$; then $\langle x,y\rangle=\|y\|^2>0$. I claim that $\langle u,y\rangle\le 0$ for all $u\in K$. For if $u\in K$ is such that $\langle u,y\rangle>0$, then the angle $xzu$ is acute; hence if $z^{\prime}$ is the foot of the perpendicular from $x$ to the segment $zu$, we have $z^{\prime}\in K$ (convexity) and $d(x,z^{\prime})<d(x,z)$, contradiction.
Now define $\xi\in V^{*}$ by $\xi(u)=-\langle u,y\rangle$; then $\xi\in K^{*}$ and $\xi(x)<0$. $\square$

We shall require the following consequence of the duality theorem:

Lemma 1 Let $A=\left({a}_{ij}\right)$ be any real $m×n$ matrix. Consider the systems of linear inequalities $\left(1\le i\le m$, $1\le j\le n\right)$

$\begin{array}{cc}\text{(1)}& {x}_{j}>0,\phantom{\rule{1em}{0ex}}\sum _{j}{a}_{ij}{x}_{j}<0;\\ \text{(2)}& {y}_{i}\ge 0,\phantom{\rule{1em}{0ex}}\sum _{i}{a}_{ij}{y}_{i}\ge 0\text{.}\end{array}$

Then either (1) has a solution or (2) has a nontrivial solution.

 Proof. Let $e_0,\dots,e_n$ be the standard basis of $\mathbb{R}^{n+1}$ and let $K$ be the closed convex cone generated by the $m+n$ vectors $e_0+e_j$ $(1\le j\le n)$, $e_0-\sum_j a_{ij}e_j$ $(1\le i\le m)$. We have $\begin{array}{ccc}e_0\in K&\Leftrightarrow&\exists\ \text{scalars}\ \lambda_i,\mu_j\ge 0\ \text{such that}\ e_0=\sum_j\mu_j\left(e_0+e_j\right)+\sum_i\lambda_i\left(e_0-\sum_j a_{ij}e_j\right)\\ &\Leftrightarrow&\exists\ \text{scalars}\ \lambda_i,\mu_j\ge 0\ \text{such that}\ \sum_i a_{ij}\lambda_i=\mu_j,\ \sum\lambda_i+\sum\mu_j=1\\ &\Leftrightarrow&\text{(2) has a nontrivial solution.}\end{array}$ If on the other hand $e_0\notin K$, then by the duality theorem there is a linear form $\xi$ on $\mathbb{R}^{n+1}$ which is negative at $e_0$ and $\ge 0$ at all points of $K$. But then $x_j=\xi(e_j)$ $(1\le j\le n)$ satisfy the inequalities (1). $\square$
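Both branches of the alternative in Lemma 1 can be checked mechanically. Below is a minimal Python sketch (not part of the notes; the matrices and witnesses are illustrative, and the helper names are ours) that verifies a proposed witness for each system:

```python
def holds_1(A, x):
    # System (1): x_j > 0 for all j, and (A x)_i < 0 for all i.
    return (all(xj > 0 for xj in x) and
            all(sum(a * xj for a, xj in zip(row, x)) < 0 for row in A))

def holds_2(A, y):
    # System (2): y_i >= 0, not all zero, and sum_i a_ij y_i >= 0 for all j.
    m, n = len(A), len(A[0])
    if not (all(yi >= 0 for yi in y) and any(y)):
        return False
    return all(sum(A[i][j] * y[i] for i in range(m)) >= 0 for j in range(n))

# For A1, system (1) has the explicit solution x = (1, 1);
# for A2, system (2) has the nontrivial solution y = (1,).
A1 = [[-1, 0], [0, -1]]
A2 = [[1, 1]]
assert holds_1(A1, (1, 1))
assert holds_2(A2, (1,))
```

(The two systems are never simultaneously solvable: if $x>0$, $Ax<0$ and $y\ge 0$ is nontrivial with $A^ty\ge 0$, then $y^tAx$ would be both $<0$ and $\ge 0$.)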

Until further notice, $A=\left({a}_{ij}\right)$ will be a real $n×n$ matrix satisfying

$\begin{array}{cc}\left(✶\right)& {a}_{ij}\le 0\phantom{\rule{0.2em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}i\ne j;\phantom{\rule{0.5em}{0ex}}{a}_{ij}=0⇒{a}_{ji}=0\text{.}\end{array}$

If $A$ satisfies these conditions, so does its transpose ${A}^{t}$.

Assume also that $A$ is indecomposable.

Let $V={ℝ}^{n}$ be the space of column vectors $x={\left({x}_{1},\dots ,{x}_{n}\right)}^{t}$. Write $x\ge 0$ (resp. $x>0$) to mean ${x}_{i}\ge 0$ (resp. ${x}_{i}>0$) for $1\le i\le n$. Define the closed convex cones

$\begin{array}{ccc}P&=&\left\{x\in V : x\ge 0\right\}\quad\text{(the positive orthant)}\\ K&=&\left\{x\in V : Ax\ge 0\right\}\end{array}$

The interior of $P$ is

$\stackrel{˚}{P}=\left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}x>0\right\}$.

Lemma 2 $K\cap P\subset \stackrel{˚}{P}\cup \left\{0\right\}$.

 Proof. Let $x\in K\cap P$; suppose ${x}_{i}=0$ for $i\in I$, ${x}_{i}>0$ for $i\in J$ (where $I,J$ are complementary subsets of $\left\{1,2,\dots ,n\right\}$). Then we have $\sum _{j\in J}{a}_{ij}{x}_{j}=\sum _{j=1}^{n}{a}_{ij}{x}_{j}\ge 0$ for all $i$, in particular for $i\in I$. But ${x}_{j}>0$ and ${a}_{ij}\le 0$, whence ${a}_{ij}=0$ for all $\left(i,j\right)\in I×J$. Since $A$ is indecomposable, it follows that either $I=\varnothing$, in which case $x\in \stackrel{˚}{P}$; or $J=\varnothing$, in which case $x=0$. $\square$

As regards $K$ and $P$, there are two possibilities:

1. $K\cap P\ne \left\{0\right\}$;
2. $K\cap P=\left\{0\right\}$

Case (a).$\phantom{\rule{1em}{0ex}}$ By Lemma 2 we have $K\cap \partial P=\left\{0\right\}$. Now $K$ is connected (because convex) and we have

$K=\left(K\cap \stackrel{˚}{P}\right)\cup \left(K\cap \partial P\right)\cup \left(K\cap {P}^{\prime }\right)\phantom{\rule{1em}{0ex}}\text{(disjoint union)}$

where ${P}^{\prime }$ is the complement of $P$ in $V$, so that

$K-\left\{0\right\}=\left(K\cap \stackrel{˚}{P}\right)\cup \left(K\cap {P}^{\prime }\right)$

as a union of disjoint relatively open sets, of which $K\cap \stackrel{˚}{P}$ is not empty (by Lemma 2). Hence there are two possibilities:

$\left({\text{a}}^{\prime }\right)\phantom{\rule{0.5em}{0ex}}K-\left\{0\right\}$ is connected, in which case $K\cap {P}^{\prime }=\varnothing$, i.e. $K\subset P$ so that (Lemma 2) $K\subset \stackrel{˚}{P}\cup \left\{0\right\}$

$\left({\text{a}}^{\prime\prime}\right)\phantom{\rule{0.5em}{0ex}}K-\left\{0\right\}$ is not connected, in which case $K$ is a line in $V$ (i.e. a 1-dimensional subspace). (For if $x,y$ lie in different components of $K-\left\{0\right\}$, the line segment $\left[x,y\right]$ is contained in $K$, hence must pass through 0, whence $y=-\lambda x$ for some $\lambda>0$. If now $z\in K-\left\{0\right\}$, then either $z$ and $x$ lie in different components, whence (as above) $z=-\mu x$, $\mu>0$; or $z$ and $y$ lie in different components, in which case $z=-\mu y=\lambda\mu x$. Thus $K=\mathbb{R}x$.)

Let $N=\left\{x\in V\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}Ax=0\right\}$ be the null-space of $A$. Clearly $N\subset K$.

In case $\left({\text{a}}^{\prime }\right)$, $K$ contains no vector subspace $\ne 0$ of $V$, because $P$ clearly doesn't. Hence $N=0$, i.e. $A$ is nonsingular. In this case we say that $A$ is of positive type.

In case $\left({\text{a}}^{\prime \prime }\right)$ we have $K=N$. For if $x\in K$, then $Ax\ge 0$; but also $-x\in K$, whence $Ax\le 0$ and therefore $Ax=0$, i.e. $x\in N$. So $K\subset N$ and therefore $K=N$. Hence $A$ is singular, of rank $n-1$. We say that $A$ is of zero type.

In either case $\left({\text{a}}^{\prime }\right)$ or $\left({\text{a}}^{\prime \prime }\right)$, the inequalities

$x>0,\phantom{\rule{0.5em}{0ex}}Ax<0$

have no solution. For if $Ax<0$ then $-x\in K$, which in case $\left({\text{a}}^{\prime }\right)$ implies either $x=0$ or $-x>0$, and in case $\left({\text{a}}^{\prime \prime }\right)$ implies $x\in N$, i.e. $Ax=0$. By Lemma 1, therefore, the inequalities

$x\ge 0,\phantom{\rule{0.5em}{0ex}}{A}^{t}x\ge 0$

have a nontrivial solution, i.e. we have

${K}^{t}\cap P\ne \left\{0\right\}$

where $K^t=\left\{x\in V : A^tx\ge 0\right\}$. It follows that the matrix $A^t$ satisfies (a), and is therefore of positive or zero type according as $A$ is (because $\det A^t=\det A$). Hence the conditions $\left({\text{a}}^{\prime}\right)$ and $\left({\text{a}}^{\prime\prime}\right)$, and therefore also (b), are stable under transposition.

Case (b).$\phantom{\rule{1em}{0ex}}$In this case $K\cap P=\left\{0\right\}$ and therefore also ${K}^{t}\cap P=\left\{0\right\}$, i.e. the inequalities

$x\ge 0,\phantom{\rule{0.5em}{0ex}}{A}^{t}x\ge 0$

have only the trivial solution. Hence, by Lemma 1, the inequalities

$x>0,\phantom{\rule{0.5em}{0ex}}Ax<0$

have a solution. We say that $A$ (and ${A}^{t}$) is of negative type.

To summarize:

(2.17) Theorem (Vinberg) Each indecomposable real $n×n$ matrix $A=\left({a}_{ij}\right)$ satisfying the conditions

$\begin{array}{cc}\left(✶\right)& {a}_{ij}\le 0\phantom{\rule{0.2em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}i\ne j;\phantom{\rule{0.5em}{0ex}}{a}_{ij}=0⇔{a}_{ji}=0\end{array}$

belongs to exactly one of the following three categories:

$\left(+\right)$ $A$ is nonsingular, and $Ax\ge 0 \Rightarrow x>0$ or $x=0$ (positive type)

$\left(0\right)$ $\text{rank}\left(A\right)=n-1$, and $Ax\ge 0 \Rightarrow Ax=0$ (zero type)

$\left(-\right)$ $\exists\, x_0>0$ such that $Ax_0<0$; $x\ge 0$ and $Ax\ge 0 \Rightarrow x=0$ (negative type)

The transposed matrix ${A}^{t}$ belongs to the same category as $A$.

Moreover, $A$ is of positive (resp. zero, negative) type iff $\exists {x}_{0}>0$ such that $A{x}_{0}>0$ (resp. $A{x}_{0}=0$, $A{x}_{0}<0$).

 Proof. Only the last sentence requires comment. Observe that $K$ is a finite intersection of closed half-spaces and therefore its interior $\stackrel{˚}{K}$ is the intersection of the corresponding open half-spaces, i.e. $\stackrel{˚}{K}=\left\{x\in V : Ax>0\right\}$, and is nonempty if $A$ is nonsingular (because the columns of $A$ are then a basis of $V$). If $A$ is of positive type, we have $K\subset\stackrel{˚}{P}\cup\left\{0\right\}$, hence $\stackrel{˚}{K}\subset\stackrel{˚}{P}$, and $\stackrel{˚}{K}$ is nonempty, whence $\exists\, x_0>0$ with $Ax_0>0$. If $A$ is of zero type, then $N=K$ intersects $\stackrel{˚}{P}$, hence $\exists\, x_0>0$ with $Ax_0=0$. Finally, if $A$ is of negative type, then $\exists\, x_0>0$ with $Ax_0<0$, from above. Conversely, let $x>0$. If $Ax>0$, then $A$ cannot be of zero or negative type, hence is of positive type. If $Ax=0$, then $A$ cannot be of negative type or of positive type (because $\det\left(A\right)=0$), hence is of zero type. Finally, if $Ax<0$, the vector $y=-x$ satisfies $Ay>0$, $y<0$, whence $A$ is not of positive or zero type, hence of negative type. $\square$
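The last sentence of (2.17) is effectively an algorithm: for an indecomposable $A$ satisfying $\left(✶\right)$, exhibiting one $x_0>0$ for which $Ax_0$ has a uniform sign settles the type. A minimal sketch (the function name is ours; exact rational arithmetic avoids rounding):

```python
from fractions import Fraction

def vinberg_type(A, x0):
    """Classify A by the sign of A*x0 for a proposed witness x0 > 0.

    Returns '+', '0' or '-' if A*x0 is componentwise > 0, = 0 or < 0,
    and None if the sign is mixed (this x0 then proves nothing)."""
    assert all(x > 0 for x in x0)
    y = [sum(Fraction(a) * x for a, x in zip(row, x0)) for row in A]
    if all(v > 0 for v in y):
        return '+'
    if all(v == 0 for v in y):
        return '0'
    if all(v < 0 for v in y):
        return '-'
    return None

assert vinberg_type([[2, -1], [-1, 2]], [1, 1]) == '+'   # A_2: positive type
assert vinberg_type([[2, -2], [-2, 2]], [1, 1]) == '0'   # zero type
assert vinberg_type([[2, -3], [-3, 2]], [1, 1]) == '-'   # negative type
```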

(2.18) If $A$ is indecomposable and of positive or zero type, then for every proper subset $J$ of $\left\{1,2,\dots ,n\right\}$ the principal submatrix ${A}_{J}$ has all its components of positive type.

 Proof. We may assume that ${A}_{J}$ is indecomposable. By (2.17) $\exists x\in V$ such that $x>0$ and $Ax\ge 0$. Let ${x}_{J}={\left({x}_{j}\right)}_{j\in J}$; then ${x}_{J}>0$, and ${A}_{J}{x}_{J}$ has components $\sum _{j\in J}{a}_{ij}{x}_{j}=\sum _{j=1}^{n}{a}_{ij}{x}_{j}+\sum _{j\notin J}\left(-{a}_{ij}\right){x}_{j}\phantom{\rule{2em}{0ex}}\left(i\in J\right)$. The first sum on the right is $\ge 0$, and so is the second, since $-{a}_{ij}\ge 0$ for $i\in J$, $j\notin J$, and ${x}_{j}>0$. Moreover the second sum is 0 iff ${a}_{ij}=0$ for all $j\notin J$. Since $A$ is indecomposable, $\exists i\in J$ and $j\notin J$ such that ${a}_{ij}\ne 0$, hence at least one of the components of the vector ${A}_{J}{x}_{J}$ is $>0$. Thus we have ${x}_{J}>0,\phantom{\rule{0.5em}{0ex}}{A}_{J}{x}_{J}\ge 0,\phantom{\rule{0.5em}{0ex}}{A}_{J}{x}_{J}\ne 0$ whence by (2.17) ${A}_{J}$ is of positive type. $\square$

## Symmetrizability

A matrix $A$ satisfying the conditions $\left(✶\right)$ is said to be symmetrizable if there exists a diagonal matrix $D=\left(\begin{array}{ccc}{d}_{1}& & \\ & \ddots & \\ & & {d}_{n}\end{array}\right)$ with ${d}_{i}>0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$ such that $DA{D}^{-1}$ is symmetric.

$\begin{array}{cc}\text{(a)}& \begin{array}{ccc}DA{D}^{-1}\phantom{\rule{0.2em}{0ex}}\text{symmetric}& ⇔& {D}^{2}A\phantom{\rule{0.2em}{0ex}}\left(=D·DA{D}^{-1}·D\right)\phantom{\rule{0.2em}{0ex}}\text{symmetric}\\ & ⇔& A{D}^{-2}\phantom{\rule{0.2em}{0ex}}\text{symmetric}\end{array}\end{array}$

Hence $A$ is symmetrizable iff $A$ can be made symmetric by multiplication (on the left or on the right) by a positive diagonal matrix;

(b) If $A$ is symmetrizable and $B=DA{D}^{-1}$ is symmetric, then $B$ is uniquely determined by $A$. For

$b_{ij}=d_ia_{ij}d_j^{-1}=b_{ji}=d_ja_{ji}d_i^{-1}$

so that

${b}_{ij}^{2}={a}_{ij}{a}_{ji}$

and therefore (as ${b}_{ij}\le 0$ if $i\ne j$)

${b}_{ij}=-\sqrt{{a}_{ij}{a}_{ji}}\phantom{\rule{0.5em}{0ex}}\left(i\ne j\right);\phantom{\rule{0.5em}{0ex}}{b}_{ii}={a}_{ii}$.

Call $B$ the symmetrization of $A$: notation ${A}^{s}$.
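The formulas above determine $A^s$ numerically from $A$; a small sketch (the helper name `symmetrization` is ours):

```python
from math import sqrt

def symmetrization(A):
    """A^s: b_ii = a_ii and b_ij = -sqrt(a_ij * a_ji) for i != j.

    Note a_ij * a_ji >= 0, since the off-diagonal entries are <= 0."""
    n = len(A)
    return [[A[i][j] if i == j else -sqrt(A[i][j] * A[j][i])
             for j in range(n)] for i in range(n)]

# C_2 (a_12 = -1, a_21 = -2): the symmetrization has b_12 = b_21 = -sqrt(2)
As = symmetrization([[2, -1], [-2, 2]])
assert As[0][1] == As[1][0] == -sqrt(2)
```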

(c) If $A$ is indecomposable and symmetrizable, the diagonal matrix $D$ above is unique up to a positive scalar multiple. For if $DA{D}^{-1}={D}^{\prime }A{D}^{\prime -1}$, then with $E={D}^{-1}{D}^{\prime }$ we have $EA{E}^{-1}=A$, i.e. ${e}_{i}{a}_{ij}{e}_{j}^{-1}={a}_{ij}$ and therefore ${e}_{i}={e}_{j}$ whenever ${a}_{ij}\ne 0$, i.e. whenever $i$ and $j$ are joined by an edge in the graph $\Gamma$ of $A$. Since $\Gamma$ is connected (1.11) it follows that all the ${e}_{i}$ are equal, i.e. $E=\lambda {1}_{n}$.

Let $A=\left({a}_{ij}\right)$ be a matrix satisfying $\left(✶\right)$, $\Gamma$ its graph. Let

$p=\left({i}_{0},{i}_{1},\dots ,{i}_{r}\right)$

be a path in $\Gamma$, i.e. ${i}_{0},\dots ,{i}_{r}$ are vertices of $\Gamma$ such that ${i}_{s-1}{i}_{s}\phantom{\rule{0.2em}{0ex}}\left(1\le s\le r\right)$ is an edge, so that ${i}_{s-1}\ne {i}_{s}$ and ${a}_{{i}_{s-1}{i}_{s}}\ne 0$. Define

${a}_{p}={a}_{{i}_{0}{i}_{1}}{a}_{{i}_{1}{i}_{2}}\dots {a}_{{i}_{r-1}{i}_{r}}$

so that $a_p\ne 0$ and has the sign of $\left(-1\right)^r$. If $p^{-1}=\left(i_r,i_{r-1},\dots,i_0\right)$ is the reverse path, then $a_p/a_{p^{-1}}$ is positive.

Notice that

1. ${a}_{pq}={a}_{p}{a}_{q}$ whenever $pq$ is defined, i.e. whenever the endpoint of $p$ is the origin of $q$;
2. if $B=DA{D}^{-1}$ ($D$ diagonal, as above), i.e. ${b}_{ij}={d}_{i}{a}_{ij}{d}_{j}^{-1}$, then ${b}_{p}={d}_{{i}_{0}}{a}_{p}{d}_{{i}_{r}}^{-1}$. In particular, ${b}_{p}={a}_{p}$ if ${i}_{r}={i}_{0}$, i.e. if $p$ is a loop.

(2.19) $A$ is symmetrizable $⇔{a}_{p}={a}_{{p}^{-1}}$ for each loop $p$ in $\Gamma$.

In particular, $A$ is symmetrizable if $\Gamma$ is a tree.

 Proof. $\Rightarrow$: $\exists$ a diagonal matrix $D$ such that $B=DAD^{-1}$ is symmetric: $b_{ij}=d_ia_{ij}d_j^{-1}$, whence $a_p=b_p=b_{p^{-1}}=a_{p^{-1}}$ for any loop $p$ in $\Gamma$ (the middle equality because $B$ is symmetric). $\Leftarrow$: Fix an index $i_0$. For each $i\in\left[1,n\right]$ there is at least one path $p$ from $i_0$ to $i$ in $\Gamma$ (we may assume $A$ is indecomposable, i.e. $\Gamma$ connected). If $q$ is another such path, then $pq^{-1}$ is a loop, hence $a_pa_{q^{-1}}=a_{pq^{-1}}=a_{qp^{-1}}=a_qa_{p^{-1}}$, so that $a_p/a_{p^{-1}}=a_q/a_{q^{-1}}$. Hence we may define unambiguously $e_i=a_p/a_{p^{-1}}>0$ (in particular, $e_{i_0}=1$). I claim that $e_ia_{ij}=e_ja_{ji}$. This is trivially true if $i=j$, and also if $i$ and $j$ are not joined by an edge in $\Gamma$, for then both sides are 0. If $\left(ij\right)$ is an edge in $\Gamma$, and if $p=\left(i_0,\dots,i\right)$ is a path from $i_0$ to $i$ in $\Gamma$, then $q=\left(i_0,\dots,i,j\right)$ is a path from $i_0$ to $j$, and $a_q=a_pa_{ij}$, so that $e_j=a_q/a_{q^{-1}}=a_pa_{ij}/a_{ji}a_{p^{-1}}=e_ia_{ij}/a_{ji}$. Hence the matrix $\left(e_ia_{ij}\right)$ is symmetric, i.e. $A$ is symmetrizable. $\square$
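The $\Leftarrow$ half of the proof is constructive: fix a base vertex, propagate $e_i=a_p/a_{p^{-1}}$ along paths (a breadth-first search over $\Gamma$ suffices), then check $e_ia_{ij}=e_ja_{ji}$ on every edge — the check is exactly where a loop with $a_p\ne a_{p^{-1}}$ fails. A hedged sketch for indecomposable integer matrices (function name ours):

```python
from collections import deque
from fractions import Fraction

def symmetrizer(A):
    """Return positive rationals e_i with e_i*a_ij = e_j*a_ji,
    or None if A (integer entries, satisfying (*), assumed
    indecomposable) is not symmetrizable."""
    n = len(A)
    e = [None] * n
    e[0] = Fraction(1)
    queue = deque([0])
    while queue:                                  # BFS over the graph of A
        i = queue.popleft()
        for j in range(n):
            if i != j and A[i][j] != 0 and e[j] is None:
                # by (*), a_ij != 0 implies a_ji != 0, so this is defined
                e[j] = e[i] * Fraction(A[i][j], A[j][i])
                queue.append(j)
    if any(v is None for v in e):                 # A was decomposable
        return None
    # verify symmetry of (e_i a_ij) on every pair; loops can fail here
    for i in range(n):
        for j in range(n):
            if e[i] * A[i][j] != e[j] * A[j][i]:
                return None
    return e

# a tree graph is always symmetrizable (e.g. the chain B_3) ...
assert symmetrizer([[2, -1, 0], [-1, 2, -1], [0, -2, 2]]) is not None
# ... but a 3-cycle with a_p != a_{p^{-1}} is not
assert symmetrizer([[2, -1, -1], [-2, 2, -1], [-1, -1, 2]]) is None
```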

Terminology. For a Cartan matrix (indecomposable)

$\begin{array}{ccc}\text{finite type}& =& \text{positive type}\\ \text{affine type}& =& \text{zero type}\\ \text{indefinite type}& =& \text{negative type}\end{array}$

Assume from now on that $A$ is an indecomposable Cartan matrix. Then all the previous results are applicable.

(2.20) Let $A$ be an indecomposable Cartan matrix of finite or affine type. Then either the graph $\Gamma$ of $A$ is a tree, or else $\Gamma$ is a cycle with $n\ge 3$ vertices and

$A=\left(\begin{array}{ccccc}2& -1& & & -1\\ -1& 2& -1& & \\ & -1& 2& & \\ & & & \ddots & -1\\ -1& & & -1& 2\end{array}\right)$

(i.e. ${a}_{ij}=0$ if $\mid i-j\mid \ge 2;\phantom{\rule{0.2em}{0ex}}-1\phantom{\rule{0.2em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}\mid i-j\mid =1;\phantom{\rule{0.2em}{0ex}}2\phantom{\rule{0.2em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}i=j$: indices mod $n$). This $A$ is of affine/zero type.

 Proof. Suppose $\Gamma$ contains a cycle. By considering a cycle in $\Gamma$ with the least possible number of vertices, we see that $A$ has a principal submatrix of the form $B=\left(\begin{array}{cccccc}2&a_{12}&&&&a_{1m}\\ a_{21}&2&a_{23}&&&\\ &a_{32}&2&&&\\ &&&\ddots&&\\ &&&&&a_{m-1,m}\\ a_{m1}&&&&a_{m,m-1}&2\end{array}\right)$ for some $m\le n$ (after a permutation of the indices) with $a_{ij}\le -1$ when $i-j\equiv\pm 1\ \left(\text{mod}\ m\right)$. By (2.18), $B$ is of finite or affine type, hence $\exists x>0$ such that $Bx\ge 0$: i.e. $\exists x_i>0$ $\left(1\le i\le m\right)$ such that $a_{i,i-1}x_{i-1}+2x_i+a_{i,i+1}x_{i+1}\ge 0\quad\left(1\le i\le m\right)$ (where suffixes are read modulo $m$). Add these inequalities: $\begin{array}{cc}\sum_{i=1}^{m}\left(a_{i+1,i}+2+a_{i-1,i}\right)x_i\ge 0&\text{(1)}\end{array}$ Now the $a$'s are integers $\le -1$, hence $a_{i+1,i}+2+a_{i-1,i}\le 0$. Since the $x_i$ are $>0$ we must have equality throughout; so the $a$'s are all $-1$, and $Bx=0$, whence $B$ is of affine/zero type and therefore (2.18) $B=A$. $\square$
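For the cycle matrix of (2.20) the zero-type claim can be confirmed directly: every row sums to $2-1-1=0$, so $A\cdot\left(1,\dots,1\right)^t=0$ and the criterion of (2.17) applies. A small sketch (the constructor name is ours):

```python
def cycle_cartan(n):
    """n x n Cartan matrix of an n-cycle (n >= 3): 2 on the diagonal,
    -1 where i - j = +-1 (mod n), 0 elsewhere."""
    return [[2 if i == j else (-1 if (i - j) % n in (1, n - 1) else 0)
             for j in range(n)] for i in range(n)]

# the all-ones vector lies in the kernel, so the matrix is of zero type
for n in range(3, 8):
    assert all(sum(row) == 0 for row in cycle_cartan(n))
```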

(2.21) Let $A$ be an indecomposable Cartan matrix of finite/positive or affine/zero type. Then $A$ is symmetrizable, and its symmetrization ${A}^{s}$ is positive definite or positive semidefinite (of corank 1) according as $A$ is of finite/positive or affine/zero type.

 Proof. The first assertion follows from (2.19) and (2.20). By (2.17) $\exists x>0$ such that $Ax\ge 0$, hence if $A^s=DAD^{-1}$ we have $y=Dx>0$ and $A^sy\ge 0$; more precisely, $A^sy>0$ if $A$ is of positive type, $A^sy=0$ if $A$ is of zero type. It follows that $\left(A^s+\lambda 1_n\right)y>0$ for all scalars $\lambda>0$, hence $A^s+\lambda 1_n$ is of positive type for all $\lambda>0$, hence is nonsingular (2.17). Thus the eigenvalues of $A^s$ (which are real, because $A^s$ is a real symmetric matrix) are all $\ge 0$. If $A$ is of positive type, they are all $>0$, whence $A^s$ is positive definite; and if $A$ is of zero type, 0 is an eigenvalue of $A^s$ with multiplicity 1 (because $A^s$ has rank $n-1$), and all other eigenvalues of $A^s$ are $>0$. $\square$

Remark. If $A$ is a symmetrizable indecomposable Cartan matrix of negative type, then ${A}^{s}$ is indefinite. For since ${a}_{ii}=2$ the quadratic form ${x}^{t}{A}^{s}x$ takes positive values, and since by (2.17) $\exists x>0$ with $Ax<0$, the vector $y=Dx$ satisfies $y>0$ and ${A}^{s}y<0$, whence ${y}^{t}{A}^{s}y<0$. Hence, for symmetrizable $A$

$\begin{array}{ccc}A\phantom{\rule{0.2em}{0ex}}\text{is of positive type}& ⇔& {A}^{s}\phantom{\rule{0.2em}{0ex}}\text{is positive definite}\\ A\phantom{\rule{0.2em}{0ex}}\text{is of zero type}& ⇔& {A}^{s}\phantom{\rule{0.2em}{0ex}}\text{is positive semidefinite (corank 1)}\\ A\phantom{\rule{0.2em}{0ex}}\text{is of negative type}& ⇔& {A}^{s}\phantom{\rule{0.2em}{0ex}}\text{is indefinite.}\end{array}$

(2.23) Let $A$ be an indecomposable Cartan matrix in which all proper principal minors (i.e. $\text{det}\phantom{\rule{0.2em}{0ex}}{A}_{J}$ for all $J\ne \left\{1,2,\dots ,n\right\}$) are positive. Then

1. all cofactors of $A$ are $>0$
2. $A$ is of positive (zero, negative) type according as $\text{det}\phantom{\rule{0.2em}{0ex}}A$ is positive (zero, negative).

 Proof. (i) Let $A_{ij}$ be the cofactor of $a_{ij}$ in $A$. We may assume that $i\ne j$. The expansion of $\det\left(A\right)$ is $\begin{array}{cc}\det\left(A\right)=\sum_{w\in S_n}\varepsilon\left(w\right)a_{1w\left(1\right)}\dots a_{nw\left(n\right)}&\text{(1)}\end{array}$ summed over all permutations of $\left\{1,2,\dots,n\right\}$. Write $w$ as a product of disjoint cycles, say $w=c_1c_2\dots$. Now $A_{ij}$ is the coefficient of $a_{ji}$ in the expansion (1), hence we have to consider all cycles $c$ such that $c\left(j\right)=i$: say $c=\left(j,i,i_1,\dots,i_{r-1}\right)$, and correspondingly the product $\left(-1\right)^ra_{ii_1}a_{i_1i_2}\dots a_{i_{r-1}j}$. This product will be zero unless $p=\left(i,i_1,\dots,i_{r-1},j\right)$ is a simple path (i.e. with no repeated vertices) from $i$ to $j$ in the graph $\Gamma$ of $A$; and then it will be equal to $\left(-1\right)^ra_p=\mid a_p\mid\,>0$, in the notation introduced before (2.19). Thus it follows from (1) that $\begin{array}{cc}A_{ij}=\sum_p\mid a_p\mid\det\left(A_{J\left(p\right)}\right)&\text{(2)}\end{array}$ where the sum is over all simple paths $p$ from $i$ to $j$ in $\Gamma$, and $J\left(p\right)$ is the complement of the set of vertices of $p$. Each term in the sum (2) is $>0$, and the sum is not empty, because $\Gamma$ is connected. Hence $A_{ij}>0$. (ii) By (2.17), $A$ is of positive (zero, negative) type according as $\exists x>0$ with $Ax>0\ \left(=0,<0\right)$. Let $y=Ax$; then premultiplication by the matrix of cofactors gives $\sum_{j=1}^{n}A_{ji}y_j=x_i\det A$. Since $x_i>0$ and $A_{ji}>0$, this gives the result. $\square$

(2.24) Let $A$ be an indecomposable Cartan matrix. Then the following are equivalent:

1. $A$ is of positive (resp. zero) type
2. all proper principal minors of $A$ are $>0$, and $\text{det}\phantom{\rule{0.2em}{0ex}}A>0$ (resp. $\text{det}\phantom{\rule{0.2em}{0ex}}A=0$)

 Proof. (a) $\Rightarrow$ (b) If $A$ is of positive type, its symmetrization $A^s$ is positive definite. Hence each principal minor of $A$, being equal to the corresponding principal minor of $A^s$, is positive. In particular, $\det A>0$. If $A$ is of zero type, we know (2.17) that $\det\left(A\right)=0$ and (2.18) that each indecomposable proper principal submatrix $A_J$ is of positive type, hence $\det A_J>0$ from above. (b) $\Rightarrow$ (a) follows from (2.23). $\square$
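Together, (2.23) and (2.24) give a purely determinantal classification of an indecomposable Cartan matrix: if some proper principal minor is $\le 0$ then by (2.18) $A$ cannot be of positive or zero type, hence is of negative type; otherwise the sign of $\det A$ decides. A sketch in exact integer arithmetic (function names ours, practical only for small $n$):

```python
from itertools import combinations

def det(M):
    # exact determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:]
                                          for row in M[1:]])
               for j in range(len(M)))

def cartan_type(A):
    """'finite', 'affine' or 'indefinite', for indecomposable A, via (2.24)."""
    n = len(A)
    for r in range(1, n):
        for J in combinations(range(n), r):
            if det([[A[i][j] for j in J] for i in J]) <= 0:
                return 'indefinite'
    d = det(A)
    return 'finite' if d > 0 else ('affine' if d == 0 else 'indefinite')

assert cartan_type([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]) == 'finite'    # A_3
assert cartan_type([[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]) == 'affine'  # 3-cycle
assert cartan_type([[2, -3], [-3, 2]]) == 'indefinite'
```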


## Classification of indecomposable Cartan matrices of positive or zero type

If $A=\left(a_{ij}\right)$ is an indecomposable Cartan matrix of positive or zero type, then for each pair $i,j$ $\left(i\ne j\right)$ we have by (2.18) and (2.24)

$\mid \begin{array}{cc}2& {a}_{ij}\\ {a}_{ji}& 2\end{array}\mid \ge 0$

(with equality only when $n=2$ and $A$ is of zero type). Thus

$0\le {a}_{ij}{a}_{ji}\le 4$.

## Dynkin diagram

This is a fancier version of the graph $\Gamma$ of $A$ that we defined earlier. In the Dynkin diagram $\Delta$ of $A$ the vertices $i,j$ $\left(i\ne j\right)$ are connected by $\max\left(\mid a_{ij}\mid,\mid a_{ji}\mid\right)$ lines, with an arrow pointing towards $i$ if $\mid a_{ij}\mid>\mid a_{ji}\mid$. Thus the possibilities are given in the following table:

| $\mid a_{ij}\mid$ | $\mid a_{ji}\mid$ | bond between $i$ and $j$ |
| --- | --- | --- |
| 0 | 0 | no edge |
| 1 | 1 | single line |
| 1 | 2 | double line, arrow towards $j$ |
| 2 | 1 | double line, arrow towards $i$ |
| 1 | 3 | triple line, arrow towards $j$ |
| 3 | 1 | triple line, arrow towards $i$ |
| 1 | 4 | quadruple line, arrow towards $j$ |
| 2 | 2 | double line, no arrow |
| 4 | 1 | quadruple line, arrow towards $i$ |

We do not attempt to define a Dynkin diagram for Cartan matrices in which ${a}_{ij}{a}_{ji}>4$ for some pairs $i,j$.

The table above shows that $\Delta$ determines $A$ uniquely. Observe also that the Dynkin diagram of ${A}^{t}$ is obtained from that of $A$ by reversing all arrows; also that $A$ is indecomposable iff $\Delta$ is connected.
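The translation from $\left(a_{ij},a_{ji}\right)$ to the bond is mechanical; a small sketch (names ours) implementing the rule stated above:

```python
def bond(a_ij, a_ji):
    """Dynkin-diagram bond between vertices i and j: the number of
    lines is max(|a_ij|, |a_ji|), with an arrow towards the vertex
    whose |a| is larger. Only meaningful when a_ij * a_ji <= 4."""
    lines = max(abs(a_ij), abs(a_ji))
    if abs(a_ij) > abs(a_ji):
        arrow = 'i'
    elif abs(a_ji) > abs(a_ij):
        arrow = 'j'
    else:
        arrow = None          # |a_ij| = |a_ji|: no arrow
    return lines, arrow

assert bond(0, 0) == (0, None)    # no edge
assert bond(-1, -1) == (1, None)  # single line
assert bond(-1, -2) == (2, 'j')   # double line, arrow towards j
assert bond(-3, -1) == (3, 'i')   # triple line, arrow towards i
```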

We shall say that $\Delta$ is of finite type (resp. affine type) according as $A$ is.

(2.25) Theorem.

1. The connected Dynkin diagrams of finite type (resp. affine type) are exactly those listed in Table F (resp. Table A).
2. The integers ${a}_{i}$ attached to the vertices of the diagrams in Table A are the components of the unique vector $\delta =\left({a}_{1},\dots ,{a}_{n}\right)>0$ such that $A\delta =0$ and the ${a}_{i}$ are positive relatively prime integers.

 Proof. We begin by verifying the last statement. The equation $A\delta=0$, i.e. $\sum_{j=1}^{n}a_{ij}a_j=0\quad\left(1\le i\le n\right)$ can be rewritten as follows: $2a_i=\sum_j m_{ij}a_j$ where the sum is over all the vertices $j$ in $\Delta$ joined directly to $i$, and $m_{ij}=\mid a_{ij}\mid$, i.e. $m_{ij}$ is the number of lines joining $i$ and $j$ if the arrow (if any) points towards $i$, and $m_{ij}=1$ otherwise [except if $A=\left(\begin{array}{cc}2&-2\\-2&2\end{array}\right)$, $\Delta={\stackrel{\sim }{A}}_{1}$, where $m_{12}=m_{21}=2$]. Then (ii) is easily checked diagram by diagram. It follows from (2.17) that all the diagrams in Table A are of affine type. Since each diagram in Table F occurs as a subdiagram of one in Table A, it follows from (2.18) that all diagrams in Table F are of finite type. $\square$
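As an illustration of (ii), take ${\stackrel{\sim }{G}}_{2}$: with the Cartan matrix built from the $n=3$ data $\left(a,b,c,d\right)=\left(1,1,1,3\right)$ of the classification below (an assumption of this sketch) and marks $\delta=\left(1,2,3\right)$, one checks $A\delta=0$ and that the marks are relatively prime:

```python
from math import gcd

# Cartan matrix of affine G_2, from the n = 3 case (a,b,c,d) = (1,1,1,3)
A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -3, 2]]
delta = (1, 2, 3)   # the marks attached to the vertices in Table A

# A * delta = 0, and the marks are relatively prime positive integers
assert all(sum(a * d for a, d in zip(row, delta)) == 0 for row in A)
assert gcd(gcd(delta[0], delta[1]), delta[2]) == 1
assert all(d > 0 for d in delta)
```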

Table F

$A_l$ $\left(l\ge 1\right)$; $B_l$ $\left(l\ge 3\right)$; $C_l$ $\left(l\ge 2\right)$; $D_l$ $\left(l\ge 4\right)$; $E_l$ $\left(l=6,7,8\right)$; $F_4$; $G_2$. (The diagrams themselves are not reproduced here.)

(The number of vertices is $n=l$.)

Table A

(The diagrams are not reproduced here; for each, the attached integers $a_i$ are listed.)

- ${\stackrel{\sim }{A}}_{1}$: $1,1$
- ${\stackrel{\sim }{A}}_{l}$ $\left(l\ge 2\right)$: $1,1,\dots,1$
- ${\stackrel{\sim }{B}}_{l}$ $\left(l\ge 3\right)$: $1,1,2,2,\dots,2$
- ${\stackrel{\sim }{B}}_{l}^{\vee}$ $\left(l\ge 3\right)$: $1,1,2,\dots,2,1$
- ${\stackrel{\sim }{C}}_{l}$ $\left(l\ge 2\right)$: $1,2,\dots,2,1$
- ${\stackrel{\sim }{C}}_{l}^{\vee}$ $\left(={D}_{l+1}^{\left(2\right)}\right)$ $\left(l\ge 2\right)$: $1,1,\dots,1$
- ${\stackrel{\sim }{D}}_{l}$ $\left(l\ge 4\right)$: $1,1,2,\dots,2,1,1$
- ${\stackrel{\sim }{E}}_{6}$: $1,2,3,2,1,2,1$
- ${\stackrel{\sim }{E}}_{7}$: $1,2,3,4,3,2,1,2$
- ${\stackrel{\sim }{E}}_{8}$: $1,2,3,4,5,6,4,2,3$
- ${\stackrel{\sim }{F}}_{4}$: $1,2,3,4,2$
- ${\stackrel{\sim }{F}}_{4}^{\vee}$ $\left(={E}_{6}^{\left(2\right)}\right)$: $1,2,3,2,1$
- ${\stackrel{\sim }{G}}_{2}$: $1,2,3$
- ${\stackrel{\sim }{G}}_{2}^{\vee}$ $\left(={D}_{4}^{\left(3\right)}\right)$: $1,2,1$
- ${\stackrel{\sim }{BC}}_{1}$ $\left(={A}_{2}^{\left(2\right)}\right)$: $1,2$
- ${\stackrel{\sim }{BC}}_{l}$ $\left(={A}_{2l}^{\left(2\right)}\right)$ $\left(l\ge 2\right)$: $1,2,2,\dots,2$

(The number of vertices is $n=l+1$.)

If $A$ is of (finite) type $X$, then $\text{det}\phantom{\rule{0.2em}{0ex}}\left(A\right)$ is the number of 1's in the diagram $\stackrel{\sim }{X}$. The 1's form a single orbit under the group $\text{Aut}\phantom{\rule{0.2em}{0ex}}\left(\stackrel{\sim }{X}\right)$.

We now have to prove the converse, namely that every connected diagram $\Delta$ of finite or affine type occurs in Table F or Table A. Let $n$ be the number of vertices in $\Delta$.

If $n=1$, the only possibility is $\Delta ={A}_{1}$.

If $n=2$, we have already enumerated the possibilities: ${A}_{2},{C}_{2},{G}_{2}$ of finite type, and ${\stackrel{\sim }{A}}_{1}$, ${\stackrel{\sim }{BC}}_{1}$ of affine type.

If $n=3$, either $\Delta$ is a tree or $\Delta$ is ${\stackrel{\sim }{A}}_{2}$ (by (2.20)). If $\Delta$ is a tree then

$A=\left(\begin{array}{ccc}2& -a& 0\\ -b& 2& -c\\ 0& -d& 2\end{array}\right)$

where $a,b,c,d$ are positive integers; $ab\le 3$, $cd\le 3$ by (2.18) and (2.24), and $\det A=2\left(4-ab-cd\right)\ge 0$, so that $ab+cd\le 4$; more precisely, $ab+cd=2$ or 3 if $\Delta$ is of finite type, $ab+cd=4$ if $\Delta$ is of affine type. So the possibilities are

| $a$ | $b$ | $c$ | $d$ | $ab+cd$ | $\Delta$ |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 2 | $A_3$ |
| 1 | 1 | 1 | 2 | 3 | $B_3$ |
| 1 | 1 | 2 | 1 | 3 | $C_3$ |
| 1 | 1 | 1 | 3 | 4 | ${\stackrel{\sim }{G}}_{2}$ |
| 1 | 1 | 3 | 1 | 4 | ${\stackrel{\sim }{G}}_{2}^{\vee}$ |
| 1 | 2 | 1 | 2 | 4 | ${\stackrel{\sim }{BC}}_{2}$ |
| 1 | 2 | 2 | 1 | 4 | ${\stackrel{\sim }{C}}_{2}$ |
| 2 | 1 | 1 | 2 | 4 | ${\stackrel{\sim }{C}}_{2}^{\vee}$ |
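The eight rows can be reproduced by brute-force enumeration of $\left(a,b,c,d\right)$ under the constraints just derived, identifying each chain with its reversal $\left(a,b,c,d\right)\mapsto\left(d,c,b,a\right)$; a short sketch:

```python
from itertools import product

def canon(t):
    a, b, c, d = t
    return min(t, (d, c, b, a))   # identify a chain with its reversal

# positive integers with ab <= 3, cd <= 3 and ab + cd <= 4
solutions = sorted({canon(t) for t in product(range(1, 4), repeat=4)
                    if t[0] * t[1] <= 3 and t[2] * t[3] <= 3
                    and t[0] * t[1] + t[2] * t[3] <= 4})
assert len(solutions) == 8        # exactly the eight rows of the table
assert (1, 1, 1, 3) in solutions  # e.g. the affine G_2 row
```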

From (2.18) we have:

$\left(✶\right)$ If a subdiagram of $\Delta$ occurs in Table A, then it is the whole of $\Delta$. Hence to show that $\Delta \in$ Table A it is enough to show that some subdiagram of $\Delta$ is in Table A.

1. If $\Delta$ is not a tree then by (2.20) $\Delta ={\stackrel{\sim }{A}}_{l}$ $\left(l\ge 2\right)$. So we may assume that $\Delta$ is a tree, and that $n\ge 4$.

2. If $\Delta$ contains multiple bonds, they are double bonds. For otherwise $\Delta$ contains either ${\stackrel{\sim }{A}}_{1}$ or ${\stackrel{\sim }{BC}}_{1}$, in which case $n=2$ by $\left(✶\right)$; or $\Delta$ contains a connected proper subdiagram of 3 nodes containing a triple bond, which (see above) can only be ${\stackrel{\sim }{G}}_{2}$ or ${\stackrel{\sim }{G}}_{2}^{\vee}$, and therefore $n=3$, by $\left(✶\right)$ again.

If $\Delta$ contains two or more double bonds, it contains a subdiagram of type ${\stackrel{\sim }{C}}_{l}$ or ${\stackrel{\sim }{C}}_{l}^{\vee }$ or ${\stackrel{\sim }{BC}}_{l}$ $\left(l\ge 2\right)$, hence by $\left(✶\right)$ this subdiagram is the whole of $\Delta$.

So we may assume that $\Delta$ contains at most one double bond.

3. Suppose now that $\Delta$ has at least one branch point. If it contains a double bond as well, then it contains a subdiagram of type ${\stackrel{\sim }{B}}_{l}$ or ${\stackrel{\sim }{B}}_{l}^{\vee}$, and again by $\left(✶\right)$ this subdiagram is the whole of $\Delta$. So assume now that $\Delta$ is simply-laced (i.e. no multiple bonds). If there is more than one branch point, then $\Delta$ contains ${\stackrel{\sim }{D}}_{l}$ $\left(l\ge 5\right)$ as a subdiagram, hence again this is the whole of $\Delta$. So we may assume that $\Delta$ has only one branch point. If there are $\ge 4$ edges issuing from this point, then $\Delta$ contains ${\stackrel{\sim }{D}}_{4}$ as a subdiagram, which is therefore the whole of $\Delta$. If there are 3 edges issuing from the branch point, let $p,q,r$ be the number of vertices of $\Delta$ on the three arms, where $p\le q\le r$ (and $p+q+r=n-1$).

If $p\ge 2$ then $\Delta$ contains ${\stackrel{\sim }{E}}_{6}$, hence by $\left(✶\right)$ $\Delta ={\stackrel{\sim }{E}}_{6}$.
If $p=1,q\ge 3$ then $\Delta$ contains ${\stackrel{\sim }{E}}_{7}$, hence $\Delta ={\stackrel{\sim }{E}}_{7}$.
If $p=1,q=2,r\ge 5$ then $\Delta$ contains ${\stackrel{\sim }{E}}_{8}$, hence $\Delta ={\stackrel{\sim }{E}}_{8}$.
If $p=1,q=2,r=2,3,4$ then $\Delta ={E}_{6},{E}_{7},{E}_{8}$ respectively.
If $p=q=1,r\ge 1$ then $\Delta ={D}_{l}$ $\left(l\ge 4\right)$.

4. Finally assume $\Delta$ has no branch points, hence is a chain. If $\Delta$ is simply-laced, then $\Delta ={A}_{l}$ $\left(l\ge 4\right)$. The remaining possibility is that $\Delta$ contains just one double bond. Suppose there are $p$ nodes on one side of the double bond, $q$ on the other, where $p\ge q$.

If $q\ge 2$ then $\Delta$ contains ${\stackrel{\sim }{F}}_{4}$ as a proper subdiagram, contradicting $\left(✶\right)$.
If $q=1,p\ge 2$ then $\Delta$ contains ${\stackrel{\sim }{F}}_{4}$ or ${\stackrel{\sim }{F}}_{4}^{\vee }$ as a subdiagram. Now apply $\left(✶\right)$.
If $q=1,p=1$ we obtain $\Delta ={F}_{4}$; and if $q=0$ (i.e. the double bond is at one end of the chain) we obtain ${B}_{l}$ or ${C}_{l}$.

Explanation of notation: Let $X$ be any of the symbols ${A}_{n},\dots ,{G}_{2};\phantom{\rule{0.2em}{0ex}}R$ a (finite) root system of type $X$; ${\alpha }_{1},\dots ,{\alpha }_{l}$ the simple roots, ${\alpha }_{0}$ the lowest root (i.e. $\text{ht}\phantom{\rule{0.2em}{0ex}}\left({\alpha }_{0}\right)$ is minimum). Let

${a}_{ij}=⟨{\alpha }_{i}^{\vee },{\alpha }_{j}⟩\phantom{\rule{1em}{0ex}}\left(0\le i,j\le l\right)$

Then $A={\left({a}_{ij}\right)}_{1\le i,j\le l}$ is the Cartan matrix of type $X$ (finite type) and $\stackrel{\sim }{A}={\left({a}_{ij}\right)}_{0\le i,j\le l}$ is the Cartan matrix of type $\stackrel{\sim }{X}$ (affine type).

We have ${\alpha }_{0}=-\sum _{1}^{l}{a}_{j}{\alpha }_{j}$ say, with the ${a}_{i}$ positive integers, i.e.

$\sum _{0}^{l}{a}_{j}{\alpha }_{j}=0$

if we define ${a}_{0}=1$; but then

$\sum _{0}^{l}{a}_{j}{a}_{ij}=\sum _{0}^{l}{a}_{j}{\alpha }_{j}\left({h}_{i}\right)=0\phantom{\rule{1em}{0ex}}\left(0\le i\le l\right)$

showing that $\stackrel{\sim }{A}$ is of affine type and the ${a}_{i}\phantom{\rule{0.2em}{0ex}}\left(0\le i\le l\right)$ are the labels attached to the nodes of the diagram of type $\stackrel{\sim }{X}$ in Table A.
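The defining property of the labels — that the label vector spans the kernel of the affine Cartan matrix — is easy to check numerically. Here is a short Python sketch (not in the original notes) using two standard affine Cartan matrices; the matrices and labels are the usual ones for types ${\stackrel{\sim }{A}}_{2}$ and ${\stackrel{\sim }{D}}_{4}$.

```python
def kernel_check(A, a):
    """Check that sum_j a_j * A[i][j] == 0 for every row i of A."""
    return all(sum(A[i][j] * a[j] for j in range(len(a))) == 0
               for i in range(len(A)))

# Affine A~2: the diagram is a 3-cycle, all bonds simple; labels (1, 1, 1).
A_tilde_A2 = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
assert kernel_check(A_tilde_A2, [1, 1, 1])

# Affine D~4: branch node 0 joined to four end nodes; labels (2, 1, 1, 1, 1),
# with the 2 on the branch node.
A_tilde_D4 = [[2 if i == j else 0 for j in range(5)] for i in range(5)]
for leaf in range(1, 5):
    A_tilde_D4[0][leaf] = A_tilde_D4[leaf][0] = -1
assert kernel_check(A_tilde_D4, [2, 1, 1, 1, 1])
```

By contrast, a Cartan matrix of finite type is nonsingular, so no nonzero label vector can pass this check.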

Remark. The classification theorem (2.22) gives a complete list of the indecomposable Cartan matrices of finite or affine type. Any Cartan matrix not in this list is therefore of indefinite (i.e. negative) type, so the Cartan matrices of indefinite type form a vast unclassified residue. However, there are two subclasses which can be explicitly classified: an indecomposable Cartan matrix $A$ of indefinite type is said to be hyperbolic (resp. strictly hyperbolic) if every proper principal submatrix ${A}_{J}$ has all its components of finite or affine type (resp. of finite type). All the $2×2$ Cartan matrices $\left(\begin{array}{cc}2& {a}_{12}\\ {a}_{21}& 2\end{array}\right)$ with ${a}_{12}{a}_{21}>4$ are of strictly hyperbolic type; apart from these there are only finitely many, and their Dynkin diagrams can be written down explicitly.
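In the $2×2$ case the trichotomy finite/affine/indefinite is governed by $\text{det}\phantom{\rule{0.2em}{0ex}}A=4-{a}_{12}{a}_{21}$, so the claim about ${a}_{12}{a}_{21}>4$ can be made concrete. A minimal Python sketch (not in the original notes):

```python
def cartan_2x2_type(a12, a21):
    """Classify the 2x2 Cartan matrix [[2, a12], [a21, 2]] via det = 4 - a12*a21."""
    p = a12 * a21          # a12, a21 are nonpositive integers, so p >= 0
    if p < 4:
        return "finite"
    if p == 4:
        return "affine"
    return "indefinite"    # p > 4: strictly hyperbolic in the 2x2 case

assert cartan_2x2_type(-1, -1) == "finite"      # A2
assert cartan_2x2_type(-1, -2) == "finite"      # B2
assert cartan_2x2_type(-1, -3) == "finite"      # G2
assert cartan_2x2_type(-2, -2) == "affine"      # A~1
assert cartan_2x2_type(-1, -4) == "affine"      # BC~1
assert cartan_2x2_type(-1, -5) == "indefinite"
```

For $n=2$ every proper principal submatrix is the $1×1$ matrix $\left(2\right)$, of finite type, which is why every indefinite $2×2$ Cartan matrix is automatically strictly hyperbolic.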

Exercise. If $A$ is an indecomposable $n×n$ Cartan matrix, then

1. $n\le 5$ if $A$ is strictly hyperbolic
2. $n\le 10$ if $A$ is hyperbolic.

Here are two with $n=10$: $\phantom{\rule{1em}{0ex}}$(probably the only two)

and (I think) the only strictly hyperbolic matrix with $n=5$ has diagram

If $A$ is hyperbolic and ${A}_{J}$ is affine ($J$ connected) then $\mid J\mid =n-1$. For otherwise we should have $J\subset K\subset \left\{1,\dots ,n\right\}$ with $K$ connected and both inclusions strict; but then ${A}_{K}$ could be of neither finite nor affine type, since it properly contains the affine submatrix ${A}_{J}$.

(2.26) Let $A$ be a Cartan matrix. Then $A$ is symmetrizable iff there exists a $W$–invariant symmetric bilinear form $⟨x,y⟩$ on $𝔥$ (with values in $k$), such that $⟨{h}_{i},{h}_{i}⟩$ is positive rational for all $i$. Moreover such a form is nondegenerate.

 Proof. Suppose $A$ is symmetrizable; then there exist ${\epsilon }_{j}>0$ such that ${a}_{ij}{\epsilon }_{j}={a}_{ji}{\epsilon }_{i}$ for all $i,j$. Since the ${a}_{ij}$ are integers we may assume that the ${\epsilon }_{j}$ are rational (or even positive integers). As before, let ${𝔥}^{\prime }=\sum _{1}^{n}k{h}_{i}\subset 𝔥$, and let ${𝔥}^{\prime \prime }$ be a vector space complement of ${𝔥}^{\prime }$ in $𝔥$. Define $⟨x,y⟩$ as follows: $\begin{array}{ccc}⟨x,{h}_{i}⟩& =& ⟨{h}_{i},x⟩={\epsilon }_{i}{\alpha }_{i}\left(x\right)\phantom{\rule{2em}{0ex}}\left(x\in 𝔥\right)\\ ⟨x,y⟩& =& 0\phantom{\rule{1em}{0ex}}\text{if}\phantom{\rule{0.2em}{0ex}}x,y\in {𝔥}^{\prime \prime }\end{array}$ (Notice that $⟨{h}_{i},{h}_{j}⟩={\epsilon }_{i}{\alpha }_{i}\left({h}_{j}\right)={a}_{ji}{\epsilon }_{i}$ and also $⟨{h}_{j},{h}_{i}⟩={\epsilon }_{j}{\alpha }_{j}\left({h}_{i}\right)={a}_{ij}{\epsilon }_{j}$, and these agree by the choice of the ${\epsilon }_{j}$; in particular $⟨{h}_{i},{h}_{i}⟩=2{\epsilon }_{i}>0$, so that the above definition is unambiguous.) Now we have $\begin{array}{cc}\begin{array}{ccc}⟨{w}_{i}x,y⟩& =& ⟨x,y⟩-{\alpha }_{i}\left(x\right)⟨{h}_{i},y⟩\\ & =& ⟨x,y⟩-{\epsilon }_{i}{\alpha }_{i}\left(x\right){\alpha }_{i}\left(y\right)\end{array}& \text{(1)}\end{array}$ which is symmetrical in $x$ and $y$, so that $⟨{w}_{i}x,y⟩=⟨{w}_{i}y,x⟩=⟨x,{w}_{i}y⟩$, from which it follows that $⟨wx,y⟩=⟨x,{w}^{-1}y⟩$ for all $w\in W$, by induction on $l\left(w\right)$. Thus $⟨x,y⟩$ is $W$–invariant. Conversely, if we have such a $W$–invariant form on $𝔥$, then from (1) it follows that ${\alpha }_{i}\left(x\right)⟨{h}_{i},y⟩={\alpha }_{i}\left(y\right)⟨{h}_{i},x⟩$ for all $x,y\in 𝔥$: taking $x={h}_{i}$, $y={h}_{j}$ we obtain $2⟨{h}_{i},{h}_{j}⟩={a}_{ji}⟨{h}_{i},{h}_{i}⟩$ and therefore, putting ${\epsilon }_{i}=\frac{1}{2}⟨{h}_{i},{h}_{i}⟩>0$, ${a}_{ji}{\epsilon }_{i}=⟨{h}_{i},{h}_{j}⟩=⟨{h}_{j},{h}_{i}⟩={a}_{ij}{\epsilon }_{j}$ which shows that $A$ is symmetrizable. 
Finally, if $⟨h,x⟩=0$ for all $x\in 𝔥$ then in particular ${\epsilon }_{i}{\alpha }_{i}\left(h\right)=⟨h,{h}_{i}⟩=0$, so that $h\in \bigcap _{i=1}^{n}\text{Ker}\phantom{\rule{0.2em}{0ex}}{\alpha }_{i}=𝔠\subset {𝔥}^{\prime }$ (1.10); but then $h=\sum {\lambda }_{i}{h}_{i}$ say, and $0=⟨h,x⟩=\sum {\lambda }_{i}{\epsilon }_{i}{\alpha }_{i}\left(x\right)\phantom{\rule{1em}{0ex}}\left(x\in 𝔥\right)$ so that $\sum {\lambda }_{i}{\epsilon }_{i}{\alpha }_{i}=0$ in ${𝔥}^{*}$ and therefore ${\lambda }_{1}=\dots ={\lambda }_{n}=0,\phantom{\rule{0.2em}{0ex}}h=0$. $\square$
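The first step of the proof — finding the ${\epsilon }_{j}$ — is effectively an algorithm: fix ${\epsilon }_{1}=1$ and propagate the relation ${a}_{ij}{\epsilon }_{j}={a}_{ji}{\epsilon }_{i}$ along the graph of $A$. A Python sketch (not in the original notes; it assumes $A$ indecomposable and symmetrizable, so the propagation is consistent):

```python
from fractions import Fraction

def symmetrizer(A):
    """For an indecomposable symmetrizable Cartan matrix A, find eps_i > 0
    with a_ij * eps_j == a_ji * eps_i, by propagating eps_1 = 1 along the
    graph of A."""
    n = len(A)
    eps = [None] * n
    eps[0] = Fraction(1)
    queue = [0]
    while queue:
        i = queue.pop()
        for j in range(n):
            if A[i][j] != 0 and eps[j] is None:
                # a_ij * eps_j = a_ji * eps_i  =>  eps_j = (a_ji / a_ij) * eps_i
                eps[j] = eps[i] * Fraction(A[j][i], A[i][j])
                queue.append(j)
    return eps

# B2, with a_12 = -1, a_21 = -2:
A = [[2, -1], [-2, 2]]
eps = symmetrizer(A)
assert eps == [1, 2]
# A * diag(eps) is then symmetric:
AE = [[A[i][j] * eps[j] for j in range(2)] for i in range(2)]
assert AE[0][1] == AE[1][0]
```

Clearing denominators makes the ${\epsilon }_{j}$ positive integers, as used in the proof.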

From (2.26) it follows that the mapping $\theta :\phantom{\rule{0.2em}{0ex}}𝔥\to {𝔥}^{*}$ defined by $\theta \left(x\right)\left(y\right)=⟨x,y⟩$ is an isomorphism, and we can therefore transport the scalar product $⟨x,y⟩$ to ${𝔥}^{*}$:

$⟨\lambda ,\mu ⟩=⟨{\theta }^{-1}\left(\lambda \right),{\theta }^{-1}\left(\mu \right)⟩$.

Let ${\alpha }_{i}^{\vee }=\theta \left({h}_{i}\right)\in {𝔥}^{*}$. Then

${\alpha }_{i}^{\vee }\left(x\right)=⟨x,{h}_{i}⟩={\epsilon }_{i}{\alpha }_{i}\left(x\right)$

so that ${\alpha }_{i}^{\vee }={\epsilon }_{i}{\alpha }_{i}$; taking $x={h}_{i}$ we obtain (since ${\alpha }_{i}\left({h}_{i}\right)={a}_{ii}=2$)

$2{\epsilon }_{i}=⟨{h}_{i},{h}_{i}⟩=⟨{\alpha }_{i}^{\vee },{\alpha }_{i}^{\vee }⟩={\epsilon }_{i}^{2}⟨{\alpha }_{i},{\alpha }_{i}⟩$

giving

${\epsilon }_{i}=2/⟨{\alpha }_{i},{\alpha }_{i}⟩,$
${\alpha }_{i}^{\vee }=\frac{2{\alpha }_{i}}{⟨{\alpha }_{i},{\alpha }_{i}⟩}$

and dually

${\alpha }_{i}=\frac{2{\alpha }_{i}^{\vee }}{⟨{\alpha }_{i}^{\vee },{\alpha }_{i}^{\vee }⟩}$.

${\alpha }_{i}^{\vee }$ is the coroot of ${\alpha }_{i}$.

Finally note that ${a}_{ij}={\alpha }_{j}\left({h}_{i}\right)=⟨{\alpha }_{i}^{\vee },{\alpha }_{j}⟩$.

Action of $W$:

${w}_{i}\left(h\right)=h-{\alpha }_{i}\left(h\right){h}_{i}$

For each ${\alpha }_{i}$, let ${H}_{i}=\text{Ker}\phantom{\rule{0.2em}{0ex}}\left({\alpha }_{i}\right)$, the hyperplane in $𝔥$ orthogonal to ${h}_{i}$. Then ${w}_{i}$ is the reflection in this hyperplane: for if $m$ is the midpoint of $h$ and ${w}_{i}h$, then

$\begin{array}{ccc}{\alpha }_{i}\left(m\right)& =& \frac{1}{2}{\alpha }_{i}\left(h+{w}_{i}h\right)\\ & =& \frac{1}{2}\left({\alpha }_{i}\left(h\right)+{\alpha }_{i}\left({w}_{i}h\right)\right)=0\end{array}$

(since ${\alpha }_{i}\left({w}_{i}h\right)=-{\alpha }_{i}\left(h\right)$),

and $h-{w}_{i}h={\alpha }_{i}\left(h\right){h}_{i}$ is a scalar multiple of ${h}_{i}$.

So $W$ is realized as a group generated by reflections.
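In the basis ${h}_{1},\dots ,{h}_{n}$ the reflection ${w}_{i}$ has an explicit matrix, since ${w}_{i}\left({h}_{j}\right)={h}_{j}-{\alpha }_{i}\left({h}_{j}\right){h}_{i}={h}_{j}-{a}_{ji}{h}_{i}$. A Python sketch (not in the original notes) building these matrices and verifying, for type ${A}_{2}$, that they generate the finite group ${S}_{3}$ of order 6:

```python
def reflection_matrix(A, i):
    """Matrix of w_i on h in the basis h_1..h_n: w_i(h_j) = h_j - a_ji * h_i
    (recall alpha_i(h_j) = a_ji)."""
    n = len(A)
    S = [[1 if r == c else 0 for c in range(n)] for r in range(n)]
    for j in range(n):
        S[i][j] -= A[j][i]
    return tuple(map(tuple, S))

def matmul(X, Y):
    n = len(X)
    return tuple(tuple(sum(X[r][k] * Y[k][c] for k in range(n))
                       for c in range(n)) for r in range(n))

A = [[2, -1], [-1, 2]]                      # type A2
gens = [reflection_matrix(A, 0), reflection_matrix(A, 1)]
I = ((1, 0), (0, 1))
assert matmul(gens[0], gens[0]) == I        # w_i^2 = 1

# Close up under right multiplication by the generators.
W = {I}
frontier = {I}
while frontier:
    new = {matmul(w, s) for w in frontier for s in gens} - W
    W |= new
    frontier = new
assert len(W) == 6                          # W(A2) is S_3
```

For a Cartan matrix of affine or indefinite type the same closure process would never terminate, since $W$ is then infinite.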

We can now characterize the algebras $𝔤\left(A\right)$ for which $A$ is a Cartan matrix of finite type:

(2.27) Let $A$ be an indecomposable Cartan matrix. Then the following conditions are equivalent:

1. $A$ is of finite type;
2. $A$ is symmetrizable, and the bilinear form $⟨x,y⟩$ of (2.26) (with $k=ℝ$) is positive definite;
3. $W$ is finite;
4. $R$ is finite;
5. $𝔤\left(A\right)$ is a finite-dimensional simple Lie algebra.

 Proof. (i) $⇔$ (ii) by (2.26) and (2.21). (ii) $⇒$ (iii) (here $k=ℝ$): The matrix $A$ is nonsingular, hence ${h}_{1},\dots ,{h}_{n}$ is a basis of $𝔥$ and therefore ${Q}^{\vee }=\sum _{i=1}^{n}ℤ{h}_{i}$ is a lattice in $𝔥$. Consequently $\text{End}\phantom{\rule{0.2em}{0ex}}\left({Q}^{\vee }\right)$ is a lattice in the real vector space $\text{End}\phantom{\rule{0.2em}{0ex}}\left(𝔥\right)$. Let $O$ be the orthogonal group of the form $⟨x,y⟩$, acting on $𝔥$. $O$ is compact, hence a bounded subset of $\text{End}\phantom{\rule{0.2em}{0ex}}\left(𝔥\right)$; $W$ is a subgroup of $O$ and preserves the lattice ${Q}^{\vee }$, i.e. $W\subset O\cap \text{End}\phantom{\rule{0.2em}{0ex}}\left({Q}^{\vee }\right)$, which is finite. (iii) $⇒$ (ii): Let $\left(x,y\right)$ be any positive definite scalar product on $𝔥$. Then $⟨x,y⟩=\sum _{w\in W}\left(wx,wy\right)$ is $W$–invariant and positive definite. Hence $A$ is symmetrizable, by (2.23). (iii) $⇔$ (iv): proved earlier (2.14). (iv) $⇔$ (v): because $𝔤\left(A\right)=𝔥+\sum _{\alpha \in R}{𝔤}_{\alpha }$ (direct sum) (1.7); and $𝔤\left(A\right)$ is simple by (1.13). $\square$

Now suppose that $A={\left({a}_{ij}\right)}_{1\le i,j\le n}$ is an indecomposable Cartan matrix of affine type. By (2.17) there is a unique vector

$a={\left({a}_{1},\dots ,{a}_{n}\right)}^{t}$

with components ${a}_{i}$ which are mutually prime positive integers, such that

$\begin{array}{cc}\sum _{j=1}^{n}{a}_{ij}{a}_{j}=0\phantom{\rule{1em}{0ex}}\left(1\le i\le n\right)& \text{(1)}\end{array}$

i.e.

$Aa=0$.

Dually, the matrix ${A}^{t}$ is also of affine type, hence there is a unique vector

${a}^{\vee }={\left({a}_{1}^{\vee },\dots ,{a}_{n}^{\vee }\right)}^{t}$

with components ${a}_{i}^{\vee }$ which are mutually prime positive integers, such that

$\begin{array}{cc}\sum _{i=1}^{n}{a}_{i}^{\vee }{a}_{ij}=0\phantom{\rule{1em}{0ex}}\left(1\le j\le n\right)& \text{(2)}\end{array}$

i.e.,

${A}^{t}{a}^{\vee }=0$.

Now $A$ is symmetrizable (2.21), hence there is a diagonal matrix $E=\left(\begin{array}{ccc}{\epsilon }_{1}& & \\ & \ddots & \\ & & {\epsilon }_{n}\end{array}\right)$ (with positive diagonal entries ${\epsilon }_{i}$) such that $AE$ is symmetric, i.e.

$\begin{array}{cc}AE=E{A}^{t}\text{.}& \text{(3)}\end{array}$

But then $AE{a}^{\vee }=E{A}^{t}{a}^{\vee }=0$ by (2) and (3), so that $a=\lambda E{a}^{\vee }$ from (1), for some scalar $\lambda \ne 0$. Replacing $E$ by ${\lambda }^{-1}E$ we have then $a=E{a}^{\vee }$, i.e.,

$\begin{array}{cc}{a}_{i}={\epsilon }_{i}{a}_{i}^{\vee }\phantom{\rule{1em}{0ex}}\left(1\le i\le n\right)& \text{(4)}\end{array}$

Now define

$\delta =\sum _{j=1}^{n}{a}_{j}{\alpha }_{j}\in {𝔥}^{*}$

Then we have

$\begin{array}{cc}\delta \left({h}_{i}\right)=0\phantom{\rule{1em}{0ex}}\left(1\le i\le n\right)& \text{(5)}\end{array}$

(We shall see later that $\delta$ is a root.) For $\delta \left({h}_{i}\right)=\sum _{j=1}^{n}{a}_{j}{a}_{ij}=0$ by (1). Hence $W$ fixes $\delta$. Dually, define

$c=\sum _{i=1}^{n}{a}_{i}^{\vee }{h}_{i}\in 𝔥$;

then we have

$\begin{array}{cc}{\alpha }_{j}\left(c\right)=0\phantom{\rule{1em}{0ex}}\left(1\le j\le n\right)& \text{(6)}\end{array}$

for ${\alpha }_{j}\left(c\right)=\sum _{i=1}^{n}{a}_{i}^{\vee }{\alpha }_{j}\left({h}_{i}\right)=\sum _{i}{a}_{i}^{\vee }{a}_{ij}=0$ by (2). Hence $W$ fixes $c$.
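Relations (5) and (6) are immediate to verify numerically. As an illustrative example (not in the original notes) take affine ${C}_{2}$, where the marks and comarks differ; the Cartan matrix and labels below are the standard ones for that type:

```python
# Affine C2 (nodes 0, 1, 2; alpha_1 is the short simple root).
A = [[ 2, -1,  0],
     [-2,  2, -2],
     [ 0, -1,  2]]
a      = [1, 2, 1]   # marks:   delta = alpha_0 + 2*alpha_1 + alpha_2
a_dual = [1, 1, 1]   # comarks: c = h_0 + h_1 + h_2

n = 3
# (5): delta(h_i) = sum_j a_ij * a_j = 0 for all i, hence W fixes delta.
assert all(sum(A[i][j] * a[j] for j in range(n)) == 0 for i in range(n))
# (6): alpha_j(c) = sum_i a_i^v * a_ij = 0 for all j, hence W fixes c.
assert all(sum(a_dual[i] * A[i][j] for i in range(n)) == 0 for j in range(n))
```

Note that $a$ spans the kernel of $A$ while ${a}^{\vee }$ spans the kernel of ${A}^{t}$, and they coincide only when $A$ is symmetric.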

Recall that $𝔠=\bigcap _{1}^{n}\text{Ker}\phantom{\rule{0.2em}{0ex}}{\alpha }_{i}$ is the centre of $𝔤\left(A\right)$, and that $\text{dim}\phantom{\rule{0.2em}{0ex}}𝔠=n-l=1$ here. Thus $𝔠=ℝc$ (we are taking $k=ℝ$ here): $c$ is the canonical central element.

Next we shall construct the scalar product on $𝔥$ as in (2.26). We have $\text{dim}\phantom{\rule{0.2em}{0ex}}𝔥=2n-l=n+1$, so we can take as a basis of $𝔥$ the elements ${h}_{1},\dots ,{h}_{n}$ and $d$ say, where $\delta \left(d\right)=1$. (By (5) we must have $\delta \left(d\right)\ne 0$).

Remark. This of course does not determine $d$ uniquely: we could add any linear combination of ${h}_{1},\dots ,{h}_{n}$. At this stage, however, that does not matter.

We have then

$\begin{array}{cc}⟨x,{h}_{i}⟩={\epsilon }_{i}{\alpha }_{i}\left(x\right)\phantom{\rule{0.5em}{0ex}};\phantom{\rule{0.5em}{0ex}}⟨d,d⟩=0& \text{(7)}\end{array}$

From (6) and (7), therefore,

$\begin{array}{cc}⟨c,{h}_{i}⟩=0\phantom{\rule{1em}{0ex}}\left(1\le i\le n\right)& \text{(8)}\end{array}$

and in particular

$\begin{array}{cc}⟨c,c⟩=0;& \text{(9)}\end{array}$

moreover

$\begin{array}{cc}⟨c,d⟩=1& \text{(10)}\end{array}$

because

$\begin{array}{ccc}⟨c,d⟩=⟨d,c⟩& =& \sum {a}_{i}^{\vee }⟨d,{h}_{i}⟩=\sum {a}_{i}^{\vee }{\epsilon }_{i}{\alpha }_{i}\left(d\right)\\ & =& \sum {a}_{i}{\alpha }_{i}\left(d\right)=\delta \left(d\right)=1\text{.}\end{array}$

As in (2.26) let $\theta :\phantom{\rule{0.2em}{0ex}}𝔥\stackrel{\sim }{\to }{𝔥}^{*}$ be the isomorphism defined by the scalar product, so that $\theta \left(x\right)\left(y\right)=⟨x,y⟩$. Then

$\theta \left(c\right)=\delta$

because $\theta \left({h}_{i}\right)={\epsilon }_{i}{\alpha }_{i}$, so that

$\theta \left(c\right)=\sum {a}_{i}^{\vee }\theta \left({h}_{i}\right)=\sum {a}_{i}^{\vee }{\epsilon }_{i}{\alpha }_{i}=\sum {a}_{i}{\alpha }_{i}=\delta$.

Now consider the action of the Weyl group $W$. We shall show that $W$ acts as a group generated by reflections in a real Euclidean space of dimension $l$. Since $\text{dim}\phantom{\rule{0.2em}{0ex}}𝔥=l+2$, we have to cut down the number of dimensions by 2. We do this in two stages.

(1) Since $W$ fixes $c$ it follows that $W$ acts (faithfully) on $\stackrel{\sim }{𝔥}=𝔥/𝔠$. Each $\lambda \in {𝔥}^{*}$ such that $\lambda \left(c\right)=0$ defines a linear form on $\stackrel{\sim }{𝔥}$, which we denote by $\stackrel{\sim }{\lambda }$. Thus we have ${\stackrel{\sim }{\alpha }}_{1},\dots ,{\stackrel{\sim }{\alpha }}_{n},\stackrel{\sim }{\delta }$. Notice that $W$ (in its action on ${\stackrel{\sim }{𝔥}}^{*}$) fixes $\stackrel{\sim }{\delta }$. If $p:\phantom{\rule{0.2em}{0ex}}𝔥\to \stackrel{\sim }{𝔥}$ is the projection, the image $\stackrel{\sim }{C}=p\left(C\right)$ of the fundamental chamber $C$ is the set of $\stackrel{\sim }{h}\in \stackrel{\sim }{𝔥}$ such that ${\stackrel{\sim }{\alpha }}_{i}\left(\stackrel{\sim }{h}\right)\ge 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$ and the image $p\left(X\right)=\stackrel{\sim }{X}$ of the Tits cone $X$ is the union of the $w\stackrel{\sim }{C}$, $w\in W$.

(2) For each real number $t\ge 0$ let

${E}_{t}=\left\{x\in \stackrel{\sim }{𝔥}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\stackrel{\sim }{\delta }\left(x\right)=t\right\}$

If $t>0$ this is an affine hyperplane in $\stackrel{\sim }{𝔥}$, hence of dimension $l$. If $t=0$, ${E}_{0}=\text{Ker}\phantom{\rule{0.2em}{0ex}}\left(\stackrel{\sim }{\delta }\right)={𝔥}^{\prime }/𝔠$, where as usual ${𝔥}^{\prime }$ is the subspace of $𝔥$ spanned by ${h}_{1},\dots ,{h}_{n}$. (For $\delta \left({h}_{i}\right)=0,1\le i\le n$.) Each ${E}_{t},t>0$ is stable under the action of $W$ because $W$ fixes $\stackrel{\sim }{\delta }$. Moreover $W$ acts faithfully on ${E}_{t}\phantom{\rule{0.2em}{0ex}}\left(t>0\right)$. For if $w\in W$ fixes ${E}_{t}$ pointwise, it fixes all points of $\stackrel{\sim }{𝔥}$ not in ${E}_{0}$, hence fixes all points of $\stackrel{\sim }{𝔥}$ (for the fixed point set of $w$ is in any case a vector subspace of $\stackrel{\sim }{𝔥}$), whence $w$ is the identity. Thus we have realized $W$ as a group of affine-linear transformations of ${E}_{t}$ (any $t>0$). These actions of $W$ are all essentially the same, so we may as well take $t=1$ and concentrate attention on the affine space ${E}_{1}$.

Now the restriction to ${𝔥}^{\prime }$ of the scalar product $⟨x,y⟩$ is positive semidefinite, of rank $n-1$ (because $⟨{h}_{i},{h}_{j}⟩={a}_{ij}{\epsilon }_{j}$, and the matrix $AE$ is positive semidefinite of rank $n-1$); also $⟨x,c⟩=0$ for all $x\in {𝔥}^{\prime }$, by (8). Hence we have a positive definite scalar product on ${E}_{0}={𝔥}^{\prime }/𝔠$, and therefore ${E}_{1}$ has the structure of a Euclidean space of dimension $l$. The restriction of ${\stackrel{\sim }{\alpha }}_{i}$ to ${E}_{1}$ is an affine-linear function on this Euclidean space, and ${H}_{i}=\left\{x\in {E}_{1}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\stackrel{\sim }{\alpha }}_{i}\left(x\right)=0\right\}$ is an affine hyperplane in ${E}_{1}$; these hyperplanes are the faces of an $l$–simplex $S$ in ${E}_{1}$, namely $S=\stackrel{\sim }{C}\cap {E}_{1}$. The generator ${w}_{i}$ of $W$ acts on ${E}_{1}$ as reflection in the hyperplane ${H}_{i}\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$. Thus $W$ is realized as a group generated by reflections in the Euclidean space ${E}_{1}$.

The transforms $wS\phantom{\rule{0.2em}{0ex}}\left(w\in W\right)$ of the "fundamental alcove" $S$ therefore fill up the space ${E}_{1}$. Consequently the union $\stackrel{\sim }{X}$ of the chambers $w\stackrel{\sim }{C}$ for all $w\in W$ is the open half space $\left\{x\in \stackrel{\sim }{𝔥}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\stackrel{\sim }{\delta }\left(x\right)>0\right\}$ together with the origin. Pulling back to $𝔥$, we see that the Tits cone is

$X=U\cup 𝔠$

where $U=\left\{h\in 𝔥\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\delta \left(h\right)>0\right\}$.
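The simplest instance is ${\stackrel{\sim }{A}}_{1}$, where $l=1$: the space ${E}_{1}$ is a line, the alcove $S$ is a segment, and the two simple reflections are reflections in its two endpoints. A Python sketch (not in the original notes; the coordinates, with walls at $0$ and $1$, are an illustrative normalization):

```python
# For A~1 the Euclidean space E_1 is a line and the fundamental alcove is
# S = [0, 1]; w_1 reflects in the wall at 0, w_0 in the wall at 1.
def w1(x):
    return -x

def w0(x):
    return 2 - x

# The composite w0 . w1 is translation by 2, so W is infinite, and the
# translates w(S) tile the whole line E_1.
x = 0.3
assert w0(w1(x)) == x + 2
assert w1(w0(x)) == x - 2

# Orbit of the alcove endpoints under words of length <= 4 already covers
# the interval [-4, 5]:
pts = {0.0, 1.0}
for _ in range(4):
    pts |= {w0(p) for p in pts} | {w1(p) for p in pts}
assert min(pts) == -4 and max(pts) == 5
```

This is the affine Weyl group of type ${A}_{1}$: an infinite dihedral group, generated by two reflections whose product is a translation.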

## Real and imaginary roots

By (2.1) and (2.5) each $\alpha =w{\alpha }_{i}\phantom{\rule{0.2em}{0ex}}\left(w\in W,1\le i\le n\right)$ is a root of multiplicity ${m}_{\alpha }=1$. Kac calls these the real roots. In the classical case, where the Cartan matrix is of finite type, all the roots are real (proof later). In general however there will be other roots as well, which Kac calls imaginary roots. (The justification for this terminology will be apparent shortly.)

Let ${R}_{\text{re}},\phantom{\rule{0.2em}{0ex}}{R}_{\text{im}}$ denote the sets of real and imaginary roots, respectively. Also put

$\begin{array}{ccc}{R}_{\text{re}}^{+}& =& {R}_{\text{re}}\cap {R}^{+}\phantom{\rule{1em}{0ex}}\text{positive real roots}\\ {R}_{\text{im}}^{+}& =& {R}_{\text{im}}\cap {R}^{+}\phantom{\rule{1em}{0ex}}\text{positive imaginary roots}\end{array}$

So by definition

${R}_{\text{re}}=\bigcup _{i=1}^{n}W{\alpha }_{i}$.

Likewise for the dual root system ${R}^{\vee }$, with simple roots ${h}_{i}\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$:

${R}_{\text{re}}^{\vee }={\bigcup }_{i=1}^{n}W{h}_{i}$.

Consider the real roots first. If $\alpha =w{\alpha }_{i}\in {R}_{\text{re}}$ define the coroot of $\alpha$ to be

${h}_{\alpha }=w{h}_{i}$

This definition is justified by (2.7), because if also $\alpha ={w}^{\prime }{\alpha }_{j}$, then we have ${\alpha }_{i}={w}^{-1}{w}^{\prime }{\alpha }_{j}$ and therefore ${h}_{i}={w}^{-1}{w}^{\prime }{h}_{j}$, i.e., $w{h}_{i}={w}^{\prime }{h}_{j}$.

We have then

$\alpha \left({h}_{\alpha }\right)=\left(w{\alpha }_{i}\right)\left(w{h}_{i}\right)={\alpha }_{i}\left({h}_{i}\right)=2$.

Next, for a real root $\alpha$ we define ${w}_{\alpha }$ (acting on $𝔥$ and on ${𝔥}^{*}$) by the formula

$\begin{array}{ccc}{w}_{\alpha }\left(h\right)& =& h-\alpha \left(h\right){h}_{\alpha }\phantom{\rule{2em}{0ex}}\left(h\in 𝔥\right)\\ {w}_{\alpha }\left(\lambda \right)& =& \lambda -\lambda \left({h}_{\alpha }\right)\alpha \phantom{\rule{2em}{0ex}}\left(\lambda \in {𝔥}^{*}\right)\end{array}$

and we verify easily that

1. if $\alpha =w{\alpha }_{i}$, then ${w}_{\alpha }=w{w}_{i}{w}^{-1}\in W$;
2. ${w}_{\alpha }^{2}=1,\phantom{\rule{0.5em}{0ex}}\text{det}\phantom{\rule{0.2em}{0ex}}\left({w}_{\alpha }\right)=-1;$
3. ${w}_{\alpha }\left(\alpha \right)=-\alpha ,\phantom{\rule{0.5em}{0ex}}{w}_{\alpha }\left({h}_{\alpha }\right)=-{h}_{\alpha }$.

(2.28) The mapping $\alpha ↦{h}_{\alpha }$ is a bijection of ${R}_{\text{re}}$ onto ${R}_{\text{re}}^{\vee }$ such that

1. ${h}_{{\alpha }_{i}}={h}_{i}\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$
2. ${h}_{w\alpha }=w{h}_{\alpha }\phantom{\rule{0.5em}{0ex}}\left(\alpha \in {R}_{\text{re}},w\in W\right)\phantom{\rule{0.5em}{0ex}}$ i.e. it is $W$–equivariant
3. $\alpha >0⇔{h}_{\alpha }>0$.

 Proof. (i), (ii) are clear. As to (iii), let $\alpha =w{\alpha }_{i}$, then since $\alpha >0$ we have ${\alpha }_{i}\notin S\left({w}^{-1}\right)$, hence by (2.10) $l\left({w}_{i}{w}^{-1}\right)=l\left({w}^{-1}\right)-1$, hence again by (2.10) (applied this time to ${R}^{\vee }$) $w{h}_{i}>0$, i.e. ${h}_{\alpha }>0$. $\square$

Next we consider the imaginary roots.

(2.29)

1. If $\alpha$ is real (resp. imaginary) so is $-\alpha$.
2. $\alpha \in {R}_{\text{im}}^{+}⇔W\alpha \subset {R}_{\text{im}}^{+}$. Thus the set of positive imaginary roots is stable under $W$.
3. If $\alpha$ is a real root, the only multiples of $\alpha$ which are roots are $±\alpha$.
4. If $\alpha$ is an imaginary root, then $r\alpha$ is an (imaginary) root for all integers $r\ne 0$.

 Proof. (i) If $\alpha =w{\alpha }_{i}$ is real, then $-\alpha =w{w}_{i}{\alpha }_{i}$ is real; hence also $\alpha$ imaginary implies $-\alpha$ imaginary. (ii) Let $\alpha \in {R}_{\text{im}}^{+}$. Then $\alpha \ne {\alpha }_{i}$, hence ${w}_{i}\alpha >0$ by (2.5), hence ${w}_{i}\alpha \in {R}_{\text{im}}^{+}$. Hence $W\alpha \subset {R}_{\text{im}}^{+}$. Conversely, let $\alpha \in {R}_{\text{re}}^{+}$, say $\alpha =w{\alpha }_{i}$. Then $w{w}_{i}{w}^{-1}\alpha =-\alpha <0$, so $W\alpha \not\subset {R}_{\text{im}}^{+}$. (iii) If $\alpha =w{\alpha }_{i}$ and $r\alpha$ is a root, then $r{\alpha }_{i}={w}^{-1}\left(r\alpha \right)$ is a root, hence $r=±1$ by (2.1)(iv). (iv) Proof later. $\square$

The next proposition justifies the names "real" and "imaginary".

(2.30) Assume that the Cartan matrix $A$ is symmetrizable, and let $⟨\lambda ,\mu ⟩$ be a $W$–invariant symmetric bilinear form on ${𝔥}^{*}$, as in (2.26). If $\alpha$ is a root then

1. $\alpha$ is real $⇔⟨\alpha ,\alpha ⟩>0$.
2. $\alpha$ is imaginary $⇔⟨\alpha ,\alpha ⟩\le 0$.

 Proof. If $\alpha$ is real, say $\alpha =w{\alpha }_{i}$, then $⟨\alpha ,\alpha ⟩=⟨{\alpha }_{i},{\alpha }_{i}⟩>0$ (by our choice of scalar product). Conversely, suppose $\alpha \in {R}_{\text{im}}^{+}$. By (2.29) we have $w\alpha >0$ for all $w\in W$, and $⟨w\alpha ,w\alpha ⟩=⟨\alpha ,\alpha ⟩$. Hence we may assume that $\alpha$ has minimum height in its orbit $W\alpha$, i.e. that $\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)\le \text{ht}\phantom{\rule{0.2em}{0ex}}\left(w\alpha \right)$ for all $w\in W$. Since ${w}_{i}\alpha =\alpha -\alpha \left({h}_{i}\right){\alpha }_{i}$, we have $\text{ht}\phantom{\rule{0.2em}{0ex}}\left({w}_{i}\alpha \right)=\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)-\alpha \left({h}_{i}\right)$ and therefore $\alpha \left({h}_{i}\right)\le 0$ $\left(1\le i\le n\right)$, i.e. $⟨\alpha ,{\alpha }_{i}^{\vee }⟩\le 0$ and therefore also $⟨\alpha ,{\alpha }_{i}⟩\le 0$. But $\alpha =\sum {m}_{i}{\alpha }_{i}$, say, with coefficients ${m}_{i}\ge 0$, hence $⟨\alpha ,\alpha ⟩=\sum _{i=1}^{n}{m}_{i}⟨\alpha ,{\alpha }_{i}⟩\le 0$. $\square$

## Root-strings

(2.31) Let $\beta \in R$ and let ${\alpha }_{i}$ be a simple root such that $\beta \ne ±{\alpha }_{i}$. Then the set $S$ of integers $r$ such that $\beta +r{\alpha }_{i}$ is a root is a finite interval $\left[-p,q\right]$ in $ℤ$, where $p,q\ge 0$ and $p-q=\beta \left({h}_{i}\right)$. (If $\beta$ is positive, all these roots are positive.)

 Proof. Without loss of generality we can assume $\beta >0$, say $\beta =\sum {m}_{j}{\alpha }_{j}$ with coefficients ${m}_{j}\ge 0$. If $\beta +r{\alpha }_{i}$ is a root, we must have ${m}_{i}+r\ge 0$ (because some ${m}_{j},j\ne i,$ is $>0$, since $\beta \ne {\alpha }_{i}$), i.e. $r\ge -{m}_{i}$. It follows that the set $S$ is bounded below (and is not empty, because $0\in S$). $S$ is in any case a disjoint union of intervals in $ℤ$. Suppose $I$ is one of these intervals, and consider the vector space $V=\sum _{r\in I}{𝔤}_{\beta +r{\alpha }_{i}}$. Then $V$ is stable under $\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}$ and $\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}$ – for example, $\left[{e}_{i},{𝔤}_{\beta +r{\alpha }_{i}}\right]\subset {𝔤}_{\beta +\left(r+1\right){\alpha }_{i}}$ which is either contained in $V$ or is zero. Hence $V$ is stable under ${\stackrel{\sim }{w}}_{i}={e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}{e}^{-\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}{e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}$. From (2.5) it follows that the set $\left\{\beta +r{\alpha }_{i}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}r\in I\right\}$ is stable under ${w}_{i}$. But now, since ${w}_{i}\left(\beta +r{\alpha }_{i}\right)=\beta -\left(\beta \left({h}_{i}\right)+r\right){\alpha }_{i}$ it follows that the mapping $r↦-\left(r+\beta \left({h}_{i}\right)\right)$ maps the interval $I$ onto itself. Since $I$ is bounded below (because $S$ is), $I$ must be a finite interval with midpoint $-\frac{1}{2}\beta \left({h}_{i}\right)$. Hence all the component intervals of $S$ have the same midpoint, hence there is only one component, i.e. $S=I$ is an interval $\left[-p,q\right]$ with midpoint $\frac{1}{2}\left(-p+q\right)=-\frac{1}{2}\beta \left({h}_{i}\right)$, i.e. $p-q=\beta \left({h}_{i}\right)$ (and $p,q\ge 0$ because $0\in S$). $\square$

The set $\left\{\beta +r{\alpha }_{i}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}-p\le r\le q\right\}$ is the ${\alpha }_{i}$–string through $\beta$.

Corollary. If $\beta +{\alpha }_{i}$ is not a root, then $\beta \left({h}_{i}\right)\ge 0$. (For $q=0$ in (2.31).)

Remarks.

1. This result enables us to list the roots systematically (though not their multiplicities). Clearly it is enough to consider positive roots. Suppose that we have listed all the positive roots of height $\le m$. Each root of height $m+1$ is of the form $\beta +{\alpha }_{i}$, where $\beta$ is a root of height $m$, by (2.1)(v). By assumption, the negative values of $r$ for which $\beta +r{\alpha }_{i}$ is a root are known, hence in the notation of (2.31) $p$ is known, hence also $q=p-\beta \left({h}_{i}\right)$. So for each root $\beta$ of height $m$ and each simple root ${\alpha }_{i}$ we can decide whether or not $\beta +{\alpha }_{i}$ is a root. So we could define $R$ axiomatically in this way.
2. (2.31) is valid for any real root $\alpha$: if $\beta \in R$, $\alpha \in {R}_{\text{re}}$, then the $\alpha$–string through $\beta$ is

$\beta -p\alpha ,\dots ,\beta +q\alpha$

where $p,q\ge 0$ and $p-q=\beta \left({h}_{\alpha }\right)$.

For $\alpha =w{\alpha }_{i}$; now apply (2.31) to ${w}^{-1}\beta$ and ${\alpha }_{i}$: $\beta +r\alpha =w\left({w}^{-1}\beta +r{\alpha }_{i}\right)$, ${w}^{-1}\beta \left({h}_{i}\right)=\beta \left(w{h}_{i}\right)=\beta \left({h}_{\alpha }\right)$.
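Remark 1 above is effectively an algorithm. As a sanity check (not in the original notes), here is a minimal Python sketch of the height-by-height enumeration for a finite-type Cartan matrix, using $q=p-\beta \left({h}_{i}\right)$ and the convention $\beta \left({h}_{i}\right)=\sum _{j}{a}_{ij}{m}_{j}$ from the text; multiplicities are ignored, since in finite type all roots are real:

```python
def positive_roots(A, max_height=20):
    """Enumerate the positive roots of a finite-type Cartan matrix A,
    height by height, using (2.31).  Roots are coordinate tuples in the
    basis of simple roots."""
    n = len(A)
    roots = {tuple(1 if j == i else 0 for j in range(n)) for i in range(n)}
    for height in range(1, max_height):
        layer = [r for r in roots if sum(r) == height]
        for beta in layer:
            for i in range(n):
                # beta(h_i) = sum_j a_ij * m_j
                pairing = sum(A[i][j] * beta[j] for j in range(n))
                # p = largest r >= 0 with beta - r*alpha_i still a root
                p, down = 0, list(beta)
                while True:
                    down[i] -= 1
                    if tuple(down) not in roots:
                        break
                    p += 1
                if p - pairing >= 1:    # q >= 1, so beta + alpha_i is a root
                    up = list(beta)
                    up[i] += 1
                    roots.add(tuple(up))
    return roots

# A2 has 3 positive roots, B2 has 4, G2 has 6.
assert len(positive_roots([[2, -1], [-1, 2]])) == 3
assert len(positive_roots([[2, -1], [-2, 2]])) == 4
assert len(positive_roots([[2, -1], [-3, 2]])) == 6
```

For affine or indefinite type the same procedure runs forever (the root system is infinite), which is why a height cutoff appears above.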

The following results should have occurred in Chapter I: they are valid for any matrix $A$ (satisfying the condition ${a}_{ji}=0$ iff ${a}_{ij}=0$, so that the graph $\Gamma$ of $A$ is defined). If $J$ is any subset of $\left\{1,2,\dots ,n\right\}$ we have the principal submatrix ${A}_{J}$ and its graph ${\Gamma }_{J}$, which is the full subgraph of $\Gamma$ obtained by deleting the vertices of $\Gamma$ not belonging to $J$. If ${\Gamma }_{J}$ is connected we shall say simply that $J$ is connected.

Now let $\alpha =\sum _{1}^{n}{m}_{i}{\alpha }_{i}$ be any element of $Q$. The support of $\alpha$, denoted by $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$, is defined to be the set

$\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)=\left\{i\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{m}_{i}\ne 0\right\}$.

(2.32) Let $\alpha \in R$. Then $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$ is connected.

 Proof. Let $J=\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$, then $\alpha \in {R}_{J}$, the root system of $𝔤\left({A}_{J}\right)$. If $J$ is not connected then ${A}_{J}$ is decomposable, say ${A}_{J}=\left(\begin{array}{cc}{A}_{{J}_{1}}& 0\\ 0& {A}_{{J}_{2}}\end{array}\right)$ and hence (1.12) $𝔤\left({A}_{J}\right)=𝔤\left({A}_{{J}_{1}}\right)\oplus 𝔤\left({A}_{{J}_{2}}\right)$. Hence the root space ${𝔤}_{\alpha }$ lies in either $𝔤\left({A}_{{J}_{1}}\right)$ or $𝔤\left({A}_{{J}_{2}}\right)$; in either case, $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)\ne J$: contradiction. $\square$

(2.33) Suppose $R$ is infinite. Then for each $\alpha \in {R}^{+}$ there exists $i$ such that $\alpha +{\alpha }_{i}\in {R}^{+}$.

 Proof. Suppose not; then $\exists \alpha \in {R}^{+}$ such that $\alpha +{\alpha }_{i}\notin {R}^{+}$ $\left(1\le i\le n\right)$. Let $x\in {𝔤}_{\alpha }$, $x\ne 0$. Then $\left[x,{e}_{i}\right]=0$ $\left(1\le i\le n\right)$ (because ${𝔤}_{\alpha +{\alpha }_{i}}=0$), from which it follows that $U\left({𝔫}_{+}\right)·x=kx$ and therefore the ideal $𝔞=U\left(𝔤\right)·x$ generated by $x$ in $𝔤\left(A\right)$ is $𝔞=U\left({𝔫}_{-}\right)U\left(𝔥\right)U\left({𝔫}_{+}\right)·x=U\left({𝔫}_{-}\right)·x\text{.}$ Hence ${𝔞}_{\beta }=0$ unless $\beta \le \alpha$. But by (1.13) we have $𝔞\supset {𝔤}^{\prime }\left(A\right)$ (because clearly $𝔞\not\subset 𝔠$), hence in particular $𝔞\supset {𝔫}_{+}$, i.e. ${𝔞}_{\beta }={𝔤}_{\beta }$ for all positive roots $\beta$. Hence all $\beta \in {R}^{+}$ satisfy $\beta \le \alpha$, whence ${R}^{+}$ (and therefore $R$) is finite. $\square$

## Minimal imaginary roots

Let $\alpha$ be a positive imaginary root. By (2.29) $w\alpha$ is positive for all $w\in W$. We shall say that $\alpha$ is minimal if $\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)\le \text{ht}\phantom{\rule{0.2em}{0ex}}\left(w\alpha \right)$ for all $w\in W$.

This implies in particular that $\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)\le \text{ht}\phantom{\rule{0.2em}{0ex}}\left({w}_{i}\alpha \right)\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$; since ${w}_{i}\alpha =\alpha -\alpha \left({h}_{i}\right){\alpha }_{i}$, it follows that $\alpha \left({h}_{i}\right)\le 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$, i.e. that $-\alpha \in {C}^{\vee }$, the dual fundamental chamber. If also $w\alpha$ is minimal, then $-w\alpha \in {C}^{\vee }$, and therefore by (2.13)(i) (applied in the dual situation, i.e. to ${A}^{t}$) we have $\alpha =w\alpha$. Thus each $W$–orbit of positive imaginary roots has a unique minimal element.

Conversely, if $\alpha \in {R}^{+}$ is such that $\alpha \left({h}_{i}\right)\le 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$, then $-\alpha \in {C}^{\vee }$ and hence by (2.13)(iii) we have $-\alpha \ge -w\alpha$ for all $w\in W$, i.e. $\alpha \le w\alpha$; hence $w\alpha >0$, and therefore $\alpha$ is a minimal positive imaginary root.

If $\alpha =\sum {m}_{j}{\alpha }_{j}$, then $\alpha \left({h}_{i}\right)=\sum {m}_{j}{\alpha }_{j}\left({h}_{i}\right)=\sum {a}_{ij}{m}_{j}$. Thus for a minimal positive imaginary root the vector $m={\left({m}_{1},\dots ,{m}_{n}\right)}^{t}$ satisfies

$m\ge 0,\phantom{\rule{0.2em}{0ex}}m\ne 0,\phantom{\rule{0.2em}{0ex}}Am\le 0$

and it follows from (2.17) that the Cartan matrix $A$ cannot be of finite type. Thus if $A$ is of finite type there are no imaginary roots.
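The two conditions just derived — connected support and $\alpha \left({h}_{i}\right)\le 0$ for all $i$ — are mechanical to test. A Python sketch (not in the original notes) of the test, anticipating the criterion of (2.34) below:

```python
def is_minimal_imaginary(A, m):
    """Test, for alpha = sum m_i alpha_i with m >= 0, m != 0: is Supp(alpha)
    connected and alpha(h_i) = sum_j a_ij m_j <= 0 for all i?"""
    n = len(A)
    supp = [i for i in range(n) if m[i] != 0]
    if not supp:
        return False
    # connectedness of the support in the graph of A, by a simple walk
    seen, stack = {supp[0]}, [supp[0]]
    while stack:
        i = stack.pop()
        for j in supp:
            if j not in seen and A[i][j] != 0:
                seen.add(j)
                stack.append(j)
    if seen != set(supp):
        return False
    return all(sum(A[i][j] * m[j] for j in range(n)) <= 0 for i in range(n))

# Affine A~1: delta = alpha_1 + alpha_2 satisfies A*m = 0 <= 0, so it
# passes -- it is a minimal positive imaginary root.
assert is_minimal_imaginary([[2, -2], [-2, 2]], [1, 1])
# Finite A2 has no imaginary roots: alpha_1 + alpha_2 fails the test.
assert not is_minimal_imaginary([[2, -1], [-1, 2]], [1, 1])
```

As the first assertion illustrates, for an affine matrix the label vector $a$ always passes the test, since $Aa=0$.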

From these remarks and (2.32) it follows that a minimal positive imaginary root $\alpha$ satisfies

(i) $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$ is connected; (ii) $\alpha \left({h}_{i}\right)\le 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$.

In fact these necessary conditions are also sufficient:

(2.34) Let $\alpha \in {Q}^{+},\alpha \ne 0$. Then the following conditions are equivalent:

1. $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$ is connected, and $\alpha \left({h}_{i}\right)\le 0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$;
2. $\alpha$ is a minimal positive imaginary root.

 Proof. We have just observed that (ii) $⇒$ (i). (i) $⇒$ (ii): from the remarks above, it is enough to prove that $\alpha$ is a root. Suppose then that $\alpha$ is not a root. Let $J=\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$, so that say $\alpha =\sum _{j\in J}{k}_{j}{\alpha }_{j}$. Since $\alpha \left({h}_{j}\right)\le 0$ for all $j$, it follows as above that the matrix ${A}_{J}$ is not of finite type. Choose a positive root $\beta \le \alpha$ of maximal height, say $\beta =\sum _{j\in J}{m}_{j}{\alpha }_{j}$, and let $\gamma =\alpha -\beta$, say $\gamma =\sum _{j\in J}{n}_{j}{\alpha }_{j}\phantom{\rule{1em}{0ex}}\left({n}_{j}={k}_{j}-{m}_{j}\right)\text{.}$ First, $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)=J$: for if not, we can choose $j\in \text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)$ and $i\in J-\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)$ such that ${a}_{ij}\ne 0$, because $J$ is connected. We have ${m}_{i}=0,\phantom{\rule{0.2em}{0ex}}{k}_{i}\ge 1$, hence $\beta +{\alpha }_{i}\le \alpha$, so that (by the maximality of $\beta$) $\beta +{\alpha }_{i}$ is not a root, hence by (2.31) $\beta \left({h}_{i}\right)\ge 0$. But $\beta \left({h}_{i}\right)=\sum _{k\in \text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)}{m}_{k}{\alpha }_{k}\left({h}_{i}\right)=\sum _{k}{a}_{ik}{m}_{k}$; the ${m}_{k}$ are $>0$, the ${a}_{ik}$ are $\le 0$ (because $i\ne k$), and at least one (namely ${a}_{ij}$) is $<0$. Hence $\beta \left({h}_{i}\right)<0$, a contradiction. Next, $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\gamma \right)\ne J$: since ${A}_{J}$ is not of finite type, the corresponding root system ${R}_{J}$ is infinite (2.27), hence $\beta +{\alpha }_{i}\in {R}_{J}$ for some $i\in J$ by (2.33). Again by the maximality of $\beta$ it follows that $\beta +{\alpha }_{i}\nleqq \alpha$, hence ${m}_{i}={k}_{i}$ and therefore ${n}_{i}=0$, i.e. $i\notin \text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\gamma \right)$. 
Let $S$ be a connected component of $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\gamma \right)$ and set ${\beta }^{\prime }=\sum _{j\in S}{m}_{j}{\alpha }_{j},\phantom{\rule{0.5em}{0ex}}{\beta }^{\prime \prime }=\sum _{j\in J-S}{m}_{j}{\alpha }_{j}$ (so that $\beta ={\beta }^{\prime }+{\beta }^{\prime \prime }$). Firstly, if $i\in S$ we have ${n}_{i}>0$, i.e. ${k}_{i}>{m}_{i}$, hence $\beta +{\alpha }_{i}\le \alpha$, hence $\beta +{\alpha }_{i}\notin R$ (by the maximality of $\beta$ again), hence by (2.31) $\begin{array}{cc}\beta \left({h}_{i}\right)\ge 0,\phantom{\rule{0.2em}{0ex}}\text{all}\phantom{\rule{0.2em}{0ex}}i\in S\text{.}& \text{(1)}\end{array}$ On the other hand, for all $i\in S$ we have ${\beta }^{\prime \prime }\left({h}_{i}\right)=\sum _{j\notin S}{m}_{j}{a}_{ij}\le 0$, and for some $i\in S$ we have ${\beta }^{\prime \prime }\left({h}_{i}\right)<0$, for otherwise ${a}_{ij}=0$ for all $i\in S$ and all $j\in J-S$, which is impossible since $J$ is connected: $\begin{array}{cc}\begin{array}{ccc}{\beta }^{\prime \prime }\left({h}_{i}\right)\le 0,& & \text{all}\phantom{\rule{0.2em}{0ex}}i\in S\\ {\beta }^{\prime \prime }\left({h}_{i}\right)<0,& & \text{some}\phantom{\rule{0.2em}{0ex}}i\in S\end{array}\right\}& \text{(2)}\end{array}$ From (1) and (2) it follows that $\begin{array}{cc}\begin{array}{ccc}{\beta }^{\prime }\left({h}_{i}\right)\ge 0,& & \text{all}\phantom{\rule{0.2em}{0ex}}i\in S\\ {\beta }^{\prime }\left({h}_{i}\right)>0,& & \text{some}\phantom{\rule{0.2em}{0ex}}i\in S\end{array}\right\}& \text{(3)}\end{array}$ Let ${m}_{S}$ denote the vector ${\left({m}_{j}\right)}_{j\in S}$; then (3) says that ${m}_{S}>0,\phantom{\rule{0.5em}{0ex}}{A}_{S}{m}_{S}\ge 0,\phantom{\rule{0.5em}{0ex}}{A}_{S}{m}_{S}\ne 0,$ from which we conclude (2.17) that the matrix ${A}_{S}$ is of finite type. Now consider ${\gamma }^{\prime }=\sum _{j\in S}{n}_{j}{\alpha }_{j}$. 
Since $S$ is a component of $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\gamma \right)$, we have ${a}_{ij}={\alpha }_{j}\left({h}_{i}\right)=0$ for all $i\in S$ and $j\in \text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\gamma \right)-S$, whence for $i\in S$, ${\gamma }^{\prime }\left({h}_{i}\right)=\gamma \left({h}_{i}\right)=\alpha \left({h}_{i}\right)-\beta \left({h}_{i}\right)\le 0$ by (1): $\begin{array}{cc}{\gamma }^{\prime }\left({h}_{i}\right)\le 0,\phantom{\rule{0.2em}{0ex}}\text{all}\phantom{\rule{0.2em}{0ex}}i\in S& \text{(4)}\end{array}$ Let ${n}_{S}$ denote the vector ${\left({n}_{j}\right)}_{j\in S}$; then (4) says that ${n}_{S}>0,\phantom{\rule{0.5em}{0ex}}{A}_{S}{n}_{S}\le 0$ and hence by (2.17) ${A}_{S}$ is not of finite type. This contradiction completes the proof. $\square$

An immediate corollary of (2.34) is the last part of (2.29): if $\alpha$ is an imaginary root, then so is $r\alpha$ for all integers $r\ne 0$.

For we may assume that $\alpha$ is positive and minimal, and then $r\alpha$ satisfies the conditions of (2.34) for any integer $r\ge 1$.

(2.35) Let $A$ be an indecomposable Cartan matrix.

1. If $A$ is of finite type, there are no imaginary roots.
2. If $A$ is of affine type, the imaginary roots are $m\delta \phantom{\rule{0.2em}{0ex}}\left(m\in ℤ,m\ne 0\right)$, where as before $\delta =\sum _{1}^{n}{a}_{i}{\alpha }_{i}$ (and the ${a}_{i}$ are the labels in Table A).
3. If $A$ is of indefinite type, there exist positive imaginary roots $\alpha =\sum _{1}^{n}{k}_{i}{\alpha }_{i}$ such that ${k}_{i}>0$ and $\alpha \left({h}_{i}\right)<0$ for $1\le i\le n$.

 Proof. (i) was already observed. (ii) We have $\delta \left({h}_{i}\right)=0$ $\left(1\le i\le n\right)$, and $\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\delta \right)$ is connected (all the ${a}_{i}$ are $>0$). Hence by (2.34) $\delta$ is an imaginary root, hence so is $m\delta \phantom{\rule{0.2em}{0ex}}\left(m\in ℤ,m\ne 0\right)$ by (2.29)(iv). Conversely, let $\alpha$ be a minimal positive imaginary root; then $\alpha \left({h}_{i}\right)\le 0$ $\left(1\le i\le n\right)$, hence by (2.17) $\alpha \left({h}_{i}\right)=0$ $\left(1\le i\le n\right)$, whence $\alpha$ is a scalar multiple of $\delta$. (iii) By (2.17) the inequalities $x>0,\phantom{\rule{0.2em}{0ex}}Ax<0$ have a solution; hence the cone $P\cap \left(-K\right)$ has nonempty interior, and therefore contains points of the integer lattice ${ℤ}^{n}$. In other words the inequalities $x>0,\phantom{\rule{0.2em}{0ex}}Ax<0$ have a solution $x\in {ℤ}^{n}$, say $x={\left({k}_{1},\dots ,{k}_{n}\right)}^{t}$. Let $\alpha =\sum {k}_{i}{\alpha }_{i}$; then ${k}_{i}>0$ and $\alpha \left({h}_{i}\right)<0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$, and by (2.34) $\alpha$ is a (minimal) positive imaginary root. $\square$
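Parts (ii) and (iii) can be illustrated with concrete matrices (the examples below are chosen here, not drawn from Table A's full list): for the affine matrix of type ${\stackrel{\sim }{A}}_{2}$ the label vector $\delta$ is annihilated by $A$, while an indefinite matrix admits a strictly negative image vector.

```python
# (ii): for an affine Cartan matrix, A * delta = 0, so delta(h_i) = 0 for all i.
A_affine = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]   # type A~_2, labels a = (1, 1, 1)
delta = [1, 1, 1]
print([sum(A_affine[i][j] * delta[j] for j in range(3)) for i in range(3)])  # [0, 0, 0]

# (iii): for an indefinite matrix, x > 0 with Ax < 0 exists.
A_indef = [[2, -3], [-3, 2]]   # indefinite: a_12 * a_21 = 9 > 4
x = [1, 1]
print([sum(A_indef[i][j] * x[j] for j in range(2)) for i in range(2)])       # [-1, -1]
```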

Let $A$ be a symmetrizable Cartan matrix. The standard bilinear form $⟨\lambda ,\mu ⟩$ on ${𝔥}^{*}$ may clearly be chosen so that the scalar products $⟨{\alpha }_{i},{\alpha }_{j}⟩$ are integers. It follows that the number

$a=\text{min}\phantom{\rule{0.2em}{0ex}}\left\{{\mid \alpha \mid }^{2}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\alpha \in Q,\phantom{\rule{0.2em}{0ex}}{\mid \alpha \mid }^{2}>0\right\}$

exists and is $>0$. (Notation: ${\mid \alpha \mid }^{2}=⟨\alpha ,\alpha ⟩$.)
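As a concrete instance, the minimum $a$ can be found by a small search over the lattice $Q$ for type ${B}_{2}$ (the Gram matrix normalisation below, with $|{\alpha }_{1}{|}^{2}=2$ long and $|{\alpha }_{2}{|}^{2}=1$ short, is chosen here for illustration):

```python
# Compute a = min{ |alpha|^2 : alpha in Q, |alpha|^2 > 0 } for type B_2.
from itertools import product

G = [[2, -1], [-1, 1]]   # Gram matrix: <a1,a1> = 2 (long), <a2,a2> = 1 (short), <a1,a2> = -1

def norm2(m):
    """|alpha|^2 for alpha = m[0]*alpha_1 + m[1]*alpha_2."""
    return sum(G[i][j] * m[i] * m[j] for i in range(2) for j in range(2))

# Positive definite form, so the minimum is attained on a small window.
vals = [norm2(m) for m in product(range(-4, 5), repeat=2) if norm2(m) > 0]
print(min(vals))   # 1 = |alpha_2|^2, the short-root length
```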

(2.36) Let $A$ be an indecomposable Cartan matrix of finite or affine or symmetrizable hyperbolic type. If $\alpha \in Q$ and ${\mid \alpha \mid }^{2}\le a$, then $\alpha \in {Q}^{+}$ or $-\alpha \in {Q}^{+}$.

 Proof. Suppose not; then $\alpha ={\beta }_{1}-{\beta }_{2}$, where ${\beta }_{1},{\beta }_{2}\in {Q}^{+}-\left\{0\right\}$, and the supports ${S}_{1},{S}_{2}$ of ${\beta }_{1},{\beta }_{2}$ respectively are disjoint. We have $\begin{array}{cc}a\ge {\mid {\beta }_{1}-{\beta }_{2}\mid }^{2}={\mid {\beta }_{1}\mid }^{2}+{\mid {\beta }_{2}\mid }^{2}-2⟨{\beta }_{1},{\beta }_{2}⟩\text{.}& \text{(1)}\end{array}$ Suppose first that all components of ${S}_{1}$ and of ${S}_{2}$ are of finite type. Then ${\mid {\beta }_{1}\mid }^{2}$ and ${\mid {\beta }_{2}\mid }^{2}$ are $>0$, hence $\ge a$; also $⟨{\beta }_{1},{\beta }_{2}⟩\le 0$ (because $⟨{\alpha }_{i},{\alpha }_{j}⟩\le 0$ if $i\ne j$). But this contradicts (1). Suppose then that ${S}_{1}$ (say) has a component of affine type. Then this component is the whole of ${S}_{1}$, and ${S}_{2}$ consists of a single vertex $j$, by virtue of (2.18); moreover by connectedness we have $⟨{\alpha }_{i},{\alpha }_{j}⟩<0$ for some $i\in {S}_{1}$, whence $⟨{\beta }_{1},{\beta }_{2}⟩<0$. But this time we have ${\mid {\beta }_{1}\mid }^{2}\ge 0$ and ${\mid {\beta }_{2}\mid }^{2}\ge a$, whence again (1) is contradicted. $\square$

(2.37) Let $A$ be as in (2.36).

1. Let $\alpha \in Q$ be such that ${\mid \alpha \mid }^{2}=a$. Then $\alpha$ is a real root, and hence

$a=\underset{1\le i\le n}{\text{min}}{\mid {\alpha }_{i}\mid }^{2}\phantom{\rule{1em}{0ex}}\text{(short roots)}$

2. Let $b=\underset{1\le i\le n}{\text{max}}{\mid {\alpha }_{i}\mid }^{2}$, and let $\alpha =\sum {m}_{i}{\alpha }_{i}\in Q$ be such that ${\mid \alpha \mid }^{2}=b$. Then $\alpha$ is a root iff ${m}_{i}{\mid {\alpha }_{i}\mid }^{2}/b\in ℤ\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)\phantom{\rule{1em}{0ex}}\text{(long roots)}$
3. Let $\alpha \in Q$, $\alpha \ne 0$. Then $\alpha \in {R}_{\text{im}}$ iff ${\mid \alpha \mid }^{2}\le 0$.
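As an illustration of (ii), a small search (again with a ${B}_{2}$ Gram matrix normalisation chosen here) recovers the long roots: among lattice vectors with $|\alpha {|}^{2}=b$, the criterion ${m}_{i}|{\alpha }_{i}{|}^{2}/b\in ℤ$ singles out exactly $±{\alpha }_{1}$ and $±\left({\alpha }_{1}+2{\alpha }_{2}\right)$.

```python
# Long roots of B_2 via the integrality criterion of (2.37)(ii).
from itertools import product

G = [[2, -1], [-1, 1]]   # Gram matrix for B_2: |alpha_1|^2 = 2, |alpha_2|^2 = 1
b = 2                    # b = max |alpha_i|^2
norms = [2, 1]           # |alpha_1|^2, |alpha_2|^2

def norm2(m):
    return sum(G[i][j] * m[i] * m[j] for i in range(2) for j in range(2))

long_roots = sorted(m for m in product(range(-3, 4), repeat=2)
                    if norm2(m) == b
                    and all(m[i] * norms[i] % b == 0 for i in range(2)))
print(long_roots)   # [(-1, -2), (-1, 0), (1, 0), (1, 2)]: i.e. +-alpha_1, +-(alpha_1 + 2*alpha_2)
```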

 Proof. (i) We have ${\mid w\alpha \mid }^{2}=a$ for all $w\in W$, hence $w\alpha \in {Q}^{+}\cup -{Q}^{+}$ by (2.36). Replacing $\alpha$ by $-\alpha$ if necessary, we may assume that $\alpha \in {Q}^{+}$. Let $\beta$ be of minimal height in $W\alpha \cap {Q}^{+}$, say $\beta =\sum {m}_{i}{\alpha }_{i}$. Then we have $a=⟨\beta ,\beta ⟩=\sum {m}_{i}⟨{\alpha }_{i},\beta ⟩,$ so that $⟨{\alpha }_{i},\beta ⟩>0$ for some index $i$, i.e. $\beta \left({h}_{i}\right)>0$. But then ${w}_{i}\beta =\beta -\beta \left({h}_{i}\right){\alpha }_{i}$ has height less than $\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)$, hence ${w}_{i}\beta \in -{Q}^{+}$ and therefore ${m}_{j}=0$ for $j\ne i$, i.e. $\beta ={m}_{i}{\alpha }_{i}$; but then ${m}_{i}=1$, because $a={\mid \beta \mid }^{2}={m}_{i}^{2}{\mid {\alpha }_{i}\mid }^{2}\ge {m}_{i}^{2}a$. Hence $\beta ={\alpha }_{i}$ and therefore $\alpha$ is a real root, and $a={\mid {\alpha }_{i}\mid }^{2}=\underset{1\le j\le n}{\text{min}}{\mid {\alpha }_{j}\mid }^{2}$. (ii) Suppose $\alpha$ is a long (real) root of $R$. Then ${\alpha }^{\vee }$ is a short real root of ${R}^{\vee }$; but ${\alpha }^{\vee }=2\alpha /{\mid \alpha \mid }^{2}=\sum \frac{{m}_{i}{\mid {\alpha }_{i}\mid }^{2}}{b}·{\alpha }_{i}^{\vee }$, so that ${m}_{i}{\mid {\alpha }_{i}\mid }^{2}/b\in ℤ$ for all $i$. Conversely, if this condition is satisfied, then ${\alpha }^{\vee }\in {Q}^{\vee }$ and ${\mid {\alpha }^{\vee }\mid }^{2}=4/b=a$, hence ${\alpha }^{\vee }\in {R}^{\vee }$ and therefore $\alpha \in R$. (iii) If $\alpha \in {R}_{\text{im}}$, then ${\mid \alpha \mid }^{2}\le 0$ by (2.30). Conversely, suppose $\alpha \in Q-\left\{0\right\}$ and ${\mid \alpha \mid }^{2}\le 0$. By (2.36) we may assume $\alpha \in {Q}^{+}$. Again choose $\beta$ of minimal height in $W\alpha \cap {Q}^{+}$, say $\beta =\sum {m}_{i}{\alpha }_{i}$. This time $\sum {m}_{i}⟨\beta ,{\alpha }_{i}⟩=⟨\beta ,\beta ⟩={\mid \alpha \mid }^{2}\le 0$. 
Suppose $⟨\beta ,{\alpha }_{j}⟩>0$ for some index $j$; then $\text{ht}\phantom{\rule{0.2em}{0ex}}\left({w}_{j}\beta \right)<\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)$, whence ${w}_{j}\beta \in -{Q}^{+}$, which as before implies that $\beta ={m}_{j}{\alpha }_{j}$, whence ${\mid \beta \mid }^{2}>0$, a contradiction. Consequently $⟨\beta ,{\alpha }_{j}⟩\le 0$, i.e. $\beta \left({h}_{j}\right)\le 0$, for $1\le j\le n$. To complete the proof, by (2.34) it is enough to show that $S=\text{Supp}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)$ is connected. If not, let ${S}_{1}$ be a component of $S$, and ${S}_{2}=S-{S}_{1}$, and put ${\beta }_{i}=\sum _{j\in {S}_{i}}{m}_{j}{\alpha }_{j}\phantom{\rule{0.2em}{0ex}}\left(i=1,2\right)$. Then ${S}_{1},{S}_{2}$ are of finite or affine type, whence ${\mid {\beta }_{1}\mid }^{2}\ge 0$ and ${\mid {\beta }_{2}\mid }^{2}\ge 0$; also $⟨{\beta }_{1},{\beta }_{2}⟩=0$ (because ${a}_{ij}=0$ for all $\left(i,j\right)\in {S}_{1}×{S}_{2}$). Hence ${\mid \beta \mid }^{2}={\mid {\beta }_{1}\mid }^{2}+{\mid {\beta }_{2}\mid }^{2}\ge 0$, a contradiction. Hence $\beta$ is a minimal positive imaginary root, hence $\alpha \in {R}_{\text{im}}^{+}$. $\square$

We can now describe the affine root systems explicitly.

Assume that the Dynkin diagram $\Delta$ is not of type ${\stackrel{\sim }{BC}}_{l}\phantom{\rule{0.2em}{0ex}}\left(l\ge 1\right)$. Then there exists an index $i$ such that ${a}_{i}={a}_{i}^{\vee }=1$, by inspection of Table A. Denote this index by 0 and the others by $1,2,\dots ,l$, where $l=\text{rank}\phantom{\rule{0.2em}{0ex}}\left(A\right)=n-1$. Let ${Q}_{0}=\sum _{1}^{l}ℤ{\alpha }_{i},\phantom{\rule{0.5em}{0ex}}{R}_{0}=R\cap {Q}_{0},\phantom{\rule{0.5em}{0ex}}{\Delta }_{0}$ the subdiagram of $\Delta$ obtained by erasing the vertex 0 from $\Delta$. Then ${\Delta }_{0}$ is of finite type and ${R}_{0}$ is a root system with ${\Delta }_{0}$ as its Dynkin diagram, hence finite by (2.27). As before let

$a=\underset{0\le i\le l}{\text{min}}{\mid {\alpha }_{i}\mid }^{2}=\underset{1\le i\le l}{\text{min}}{\mid {\alpha }_{i}\mid }^{2}$
$b=\underset{0\le i\le l}{\text{max}}{\mid {\alpha }_{i}\mid }^{2}=\underset{1\le i\le l}{\text{max}}{\mid {\alpha }_{i}\mid }^{2}$

Then $b/a=1,2$ or 3, and ${\mid \alpha \mid }^{2}=a$ or $b$ for all real roots $\alpha \in R$ (again by inspection of Table A). (This is not true for the excluded case ${\stackrel{\sim }{BC}}_{l}$, where there are roots of 3 lengths.) Let ${R}_{\text{re}}^{\left(s\right)}=\left\{\alpha \in {R}_{\text{re}}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\mid \alpha \mid }^{2}=a\right\},\phantom{\rule{0.2em}{0ex}}{R}_{\text{re}}^{\left(l\right)}=\left\{\alpha \in {R}_{\text{re}}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}{\mid \alpha \mid }^{2}=b\right\}$; likewise ${R}_{0}^{\left(s\right)}$, ${R}_{0}^{\left(l\right)}$ (short roots and long roots). (If $a=b$ then ${R}_{\text{re}}^{\left(s\right)}={R}_{\text{re}}^{\left(l\right)}={R}_{\text{re}}$.) Finally let

$k=b/{\mid {\alpha }_{0}\mid }^{2}=1,2\phantom{\rule{0.2em}{0ex}}\text{or}\phantom{\rule{0.2em}{0ex}}3$

so that $k=1$ if either $\Delta$ is simply-laced or ${\alpha }_{0}$ is a long root.

(2.38) If $R$ is of affine type (Table A) but not of type ${\stackrel{\sim }{BC}}_{l}\phantom{\rule{0.2em}{0ex}}\left(l\ge 1\right)$, then

1. ${R}_{\text{re}}^{\left(s\right)}={R}_{0}^{\left(s\right)}+ℤ\delta$;
2. ${R}_{\text{re}}^{\left(l\right)}={R}_{0}^{\left(l\right)}+ℤk\delta$.

 Proof. Let $\alpha =\sum _{0}^{l}{m}_{i}{\alpha }_{i}\in Q$; then $\beta =\alpha -{m}_{0}\delta \in {Q}_{0}$ (because ${a}_{0}=1$). We have ${\mid \alpha \mid }^{2}={\mid \beta \mid }^{2}$ and $\beta =\sum _{1}^{l}{n}_{i}{\alpha }_{i}$, where ${n}_{i}={m}_{i}-{m}_{0}{a}_{i}\phantom{\rule{1em}{0ex}}\left(1\le i\le l\right)$. $\begin{array}{ccc}\alpha \in {R}_{\text{re}}^{\left(s\right)}& ⇔& {\mid \alpha \mid }^{2}=a\phantom{\rule{0.2em}{0ex}}⇔\phantom{\rule{0.2em}{0ex}}{\mid \beta \mid }^{2}=a\phantom{\rule{0.2em}{0ex}}⇔\phantom{\rule{0.2em}{0ex}}\beta \in {R}_{0}^{\left(s\right)}\phantom{\rule{2em}{0ex}}\text{(2.37)(i)}\end{array}$ $\begin{array}{ccc}\alpha \in {R}_{\text{re}}^{\left(l\right)}& ⇔& {\mid \alpha \mid }^{2}=b\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{m}_{i}{\mid {\alpha }_{i}\mid }^{2}/b\in ℤ\phantom{\rule{0.2em}{0ex}}\left(0\le i\le l\right)\phantom{\rule{2em}{0ex}}\text{(2.37)(ii)}\\ & ⇔& {\mid \beta \mid }^{2}=b,\phantom{\rule{0.2em}{0ex}}{m}_{0}\in kℤ\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}{n}_{i}{\mid {\alpha }_{i}\mid }^{2}/b\in ℤ\phantom{\rule{0.2em}{0ex}}\left(1\le i\le l\right)\\ & ⇔& {m}_{0}\in kℤ\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}\beta \in {R}_{0}^{\left(l\right)}\text{.}\end{array}$ $\square$
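The simplest case of (2.38)(i) is type ${\stackrel{\sim }{A}}_{1}$, where ${R}_{0}=\left\{±{\alpha }_{1}\right\}$ and $\delta ={\alpha }_{0}+{\alpha }_{1}$; the sketch below (Gram matrix normalisation chosen here) checks on a window of the lattice that the vectors with $|\alpha {|}^{2}=a$ are exactly $±{\alpha }_{1}+mℤ\delta$.

```python
# Check (2.38)(i) for type A~_1: real roots = {alpha in Q : |alpha|^2 = 2} = R_0 + Z*delta.
from itertools import product

G = [[2, -2], [-2, 2]]   # Gram matrix of A~_1: |alpha_0|^2 = |alpha_1|^2 = 2, <alpha_0,alpha_1> = -2

def norm2(m):
    return sum(G[i][j] * m[i] * m[j] for i in range(2) for j in range(2))

window = range(-3, 4)
# Vectors of norm a = 2 in the window:
by_norm = sorted(m for m in product(window, repeat=2) if norm2(m) == 2)
# R_0^{(s)} + Z*delta with R_0 = {+-alpha_1}, delta = alpha_0 + alpha_1 = (1, 1),
# so +-alpha_1 + m*delta has coordinates (m, m +- 1):
by_238 = sorted((m, m + s) for m in window for s in (1, -1) if m + s in window)
print(by_norm == by_238)   # True
```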

Finally, if $R$ is of type ${\stackrel{\sim }{BC}}_{l}$ $\left(l\ge 1\right)$, choose ${a}_{0}=1$ (as before, but this time ${a}_{0}^{\vee }=2$); then ${R}_{0}$ is of type ${B}_{l}$, so that explicitly the roots $\beta$ in ${R}_{0}$ are

$±{\epsilon }_{i}\phantom{\rule{0.5em}{0ex}}\left(1\le i\le l\right),\phantom{\rule{0.5em}{0ex}}±{\epsilon }_{i}±{\epsilon }_{j}\phantom{\rule{1em}{0ex}}\left(1\le i<j\le l\right)$

where $⟨{\epsilon }_{i},{\epsilon }_{j}⟩={\delta }_{ij}$, so that ${\mid \beta \mid }^{2}=1$ or 2.

In $R$ we have ${\mid \alpha \mid }^{2}=1,2$ or 4 (for a real root): short, medium and long. One finds

$\begin{array}{ccc}{R}_{\text{re}}^{\left(s\right)}& =& {R}_{0}^{\left(s\right)}+ℤ\delta \\ {R}_{\text{re}}^{\left(m\right)}& =& {R}_{0}^{\left(l\right)}+ℤ\delta \phantom{\rule{2em}{0ex}}\text{(empty if}\phantom{\rule{0.2em}{0ex}}l=1\text{)}\\ {R}_{\text{re}}^{\left(l\right)}& =& 2{R}_{0}^{\left(s\right)}+\left(2ℤ+1\right)\delta =2{R}_{\text{re}}^{\left(s\right)}+\delta \text{.}\end{array}$
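These formulas can be checked in the smallest case $l=1$. With the normalisation chosen here (${\alpha }_{1}={\epsilon }_{1}$, so $|{\alpha }_{1}{|}^{2}=1$, and ${\alpha }_{0}=\delta -2{\epsilon }_{1}$, so $|{\alpha }_{0}{|}^{2}=4$), the short roots $±{\epsilon }_{1}+mδ$ have squared length 1 and the long roots $±2{\epsilon }_{1}+\left(2m+1\right)\delta$ have squared length 4:

```python
# Root lengths in type BC~_1 (i.e. A_2^(2)), with alpha_1 = eps_1 and alpha_0 = delta - 2*eps_1.
G = [[4, -2], [-2, 1]]   # Gram matrix: |alpha_0|^2 = 4, |alpha_1|^2 = 1, <alpha_0,alpha_1> = -2

def norm2(m):
    """|alpha|^2 for alpha = m[0]*alpha_0 + m[1]*alpha_1."""
    return sum(G[i][j] * m[i] * m[j] for i in range(2) for j in range(2))

W = range(-4, 5)
# short roots +-eps_1 + m*delta = m*alpha_0 + (2m +- 1)*alpha_1:
short = {(m, 2 * m + s) for m in W for s in (1, -1)}
# long roots 2*(+-eps_1) + (2m+1)*delta = (2m+1)*alpha_0 + (4m + 2 +- 2)*alpha_1:
long_ = {(2 * m + 1, 4 * m + 2 + 2 * s) for m in W for s in (1, -1)}
print(all(norm2(m) == 1 for m in short))   # True
print(all(norm2(m) == 4 for m in long_))   # True
```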

(2.39) Let $A$ be an indecomposable Cartan matrix, $X\subset 𝔥$ the Tits cone (here $k=ℝ$). Then the closure of $X$ (in the usual topology of $𝔥$) is given by

$h\in \stackrel{‾}{X}⇔\alpha \left(h\right)\ge 0\phantom{\rule{0.2em}{0ex}}\text{for all}\phantom{\rule{0.2em}{0ex}}\alpha \in {R}_{\text{im}}^{+}$.

 Proof. This is clear if $A$ is of finite type, for then $\stackrel{‾}{X}=X=𝔥$ and ${R}_{\text{im}}^{+}=\varnothing$. If $A$ is of affine type we have seen earlier that $h\in X$ iff either $h\in 𝔠$ or $\delta \left(h\right)>0$, so that $h\in \stackrel{‾}{X}$ iff $\delta \left(h\right)\ge 0$; and the positive imaginary roots are positive integer multiples of $\delta$ (2.35)(ii). So assume that $A$ is of indefinite type, and let ${X}^{\prime }=\left\{h\in 𝔥\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\alpha \left(h\right)\ge 0,\phantom{\rule{0.2em}{0ex}}\text{all}\phantom{\rule{0.2em}{0ex}}\alpha \in {R}_{\text{im}}^{+}\right\}\text{.}$ Clearly the fundamental chamber $C\subset {X}^{\prime }$, and ${X}^{\prime }$ is $W$–stable by virtue of (2.29). Hence $wC\subset {X}^{\prime }$ for all $w\in W$, and therefore $X\subset {X}^{\prime }$; moreover ${X}^{\prime }$ is closed, because it is an intersection of closed half-spaces, so that $\stackrel{‾}{X}\subset {X}^{\prime }$. Conversely, let $h\in {X}^{\prime }$ and assume first that ${\alpha }_{i}\left(h\right)\in ℤ\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$. Choose a positive imaginary root $\beta$ such that $\beta \left({h}_{i}\right)<0\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$ (2.35)(iii). To show that $h\in X$ it is enough by (2.13)(iv) to show that there are only finitely many positive real roots $\alpha$ such that $\alpha \left(h\right)<0$, i.e. such that $\alpha \left(h\right)\le -1$. For such an $\alpha$ we have ${w}_{\alpha }\left(\beta \right)=\beta -\beta \left({h}_{\alpha }\right)\alpha =\beta +r\alpha ,$ say, where $r=-\beta \left({h}_{\alpha }\right)\ge \text{ht}\phantom{\rule{0.2em}{0ex}}\left({h}_{\alpha }\right)$ (because $\beta \left({h}_{i}\right)\le -1$). 
Since ${w}_{\alpha }\left(\beta \right)$ is a positive imaginary root and $h\in {X}^{\prime }$, we have $\left(\beta +r\alpha \right)\left(h\right)\ge 0$ and therefore $\beta \left(h\right)\ge -r\alpha \left(h\right)\ge r\ge \text{ht}\phantom{\rule{0.2em}{0ex}}\left({h}_{\alpha }\right)\text{.}$ So ${h}_{\alpha }$ has height $\le \beta \left(h\right)$, and therefore there are only finitely many possibilities for $\alpha$. Hence $h\in X$. By replacing $h$ by a rational scalar multiple of $h$, it follows that $h\in {X}^{\prime },\phantom{\rule{0.2em}{0ex}}{\alpha }_{i}\left(h\right)\in ℚ\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)⇒h\in X$. But these $h$ are dense in ${X}^{\prime }$, hence ${X}^{\prime }\subset \stackrel{‾}{X}$. $\square$

(2.39) has the following geometrical interpretation. Let $Z$ be the positive imaginary cone, i.e. the cone in ${𝔥}^{*}$ generated by the positive imaginary roots: $Z$ is the set of all finite linear combinations $\sum {c}_{i}{\beta }_{i}$ with ${c}_{i}\ge 0$ and ${\beta }_{i}\in {R}_{\text{im}}^{+}$. Then

(2.40) The cones $\stackrel{‾}{X}$ in $𝔥$ and $\stackrel{‾}{Z}$ in ${𝔥}^{*}$ are duals of each other.

When the Cartan matrix is of indefinite type, the only case (to my knowledge) in which the closure $\stackrel{‾}{X}$ of the Tits cone can be explicitly described is that in which $A$ is hyperbolic and symmetrizable.