## Kac-Moody Lie Algebras, Chapter IV: Affine Lie algebras

Last update: 7 October 2012

Abstract.
This is a typed version of I.G. Macdonald's lecture notes on Kac-Moody Lie algebras from 1983.

## Loop algebras

Let $A$ be an indecomposable Cartan matrix of finite type, so that $𝔤\left(A\right)$ is finite-dimensional and simple.

Let $L=k\left[t,{t}^{-1}\right]$ denote the algebra of Laurent polynomials in one variable over $k$.

The loop algebra of $𝔤$ is defined to be

$L(𝔤) = L \otimes_k 𝔤 = \bigoplus_{m\in ℤ} t^m \otimes 𝔤,$

i.e. it is constructed from $𝔤$ by extension of scalars from $k$ to $L\text{.}$ I shall drop the tensor product notation and write ${t}^{m}x$ in place of ${t}^{m}\otimes x$ $\left(m\in ℤ,x\in 𝔤\right)\text{.}$ Then the Lie bracket in $L\left(𝔤\right)$ is defined by

$[t^m x,\, t^n y]_0 = t^{m+n}[x,y] \qquad (1)$

$\left(m,n\in ℤ;x,y\in 𝔤\right)\text{.}$

Recall that $A$ is symmetrizable and hence that $𝔤$ carries an invariant scalar product $⟨x,y⟩\text{.}$ We extend this to $L\left(𝔤\right)$ as follows:

$⟨t^m x,\, t^n y⟩ = \begin{cases} ⟨x,y⟩ & \text{if } m+n=0, \\ 0 & \text{otherwise.} \end{cases} \qquad (2)$

One verifies immediately that this scalar product on $L\left(𝔤\right)$ is still invariant.

Finally we define a derivation $d$ of $L\left(𝔤\right)$ by

$d(t^m x) = m\, t^m x \qquad (m\in ℤ,\ x\in 𝔤) \qquad (3)$

i.e., $d=t\frac{d}{dt}\text{.}$ That $d$ is a derivation is immediate from (1).
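The definitions (1) and (3) can be checked on a small example. The sketch below (not part of the notes) encodes elements of $L(\mathfrak{sl}_2)$ as dicts mapping an exponent $m$ to a $2\times 2$ matrix, standing for sums of terms $t^m x$; all function names are ad hoc for this illustration.

```python
# Loop algebra L(sl_2): elements are dicts {m: 2x2 matrix}, i.e. sums of t^m x.

def bracket(x, y):  # matrix Lie bracket [x, y] = xy - yx
    mul = lambda a, b: [[sum(a[i][r] * b[r][j] for r in range(2))
                         for j in range(2)] for i in range(2)]
    xy, yx = mul(x, y), mul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(2)] for i in range(2)]

def add_term(out, m, z):
    acc = out.setdefault(m, [[0, 0], [0, 0]])
    out[m] = [[acc[i][j] + z[i][j] for j in range(2)] for i in range(2)]

def loop_bracket(xi, eta):  # formula (1): [t^m x, t^n y] = t^{m+n} [x, y]
    out = {}
    for m, x in xi.items():
        for n, y in eta.items():
            add_term(out, m + n, bracket(x, y))
    return out

def d(xi):                  # formula (3): d(t^m x) = m t^m x, i.e. d = t d/dt
    return {m: [[m * v for v in row] for row in x] for m, x in xi.items()}

e, f, h = [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[1, 0], [0, -1]]
xi, eta = {2: e}, {-1: f}
print(loop_bracket(xi, eta))   # -> {1: [[1, 0], [0, -1]]}, i.e. t[e, f] = t h
# Leibniz rule for d: d[xi, eta] = [d xi, eta] + [xi, d eta]
lhs = d(loop_bracket(xi, eta))
rhs = loop_bracket(d(xi), eta)
for m, x in loop_bracket(xi, d(eta)).items():
    add_term(rhs, m, x)
print(lhs == rhs)              # -> True
```

The Leibniz check is exactly the statement that $d$ is a derivation of $L(𝔤)$.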

The next stage is to construct a 1-dimensional central extension of $L\left(𝔤\right)\text{.}$

## Central extensions of Lie algebras

In general let

$0 \longrightarrow 𝔞 \longrightarrow 𝔤_1 \overset{p}{\longrightarrow} 𝔤 \longrightarrow 0$

be an exact sequence of Lie algebras with $𝔞=\text{Ker}(p)$ contained in the centre of $𝔤_1$, i.e. $[𝔞,𝔤_1]=0$. Choose a section $s: 𝔤\to 𝔤_1$, i.e. a $k$-linear map such that $p\circ s=1_𝔤$. Then for $x,y\in 𝔤$

$\psi(x,y) = [sx,sy] - s[x,y] \in 𝔞 \qquad (1)$

(because it is killed by $p\text{);}$ the function $\psi :\phantom{\rule{0.2em}{0ex}}𝔤×𝔤\to 𝔞$ is bilinear, skew-symmetric and satisfies $\delta \psi =0,$ where

$\delta\psi(x,y,z) = \psi([x,y],z) + \psi([y,z],x) + \psi([z,x],y). \qquad (2)$

For we have

$\psi([x,y],z) = [s[x,y],\,sz] - s[[x,y],z] = [[sx,sy],\,sz] - s[[x,y],z]$

by (1) and the centrality of $𝔞;$ now apply the Jacobi identity.

In other words, $\psi$ is a 2-cocycle on $𝔤$ with values in $𝔞$ (with trivial $𝔤\text{-action).}$

Conversely, given a 2-cocycle $\psi :\phantom{\rule{0.2em}{0ex}}𝔤×𝔤\to 𝔞,$ define ${𝔤}_{1}=𝔤×𝔞$ (direct product of vector spaces) with Lie bracket given by

$[(x,a),\,(y,b)] = \bigl([x,y],\ \psi(x,y)\bigr) \qquad (3)$

$\left(x,y\in 𝔤;\phantom{\rule{0.2em}{0ex}}a,b\in 𝔞\right)\text{.}$ Then the Jacobi identity holds in ${𝔤}_{1}$ by virtue of (2): for we have

$\bigl[[(x,a),(y,b)],\,(z,c)\bigr] = \bigl[([x,y],\psi(x,y)),\,(z,c)\bigr] = \bigl([[x,y],z],\ \psi([x,y],z)\bigr)$

so that cyclic summation gives 0 by virtue of (2) and the Jacobi identity in $𝔤\text{.}$ (The motive for the definition (3) is that in the original context we have $\left[sx+a,sy+b\right]=\left[sx,sy\right]=s\left[x,y\right]+\psi \left(x,y\right)\text{.)}$
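The simplest instance of the construction (3) — an example of mine, not in the notes — takes $𝔤 = k^2$ abelian, $𝔞 = k$, and $\psi(x,y) = x_1 y_2 - x_2 y_1$ (here $\delta\psi = 0$ is automatic since all brackets in $𝔤$ vanish); the resulting $𝔤_1$ is the 3-dimensional Heisenberg algebra. The sketch below checks the Jacobi identity in $𝔤_1$ directly:

```python
# Central extension of the abelian algebra k^2 by the cocycle
# psi(x, y) = x1*y2 - x2*y1; elements of g1 are pairs ((x1, x2), a).
from itertools import product

def psi(x, y):
    return x[0] * y[1] - x[1] * y[0]

def bra(u, v):   # bracket (3): [(x,a),(y,b)] = ([x,y], psi(x,y)) with [x,y] = 0
    return ((0, 0), psi(u[0], v[0]))

def jacobi(u, v, w):
    terms = [bra(bra(u, v), w), bra(bra(v, w), u), bra(bra(w, u), v)]
    return (tuple(sum(t[0][i] for t in terms) for i in range(2)),
            sum(t[1] for t in terms))

vecs = [((x1, x2), a) for x1, x2, a in product((-1, 0, 2), repeat=3)]
print(all(jacobi(u, v, w) == ((0, 0), 0)
          for u, v, w in product(vecs, repeat=3)))   # -> True
```

Since the centre is an ideal, every double bracket already lands in the central line, which is why the cyclic sum vanishes term by term here.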

Thus $𝔤_1$ is a Lie algebra, $𝔞$ is a central ideal in $𝔤_1$, and $p$ induces an isomorphism of Lie algebras $𝔤_1/𝔞 \cong 𝔤$ (but $𝔤$ is in general not a subalgebra of $𝔤_1$).

In the present context we define $\psi :\phantom{\rule{0.2em}{0ex}}L\left(𝔤\right)×L\left(𝔤\right)\to k$ by

$\psi(\xi,\eta) = ⟨d\xi,\eta⟩ \qquad (\xi,\eta\in L(𝔤)).$

Explicitly, if $\xi = t^m x$, $\eta = t^n y$ $(m,n\in ℤ;\ x,y\in 𝔤)$ then

$\psi(\xi,\eta) = ⟨m\,t^m x,\ t^n y⟩ = \begin{cases} m⟨x,y⟩ & \text{if } m+n=0, \\ 0 & \text{otherwise,} \end{cases}$

from which it follows that $\psi \left(\eta ,\xi \right)=-\psi \left(\xi ,\eta \right)\text{.}$

Next we verify that $\delta \psi \left(\xi ,\eta ,\zeta \right)=0\text{.}$ By linearity we may assume that $\xi ={t}^{p}x,$ $\eta ={t}^{q}y,$ $\zeta ={t}^{r}z$ $\left(p,q,r\in ℤ;\phantom{\rule{0.2em}{0ex}}x,y,z\in 𝔤\right)\text{.}$ If $p+q+r\ne 0$ then certainly $\delta \psi =0,$ and if $p+q+r=0$ we have

$\delta\psi(\xi,\eta,\zeta) = (p+q)⟨[x,y],z⟩ + (q+r)⟨[y,z],x⟩ + (r+p)⟨[z,x],y⟩ = 2(p+q+r)⟨[x,y],z⟩ = 0,$

since by invariance and symmetry of the form $⟨[x,y],z⟩ = ⟨[y,z],x⟩ = ⟨[z,x],y⟩$, and $p+q+r=0$.
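The cocycle condition can also be confirmed by brute force. The sketch below (mine, not the notes') works in $L(\mathfrak{sl}_2)$ with the invariant form $⟨x,y⟩ = \operatorname{tr}(xy)$ and checks $\delta\psi = 0$ on all triples $t^p x, t^q y, t^r z$ with small exponents:

```python
# Check delta psi = 0 for psi(t^p x, t^q y) = p <x,y> delta_{p+q,0} on sl_2,
# with <x, y> = trace(xy) and [t^p x, t^q y] = t^{p+q} [x, y].
from itertools import product

def mul(a, b):
    return [[sum(a[i][r] * b[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def brk(x, y):
    xy, yx = mul(x, y), mul(y, x)
    return [[xy[i][j] - yx[i][j] for j in range(2)] for i in range(2)]

def form(x, y):                    # invariant scalar product <x, y> = tr(xy)
    m = mul(x, y)
    return m[0][0] + m[1][1]

def psi(p, x, q, y):               # psi(t^p x, t^q y)
    return p * form(x, y) if p + q == 0 else 0

def delta_psi(p, x, q, y, r, z):   # formula (2) for the three loop elements
    return (psi(p + q, brk(x, y), r, z)
            + psi(q + r, brk(y, z), p, x)
            + psi(r + p, brk(z, x), q, y))

e, f, h = [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[1, 0], [0, -1]]
checks = [delta_psi(p, x, q, y, r, z)
          for (p, q, r) in product(range(-2, 3), repeat=3)
          for (x, y, z) in product([e, f, h], repeat=3)]
print(all(c == 0 for c in checks))   # -> True
```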

So we construct a 1-dimensional central extension $\tilde{L}(𝔤)$ of $L(𝔤)$ as follows:

$\tilde{L}(𝔤) = L(𝔤) \oplus kc$

with Lie bracket defined by

$[\xi+\lambda c,\ \eta+\mu c] = [\xi,\eta]_0 + \psi(\xi,\eta)\,c = [\xi,\eta]_0 + ⟨d\xi,\eta⟩\,c$

Notice that $𝔤$ is a subalgebra of $\stackrel{\sim }{L}\left(𝔤\right),$ since $d\xi =0$ for $\xi \in 𝔤\text{.}$

We extend the derivation $d$ to $\stackrel{\sim }{L}\left(𝔤\right)$ by requiring that $dc=0\text{.}$ We ought to verify that $d,$ extended in this way, is still a derivation. On the one hand we have

$d[\xi+\lambda c,\ \eta+\mu c] = d[\xi,\eta]_0$

and on the other hand

$[d(\xi+\lambda c),\ \eta+\mu c] + [\xi+\lambda c,\ d(\eta+\mu c)] = [d\xi,\ \eta+\mu c] + [\xi+\lambda c,\ d\eta] = [d\xi,\eta]_0 + \psi(d\xi,\eta)c + [\xi,d\eta]_0 + \psi(\xi,d\eta)c = d[\xi,\eta]_0 + \bigl(\psi(\xi,d\eta) - \psi(\eta,d\xi)\bigr)c = d[\xi,\eta]_0 + \bigl(⟨d\xi,d\eta⟩ - ⟨d\eta,d\xi⟩\bigr)c = d[\xi,\eta]_0.$

Finally we construct the semidirect product

$\hat{L}(𝔤) = \tilde{L}(𝔤) \rtimes kd = \tilde{L}(𝔤) \oplus kd$

with bracket

$[\xi+\lambda_1 d,\ \eta+\mu_1 d] = [\xi,\eta] + \lambda_1\,d\eta - \mu_1\,d\xi$

$\text{(}\xi ,\eta \in \stackrel{\sim }{L}\left(𝔤\right);\phantom{\rule{0.2em}{0ex}}{\lambda }_{1},{\mu }_{1}\in k\text{).}$ So altogether

$\hat{L}(𝔤) = L(𝔤) \oplus kc \oplus kd$

and

$[\xi+\lambda c+\lambda_1 d,\ \eta+\mu c+\mu_1 d] = [\xi,\eta]_0 + \lambda_1\,d\eta - \mu_1\,d\xi + ⟨d\xi,\eta⟩\,c.$

Our aim is to show that $\stackrel{^}{L}\left(𝔤\right)\cong 𝔤\left({A}^{\left(1\right)}\right),$ where ${A}^{\left(1\right)}$ is an indecomposable Cartan matrix of affine type. To construct ${A}^{\left(1\right)}$ we need the following lemma:

(4.1) The root system $R$ of $𝔤\left(A\right)$ has a unique highest root $\phi$ such that $\phi \ge \alpha$ for all $\alpha \in R$ (i.e. $\phi -\alpha \in {Q}^{+}\text{).}$ We have $\phi =\sum _{i=1}^{\ell }{a}_{i}{\alpha }_{i}$ with each coefficient ${a}_{i}\ge 1;$ $\phi \left({h}_{i}\right)\ge 0$ for all $i,$ and $\phi \left({h}_{i}\right)>0$ for some $i\text{.}$

 Proof. Since $A$ is of finite type, $R$ is finite. Let $\phi=\sum a_i\alpha_i$ be a maximal element of $R$ (with respect to the partial ordering $\ge$). Since $w_i\phi = \phi - \phi(h_i)\alpha_i$ is again a root, maximality forces $\phi(h_i)\ge 0$ for all $i$. If $\phi(h_i)=0$ for all $i$, then $⟨\phi,\alpha_i⟩=0$ for all $i$ and therefore $⟨\phi,\phi⟩=0$, whence $\phi=0$. Hence $\phi(h_i)>0$ for at least one value of $i$. Clearly $\phi\in R^+$ (otherwise $-\phi>\phi$), hence the coefficients $a_i$ are all $\ge 0$. Let $J=\text{supp}(\phi)=\{i : a_i\ne 0\}$. If $J\ne[1,\ell]$ then by connectedness there exist $j\in J$ and $k\notin J$ such that $a_{kj}<0$, whence $\phi(h_k) = \sum_{i\in J} a_i\alpha_i(h_k) = \sum_{i\in J} a_i a_{ki} < 0$ (because all the terms are $\le 0$, and at least one is $<0$). This contradicts $\phi(h_k)\ge 0$; hence $J=[1,\ell]$ and all the $a_i$ are $\ge 1$. Finally, let $\phi'\ne\phi$ be another maximal root. Then from above (with $\phi$ replaced by $\phi'$) we have $⟨\phi',\alpha_j⟩\ge 0$ for all $j$, and $⟨\phi',\alpha_j⟩>0$ for some $j$, whence $⟨\phi',\phi⟩ = \sum a_j⟨\phi',\alpha_j⟩ > 0$ and therefore also $\phi'(h_\phi)>0$. By (2.31) (root strings) $\phi'-\phi\in R$. Hence either $\phi'>\phi$ or $\phi>\phi'$, neither of which is possible, and therefore $\phi$ is unique. $\square$
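Lemma (4.1) can be checked by brute force in a simply-laced example. The sketch below (mine, not the notes') uses type $A_2$ together with the criterion from Chapter II that $\alpha\in Q$ is a root iff $|\alpha|^2 = 2$, where $|\sum m_i\alpha_i|^2 = m^T A m$ in the normalization $⟨\alpha_i,\alpha_j⟩ = a_{ij}$:

```python
# Roots of type A_2 as lattice vectors m with m^T A m = 2; find the highest one.
from itertools import product

A = [[2, -1], [-1, 2]]        # Cartan matrix of type A_2 (simply laced)

def norm2(m):                 # |m_1 alpha_1 + m_2 alpha_2|^2 = m^T A m
    return sum(m[i] * A[i][j] * m[j] for i in range(2) for j in range(2))

roots = [m for m in product(range(-3, 4), repeat=2) if norm2(m) == 2]
geq = lambda a, b: all(x >= y for x, y in zip(a, b))   # a >= b, i.e. a - b in Q^+

highest = [a for a in roots if all(geq(a, b) for b in roots)]
print(len(roots), highest)    # -> 6 [(1, 1)]
```

So $A_2$ has 6 roots and the unique highest root is $\phi = \alpha_1+\alpha_2$, with both coefficients $\ge 1$, as the lemma asserts.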

Now let ${e}_{i},{f}_{i}$ $\left(1\le i\le \ell \right)$ as usual be the generators of $𝔤=𝔤\left(A\right),$ and let $𝔥$ be the Cartan subalgebra. Normalize the scalar product on $𝔤$ so that $⟨\phi ,\phi ⟩=2,$ and choose ${e}_{\phi }\in {𝔤}_{\phi },$ ${f}_{\phi }\in {𝔤}_{-\phi }$ such that

$[e_\phi,\, f_\phi] = h_\phi \qquad (1)$

(or equivalently) such that $⟨{e}_{\phi },{f}_{\phi }⟩=1\text{.}$ (Recall that $\left[{e}_{\phi },{f}_{\phi }\right]=⟨{e}_{\phi },{f}_{\phi }⟩{h}_{\phi }^{\vee }$ and that ${h}_{\phi }^{\vee }={h}_{\phi }$ because $\phi ={\phi }^{\vee },$ by our choice of scalar product.)

Define

$e_0 = t f_\phi, \qquad f_0 = t^{-1} e_\phi, \qquad h_0 = -h_\phi + c \qquad (2)$

and let $\stackrel{^}{𝔥}=𝔥\oplus kc\oplus kd\text{.}$ We extend each root $\alpha \in R$ to a linear form (also denoted by $\alpha \text{)}$ on $\stackrel{^}{𝔥}$ by setting $\alpha \left(c\right)=\alpha \left(d\right)=0;$ also define $\delta \in {\stackrel{^}{𝔥}}^{*}$ by

$\delta\mid_{𝔥\oplus kc} = 0, \qquad \delta(d) = 1 \qquad (3)$

Finally set

$\alpha_0 = \delta - \phi \qquad (4)$

so that

$\sum_{i=0}^{\ell} a_i\alpha_i = \delta \qquad (5)$

where ${a}_{0}=1,$ and ${a}_{1},\dots ,{a}_{\ell }$ are the coefficients of $\phi$ (4.1).

Let

$a_{ij} = \alpha_j(h_i) \qquad (0\le i,j\le\ell) \qquad (6)$

and let ${A}^{\text{(1)}}={\left({a}_{ij}\right)}_{0\le i,j\le \ell}$. The matrix ${A}^{\text{(1)}}$ has $A$ as a principal submatrix.

(4.2) ${A}^{\text{(1)}}$ is an indecomposable Cartan matrix of affine type.

 Proof. We calculate: $\alpha_0(h_0) = (\delta-\phi)(c-h_\phi) = \phi(h_\phi) = 2$; $\alpha_0(h_i) = (\delta-\phi)(h_i) = -\phi(h_i) \le 0$ by (4.1); and $\alpha_j(h_0) = \alpha_j(c-h_\phi) = -\alpha_j(h_\phi) = -⟨\phi,\alpha_j⟩$, which is a positive scalar multiple of $-⟨\phi,\alpha_j^{\vee}⟩ = -\phi(h_j)$, hence also $\le 0$. Hence ${A}^{\text{(1)}}$ is a Cartan matrix, and is indecomposable because $A$ is indecomposable and $a_{i0} = -\phi(h_i) < 0$ for some $i\ne 0$, again by (4.1). Also from (3), (5) and (6) we have $\sum_{j=0}^{\ell} a_{ij}a_j = \sum_{j=0}^{\ell} a_j\alpha_j(h_i) = \delta(h_i) = 0$ for $0\le i\le\ell$, so that by (2.17) ${A}^{\text{(1)}}$ is of affine type. $\square$
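The passage from $A$ to $A^{(1)}$ can be made concrete. The sketch below (mine) builds the bordered matrix for type $A_2$ from $\alpha_0 = \delta - \phi$ and checks that the vector of marks $(a_0,\dots,a_\ell) = (1,a_1,\dots,a_\ell)$ lies in the kernel of $A^{(1)}$, as in the proof above:

```python
# Extended Cartan matrix A^(1) for type A_2 and its kernel vector of marks.
A = [[2, -1], [-1, 2]]    # Cartan matrix of type A_2
a = [1, 1]                # coefficients of the highest root phi = alpha_1 + alpha_2
n = len(A)
# a_{0j} = -<phi, alpha_j> and a_{i0} = -phi(h_i); in the simply-laced
# normalization <alpha_i, alpha_j> = a_{ij}, both equal -sum_i a_i a_{ij}.
row0 = [2] + [-sum(a[i] * A[i][j] for i in range(n)) for j in range(n)]
ext = [row0] + [[-sum(A[i][j] * a[j] for j in range(n))] + A[i] for i in range(n)]
marks = [1] + a
print(ext)   # -> [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
print([sum(ext[i][j] * marks[j] for j in range(n + 1)) for i in range(n + 1)])  # -> [0, 0, 0]
```

The zero row sums against the marks are exactly the affinity criterion (2.17) used in the proof.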

If $A$ is of type $X$ $(= A_n,\dots,G_2$: see table F), then ${A}^{\text{(1)}}$ is of type $\tilde{X}$ (see table A; the integers $a_i$ are the labels attached to the vertices there).

(4.3) Theorem

$\hat{L}(𝔤) \cong 𝔤(A^{(1)}) \ \text{(with } e_i, f_i, \hat{𝔥} \text{ as generators)}, \qquad \tilde{L}(𝔤) \cong 𝔤'(A^{(1)}), \qquad L(𝔤) \cong \overline{𝔤'}(A^{(1)}).$

 Proof. The proof is a sequence of verifications. First, $\bigl(\hat{𝔥}, (h_i)_{0\le i\le\ell}, (\alpha_i)_{0\le i\le\ell}\bigr)$ is a minimal realization of the Cartan matrix ${A}^{(1)}$: we have $\dim\hat{𝔥} = \ell+2 = 2n-\ell$, where $n=\ell+1$ is the number of rows of ${A}^{(1)}$ and $\ell$ is its rank; clearly the $h_i$ are linearly independent in $\hat{𝔥}$, and the $\alpha_i$ are linearly independent in $\hat{𝔥}^{*}$. Next, the $e_i, f_i$ and $\hat{𝔥}$ satisfy the defining relations (1.2). Since $𝔤$ is a subalgebra of $\hat{L}(𝔤)$, we have $[e_i,f_j]=\delta_{ij}h_i$ for $1\le i,j\le\ell$; moreover $[e_0,f_j] = [tf_\phi,\, f_j] = t[f_\phi,f_j] + ⟨tf_\phi,f_j⟩c = 0 \quad (1\le j\le\ell),$ $[e_0,f_0] = [tf_\phi,\, t^{-1}e_\phi] = [f_\phi,e_\phi] + ⟨f_\phi,e_\phi⟩c = -h_\phi + c = h_0;$ next, if $\hat{h}\in\hat{𝔥}$, say $\hat{h}=h+\lambda c+\mu d$ $(h\in 𝔥;\ \lambda,\mu\in k)$, then for $i=1,2,\dots,\ell$ we calculate $[\hat{h},e_i] = [h+\lambda c+\mu d,\ e_i] = [h,e_i] + \mu\,d(e_i) = [h,e_i] = \alpha_i(h)e_i = \alpha_i(\hat{h})e_i;$ $[\hat{h},e_0] = [h+\lambda c+\mu d,\ tf_\phi] = t[h,f_\phi] + \mu\,d(tf_\phi) = -\phi(h)\,tf_\phi + \mu\,tf_\phi = (\mu-\phi(h))\,e_0 = \alpha_0(\hat{h})\,e_0$ (because $\alpha_0(\hat{h}) = (\delta-\phi)(h+\lambda c+\mu d) = \mu - \phi(h)$). Finally, it is clear that $\hat{𝔥}$ is abelian. 
Let $\alpha\in R\cup\{0\}$, $m\in ℤ$. Then $t^m 𝔤_\alpha$ is a weight space for the adjoint action of $\hat{𝔥}$ on $\hat{L}(𝔤)$, with weight $\alpha+m\delta$: for with $\hat{h}$ as above, $[\hat{h},\, t^m x] = [h+\lambda c+\mu d,\ t^m x] = t^m[h,x] + \mu\,d(t^m x) = (\alpha(h)+m\mu)\,t^m x = (\alpha+m\delta)(\hat{h})\,t^m x.$ Thus the $\alpha+m\delta$ $(\alpha\in R\cup\{0\},\ m\in ℤ)$ are the roots of $\hat{L}(𝔤)$ relative to $\hat{𝔥}$. Now let $𝔞$ be an ideal of $\hat{L}(𝔤)$ such that $𝔞\cap\hat{𝔥}=0$. By (1.5), $𝔞$ is the direct sum of its weight spaces $𝔞\cap t^m 𝔤_\alpha$ (where $𝔤_0$ is to be interpreted as $\hat{𝔥}$). Hence if $𝔞\ne 0$ there exist $\alpha\in R\cup\{0\}$, $x\ne 0$ in $𝔤_\alpha$ and $m\in ℤ$ such that $t^m x\in 𝔞$. Choose $y\in 𝔤_{-\alpha}$ such that $⟨x,y⟩=1$; then $[x,y]=h_\alpha^{\vee}\ne 0$ and $z = [t^m x,\ t^{-m}y] = [x,y] + ⟨d(t^m x),\ t^{-m}y⟩c = [x,y] + mc \ne 0$ lies in $𝔞\cap\hat{𝔥}$: contradiction. Hence $\hat{L}(𝔤)$ has no ideals $𝔞\ne 0$ such that $𝔞\cap\hat{𝔥}=0$. To complete the proof, it remains to be shown that $\hat{L}(𝔤)$ is generated by the $e_i, f_i$ $(0\le i\le\ell)$ and $\hat{𝔥}$. Let $L_1$ be the subalgebra generated by these elements. Then certainly $𝔤\subset L_1$; also $tf_\phi = e_0\in L_1$, and since $𝔤$ is simple it is generated as a $𝔤$-module by $f_\phi$; bracketing $e_0 = tf_\phi$ repeatedly with elements of $𝔤\subset L_1$ therefore yields all of $t𝔤$, so $t𝔤\subset L_1$. Now assume that $t^k𝔤\subset L_1$ for some $k\ge 1$. Since $𝔤=[𝔤,𝔤]$ we have $t^{k+1}𝔤 = [t𝔤,\, t^k𝔤]\subset L_1$, and hence $t^k𝔤\subset L_1$ for all $k\ge 0$. In the same way we prove that $t^k𝔤\subset L_1$ for all $k\le 0$, and hence $L_1 = \hat{L}(𝔤)$. $\square$

(4.4) Corollary If $S$ is the root system of $𝔤\left({A}^{\text{(1)}}\right)$ then

$S^{\text{re}} = \{\alpha+m\delta : \alpha\in R,\ m\in ℤ\}, \qquad S^{\text{im}} = \{m\delta : m\in ℤ,\ m\ne 0\};$

each imaginary root $m\delta$ has multiplicity $\ell =\text{rank}\phantom{\rule{0.2em}{0ex}}\left({A}^{\text{(1)}}\right)\text{.}$

The positive real roots are

$\alpha+m\delta \qquad (\alpha\in R,\ m\ge 1;\ \text{or } \alpha\in R^+,\ m=0).$

The bilinear form $⟨\xi ,\eta ⟩$ on $L\left(𝔤\right)$ we extend to $\stackrel{^}{L}\left(𝔤\right)$ as follows:

$⟨c,\, L(𝔤)⟩ = ⟨d,\, L(𝔤)⟩ = 0; \qquad ⟨c,c⟩ = ⟨d,d⟩ = 0; \qquad ⟨c,d⟩ = 1.$

It is still invariant: the only nontrivial case to be checked is that

$⟨[d,\xi],\eta⟩ = ⟨d,[\xi,\eta]⟩ \qquad (\xi,\eta\in L(𝔤))$

which is true because

$⟨[d,\xi],\eta⟩ = ⟨d\xi,\eta⟩$

and

$⟨d,[\xi,\eta]⟩ = ⟨d,\ [\xi,\eta]_0 + ⟨d\xi,\eta⟩c⟩ = ⟨d\xi,\eta⟩.$

Remark. $L\left(𝔤\right)$ is certainly not simple: it has lots of ideals. For example, let $a\in {k}^{*}$ and let ${u}_{a}:\phantom{\rule{0.2em}{0ex}}L\left(𝔤\right)\to 𝔤$ be the homomorphism defined by ${u}_{a}\left({t}^{m}x\right)={a}^{m}x$ $\left(m\in ℤ;\phantom{\rule{0.2em}{0ex}}x\in 𝔤\right)\text{.}$ Then ${u}_{a}$ is a Lie algebra homomorphism, and its kernel is a nontrivial ideal, indeed a maximal ideal.

## Construction of the remaining affine Lie algebras

Let $A={\left({a}_{ij}\right)}_{1\le i,j\le n}$ be an indecomposable symmetric Cartan matrix of finite type, i.e. of type $A,$ $D$ or $E\text{.}$ As usual, let $𝔥,R,Q,W$ denote the Cartan subalgebra, the root system, the root lattice and the Weyl group of the algebra $𝔤\left(A\right)=𝔤\text{.}$ The invariant bilinear form $⟨x,y⟩$ on $𝔤,$ constructed as in (3.12), is such that $⟨{h}_{i},{h}_{j}⟩={a}_{ij}$ and also (on ${𝔥}^{*}\text{)}$ $⟨{\alpha }_{i},{\alpha }_{j}⟩={a}_{ij}\text{.}$

In particular, ${\mid {\alpha }_{i}\mid }^{2}={a}_{ii}=2;$ since the scalar product on ${𝔥}^{*}$ is $W$-invariant and all the roots are real, we have ${\mid \alpha \mid }^{2}=2$ for all roots $\alpha \in R\text{.}$ Conversely, by (2.34), if $\alpha \in Q$ is such that ${\mid \alpha \mid }^{2}=2,$ then $\alpha \in R\text{.}$

Let $\alpha ,\beta \in R\text{.}$ Then (Cauchy-Schwarz)

$|⟨\alpha,\beta⟩| \le |\alpha|\cdot|\beta| = 2$

and since $⟨\alpha ,\beta ⟩$ is an integer, it can therefore take only the values $0,±1,±2\text{.}$

(4.5) We have $⟨\alpha ,\beta ⟩=2,1,0,-1,-2$ respectively if and only if

$\alpha=\beta; \qquad \alpha-\beta\in R; \qquad \alpha\pm\beta\notin R\cup\{0\}; \qquad \alpha+\beta\in R; \qquad \alpha=-\beta$

 Proof. For example, since $|\alpha-\beta|^2 = |\alpha|^2 + |\beta|^2 - 2⟨\alpha,\beta⟩ = 4 - 2⟨\alpha,\beta⟩$, it follows that $⟨\alpha,\beta⟩ = 1 \Leftrightarrow |\alpha-\beta|^2 = 2 \Leftrightarrow \alpha-\beta\in R$, and $⟨\alpha,\beta⟩ = 2 \Leftrightarrow |\alpha-\beta|^2 = 0 \Leftrightarrow \alpha=\beta$. Similarly with $\beta$ replaced by $-\beta$. $\square$
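The characterization (4.5) can be verified exhaustively in a small example. The sketch below (mine, not the notes') runs over all pairs of roots of type $A_2$, with $⟨m,m'⟩ = m^T A m'$ on the root lattice and roots $= \{m : |m|^2 = 2\}$ (in $A_2$ the value $0$ simply never occurs):

```python
# Exhaustive check of (4.5) for type A_2.
from itertools import product

A = [[2, -1], [-1, 2]]
ip = lambda a, b: sum(a[i] * A[i][j] * b[j] for i in range(2) for j in range(2))
roots = [m for m in product(range(-3, 4), repeat=2) if ip(m, m) == 2]
sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
add = lambda a, b: tuple(x + y for x, y in zip(a, b))

for a, b in product(roots, repeat=2):
    v = ip(a, b)
    assert v in (2, 1, 0, -1, -2)
    assert (v == 2) == (a == b)                  # <a,b> =  2  iff  a = b
    assert (v == 1) == (sub(a, b) in roots)      # <a,b> =  1  iff  a - b in R
    assert (v == -1) == (add(a, b) in roots)     # <a,b> = -1  iff  a + b in R
    assert (v == -2) == (a == sub((0, 0), b))    # <a,b> = -2  iff  a = -b
print("ok")
```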

Now let $\Delta$ be the Dynkin diagram of $A,$ and let $s$ be an automorphism of $\Delta \text{.}$ In terms of the matrix $A,$ this means that $s$ is a permutation of $\left\{1,2,\dots ,n\right\}$ such that

$a_{si,\,sj} = a_{ij}$

for all $i,j\text{.}$ Let $k$ be the order of $s\text{.}$ If $k\ne 1$ (i.e. if $s\ne 1\text{)}$ there are just 5 possibilities:

$A_{2\ell}\ (\ell\ge 1),\ k=2; \qquad A_{2\ell-1}\ (\ell\ge 2),\ k=2; \qquad D_{\ell+1}\ (\ell\ge 3),\ k=2; \qquad E_6,\ k=2; \qquad D_4,\ k=3.$

(In each case, vertices of $\Delta$ in the same vertical line are in the same $s$-orbit.) Thus $k=2$ or $3$ in every case.

The graph automorphism $s$ determines an automorphism (also denoted by $s\text{)}$ of period $k$ of the Lie algebra $𝔤=𝔤\left(A\right)$ by the rule

$s(e_i) = e_{si}, \qquad s(f_i) = f_{si}, \qquad s(h_i) = h_{si} \qquad (1\le i\le n)$

This is clear from the construction of $𝔤\left(A\right)$ in Chapter I, since the relations (1.2) are stable under $s\text{.}$

By transposition, $s$ also acts on ${𝔥}^{*}:$ $\left(s\lambda \right)\left(h\right)=\lambda \left({s}^{-1}h\right)\phantom{\rule{1em}{0ex}}\left(h\in 𝔥,\phantom{\rule{0.2em}{0ex}}\lambda \in {𝔥}^{*}\right)$ We have $s{\alpha }_{j}={\alpha }_{sj},$ because

$(s\alpha_j)(h_i) = \alpha_j(s^{-1}h_i) = \alpha_j(h_{s^{-1}i}) = a_{s^{-1}i,\,j} = a_{i,\,sj} = \alpha_{sj}(h_i).$

The scalar product on $𝔤$ (hence on $𝔥$ and ${𝔥}^{*}\text{)}$ is $s$-invariant.

Since $𝔥$ is stable under $s,$ it follows that $s$ permutes the root-spaces ${𝔤}_{\alpha }$ and hence also the roots $\alpha \in R\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}s\left({𝔤}_{\alpha }\right)={𝔤}_{s\alpha }\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}$ and this action agrees with that already described on ${𝔥}^{*},$ because if $x\in {𝔤}_{\alpha }$ and $h\in 𝔥$ we have

$[h,sx] = s[s^{-1}h,\, x] = s\bigl(\alpha(s^{-1}h)\,x\bigr) = (s\alpha)(h)\,sx.$

Moreover, since $s$ permutes the simple roots ${\alpha }_{i},$ it follows that $s$ permutes ${R}^{+}\text{.}$ Hence $\alpha +s\alpha$ is never zero, and $\alpha -s\alpha$ is never a root. This observation, together with (4.5), proves the first part of

(4.6) Let $\alpha \in R$ and assume $\alpha \ne s\alpha \text{.}$ Then

1. $⟨\alpha ,s\alpha ⟩=0$ or $-1;$
2. If $⟨\alpha ,s\alpha ⟩=-1$ (so that $\beta =\alpha +s\alpha \in R\text{)}$ then $k=2$ and $s$ acts as $-1$ on ${𝔤}_{\beta }\text{.}$

 Proof of (ii). If $k=3$ then $⟨\alpha ,{s}^{2}\alpha ⟩=⟨\alpha ,{s}^{-1}\alpha ⟩=⟨s\alpha ,\alpha ⟩=-1,$ and $⟨s\alpha ,{s}^{2}\alpha ⟩=⟨\alpha ,s\alpha ⟩=-1,$ hence ${\mid \alpha +s\alpha +{s}^{2}\alpha \mid }^{2}=6-2·3=0$ and therefore $\alpha +s\alpha +{s}^{2}\alpha =0;$ which is plainly impossible. Hence $k=2\text{.}$ Let ${e}_{\alpha }$ generate ${𝔤}_{\alpha },$ then $s{e}_{\alpha }$ generates ${𝔤}_{s\alpha }$ and $x=\left[{e}_{\alpha },s{e}_{\alpha }\right]$ is a nonzero element of ${𝔤}_{\beta }\text{.}$ Hence $x$ generates ${𝔤}_{\beta },$ and $sx=\left[s{e}_{\alpha },{e}_{\alpha }\right]=-x\text{.}$ $\square$

Let $\omega$ be a primitive $k$th root of unity (assumed to lie in the ground field $K$ if $k=3$). For each integer $r$ define

$𝔤^{(r)} = \{x\in 𝔤 : sx = \omega^r x\}$

so that ${𝔤}^{\left(r\right)}$ is the ${\omega }^{r}$-eigenspace of $s$ in $𝔤,$ and depends only on $r$ and $k\text{.}$ We have

$𝔤 = \bigoplus_{r=0}^{k-1} 𝔤^{(r)}, \qquad (1)$

the decomposition being

$x = \sum_{r=0}^{k-1} x^{(r)}$

where

$x^{(r)} = \frac{1}{k}\sum_{i=0}^{k-1} \omega^{-ir}\, s^i x.$

Also

$[𝔤^{(p)},\, 𝔤^{(q)}] \subset 𝔤^{(p+q)} \qquad (2)$

for all $p,q\in ℤ,$ so that (1) is a $ℤ/kℤ$-grading of $𝔤\text{.}$
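The projection formula for $x^{(r)}$ works for any automorphism of finite order. The sketch below (mine) uses a stand-in: an order-3 cyclic shift of coordinates on $ℂ^3$ in place of $s$, and checks that $s$ acts as $\omega^r$ on the $r$-th component and that the components sum back to $x$:

```python
# Eigenspace projections x^(r) = (1/k) sum_i w^{-ir} s^i x for an order-k map s.
import cmath

k = 3
w = cmath.exp(2j * cmath.pi / k)        # primitive cube root of unity

def s(x):                               # stand-in automorphism of order 3:
    return [x[2], x[0], x[1]]           # cyclic shift of coordinates

def proj(x, r):
    out, y = [0, 0, 0], list(x)
    for i in range(k):                  # y runs through s^0 x, s^1 x, s^2 x
        out = [out[j] + w ** (-i * r) * y[j] for j in range(3)]
        y = s(y)
    return [c / k for c in out]

x = [1.0, 2.0, 3.0]
parts = [proj(x, r) for r in range(k)]
ok1 = all(abs(s(parts[r])[j] - w ** r * parts[r][j]) < 1e-9
          for r in range(k) for j in range(3))      # s x^(r) = w^r x^(r)
ok2 = all(abs(sum(parts[r][j] for r in range(k)) - x[j]) < 1e-9
          for j in range(3))                        # sum_r x^(r) = x
print(ok1, ok2)                         # -> True True
```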

In particular, ${𝔤}^{\left(0\right)}$ is the Lie algebra of $s$-invariants of $𝔤,$ and each ${𝔤}^{\left(r\right)}$ is a ${𝔤}^{\left(0\right)}$-module under the adjoint action.

Next we have

(4.7) The restriction of the bilinear form $⟨x,y⟩$ to ${𝔤}^{\left(p\right)}×{𝔤}^{\left(q\right)}$ is

1. zero if $p+q\not\equiv 0\phantom{\rule{0.2em}{0ex}}\text{(mod}\phantom{\rule{0.2em}{0ex}}k\text{)}$
2. nondegenerate if $p+q\equiv 0\phantom{\rule{0.2em}{0ex}}\text{(mod}\phantom{\rule{0.2em}{0ex}}k\text{).}$

 Proof. Let $x\in 𝔤^{(p)},\ y\in 𝔤^{(q)}$. Then $⟨x,y⟩ = ⟨sx,sy⟩ = \omega^{p+q}⟨x,y⟩,$ which proves (a); then (b) follows because $⟨x,y⟩$ is nondegenerate on $𝔤$. $\square$

Now let

$L(𝔤,s) = \sum_{r\in ℤ} t^r 𝔤^{(r)} \subset L(𝔤), \qquad \tilde{L}(𝔤,s) = L(𝔤,s)\oplus Kc \subset \tilde{L}(𝔤), \qquad \hat{L}(𝔤,s) = \tilde{L}(𝔤,s)\oplus Kd \subset \hat{L}(𝔤)$

It follows from (2) that $L(𝔤,s)$ is a subalgebra of $L(𝔤)$, and then that $\tilde{L}(𝔤,s)$ (resp. $\hat{L}(𝔤,s)$) is a subalgebra of $\tilde{L}(𝔤)$ (resp. $\hat{L}(𝔤)$).

Our aim is now to show that $\stackrel{^}{L}\left(𝔤,s\right)\cong 𝔤\left({A}^{\left(k\right)}\right)$ where ${A}^{\left(k\right)}$ is an indecomposable Cartan matrix of affine type, to be defined presently.

Let ${\Delta }_{i}$ $\left(1\le i\le \ell \right)$ be the orbits of $s$ in $\Delta ,$ and number the vertices of $\Delta$ so that $i\in {\Delta }_{i}\text{.}$ With one exception (case ${A}_{2\ell }\text{)}$ ${\Delta }_{i}$ is discrete (no joining edges). In the exceptional case, ${\Delta }_{i}$ is connected (of type ${A}_{2}\text{).}$ Define

$u_i = \begin{cases} 1 & \text{if } \Delta_i \text{ is discrete}, \\ 2 & \text{if } \Delta_i \text{ is connected}, \end{cases}$

and put

$u = \max_{1\le i\le\ell} u_i$

(so that $u=1$ except in case ${A}_{2\ell },$ where $u=2\text{.)}$

Let

$E_i = u_i^{1/2}\sum_{j\in\Delta_i} e_j, \qquad F_i = u_i^{1/2}\sum_{j\in\Delta_i} f_j, \qquad H_i = u_i\sum_{j\in\Delta_i} h_j$

for $1\le i\le \ell \text{.}$ These elements are all fixed by $s,$ hence they generate a subalgebra $\stackrel{‾}{𝔤}$ of ${𝔤}^{\left(0\right)}\text{.}$ Let $\stackrel{‾}{𝔥}$ be the subspace of $𝔥$ spanned by the ${H}_{i},$ and note that $\stackrel{‾}{𝔥}={𝔥}^{\left(0\right)}=\left\{h\in 𝔥\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}sh=h\right\}\text{.}$

Next define

$\bar{a}_{ij} = u_i\sum_{p\in\Delta_i} a_{pj} \qquad (1\le i,j\le\ell) \qquad (4)$

and let $\bar{A} = (\bar{a}_{ij})_{1\le i,j\le\ell}$.

We have then

$⟨H_i,H_j⟩ = u_i u_j \sum_{p\in\Delta_i,\ q\in\Delta_j} a_{pq} = u_j|\Delta_j|\,\bar{a}_{ij}.$
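The folding recipe $\bar{a}_{ij} = u_i\sum_{p\in\Delta_i} a_{pj}$ can be run mechanically. The sketch below (mine, not the notes') folds type $A_3$ (the case $A_{2\ell-1}$, $\ell=2$) under the automorphism swapping the two end nodes; the result is a Cartan matrix of type $C_2$, matching the table in (4.8):

```python
# Fold the A_3 Cartan matrix along the diagram automorphism 1 <-> 3.
A = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]   # type A_3
orbits = [[0, 2], [1]]                      # s swaps end nodes; middle is fixed
u = [1, 1]                                  # both orbits are discrete here
reps = [orbit[0] for orbit in orbits]       # a_{pj} is orbit-independent in j
Abar = [[u[i] * sum(A[p][reps[j]] for p in orbits[i])
         for j in range(len(orbits))]
        for i in range(len(orbits))]
print(Abar)   # -> [[2, -2], [-1, 2]], a Cartan matrix of type C_2
```

(The column entry may be computed at any representative of $\Delta_j$, since $a_{sp,\,sj} = a_{pj}$ makes the sum independent of the choice.)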

(4.8)

1. $\stackrel{‾}{A}$ is an indecomposable Cartan matrix of finite type, given by the following table:

   | $A$ | $A_2$ | $A_{2\ell}\ (\ell\ge 2)$ | $A_{2\ell-1}\ (\ell\ge 2)$ | $D_{\ell+1}\ (\ell\ge 3)$ | $E_6$ | $D_4$ |
   |---|---|---|---|---|---|---|
   | $k$ | $2$ | $2$ | $2$ | $2$ | $2$ | $3$ |
   | $\bar{A}$ | $A_1$ | $B_\ell$ | $C_\ell$ | $B_\ell$ | $F_4$ | $G_2$ |
2. $\stackrel{‾}{𝔤}\cong 𝔤\left(\stackrel{‾}{A}\right)\text{.}$

 Proof. It is straightforward to verify from the definition (4) that $\bar{A}$ is a Cartan matrix, and that $|\bar{a}_{ij}|\le 3$. Let $\Delta, \bar{\Delta}$ be the Dynkin diagrams of $A$ and $\bar{A}$. Then $\bar{\Delta}$ is obtained from $\Delta$ by collapsing each $s$-orbit of vertices to a single vertex, according to rules which are a restatement of (4). [Diagrams of the five replacement rules omitted; in the left-hand column of each, vertices in the same vertical line are in the same $s$-orbit.] Hence the type of $\bar{A}$ is as stated in the table above. It is straightforward to verify that the generators $E_i, F_i, H_i$ of $\bar{𝔤}$ satisfy the relations $(1.2')$ for the matrix $\bar{A}$. To complete the proof, it will be enough to verify that they satisfy Serre's relations $(\text{ad}\,E_i)^{1-\bar{a}_{ij}} E_j = (\text{ad}\,F_i)^{1-\bar{a}_{ij}} F_j = 0 \qquad (i\ne j). \qquad (✶)$ For it will then follow from (2.???) that $\bar{𝔤}$ is a homomorphic image of $𝔤(\bar{A})$; but $𝔤(\bar{A})$ is simple, hence $\bar{𝔤}\cong 𝔤(\bar{A})$. To prove $(✶)$, there are two cases to consider, according as the orbit $\Delta_i\subset\Delta$ is discrete or connected. Suppose $\Delta_i$ discrete. If it consists of the single element $i$, then we have $\bar{a}_{ij} = a_{ij}$, $E_i = e_i$, and $(✶)$ follows from the corresponding relation for $𝔤$. If $\Delta_i$ consists of $k$ elements, then for $p\ne q$ in $\Delta_i$ we have $⟨\alpha_p,\alpha_q⟩=0$ and therefore by (4.5) $\alpha_p+\alpha_q$ is not a root, so that $[e_p,e_q]=0$ and hence $\text{ad}\,e_p$, $\text{ad}\,e_q$ commute. 
Hence $(\text{ad}\,E_i)^{1-\bar{a}_{ij}} E_j = \Bigl(\sum_{p\in\Delta_i} \text{ad}\,e_p\Bigr)^{1-\bar{a}_{ij}}\Bigl(\sum_{q\in\Delta_j} e_q\Bigr)$ (up to a nonzero scalar factor) is a sum of terms $\prod_{p\in\Delta_i}(\text{ad}\,e_p)^{n_p}\, e_q$, where $q\in\Delta_j$ and $\sum_{p\in\Delta_i} n_p = 1-\bar{a}_{ij} = 1 - \sum_{p\in\Delta_i} a_{pq},$ so that $n_p \ge 1-a_{pq}$ for at least one $p\in\Delta_i$, and therefore $(\text{ad}\,e_p)^{n_p} e_q = 0$. It follows that $(\text{ad}\,E_i)^{1-\bar{a}_{ij}} E_j = 0$, and likewise with the $E$'s replaced by the $F$'s. Suppose $\Delta_i$ connected. Then $k=2$ and $\Delta_i = \{i, si\}$. Since $\Delta$ contains no cycles, at least one of $a_{ij}$, $a_{si,j}$ is zero. If both are zero, then $[E_i, e_j] = 0$ and therefore $[E_i,E_j] = 0$. If say $a_{ij}=0$, $a_{si,j}=-1$, then $\bar{a}_{ij} = -2$, and we have to show that $(\text{ad}\,e_i + \text{ad}\,e_{si})^3 e_j = 0$. Let $x = \text{ad}\,e_i$, $y = \text{ad}\,e_{si}$. Then $xe_j = y^2 e_j = 0$; moreover $z = [x,y] = \text{ad}\,[e_i,e_{si}]$ commutes with $x$ and $y$ (because $2\alpha_i+\alpha_{si}$ and $\alpha_i+2\alpha_{si}$ are not roots), hence $x^2 y\, e_j = xz\, e_j = zx\, e_j = 0$ and $yxy\, e_j = yz\, e_j = zy\, e_j = -yxy\, e_j$, so that we have altogether $xe_j = y^2 e_j = x^2 y\, e_j = yxy\, e_j = 0$ and therefore $(x+y)^3 e_j = (x+y)^2 y\, e_j = (x+y)\,xy\, e_j = 0.$ $\square$

$\stackrel{‾}{𝔥}$ is a Cartan subalgebra of $\stackrel{‾}{𝔤}\text{.}$ Let $p:\phantom{\rule{0.2em}{0ex}}{𝔥}^{*}\to {\stackrel{‾}{𝔥}}^{*}$ be the restriction map, and let ${\stackrel{‾}{\alpha }}_{i}=p\left({\alpha }_{i}\right)$ $\left(1\le i\le \ell \right)\text{.}$ Then

$\bar{\alpha}_j(H_i) = \alpha_j\Bigl(u_i\sum_{p\in\Delta_i} h_p\Bigr) = \bar{a}_{ij}$

so that the $\bar{\alpha}_i$ are the simple roots of $\bar{𝔤}$ (relative to $\bar{𝔥}$). If $\alpha\in R$, say $\alpha = \sum_{i=1}^{n} m_i\alpha_i$, then $p(\alpha) = \sum_{i=1}^{\ell}\bigl(\sum_{j\in\Delta_i} m_j\bigr)\bar{\alpha}_i \in \bar{Q}$; in particular $p(\alpha)\ne 0$, since the coefficients $m_j$ all have the same sign and are not all zero.

Let $\stackrel{‾}{R}$ be the root system of $\stackrel{‾}{𝔤}$ and let $\stackrel{‾}{Q}=\sum _{1}^{\ell }ℤ{\stackrel{‾}{\alpha }}_{i}$ be the root lattice.

We have $⟨{H}_{i},{H}_{j}⟩={u}_{j}\mid {\Delta }_{j}\mid {\stackrel{‾}{a}}_{ij},$ from which it follows that

$⟨\bar{\alpha}_i,\, \bar{\alpha}_j⟩ = (u_i|\Delta_i|)^{-1}\,\bar{a}_{ij} = |\Delta_i|^{-1}\sum_{k\in\Delta_i}⟨\alpha_k,\alpha_j⟩ = ⟨\pi\alpha_i,\ \alpha_j⟩$

where $\pi =\frac{1}{k}\sum _{i=0}^{k-1}{s}^{i}:\phantom{\rule{0.2em}{0ex}}{𝔥}^{*}\to {𝔥}^{*}\text{.}$ Hence, by linearity, we have

$⟨p(\lambda),\, p(\mu)⟩ = ⟨\pi(\lambda),\, \mu⟩$

for all $\lambda ,\mu \in {𝔥}^{*}\text{.}$

In particular

$|\bar{\alpha}_i|^2 = 2\,(u_i|\Delta_i|)^{-1}$

and therefore $|\bar{\alpha}|^2 = 2u^{-1}$ or $2(ku)^{-1}$ for all $\bar{\alpha}\in\bar{R}$.

(4.9) Let $\alpha ,\beta \in R$ be such that $p\left(\alpha \right)=p\left(\beta \right)\text{.}$ Then $\alpha ,\beta$ are in the same $s$-orbit in $R\text{.}$

 Proof. Suppose not, i.e. $\beta\ne s^i\alpha$ for $0\le i\le k-1$. Since $p(\beta - s^i\alpha) = p(\beta-\alpha) = 0$, it follows that $\beta - s^i\alpha\notin R$, hence by (4.5) $⟨\beta,\, s^i\alpha⟩\le 0$. But then $|p(\alpha)|^2 = ⟨p(\alpha),\, p(\beta)⟩ = ⟨\pi\alpha,\,\beta⟩ = \frac{1}{k}\sum_{i=0}^{k-1}⟨s^i\alpha,\beta⟩ \le 0,$ so that $p(\alpha) = 0$, which is impossible. $\square$

Since $\bar{𝔤}$ is a subalgebra of $𝔤^{(0)}$, each $𝔤^{(r)}$ (and $𝔤$) is a $\bar{𝔤}$-module. Let $S^{(r)}$ (resp. $S$) be the set of nonzero weights of $𝔤^{(r)}$ (resp. $𝔤$) as $\bar{𝔤}$-module (or $\bar{𝔥}$-module).

Clearly

$\bar{R}\subset S^{(0)}; \qquad S = \bigcup_{r=0}^{k-1} S^{(r)};$

also

$p(R) = S \subset \bar{Q}$

because $R$ is the set of weights of $𝔤$ as $𝔥$-module. By (4.9) the fibres of $p: R\to S$ are the orbits of $s$ in $R$, hence have 1 or $k$ elements.

1. If $\alpha =s\alpha$ then $s\left({𝔤}_{\alpha }\right)={𝔤}_{\alpha },$ hence $s{e}_{\alpha }={\omega }^{r}{e}_{\alpha }$ for some $r=0,\dots ,k-1,$ and correspondingly ${e}_{\alpha }\in {𝔤}^{\left(r\right)},$ hence $p\left(\alpha \right)\in {s}^{\left(r\right)}$ with multiplicity 1, for this value of $r\text{.}$
2. If $\alpha \ne s\alpha$ then ${e}_{\alpha },s{e}_{\alpha },\dots ,{s}^{k-1}{e}_{\alpha }$ are linearly independent, hence ${e}_{\alpha }^{\left(r\right)}\ne 0$ for each $r=0,1,\dots ,k-1\text{.}$ It follows that $p\left(\alpha \right)\in {S}^{\left(r\right)}$ with multiplicity 1, for each $r=0,1,\dots ,k-1\text{.}$

Thus all nonzero weights of each ${𝔤}^{\left(r\right)}$ occur with multiplicity 1.

For $\alpha \in R$ there are three (mutually exclusive) possibilities:

1. $\alpha =s\alpha ;$
2. $\alpha \ne s\alpha ;$ $⟨\alpha ,s\alpha ⟩=0;$
3. $\alpha \ne s\alpha ;$ $⟨\alpha ,s\alpha ⟩=-1\text{.}$

 Proof. We have $|p(\alpha)|^2 = ⟨\alpha,\pi\alpha⟩ = \begin{cases} ⟨\alpha,\alpha⟩ = 2 & \text{in case (i)}, \\ \frac{1}{k}⟨\alpha,\alpha⟩ = \frac{2}{k} & \text{in case (ii)}, \\ \frac{1}{2}\bigl(⟨\alpha,\alpha⟩ + ⟨\alpha,s\alpha⟩\bigr) = \frac{1}{2} & \text{in case (iii)} \end{cases}$ (by (4.6), since $k=2$ in case (iii)). Suppose first that $u=1$. Then case (iii) does not occur: for by (2.34) we have $\min\{|\lambda|^2 : \lambda\in\bar{Q},\ \lambda\ne 0\} = \frac{2}{k}$ and $p(\alpha)\in\bar{Q}$. Again by (2.34), if $|p(\alpha)|^2 = \frac{2}{k}$ (case (ii)) then $p(\alpha)\in\bar{R}$; whilst if $|p(\alpha)|^2 = 2$, i.e. if $\alpha = s\alpha$, then if $\alpha = \sum_{i=1}^{n} m_i\alpha_i$ we have $m_i = m_{si}$ and therefore $p(\alpha) = \sum_{i=1}^{\ell} |\Delta_i|\, m_i\, \bar{\alpha}_i$, whence again by (2.34) we have $p(\alpha)\in\bar{R}$. So if $u=1$ we have $S = \bar{R}$ and therefore (since $\bar{R}\subset S^{(0)}\subset S$) $S^{(0)} = \bar{R}, \qquad S^{(r)} = \bar{R}_{\text{short}} \quad (1\le r\le k-1),$ where $\bar{R}_{\text{short}}$ is the set of short roots $\bar{\alpha}\in\bar{R}$ (i.e. with $|\bar{\alpha}|^2 = \frac{2}{k}$). There remains the case $A_{2\ell}^{(2)}$, where $u=2$. In this case $|\bar{\alpha}|^2 = 1$ or $\frac{1}{2}$ for $\bar{\alpha}\in\bar{R}$, and we find by direct calculation that $S^{(0)} = \bar{R}, \qquad S^{(1)} = \bar{R}\cup 2\bar{R}_{\text{short}}.$ Thus in all cases $\begin{array}{|c|}\hline 𝔤^{(0)}=\bar{𝔤}.\\ \hline\end{array}$ Now let $\bar{\psi}$ be the highest short root of $\bar{R}$ (i.e., $\bar{\psi}^{\vee}$ is the highest root of $\bar{R}^{\vee}$), and put $\bar{\phi} = u\bar{\psi}$. Then $\bar{\phi}$ is the highest weight of $𝔤^{(r)}$ $(1\le r\le k-1)$, and is the only weight $\lambda$ of $𝔤^{(r)}$ such that $\lambda + \bar{\alpha}_i\notin S^{(r)}$ for $i=1,\dots,\ell$. It follows that $𝔤^{(r)}$ $(1\le r\le k-1)$ is simple as a $\bar{𝔤}$-module. For in any case, by complete reducibility, $𝔤^{(r)}$ is a direct sum of simple $\bar{𝔤}$-modules. Let $M$ be one of these, with highest weight $\lambda\in S^{(r)}$. One sees easily that $\lambda\ne 0$, hence $M_\lambda = 𝔤^{(r)}_\lambda$ (because this space is 1-dimensional). But then $[E_i,\, 𝔤^{(r)}_\lambda] = [E_i,\, M_\lambda] = 0$ for $1\le i\le\ell$, and therefore (Lemma) $\lambda + \bar{\alpha}_i\notin S^{(r)}$, and consequently $\lambda = \bar{\phi}$. $\square$

(Alternatively: instead of using complete reducibility, use the dimension formula to compute $\dim L(\bar{\phi})$ in each case.)

We now proceed as in the case considered previously.

Normalize the scalar product on $\stackrel{‾}{𝔤}$ so that we have

$⟨\bar{\phi},\,\bar{\phi}⟩ = 2$

(no renormalization in case $A_{2\ell}^{(2)}$, because then $|\bar{\psi}|^2 = \frac{1}{2}$, $u=2$, hence $|\bar{\phi}|^2 = 2$). Choose $E_{\bar{\phi}}\in 𝔤^{(-1)}_{\bar{\phi}}$, $F_{\bar{\phi}}\in 𝔤^{(1)}_{-\bar{\phi}}$ such that

$⟨E_{\bar{\phi}},\, F_{\bar{\phi}}⟩ = 1$

(this is possible by (4.7)). Then we have

$[E_{\bar{\phi}},\, F_{\bar{\phi}}] = H_{\bar{\phi}}$

(where ${H}_{\stackrel{‾}{\phi }}$ is the image of $\stackrel{‾}{\phi }$ in $\stackrel{‾}{𝔥}\text{),}$ because if $H\in \stackrel{‾}{𝔥}$ we have

$⟨H,\, [E_{\bar{\phi}}, F_{\bar{\phi}}]⟩ = ⟨[H, E_{\bar{\phi}}],\, F_{\bar{\phi}}⟩ = \bar{\phi}(H)\,⟨E_{\bar{\phi}},\, F_{\bar{\phi}}⟩ = \bar{\phi}(H).$

Then define

$E_0 = tF_{\bar{\phi}}\in t\,𝔤^{(1)}, \qquad F_0 = t^{-1}E_{\bar{\phi}}\in t^{-1}𝔤^{(-1)}, \qquad H_0 = -H_{\bar{\phi}} + c$

and set $\stackrel{\Delta }{𝔥}=\stackrel{‾}{𝔥}\oplus kc\oplus kd\text{.}$ Extend each $\stackrel{‾}{\alpha }\in S$ to a linear form (also denoted by $\stackrel{‾}{\alpha }\text{)}$ on $\stackrel{\Delta }{𝔥}$ by setting $\stackrel{‾}{\alpha }\left(c\right)=\stackrel{‾}{\alpha }\left(d\right)=0;$ also define $\stackrel{‾}{\delta }\in {\stackrel{\Delta }{𝔥}}^{*}$ by

$\bar{\delta}\mid_{\bar{𝔥}\oplus kc} = 0, \qquad \bar{\delta}(d) = 1 \qquad (\bar{\delta} = \text{restriction of } \delta \text{ to } \stackrel{\Delta}{𝔥})$

Finally set

$\bar{\alpha}_0 = \bar{\delta} - \bar{\phi};$

we have

$\bar{\phi} = \sum_{i=1}^{\ell} \bar{a}_i\,\bar{\alpha}_i$

with coefficients ${\stackrel{‾}{a}}_{i}\ge 1,$ so that if we define ${\stackrel{‾}{a}}_{0}=1$ we have

$\sum_{i=0}^{\ell} \bar{a}_i\,\bar{\alpha}_i = \bar{\delta}$

Let

$\bar{a}_{ij} = \bar{\alpha}_j(H_i) \qquad (0\le i,j\le\ell)$

and let ${A}^{\left(k\right)}={\left({\stackrel{‾}{a}}_{ij}\right)}_{0\le i,j\le \ell }\text{.}$ The matrix ${A}^{\left(k\right)}$ has $\stackrel{‾}{A}$ as a principal submatrix. One then verifies that

1. ${A}^{\left(k\right)}$ is an indecomposable Cartan matrix of affine type;
2. $\bigl(\stackrel{\Delta}{𝔥}, (H_i)_{0\le i\le\ell}, (\bar{\alpha}_i)_{0\le i\le\ell}\bigr)$ is a minimal realization of $A^{(k)}$;
3. the ${E}_{i},{F}_{i},\stackrel{\Delta }{𝔥}$ satisfy the defining relations (1.2)
4. the weights of $\stackrel{\Delta}{𝔥}$ in $\hat{L}(𝔤,s)$ are $\bar{\alpha}+r\bar{\delta}$ where $\bar{\alpha}\in S^{(r)}$, $r\in ℤ$, and $r\bar{\delta}$ with multiplicity $m_{r\bar{\delta}} = \dim 𝔥^{(r)} =$ the multiplicity of $\omega^r$ as an eigenvalue of $s$ on $𝔥$. Thus $m_{r\bar{\delta}} = \begin{cases} \ell & \text{if } r\equiv 0 \pmod{k}, \\ \frac{n-\ell}{k-1} & \text{otherwise} \end{cases}$ (because if the latter multiplicity is $\ell'$ then $\ell + (k-1)\ell' = n$);
5. $\begin{array}{ccc}\stackrel{^}{L}\left(𝔤,s\right)& \cong & 𝔤\left({A}^{\left(k\right)}\right)\\ \stackrel{\sim }{L}\left(𝔤,s\right)& \cong & {𝔤}^{\prime }\left({A}^{\left(k\right)}\right)\\ L\left(𝔤,s\right)& \cong & {\stackrel{‾}{𝔤}}^{\prime }\left({A}^{\left(k\right)}\right)\end{array}$