## Kac-Moody Lie Algebras: Chapter I

Last update: 16 August 2012

Abstract.
This is a typed version of I.G. Macdonald's lecture notes on Kac-Moody Lie algebras from 1983.

## Construction of the algebras

The construction of the Kac-Moody algebras works for any $n×n$ matrix over $k$:

$A={\left({a}_{ij}\right)}_{1\le i,j\le n}$

– the ${a}_{ij}$ needn't even be integers.

We begin with a lemma of linear algebra. Given a matrix $A$ as above, a realization of $A$ is a triple $\left(𝔥,B,{B}^{\vee }\right)$ where:

$\begin{array}{c}𝔥\phantom{\rule{0.2em}{0ex}}\text{is a finite-dimensional vector space over}\phantom{\rule{0.2em}{0ex}}k;\\ B=\left({\alpha }_{1},\dots ,{\alpha }_{n}\right)\phantom{\rule{0.2em}{0ex}}\text{is a linearly independent set of vectors in}\phantom{\rule{0.2em}{0ex}}{𝔥}^{*}=\text{dual of}\phantom{\rule{0.2em}{0ex}}𝔥;\\ {B}^{\vee }=\left({h}_{1},\dots ,{h}_{n}\right)\phantom{\rule{0.2em}{0ex}}\text{is a linearly independent set of vectors in}\phantom{\rule{0.2em}{0ex}}𝔥;\phantom{\rule{0.2em}{0ex}}\text{such that}\\ {\alpha }_{j}\left({h}_{i}\right)={a}_{ij}\phantom{\rule{2em}{0ex}}\left(1\le i,j\le n\right).\end{array}$

A realization of $A$ will be called minimal if dim $𝔥$ is as small as possible (evidently it must be $\ge n$).

Let $l=\text{rank}\phantom{\rule{0.2em}{0ex}}\left(A\right)$

(1.1)

1. If $\left(𝔥,B,{B}^{\vee }\right)$ is a realization of $A$, then dim $𝔥\ge 2n-l$.
2. $A$ has a minimal realization, of dimension $2n-l$, which is unique up to isomorphism (but the isomorphism is not unique if $l<n$).

 Proof. Extend ${B}^{\vee }$ to a basis ${h}_{1},\dots ,{h}_{N}$ of $𝔥$ (so that dim $𝔥=N$). The $N×n$ matrix $M={\left({\alpha }_{j}\left({h}_{i}\right)\right)}_{1\le i\le N,1\le j\le n}$ is of the form $M=\left(\begin{array}{c}A\\ B\end{array}\right)$ and has rank $n$ (because its columns are linearly independent). Let ${V}_{A},{V}_{B},{V}_{M}$ denote the spaces spanned by the rows of $A,B,M$ respectively. Then we have ${V}_{M}={V}_{A}+{V}_{B}$, and $\text{dim}\phantom{\rule{0.2em}{0ex}}{V}_{A}=\text{rank}\phantom{\rule{0.2em}{0ex}}\left(A\right)=l,\phantom{\rule{0.2em}{0ex}}\text{dim}\phantom{\rule{0.2em}{0ex}}{V}_{M}=\text{rank}\phantom{\rule{0.2em}{0ex}}\left(M\right)=n$. Hence $N-n\ge \text{dim}\phantom{\rule{0.2em}{0ex}}{V}_{B}\ge n-l$, i.e., $N\ge 2n-l$. By reordering the rows and the columns of $A$ we may assume that the $l×l$ minor of $A$ in the top left-hand corner is nonsingular, say $A=\left(\begin{array}{cc}{A}_{1}& {A}_{2}\\ {A}_{3}& {A}_{4}\end{array}\right)$ with ${A}_{1}$ a nonsingular $l×l$ matrix. Let $C=\left(\begin{array}{ccc}{A}_{1}& {A}_{2}& 0\\ {A}_{3}& {A}_{4}& {1}_{n-l}\\ 0& {1}_{n-l}& 0\end{array}\right)$; then $\text{det}\phantom{\rule{0.2em}{0ex}}C=±\text{det}\phantom{\rule{0.2em}{0ex}}{A}_{1}\ne 0$, hence the rows of $C$ are linearly independent. Take $𝔥={k}^{2n-l}$ (row vectors), ${\alpha }_{j}$ the $j$th coordinate function on $𝔥$, and ${h}_{i}$ the $i$th row of $C$ $\left(1\le i\le n\right)$. Then ${\alpha }_{j}\left({h}_{i}\right)={a}_{ij}$ and we have a realization of $A$. Conversely, let $\left(𝔥,B,{B}^{\vee }\right)$ be a minimal realization of $A$ $\left(\text{dim}\phantom{\rule{0.2em}{0ex}}𝔥=2n-l\right)$. 
Extend ${B}^{\vee }$ to a basis ${h}_{1},\dots ,{h}_{2n-l}$ of $𝔥$, and define ${\alpha }_{n+1},\dots ,{\alpha }_{2n-l}\in {𝔥}^{*}$ so that the matrix $D={\left({\alpha }_{j}\left({h}_{i}\right)\right)}_{1\le i,j\le 2n-l}$ has the form $D=\left(\begin{array}{ccc}{A}_{1}& {A}_{2}& 0\\ {A}_{3}& {A}_{4}& {1}_{n-l}\\ {B}_{1}& {B}_{2}& 0\end{array}\right)$ with ${B}_{1},{B}_{2}$ at present unspecified. Then $\text{det}\phantom{\rule{0.2em}{0ex}}D=±\text{det}\phantom{\rule{0.2em}{0ex}}\left(\begin{array}{cc}{A}_{1}& {A}_{2}\\ {B}_{1}& {B}_{2}\end{array}\right)$, and I claim that this matrix is nonsingular. For the submatrix $M=\left(\begin{array}{cc}{A}_{1}& {A}_{2}\\ {A}_{3}& {A}_{4}\\ {B}_{1}& {B}_{2}\end{array}\right)=\left(\begin{array}{c}A\\ B\end{array}\right)$ of $D$ has rank $n$, as before, and now we have ${V}_{M}={V}_{A}\oplus {V}_{B}$ (since $\text{dim}\phantom{\rule{0.2em}{0ex}}{V}_{M}=n$ forces $\text{dim}\phantom{\rule{0.2em}{0ex}}{V}_{B}=n-l$ and ${V}_{A}\cap {V}_{B}=0$); but ${V}_{A}$ has the first $l$ rows of $A$ as a basis, and the rows of $B$ form a basis of ${V}_{B}$; hence the rows of $\left(\begin{array}{cc}{A}_{1}& {A}_{2}\\ {B}_{1}& {B}_{2}\end{array}\right)$ are linearly independent, as claimed. It follows that $D$ is nonsingular, hence that ${\alpha }_{1},\dots ,{\alpha }_{2n-l}$ are a basis of ${𝔥}^{*}$. By adding to ${h}_{n+1},\dots ,{h}_{2n-l}$ suitable linear combinations of ${h}_{1},\dots ,{h}_{l}$, we can make ${B}_{1}=0$. But then $\text{det}\phantom{\rule{0.2em}{0ex}}{B}_{2}\ne 0$, and we can choose another basis of the subspace of $𝔥$ spanned by ${h}_{n+1},\dots ,{h}_{2n-l}$ so as to make ${B}_{2}={1}_{n-l}$, i.e. $D=C$. This completes the proof. $\square$
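The construction of $C$ in the proof is easy to carry out explicitly. The following sketch (not part of the original notes) builds $C$ from a matrix $A$ whose top left $l×l$ minor is already nonsingular, and checks on the sample rank-1 matrix $A=\left(\begin{array}{cc}2& -2\\ -2& 2\end{array}\right)$ that the first $n$ rows of $C$ give a realization and that $\text{det}\phantom{\rule{0.2em}{0ex}}C=±\text{det}\phantom{\rule{0.2em}{0ex}}{A}_{1}$:

```python
def det(M):
    """Determinant by Laplace expansion along the first row (fine for small M)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def minimal_realization_matrix(A, l):
    """The (2n-l) x (2n-l) matrix C of the proof, assuming the rows and columns
    of A are already ordered so that its top left l x l minor A1 is nonsingular."""
    n = len(A)
    m = 2 * n - l
    C = [[0] * m for _ in range(m)]
    for i in range(n):
        for j in range(n):
            C[i][j] = A[i][j]
    for i in range(n - l):
        C[l + i][n + i] = 1   # the block 1_{n-l} beside (A3 A4)
        C[n + i][l + i] = 1   # the block 1_{n-l} in the bottom row
    return C

A = [[2, -2], [-2, 2]]        # n = 2, rank l = 1 (an affine-type example)
C = minimal_realization_matrix(A, l=1)
# h_i = i-th row of C and alpha_j = j-th coordinate, so alpha_j(h_i) = C[i][j]:
assert all(C[i][j] == A[i][j] for i in range(2) for j in range(2))
assert det(C) != 0 and abs(det(C)) == abs(det([[A[0][0]]]))   # det C = +- det A1
```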

A matrix $A$ as above will be said to be decomposable if we can partition the index set $\left\{1,\dots ,n\right\}$ into two non-empty disjoint subsets $I,J$ such that ${a}_{ij}={a}_{ji}=0$ whenever $i\in I$ and $j\in J$. In other words, if after simultaneous permutation of rows and columns $A$ becomes a nontrivial direct sum ${A}_{1}\oplus {A}_{2}$.
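Decomposability can be tested mechanically: $A$ is decomposable exactly when the graph on the index set, with an edge $\left\{i,j\right\}$ whenever ${a}_{ij}\ne 0$ or ${a}_{ji}\ne 0$, is disconnected. A quick illustrative check (indices shifted to $0,\dots ,n-1$):

```python
def is_decomposable(A):
    """True iff the index set splits into non-empty I, J with a_ij = a_ji = 0
    for i in I, j in J, i.e. iff the graph with an edge {i, j} whenever
    a_ij != 0 or a_ji != 0 is disconnected (indices run from 0 here)."""
    n = len(A)
    seen, stack = {0}, [0]
    while stack:                       # depth-first search from index 0
        i = stack.pop()
        for j in range(n):
            if j not in seen and (A[i][j] != 0 or A[j][i] != 0):
                seen.add(j)
                stack.append(j)
    return len(seen) < n

assert is_decomposable([[2, 0], [0, 2]])        # a nontrivial direct sum A1 (+) A2
assert not is_decomposable([[2, -1], [-1, 2]])  # the Cartan matrix of sl3
```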

Clearly, if $\left({𝔥}_{i},{B}_{i},{B}_{i}^{\vee }\right)$ is a minimal realization of ${A}_{i}\phantom{\rule{0.2em}{0ex}}\left(i=1,2\right)$, then $\left(𝔥,B,{B}^{\vee }\right)$ is a minimal realization of $A$, where

$\begin{array}{ccc}𝔥& =& {𝔥}_{1}×{𝔥}_{2}\phantom{\rule{3em}{0ex}}{𝔥}^{*}={𝔥}_{1}^{*}×{𝔥}_{2}^{*}\phantom{\rule{3em}{0ex}}\text{(direct sum)}\\ {B}^{\vee }& =& \left({B}_{1}^{\vee }×0\right)\cup \left(0×{B}_{2}^{\vee }\right)\\ B& =& \left({B}_{1}×0\right)\cup \left(0×{B}_{2}\right)\end{array}$

Again, if $\left(𝔥,B,{B}^{\vee }\right)$ is a minimal realization of $A$, then $\left({𝔥}^{*},{B}^{\vee },B\right)$ is a minimal realization of ${A}^{t}$.

Let $A$ be any $n×n$ matrix over $k$, as before; let $\left(𝔥,B,{B}^{\vee }\right)$ be a minimal realization of $A\phantom{\rule{0.2em}{0ex}}\left(B=\left({\alpha }_{1},\dots ,{\alpha }_{n}\right);\phantom{\rule{0.2em}{0ex}}{B}^{\vee }=\left({h}_{1},\dots ,{h}_{n}\right)\right)$. Let $\stackrel{\sim }{𝔤}\left(A\right)$ denote the Lie algebra generated by $𝔥$ and $2n$ elements ${e}_{i},{f}_{i}\phantom{\rule{0.2em}{0ex}}\left(1\le i\le n\right)$ subject to the relations

$\begin{array}{cc}\text{(1.2)}& \left\{\begin{array}{ccc}\left[h,{h}^{\prime }\right]=0& & \left(\text{all}\phantom{\rule{0.2em}{0ex}}h,{h}^{\prime }\in 𝔥\right)\\ \left[{e}_{i},{f}_{j}\right]={\delta }_{ij}{h}_{i}& & \left(1\le i,j\le n\right)\\ \begin{array}{c}\left[h,{e}_{i}\right]={\alpha }_{i}\left(h\right){e}_{i}\\ \left[h,{f}_{i}\right]=-{\alpha }_{i}\left(h\right){f}_{i}\end{array}\right\}& & \left(1\le i\le n;\phantom{\rule{0.2em}{0ex}}h\in 𝔥\right)\end{array}\end{array}$

By (1.1), $\stackrel{\sim }{𝔤}\left(A\right)$ depends (up to isomorphism) only on the matrix $A$.

Objects defined by generators and relations are often not easy to handle directly. To get a grip on $\stackrel{\sim }{𝔤}\left(A\right)$ we shall construct a family of representations ${\rho }_{\lambda }$ of $\stackrel{\sim }{𝔤}\left(A\right)$, one for each $\lambda \in {𝔥}^{*}$. These representations will all act on the same vector space $X:\phantom{\rule{0.5em}{0ex}}X$ is the free associative algebra over $k$ on $n$ generators ${x}_{1},\dots ,{x}_{n}$, and we define an action of the generators of $\stackrel{\sim }{𝔤}\left(A\right)$ on $X$ as follows: Let $\lambda \in {𝔥}^{*}$ and define

$\begin{array}{cccc}\left(a\right)& h\left(1\right)=\lambda \left(h\right)·1;& h\left({x}_{j}x\right)={x}_{j}h\left(x\right)-{\alpha }_{j}\left(h\right){x}_{j}x& \left(x\in X,h\in 𝔥,1\le j\le n\right)\\ \left(b\right)& {e}_{i}\left(1\right)=0;& {e}_{i}\left({x}_{j}x\right)={x}_{j}{e}_{i}\left(x\right)+{\delta }_{ij}{h}_{i}\left(x\right)& \left(x\in X,1\le i,j\le n\right)\\ \left(c\right)& {f}_{i}\left(x\right)={x}_{i}x& \left(x\in X,1\le i\le n\right).\end{array}$

I claim that these formulas define a representation ${\rho }_{\lambda }$ of $\stackrel{\sim }{𝔤}\left(A\right)$ on $X$. To verify this, we have to check the defining relations (1.2).

First, it follows from ($a$) that

$h\left({x}_{{j}_{1}}\dots {x}_{{j}_{r}}\right)=\left(\lambda -{\alpha }_{{j}_{1}}-\dots -{\alpha }_{{j}_{r}}\right)\left(h\right){x}_{{j}_{1}}\dots {x}_{{j}_{r}}$

by induction on $r$; hence each $h\in 𝔥$ acts diagonally on $X$ (relative to the basis of $X$ formed by the monomials) and therefore $\left[h,{h}^{\prime }\right]=0$ for all $h,{h}^{\prime }\in 𝔥$.

Next, we have

$\begin{array}{cc}\left[{e}_{i},{f}_{j}\right]={\delta }_{ij}{h}_{j}& \text{from}\phantom{\rule{0.2em}{0ex}}\left(b\right)\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}\left(c\right)\\ \left[h,{f}_{j}\right]=-{\alpha }_{j}\left(h\right){f}_{j}& \text{from}\phantom{\rule{0.2em}{0ex}}\left(a\right)\phantom{\rule{0.2em}{0ex}}\text{and}\phantom{\rule{0.2em}{0ex}}\left(c\right)\end{array}$

and therefore it remains to show that $\left[h,{e}_{i}\right]={\alpha }_{i}\left(h\right){e}_{i}$ (as linear transformations of the vector space $X$). So let $u=\left[h,{e}_{i}\right]-{\alpha }_{i}\left(h\right){e}_{i}$; then

$\begin{array}{ccc}\left[u,{f}_{j}\right]& =& \left[\left[h,{e}_{i}\right],{f}_{j}\right]-{\alpha }_{i}\left(h\right)\left[{e}_{i},{f}_{j}\right]\\ & =& \left[h,\left[{e}_{i},{f}_{j}\right]\right]-\left[{e}_{i},\left[h,{f}_{j}\right]\right]-{\alpha }_{i}\left(h\right)\left[{e}_{i},{f}_{j}\right]\\ & =& \left[h,{\delta }_{ij}{h}_{i}\right]+{\alpha }_{j}\left(h\right){\delta }_{ij}{h}_{i}-{\alpha }_{i}\left(h\right){\delta }_{ij}{h}_{i}\\ & =& 0\end{array}$

Hence $u\left({x}_{j}x\right)={x}_{j}u\left(x\right)$ for all $x\in X$ and $1\le j\le n$; hence (induction on $r$) $u\left({x}_{{j}_{1}}\dots {x}_{{j}_{r}}\right)={x}_{{j}_{1}}\dots {x}_{{j}_{r}}u\left(1\right)$; but

$\begin{array}{ccc}u\left(1\right)& =& h\left({e}_{i}\left(1\right)\right)-{e}_{i}\left(h\left(1\right)\right)-{\alpha }_{i}\left(h\right){e}_{i}\left(1\right)\\ & =& -\lambda \left(h\right){e}_{i}\left(1\right)=0\end{array}$

and therefore $u=0$ as required.

Thus for each $\lambda \in {𝔥}^{*}$ the formulas ($a$) - ($c$) define a representation ${\rho }_{\lambda }$ of $\stackrel{\sim }{𝔤}\left(A\right)$ on $X$.

This may look like a rabbit pulled out of a hat: in fact it is a standard construction (Verma module).
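For readers who want to experiment, the action ($a$)-($c$) is easy to implement on monomials. The sketch below is illustrative only: it fixes a sample matrix $A$ and sample values $\lambda \left({h}_{i}\right)$, represents elements of $X$ as dictionaries from words to coefficients, and verifies the relations (1.2) for $h={h}_{i}$ on all short monomials:

```python
from itertools import product

n = 2
A = [[2, -1], [-1, 2]]   # sample matrix (any entries would do)
lam = [5, 7]             # lam[i] = lambda(h_i), arbitrary sample values

def add(x, y):
    """Sum of two elements of X, stored as {word tuple: coefficient}."""
    z = dict(x)
    for w, c in y.items():
        z[w] = z.get(w, 0) + c
        if z[w] == 0:
            del z[w]
    return z

def scale(c, x):
    return {w: c * cw for w, cw in x.items()} if c else {}

def h(i, x):
    """(a): h_i multiplies the monomial x_{j1}...x_{jr} by the scalar
    (lambda - alpha_{j1} - ... - alpha_{jr})(h_i)."""
    out = {}
    for w, c in x.items():
        c2 = (lam[i] - sum(A[i][j] for j in w)) * c
        if c2:
            out[w] = c2
    return out

def f(i, x):
    """(c): f_i is left multiplication by x_i."""
    return {(i,) + w: c for w, c in x.items()}

def e(i, x):
    """(b): e_i(1) = 0 and e_i(x_j x) = x_j e_i(x) + delta_ij h_i(x)."""
    out = {}
    for w, c in x.items():
        if w:
            j, rest = w[0], {w[1:]: c}
            out = add(out, f(j, e(i, rest)))
            if j == i:
                out = add(out, h(i, rest))
    return out

# Verify the relations (1.2) on all monomials of length <= 3:
for w in [w for r in range(4) for w in product(range(n), repeat=r)]:
    x = {w: 1}
    for i, j in product(range(n), repeat=2):
        comm_ef = add(e(i, f(j, x)), scale(-1, f(j, e(i, x))))
        assert comm_ef == (h(i, x) if i == j else {})     # [e_i,f_j] = d_ij h_i
        comm_he = add(h(i, e(j, x)), scale(-1, e(j, h(i, x))))
        assert comm_he == scale(A[i][j], e(j, x))         # [h_i,e_j] = a_ij e_j
        comm_hf = add(h(i, f(j, x)), scale(-1, f(j, h(i, x))))
        assert comm_hf == scale(-A[i][j], f(j, x))        # [h_i,f_j] = -a_ij f_j
```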

We may remark straightaway that the canonical mapping $𝔥\to \stackrel{\sim }{𝔤}\left(A\right)$ is injective. For if $h\in 𝔥$ becomes zero in $\stackrel{\sim }{𝔤}\left(A\right)$, then from ($a$) we have $\lambda \left(h\right)·1=0$ for all $\lambda \in {𝔥}^{*}$, and hence $h=0$.

Let ${\stackrel{\sim }{𝔫}}_{+}$ (resp. ${\stackrel{\sim }{𝔫}}_{-}$) denote the subalgebra of $\stackrel{\sim }{𝔤}\left(A\right)$ generated by ${e}_{1},\dots ,{e}_{n}$ (resp. ${f}_{1},\dots ,{f}_{n}$).

(1.3)

1. $\stackrel{\sim }{𝔤}\left(A\right)={\stackrel{\sim }{𝔫}}_{-}\oplus 𝔥\oplus {\stackrel{\sim }{𝔫}}_{+}$ (direct sum of vector spaces)
2. ${\stackrel{\sim }{𝔫}}_{+}$ (resp. ${\stackrel{\sim }{𝔫}}_{-}$) is the free Lie algebra generated by ${e}_{1},\dots ,{e}_{n}$ (resp. ${f}_{1},\dots ,{f}_{n}$)
3. $\exists$ unique involutory automorphism $\stackrel{\sim }{\omega }$ of $\stackrel{\sim }{𝔤}\left(A\right)$ such that

$\stackrel{\sim }{\omega }\left({e}_{i}\right)=-{f}_{i},\phantom{\rule{2em}{0ex}}\stackrel{\sim }{\omega }\left({f}_{i}\right)=-{e}_{i},\phantom{\rule{2em}{0ex}}\stackrel{\sim }{\omega }\left(h\right)=-h,\phantom{\rule{1em}{0ex}}\left(h\in 𝔥\right)$.

 Proof. We shall take these in reverse order. (iii) is clear, since the relations (1.2) are stable under $\stackrel{\sim }{\omega }$. For (ii): since $X$ is the free associative algebra on ${x}_{1},\dots ,{x}_{n}$, L($X$) is the free Lie algebra on the same generators. Now the mapping $\phi :{\stackrel{\sim }{𝔫}}_{-}\to \text{L}\left(X\right)$ defined by $\phi \left(f\right)=f\left(1\right)$ takes ${f}_{i}$ to ${x}_{i}$ and is a Lie algebra homomorphism. Since L($X$) is free, $\phi$ must be an isomorphism; hence ${\stackrel{\sim }{𝔫}}_{-}$ is the free Lie algebra on ${f}_{1},\dots ,{f}_{n}$, and $U\left({\stackrel{\sim }{𝔫}}_{-}\right)\cong X$. By applying $\stackrel{\sim }{\omega }$, we see that ${\stackrel{\sim }{𝔫}}_{+}$ is the free Lie algebra on ${e}_{1},\dots ,{e}_{n}$. For (i), let $𝔞={\stackrel{\sim }{𝔫}}_{-}+𝔥+{\stackrel{\sim }{𝔫}}_{+}$. It follows easily from the defining relations (1.2) that $𝔞$ is stable under ad ${e}_{i}$, ad ${f}_{i}$ and ad $h$ $\left(h\in 𝔥\right)$. Hence it is an ideal in $\stackrel{\sim }{𝔤}\left(A\right)$, and since it contains the generators ${e}_{i},{f}_{i},h\in 𝔥$ it is the whole of $\stackrel{\sim }{𝔤}\left(A\right)$. It remains to prove that the sum is direct. Suppose then that we have ${𝔫}_{-}\in {\stackrel{\sim }{𝔫}}_{-},\phantom{\rule{0.5em}{0ex}}h\in 𝔥,\phantom{\rule{0.5em}{0ex}}{𝔫}_{+}\in {\stackrel{\sim }{𝔫}}_{+}$ such that ${𝔫}_{-}+h+{𝔫}_{+}=0$. Apply ${\rho }_{\lambda }$ and evaluate at $1\in X$. We have ${𝔫}_{+}\left(1\right)=0$ (because ${e}_{i}\left(1\right)=0$), hence ${𝔫}_{-}\left(1\right)+\lambda \left(h\right)1=0\phantom{\rule{0.5em}{0ex}}\text{in}\phantom{\rule{0.5em}{0ex}}X$, whence $\lambda \left(h\right)=0$ and ${𝔫}_{-}\left(1\right)=0$. 
Since this is true for all $\lambda \in {𝔥}^{*}$, it follows first that $h=0$; next, as we have seen, ${𝔫}_{-}↦{𝔫}_{-}\left(1\right):\phantom{\rule{0.5em}{0ex}}{\stackrel{\sim }{𝔫}}_{-}\to X$ is the embedding of ${\stackrel{\sim }{𝔫}}_{-}$ in its universal enveloping algebra $X\cong U\left({\stackrel{\sim }{𝔫}}_{-}\right)$; hence ${𝔫}_{-}=0$, whence finally ${𝔫}_{+}=0$ and the proof is complete. $\square$

In general, if $A$ is a $k$–algebra and $G$ an abelian group, a $G$–grading of $A$ is a decomposition

$\begin{array}{cc}A=\underset{\alpha \in G}{\oplus }{A}_{\alpha }& \left(1\right)\end{array}$

of $A$ into a direct sum of $k$–subspaces ${A}_{\alpha }$, indexed by $G$, such that

${A}_{\alpha }{A}_{\beta }\subset {A}_{\alpha +\beta }\phantom{\rule{3em}{0ex}}\left(\alpha ,\beta \in G\right)$.

The elements of ${A}_{\alpha }$ are said to be homogeneous of degree $\alpha$; the decomposition (1) says that any $x\in A$ can be written uniquely as the sum $x=\sum _{\alpha }{x}_{\alpha }$ of its homogeneous components (only finitely many of which can be $\ne 0$).

An ideal $𝔞$ in $A$ is a graded ideal if

$𝔞=\underset{\alpha }{\oplus }{𝔞}_{\alpha }$

where ${𝔞}_{\alpha }=𝔞\cap {A}_{\alpha }$, that is to say if whenever $x\in 𝔞$ all the homogeneous components ${x}_{\alpha }$ of $x$ lie in $𝔞$. Any sum of graded ideals is graded; any ideal generated by homogeneous elements is graded.

If $𝔞$ is a graded (two-sided) ideal, then $A/𝔞$ is a $G$–graded algebra:

$A/𝔞=\underset{\alpha \in G}{\oplus }{A}_{\alpha }/{𝔞}_{\alpha }$.

In the present context, let

$Q=\sum _{i=1}^{n}ℤ{\alpha }_{i}\phantom{\rule{2em}{0ex}}\left(\cong {ℤ}^{n}\right)$

denote the lattice generated by $B$ in ${𝔥}^{*}$ (the root lattice). Also let

${Q}^{+}=\sum _{i=1}^{n}ℕ{\alpha }_{i}$

For $\alpha \in Q$ we write $\alpha \ge 0$ to mean $\alpha \in {Q}^{+}$, i.e. $\alpha =\sum _{1}^{n}{m}_{i}{\alpha }_{i}$ with all ${m}_{i}\ge 0$; also $\alpha >0$ to mean $\alpha \in {Q}^{+}$ and $\alpha \ne 0$. Likewise $\alpha \le 0,\alpha <0$. If $\alpha =\sum _{1}^{n}{m}_{i}{\alpha }_{i}\in Q$, we define the height of $\alpha$ to be

$\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)=\sum _{1}^{n}{m}_{i}$.

Now the free Lie algebra generated by $𝔥$ and ${e}_{1},\dots ,{e}_{n},{f}_{1},\dots ,{f}_{n}$ is $Q$–graded by assigning degree 0 to each $h\in 𝔥$, degree ${\alpha }_{i}$ to ${e}_{i}$ and degree $-{\alpha }_{i}$ to ${f}_{i}$ $\left(1\le i\le n\right)$.

The relations (1.2) are homogeneous, hence $\stackrel{\sim }{𝔤}\left(A\right)$ is a $Q$–graded Lie algebra:

$\stackrel{\sim }{𝔤}\left(A\right)=\underset{\alpha \in Q}{\oplus }{\stackrel{\sim }{𝔤}}_{\alpha },\phantom{\rule{2em}{0ex}}\left[{\stackrel{\sim }{𝔤}}_{\alpha },{\stackrel{\sim }{𝔤}}_{\beta }\right]\subset {\stackrel{\sim }{𝔤}}_{\alpha +\beta }$

where ${\stackrel{\sim }{𝔤}}_{\alpha }$ consists of the homogeneous elements of degree $\alpha$ in $\stackrel{\sim }{𝔤}\left(A\right)$. By (1.3) we have ${\stackrel{\sim }{𝔤}}_{0}=𝔥$ and (for $\alpha \ne 0$) ${\stackrel{\sim }{𝔤}}_{\alpha }=0$ unless either $\alpha >0$ or $\alpha <0$, because

${\stackrel{\sim }{𝔫}}_{+}=\underset{\alpha >0}{\oplus }{\stackrel{\sim }{𝔤}}_{\alpha },\phantom{\rule{2em}{0ex}}{\stackrel{\sim }{𝔫}}_{-}=\underset{\alpha <0}{\oplus }{\stackrel{\sim }{𝔤}}_{\alpha }$.

We can introduce other gradings on $\stackrel{\sim }{𝔤}\left(A\right)$. Let $s:Q\to ℤ$ be any homomorphism of abelian groups, and for each $m\in ℤ$ define

${\stackrel{\sim }{𝔤}}_{m}\left(s\right)=\sum _{s\left(\alpha \right)=m}{\stackrel{\sim }{𝔤}}_{\alpha }$

Then $\stackrel{\sim }{𝔤}\left(A\right)=\underset{m\in ℤ}{\oplus }{\stackrel{\sim }{𝔤}}_{m}\left(s\right)$, and $\left[{\stackrel{\sim }{𝔤}}_{m}\left(s\right),{\stackrel{\sim }{𝔤}}_{{m}^{\prime }}\left(s\right)\right]\subset {\stackrel{\sim }{𝔤}}_{m+{m}^{\prime }}\left(s\right)$, giving a $ℤ$–grading of $\stackrel{\sim }{𝔤}\left(A\right)$. The most important case of this is the principal grading, defined by $s\left({\alpha }_{i}\right)=1\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$. For this choice of $s$ we have $s\left(\alpha \right)=\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)$, and we therefore define

${\stackrel{\sim }{𝔤}}_{m}=\sum _{\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)=m}{\stackrel{\sim }{𝔤}}_{\alpha }\phantom{\rule{3em}{0ex}}\left(m\in ℤ\right)$

We have ${\stackrel{\sim }{𝔤}}_{0}=𝔥$; ${\stackrel{\sim }{𝔤}}_{1}$ is spanned by ${e}_{1},\dots ,{e}_{n};\phantom{\rule{0.2em}{0ex}}{\stackrel{\sim }{𝔤}}_{-1}$ by ${f}_{1},\dots ,{f}_{n}$; and ${\stackrel{\sim }{𝔫}}_{+}=\underset{m\ge 1}{\oplus }{\stackrel{\sim }{𝔤}}_{m},\phantom{\rule{0.5em}{0ex}}{\stackrel{\sim }{𝔫}}_{-}=\underset{m\ge 1}{\oplus }{\stackrel{\sim }{𝔤}}_{-m}$.

Let $\alpha >0$. Then ${\stackrel{\sim }{𝔤}}_{\alpha }$ is the $\alpha$–component of the free Lie algebra ${\stackrel{\sim }{𝔫}}_{+}$ generated by ${e}_{1},\dots ,{e}_{n}$, hence it is spanned by all commutators

${x}_{\alpha }=\left[{e}_{{i}_{1}}\dots {e}_{{i}_{r}}\right]$

such that ${\alpha }_{{i}_{1}}+\dots +{\alpha }_{{i}_{r}}=\alpha$ (notation: $\left[{u}_{1}\dots {u}_{r}\right]$ means $\left[{u}_{1},\left[{u}_{2},\dots ,{u}_{r}\right]\right]$). There are only finitely many of these (at most ${n}^{\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)}$), hence ${\stackrel{\sim }{𝔤}}_{\alpha }$ is finite-dimensional. Likewise when $\alpha <0$. In particular, ${\stackrel{\sim }{𝔤}}_{±{\alpha }_{i}}$ are 1-dimensional.
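Since ${\stackrel{\sim }{𝔫}}_{+}$ is free by (1.3), the exact value of $\text{dim}\phantom{\rule{0.2em}{0ex}}{\stackrel{\sim }{𝔤}}_{\alpha }$ for $\alpha =\sum {m}_{i}{\alpha }_{i}>0$ is given by the multigraded Witt formula $\frac{1}{\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)}\sum _{d\mid \gcd \left({m}_{i}\right)}\mu \left(d\right)\frac{\left(\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)/d\right)!}{\prod _{i}\left({m}_{i}/d\right)!}$, a standard fact about free Lie algebras not proved in these notes. A short computation confirming $\text{dim}\phantom{\rule{0.2em}{0ex}}{\stackrel{\sim }{𝔤}}_{{\alpha }_{i}}=1$ and the crude bound above:

```python
from math import gcd, factorial
from functools import reduce

def mobius(d):
    """Moebius function by trial division."""
    out, p = 1, 2
    while p * p <= d:
        if d % p == 0:
            d //= p
            if d % p == 0:
                return 0      # square factor
            out = -out
        p += 1
    return -out if d > 1 else out

def multi_witt(m):
    """dim of the multidegree-(m_1,...,m_n) component of the free Lie algebra
    on n generators (m must be a nonzero tuple of nonnegative integers)."""
    ht = sum(m)
    total = 0
    for d in range(1, reduce(gcd, m) + 1):
        if all(mi % d == 0 for mi in m):
            coeff = factorial(ht // d)
            for mi in m:
                coeff //= factorial(mi // d)   # multinomial coefficient
            total += mobius(d) * coeff
    return total // ht

assert multi_witt((1, 0)) == 1 and multi_witt((0, 1)) == 1   # dim = 1 at alpha_i
assert multi_witt((2, 0)) == 0                               # no [e1, e1]
assert multi_witt((1, 1)) == 1 and multi_witt((2, 2)) == 1
# the crude bound in the text: at most n^ht(alpha) spanning commutators
assert all(multi_witt(m) <= 2 ** sum(m) for m in [(2, 1), (3, 2), (3, 3)])
```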

For ${x}_{\alpha }$ as above we have, for each $h\in 𝔥$,

$\begin{array}{ccc}\left[h,{x}_{\alpha }\right]& =& \sum _{p=1}^{r}\left[{e}_{{i}_{1}}\dots \left[h,{e}_{{i}_{p}}\right]\dots {e}_{{i}_{r}}\right]\\ \multicolumn{3}{c}{\text{(because ad}\phantom{\rule{0.2em}{0ex}}h\phantom{\rule{0.2em}{0ex}}\text{is a derivation)}}\\ & =& \sum _{p=1}^{r}{\alpha }_{{i}_{p}}\left(h\right)\left[{e}_{{i}_{1}}\dots {e}_{{i}_{p}}\dots {e}_{{i}_{r}}\right]\phantom{\rule{2em}{0ex}}\text{by}\phantom{\rule{0.2em}{0ex}}\text{(1.2)}\\ & =& \alpha \left(h\right){x}_{\alpha }\end{array}$

Hence if temporarily we write

${M}_{\alpha }=\left\{x\in \stackrel{\sim }{𝔤}\left(A\right):\phantom{\rule{0.2em}{0ex}}\left[h,x\right]=\alpha \left(h\right)x\phantom{\rule{0.2em}{0ex}}\text{for all}\phantom{\rule{0.2em}{0ex}}h\in 𝔥\right\}$

for each $\alpha \in Q$, then we have

${\stackrel{\sim }{𝔤}}_{\alpha }\subset {M}_{\alpha },\phantom{\rule{2em}{0ex}}\text{all}\phantom{\rule{0.2em}{0ex}}\alpha \in Q$.

(For the calculation above shows this is true when $\alpha >0$; similarly when $\alpha <0$ or $\alpha =0$; and in other cases ${\stackrel{\sim }{𝔤}}_{\alpha }=0$).

Moreover a standard argument shows that the sum $\sum _{\alpha \in Q}{M}_{\alpha }$ is direct. For if there exist non-trivial relations

$\begin{array}{cc}\sum _{\alpha }{x}_{\alpha }=0& \left(1\right)\end{array}$

with ${x}_{\alpha }\in {M}_{\alpha }$ (and only finitely many ${x}_{\alpha }\ne 0$), choose such a relation with as few non-zero terms as possible; by applying $\text{ad}\phantom{\rule{0.2em}{0ex}}h$ we conclude that

$\begin{array}{cc}\sum _{\alpha }\alpha \left(h\right){x}_{\alpha }=0& \left(2\right)\end{array}$

for each $h\in 𝔥$. Choosing $h$ so that the values $\alpha \left(h\right)$ at the non-zero terms are not all equal, we can subtract a multiple of (1) from (2) to obtain a shorter non-trivial relation: contradiction. Hence we have

$\stackrel{\sim }{𝔤}\left(A\right)=\underset{\alpha \in Q}{\oplus }{\stackrel{\sim }{𝔤}}_{\alpha }\subset \underset{\alpha \in Q}{\oplus }{M}_{\alpha }\subset \stackrel{\sim }{𝔤}\left(A\right)$

from which it follows that ${M}_{\alpha }={\stackrel{\sim }{𝔤}}_{\alpha }$ for all $\alpha \in Q$. To summarize:

(1.4) For each $\alpha \in Q$, let ${\stackrel{\sim }{𝔤}}_{\alpha }$ denote the component of degree $\alpha$ in $\stackrel{\sim }{𝔤}\left(A\right)$. Then

1. ${\stackrel{\sim }{𝔤}}_{\alpha }=\left\{x\in \stackrel{\sim }{𝔤}\left(A\right):\phantom{\rule{0.2em}{0ex}}\left[h,x\right]=\alpha \left(h\right)x\phantom{\rule{0.5em}{0ex}}\text{for all}\phantom{\rule{0.5em}{0ex}}h\in 𝔥\right\}$
2. ${\stackrel{\sim }{𝔤}}_{\alpha }=0$ unless $\alpha >0,\alpha <0$ or $\alpha =0$; moreover ${\stackrel{\sim }{𝔤}}_{0}=𝔥$.
3. each ${\stackrel{\sim }{𝔤}}_{\alpha }$ is finite-dimensional over $k$, and $\text{dim}\phantom{\rule{0.2em}{0ex}}{\stackrel{\sim }{𝔤}}_{±{\alpha }_{i}}=1$.

## Ideals in $\stackrel{\sim }{𝔤}\left(A\right)$

We shall next prove that all ideals in $\stackrel{\sim }{𝔤}\left(A\right)$ are $Q$–graded ideals. This will be a consequence of the following lemma:

(1.5) Let $𝔥$ be an abelian Lie algebra, $M$ an $𝔥$–module. For each $\lambda \in {𝔥}^{*}$ let

${M}_{\lambda }=\left\{x\in M:\phantom{\rule{0.2em}{0ex}}h·x=\lambda \left(h\right)x\phantom{\rule{0.5em}{0ex}}\text{for all}\phantom{\rule{0.5em}{0ex}}h\in 𝔥\right\}$.

Suppose that $M=\underset{\lambda \in {𝔥}^{*}}{\oplus }{M}_{\lambda }$, and let ${M}^{\prime }$ be a submodule of $M$. Then

${M}^{\prime }=\underset{\lambda \in {𝔥}^{*}}{\oplus }{{M}^{\prime }}_{\lambda },\phantom{\rule{2em}{0ex}}\text{where}\phantom{\rule{0.5em}{0ex}}{{M}^{\prime }}_{\lambda }={M}^{\prime }\cap {M}_{\lambda }$.

 Proof. Each $x\in {M}^{\prime }$ can be written in the form $x=\sum _{i=1}^{m}{x}_{{\lambda }_{i}}$ where ${\lambda }_{1},\dots ,{\lambda }_{m}$ are distinct elements of ${𝔥}^{*}$, and ${x}_{{\lambda }_{i}}\in {M}_{{\lambda }_{i}}$. We have to show that each ${x}_{{\lambda }_{i}}\in {M}^{\prime }$. The polynomial function $\prod _{i<j}\left({\lambda }_{i}-{\lambda }_{j}\right)$ on $𝔥$ is not zero, hence $\exists h\in 𝔥$ such that ${\lambda }_{1}\left(h\right),\dots ,{\lambda }_{m}\left(h\right)$ are all distinct. We have ${h}^{j}·x=\sum _{i=1}^{m}{\lambda }_{i}{\left(h\right)}^{j}{x}_{{\lambda }_{i}}\phantom{\rule{3em}{0ex}}\left(0\le j\le m-1\right)$ and we can solve these equations for ${x}_{{\lambda }_{1}},\dots ,{x}_{{\lambda }_{m}}$ by Cramer's rule, since the Vandermonde determinant $\text{det}\phantom{\rule{0.2em}{0ex}}{\left({\lambda }_{i}{\left(h\right)}^{j}\right)}_{\genfrac{}{}{0}{}{1\le i\le m}{0\le j\le m-1}}=\prod _{i<j}\left({\lambda }_{j}\left(h\right)-{\lambda }_{i}\left(h\right)\right)\ne 0$. Hence each ${x}_{{\lambda }_{i}}$ is a linear combination of the ${h}^{j}·x$, and hence lies in ${M}^{\prime }$. $\square$
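The Vandermonde argument can be demonstrated numerically. In the sketch below each component ${x}_{{\lambda }_{i}}$ is represented by a single rational number (a one-dimensional stand-in for a vector), and Gaussian elimination recovers the components from the vectors ${h}^{j}·x$, $0\le j\le m-1$:

```python
from fractions import Fraction

def solve(V, b):
    """Solve V y = b over the rationals by Gauss-Jordan elimination."""
    m = len(V)
    M = [[Fraction(V[i][j]) for j in range(m)] + [Fraction(b[i])]
         for i in range(m)]
    for c in range(m):
        p = next(r for r in range(c, m) if M[r][c] != 0)   # pivot row
        M[c], M[p] = M[p], M[c]
        M[c] = [v / M[c][c] for v in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                M[r] = [vr - M[r][c] * vc for vr, vc in zip(M[r], M[c])]
    return [M[r][m] for r in range(m)]

# x = x_{l1} + x_{l2} + x_{l3}, with distinct eigenvalues l_i(h) = 1, 2, 4
evals = [1, 2, 4]
comps = [Fraction(5), Fraction(-3), Fraction(7)]   # the "unknown" components
hx = [sum(l ** j * c for l, c in zip(evals, comps)) for j in range(3)]  # h^j . x
V = [[l ** j for l in evals] for j in range(3)]    # the Vandermonde matrix
assert solve(V, hx) == comps   # every component recovered, so each lies in M'
```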

We apply this lemma with $M=\stackrel{\sim }{𝔤}\left(A\right)$ and ${M}^{\prime }$ an ideal $𝔞$ in $\stackrel{\sim }{𝔤}\left(A\right)$. Then $𝔞$ is a $𝔥$–submodule of $\stackrel{\sim }{𝔤}\left(A\right)$ under the adjoint action, hence by (1.4) and (1.5) we have

$𝔞=\underset{\alpha \in Q}{\oplus }{𝔞}_{\alpha }$

where ${𝔞}_{\alpha }=𝔞\cap {\stackrel{\sim }{𝔤}}_{\alpha }\phantom{\rule{0.5em}{0ex}}$ i.e. $𝔞$ is a $Q$–graded ideal. Hence also

$𝔞=\underset{m\in ℤ}{\oplus }{𝔞}_{m}$

where ${𝔞}_{m}=𝔞\cap {\stackrel{\sim }{𝔤}}_{m}$ (principal grading).

Consider now ideals $𝔞$ in $\stackrel{\sim }{𝔤}\left(A\right)$ such that ${𝔞}_{0}=0$, i.e. $𝔞\cap 𝔥=0$. Any sum of such ideals has the same property, hence there is a unique largest ideal $𝔯$ in $\stackrel{\sim }{𝔤}\left(A\right)$ such that $𝔯\cap 𝔥=0$. We have

$𝔯={𝔯}_{+}\oplus {𝔯}_{-}$

where

${𝔯}_{+}=\underset{m>0}{\oplus }{𝔯}_{m}=𝔯\cap {\stackrel{\sim }{𝔫}}_{+}$
${𝔯}_{-}=\underset{m<0}{\oplus }{𝔯}_{m}=𝔯\cap {\stackrel{\sim }{𝔫}}_{-}$

We have $\left[{f}_{i},{𝔯}_{+}\right]=\underset{m>0}{\oplus }\left[{f}_{i},{𝔯}_{m}\right]\subset \underset{m>0}{\oplus }{𝔯}_{m-1}={𝔯}_{+}$; and since clearly $\left[𝔥,{𝔯}_{+}\right]\subset {𝔯}_{+},\phantom{\rule{0.5em}{0ex}}\left[{e}_{i},{𝔯}_{+}\right]\subset {𝔯}_{+}$ it follows that ${𝔯}_{+}$ is an ideal in $\stackrel{\sim }{𝔤}\left(A\right)$. Similarly, of course, for ${𝔯}_{-}$.

Next I claim that ${𝔯}_{1}={𝔯}_{-1}=0$. For ${𝔯}_{1}=\underset{i=1}{\overset{n}{\oplus }}{𝔯}_{{\alpha }_{i}}$; if ${𝔯}_{{\alpha }_{i}}\ne 0$ then ${𝔯}_{{\alpha }_{i}}={\stackrel{\sim }{𝔤}}_{{\alpha }_{i}}$ (because ${\stackrel{\sim }{𝔤}}_{{\alpha }_{i}}$ is 1-dimensional, spanned by ${e}_{i}$), hence ${e}_{i}\in 𝔯$; but then ${h}_{i}=\left[{e}_{i},{f}_{i}\right]\in 𝔯\cap 𝔥$, contradiction. Hence ${𝔯}_{1}=0$, and similarly ${𝔯}_{-1}=0$.

Finally, we must have $\stackrel{\sim }{\omega }\left(𝔯\right)=𝔯$ (for $\stackrel{\sim }{\omega }\left(𝔯\right)$ has the same properties as $𝔯$).

To summarize:

(1.6)

1. All ideals in $\stackrel{\sim }{𝔤}\left(A\right)$ are $Q$–graded.
2. The set of ideals $𝔞$ in $\stackrel{\sim }{𝔤}\left(A\right)$ such that $𝔞\cap 𝔥=0$ has a unique maximal element $𝔯$.
3. ${𝔯}_{+}=𝔯\cap {\stackrel{\sim }{𝔫}}_{+}$ and ${𝔯}_{-}=𝔯\cap {\stackrel{\sim }{𝔫}}_{-}$ are ideals in $\stackrel{\sim }{𝔤}\left(A\right)$, and $𝔯={𝔯}_{+}\oplus {𝔯}_{-}$ (direct sum)
4. ${𝔯}_{1}={𝔯}_{-1}=0$
5. $\stackrel{\sim }{\omega }\left(𝔯\right)=𝔯$

Now define

$𝔤\left(A\right)=\stackrel{\sim }{𝔤}\left(A\right)/𝔯$.

It is this algebra which is the object of our investigations. If $A$ is a Cartan matrix, $𝔤\left(A\right)$ is the Kac-Moody algebra defined by the matrix $A$.

(1.7) Remarks
Since the ideal $𝔯$ is $Q$–graded (1.6), $𝔤\left(A\right)$ is a $Q$–graded Lie algebra:

$𝔤\left(A\right)=\underset{\alpha \in Q}{\oplus }{𝔤}_{\alpha }$

where ${𝔤}_{\alpha }={\stackrel{\sim }{𝔤}}_{\alpha }/{𝔯}_{\alpha }$; thus

1. ${𝔤}_{0}={\stackrel{\sim }{𝔤}}_{0}=𝔥$
2. Since ${𝔯}_{1}={𝔯}_{-1}=0$ (1.6), $\phantom{\rule{0.5em}{0ex}}{𝔤}_{1}={\stackrel{\sim }{𝔤}}_{1}=\sum _{i=1}^{n}k{e}_{i};\phantom{\rule{1em}{0ex}}{𝔤}_{-1}={\stackrel{\sim }{𝔤}}_{-1}=\sum _{i=1}^{n}k{f}_{i};\phantom{\rule{1em}{0ex}}{𝔤}_{{\alpha }_{i}}=k{e}_{i},\phantom{\rule{1em}{0ex}}{𝔤}_{{-\alpha }_{i}}=k{f}_{i},\phantom{\rule{1em}{0ex}}\left(1\le i\le n\right)$
(Since the images of ${e}_{1},\dots ,{e}_{n},{f}_{1},\dots ,{f}_{n}$ in $𝔤\left(A\right)$ remain linearly independent, we continue to denote them by the same symbols).
3. ${𝔤}_{\alpha }=0$ unless $\alpha =0$ or $\alpha >0$ or $\alpha <0$, by (1.4)
4. ${𝔤}_{\alpha }=\left\{x\in 𝔤\left(A\right)\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}\left[h,x\right]=\alpha \left(h\right)x\phantom{\rule{0.5em}{0ex}}\text{for all}\phantom{\rule{0.5em}{0ex}}h\in 𝔥\right\}$ (same proof as in (1.4)) and each ${𝔤}_{\alpha }$ is finite-dimensional. If $\alpha \ne 0$ and ${𝔤}_{\alpha }\ne 0$ we say $\alpha$ is a root of $𝔤\left(A\right)$ with multiplicity ${m}_{\alpha }=\text{dim}\phantom{\rule{0.2em}{0ex}}{𝔤}_{\alpha }$. (In the classical case, all ${m}_{\alpha }$ are 1).
5. Let ${𝔫}_{+}={\stackrel{\sim }{𝔫}}_{+}/{𝔯}_{+}=\underset{\alpha >0}{\oplus }{𝔤}_{\alpha },\phantom{\rule{0.5em}{0ex}}{𝔫}_{-}={\stackrel{\sim }{𝔫}}_{-}/{𝔯}_{-}=\underset{\alpha <0}{\oplus }{𝔤}_{\alpha }$

These are subalgebras of $𝔤\left(A\right)$, generated by ${e}_{1},\dots ,{e}_{n}$ and by ${f}_{1},\dots ,{f}_{n}$ respectively, and

$𝔤\left(A\right)={𝔫}_{-}\oplus 𝔥\oplus {𝔫}_{+}\phantom{\rule{2em}{0ex}}\text{(vector space direct sum)}$

6. All ideals $𝔞$ in $𝔤\left(A\right)$ are $Q$–graded (1.6): $𝔞=\underset{\alpha \in Q}{\oplus }{𝔞}_{\alpha }$; and $𝔤\left(A\right)$ has no ideal $𝔞\ne 0$ such that $𝔞\cap 𝔥=0$ (by construction)
7. Since $𝔯$ is stable under the involution $\stackrel{\sim }{\omega }$ (1.6) we have an involution $\omega :\phantom{\rule{0.2em}{0ex}}𝔤\left(A\right)\to 𝔤\left(A\right)$ under which ${e}_{i}↦-{f}_{i},\phantom{\rule{0.5em}{0ex}}{f}_{i}↦-{e}_{i},\phantom{\rule{0.5em}{0ex}}h↦-h\phantom{\rule{0.5em}{0ex}}\left(h\in 𝔥\right)$. We have $\omega \left({𝔤}_{\alpha }\right)={𝔤}_{-\alpha }$ for all $\alpha \in Q$, hence $\omega$ interchanges ${𝔫}_{+}$ and ${𝔫}_{-}$.

The following lemma is frequently useful:

(1.8)

1. Let $x\in {𝔫}_{+}$ be such that $\left[x,{f}_{i}\right]=0\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$. Then $x=0$.
2. Let $x\in {𝔫}_{-}$ be such that $\left[x,{e}_{i}\right]=0\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$. Then $x=0$.

 Proof. We shall prove (i); then (ii) will follow by use of the involution $\omega$. Write $x=\sum _{\alpha >0}{x}_{\alpha }$; then we have $\sum _{\alpha >0}\left[{x}_{\alpha },{f}_{i}\right]=0$ for $1\le i\le n$. Hence $\left[{x}_{\alpha },{f}_{i}\right]=0$ for each $\alpha$ and each $i$; in other words we may assume $x$ homogeneous. Consider the ideal $𝔞=U\left(𝔤\right)·x$ generated by $x$ (with $U\left(𝔤\right)$ acting via ad). Since $𝔤={𝔫}_{-}\oplus 𝔥\oplus {𝔫}_{+}$ (1.7) we have (corollary of P-B-W) $U\left(𝔤\right)=U\left({𝔫}_{+}\right)U\left(𝔥\right)U\left({𝔫}_{-}\right)$. By assumption, $U\left({𝔫}_{-}\right)·x=kx$ and $U\left(𝔥\right)·x=kx$, hence $𝔞=U\left(𝔤\right)·x=U\left({𝔫}_{+}\right)·x$ has only positive components, hence $𝔞\cap 𝔥=0$. Hence ((1.7)(vi)) $𝔞=0$, i.e. $x=0$. $\square$

Example Suppose $A=0$ (the $n×n$ zero matrix). What does $𝔤\left(0\right)$ look like?

We have ${\alpha }_{j}\left({h}_{i}\right)={a}_{ij}=0$ for all $i,j$, hence the relations (1.2) give

$\left[{h}_{i},{e}_{j}\right]={\alpha }_{j}\left({h}_{i}\right){e}_{j}=0\phantom{\rule{1em}{0ex}}\left(\text{all}\phantom{\rule{0.5em}{0ex}}i,j\right)$

and likewise

$\left[{h}_{i},{f}_{j}\right]=0\phantom{\rule{1em}{0ex}}\left(\text{all}\phantom{\rule{0.5em}{0ex}}i,j\right)$

Consider now $\left[{e}_{i},{e}_{j}\right]$. We have

$\begin{array}{ccc}\left[\left[{e}_{i},{e}_{j}\right],{f}_{k}\right]& =& \left[{e}_{i},\left[{e}_{j},{f}_{k}\right]\right]-\left[{e}_{j},\left[{e}_{i},{f}_{k}\right]\right]\\ & =& \left[{e}_{i},{\delta }_{jk}{h}_{k}\right]-\left[{e}_{j},{\delta }_{ik}{h}_{k}\right]=0\end{array}$

whence by (1.8) $\left[{e}_{i},{e}_{j}\right]=0$. Hence ${𝔫}_{+}$ is abelian, i.e. ${𝔫}_{+}={𝔤}_{1}=\sum _{1}^{n}k{e}_{i}$. Similarly ${𝔫}_{-}={𝔤}_{-1}=\sum _{1}^{n}k{f}_{i}$, and

$𝔤\left(0\right)={𝔤}_{-1}\oplus {𝔤}_{0}\oplus {𝔤}_{1}$

Note that ${𝔤}_{0}=𝔥$ has dimension $2n$ (because here $l=0$).
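As a check on this example, one can verify the Jacobi identity directly from the multiplication table of $𝔤\left(0\right)$ for $n=1$. The basis is $f,{h}_{1},d,e$, where $d$ is the extra basis element of the 2-dimensional $𝔥$ with ${\alpha }_{1}\left(d\right)=1$ coming from the minimal realization; the nonzero brackets are $\left[e,f\right]={h}_{1}$, $\left[d,e\right]=e$, $\left[d,f\right]=-f$:

```python
from itertools import product

# Structure constants of g(0) for n = 1 (only one orientation is listed;
# the other follows by antisymmetry):
table = {('e', 'f'): {'h1': 1}, ('d', 'e'): {'e': 1}, ('d', 'f'): {'f': -1}}
basis = ['f', 'h1', 'd', 'e']

def bracket(x, y):
    """Bilinear, antisymmetric extension of the table to {basis: coeff} dicts."""
    out = {}
    for (a, ca), (b, cb) in product(x.items(), y.items()):
        for z, c in table.get((a, b), {}).items():
            out[z] = out.get(z, 0) + ca * cb * c
        for z, c in table.get((b, a), {}).items():
            out[z] = out.get(z, 0) - ca * cb * c
    return {z: c for z, c in out.items() if c}

# Jacobi identity [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0 on all basis triples:
for a, b, c in product(basis, repeat=3):
    x, y, z = {a: 1}, {b: 1}, {c: 1}
    total = {}
    for t in (bracket(x, bracket(y, z)), bracket(y, bracket(z, x)),
              bracket(z, bracket(x, y))):
        for w, cw in t.items():
            total[w] = total.get(w, 0) + cw
    assert all(v == 0 for v in total.values())
```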

## The algebra ${𝔤}^{\prime }\left(A\right)$

Let ${𝔤}^{\prime }\left(A\right)=D𝔤\left(A\right)$ be the derived algebra of $𝔤\left(A\right)$.

(1.9) ${𝔤}^{\prime }\left(A\right)$ is the subalgebra of $𝔤\left(A\right)$ generated by ${e}_{1},\dots ,{e}_{n},{f}_{1},\dots ,{f}_{n},$ and

${𝔤}^{\prime }\left(A\right)={𝔫}_{-}\oplus {𝔥}^{\prime }\oplus {𝔫}_{+}$

where ${𝔥}^{\prime }$ is the subspace of $𝔥$ spanned by ${h}_{1},\dots ,{h}_{n}$.

Thus ${𝔤}^{\prime }\left(A\right)=𝔤\left(A\right)$ iff det $\left(A\right)\ne 0$.

 Proof. Let $𝔞$ denote the subalgebra of $𝔤\left(A\right)$ generated by ${e}_{1},\dots ,{f}_{n}$, and let $𝔟={𝔫}_{-}\oplus {𝔥}^{\prime }\oplus {𝔫}_{+}$. Since $\left[h,{e}_{i}\right]={\alpha }_{i}\left(h\right){e}_{i}$ for all $h\in 𝔥$, and since $\exists h\in 𝔥$ such that ${\alpha }_{i}\left(h\right)\ne 0$, it follows that ${e}_{i}\in {𝔤}^{\prime }\left(A\right)$; similarly ${f}_{i}\in {𝔤}^{\prime }\left(A\right)$, and therefore $\begin{array}{cc}𝔞\subset {𝔤}^{\prime }\left(A\right)\text{.}& \left(1\right)\end{array}$ Next, by (1.7)(v), ${𝔫}_{+}$ and ${𝔫}_{-}$ are subalgebras of $𝔞$; and since ${h}_{i}=\left[{e}_{i},{f}_{i}\right]\in 𝔞$, it follows that ${𝔥}^{\prime }\subset 𝔞$, whence $\begin{array}{cc}𝔟\subset 𝔞\text{.}& \left(2\right)\end{array}$ Finally, I claim that $𝔟$ is an ideal in $𝔤\left(A\right)$. We have to check that $\left[h,𝔟\right]\subset 𝔟\phantom{\rule{0.5em}{0ex}}\left(h\in 𝔥\right);\phantom{\rule{0.5em}{0ex}}\left[{e}_{i},𝔟\right]\subset 𝔟;\phantom{\rule{0.5em}{0ex}}\left[{f}_{i},𝔟\right]\subset 𝔟$. The first of these is obvious. As to the second, we have $\left[{e}_{i},{𝔫}_{-}\right]\subset {𝔫}_{-}+{𝔥}^{\prime }$ (because $\left[{e}_{i},{f}_{j}\right]={\delta }_{ij}{h}_{i}$ by (1.2)); $\left[{e}_{i},{𝔥}^{\prime }\right]\subset {𝔫}_{+}$; $\left[{e}_{i},{𝔫}_{+}\right]\subset {𝔫}_{+}$; and $\left[{f}_{i},𝔟\right]\subset 𝔟$ is proved similarly. Since $𝔤\left(A\right)/𝔟\cong 𝔥/{𝔥}^{\prime }$ is abelian, it follows that $\begin{array}{cc}{𝔤}^{\prime }\left(A\right)\subset 𝔟\text{.}& \left(3\right)\end{array}$ Now (1), (2), (3) complete the proof. $\square$

Remark The algebra ${𝔤}^{\prime }\left(A\right)$ is also sometimes called the Kac-Moody algebra associated to the matrix $A$ (if $A$ is a Cartan matrix). We can give a more direct construction of ${𝔤}^{\prime }\left(A\right)$, as follows: Let ${\stackrel{\sim }{𝔤}}^{\prime }\left(A\right)$ denote the Lie algebra with $3n$ generators ${e}_{i},{f}_{i},{h}_{i}\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$ subject to the relations

$\begin{array}{ccc}\left(1.{2}^{\prime }\right)& \begin{array}{c}\left[{h}_{i},{h}_{j}\right]=0\\ \left[{e}_{i},{f}_{j}\right]={\delta }_{ij}{h}_{i}\\ \left[{h}_{i},{e}_{j}\right]={a}_{ij}{e}_{j}\\ \left[{h}_{i},{f}_{j}\right]=-{a}_{ij}{f}_{j}\end{array}& \left(1\le i,j\le n\right)\end{array}$

Let $Q\left(\cong {ℤ}^{n}\right)$ be a free abelian group on generators ${\alpha }_{1},\dots ,{\alpha }_{n}$; then ${\stackrel{\sim }{𝔤}}^{\prime }\left(A\right)$ is $Q$–graded by assigning degrees 0, ${\alpha }_{i}$, $-{\alpha }_{i}$ to ${h}_{i},{e}_{i},{f}_{i}$ respectively $\left(1\le i\le n\right)$, and there exists a unique maximal $Q$–graded ideal ${𝔯}^{\prime }$ subject to ${𝔯}^{\prime }\cap {𝔥}^{\prime }=0$ (where ${𝔥}^{\prime }=\sum _{1}^{n}k{h}_{i}$ as above). Then ${𝔤}^{\prime }\left(A\right)={\stackrel{\sim }{𝔤}}^{\prime }\left(A\right)/{𝔯}^{\prime }$.

We can then construct $𝔤\left(A\right)$ as a semidirect product of ${𝔤}^{\prime }\left(A\right)$ by a suitable algebra of derivations.

## Semidirect products

In general, let $𝔤$ be a Lie algebra, $𝔫$ an ideal in $𝔤$, $𝔞$ a subalgebra of $𝔤$, such that $𝔤=𝔫\oplus 𝔞$ (vector space direct sum). If ${x}_{1},{x}_{2}\in 𝔤$, say ${x}_{i}={n}_{i}+{a}_{i}\phantom{\rule{0.5em}{0ex}}\left({n}_{i}\in 𝔫,\phantom{\rule{0.2em}{0ex}}{a}_{i}\in 𝔞\right)$ then

$\left[{x}_{1},{x}_{2}\right]=\left[{n}_{1},{n}_{2}\right]+\left[{a}_{1},{n}_{2}\right]-\left[{a}_{2},{n}_{1}\right]+\left[{a}_{1},{a}_{2}\right]$

in which the first 3 terms on the right lie in $𝔫$ (because $𝔫$ is an ideal in $𝔤$).

The Lie algebra $𝔞$ acts (via ad) on $𝔫$ as an algebra of derivations:

$\text{ad}:\phantom{\rule{0.2em}{0ex}}𝔞\to \text{Der}\left(𝔫\right)$

Conversely, if we are given Lie algebras $𝔫,𝔞$ and a Lie algebra homomorphism $\phi :\phantom{\rule{0.2em}{0ex}}𝔞\to \text{Der}\left(𝔫\right)$, we construct the semidirect product $𝔤=𝔫⋊𝔞$ as follows: $𝔤=𝔫\oplus 𝔞$ as a vector space, and the Lie bracket in $𝔤$ is defined by

$\left[{n}_{1}+{a}_{1},{n}_{2}+{a}_{2}\right]=\left[{n}_{1},{n}_{2}\right]+\phi \left({a}_{1}\right){n}_{2}-\phi \left({a}_{2}\right){n}_{1}+\left[{a}_{1},{a}_{2}\right]$.

One has of course to check the Jacobi identity, which is tedious but straightforward.
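The Jacobi check can also be carried out numerically. The following sketch (an illustration only, taking $k=ℝ$, with $𝔫={k}^{3}$ abelian and $𝔞=𝔤𝔩\left(3,k\right)$ acting by $\phi =\text{id}$, so that every linear map is a derivation of the abelian algebra $𝔫$) verifies the Jacobi identity for the semidirect-product bracket on random elements:

```python
import numpy as np

# Semidirect product g = n ⋊ a with n = k^3 abelian and a = gl(3, k)
# acting by phi = id.  Elements of g are pairs (v, M), v in k^3, M in gl(3).

def bracket(x, y):
    (v1, M1), (v2, M2) = x, y
    # [n1 + a1, n2 + a2] = phi(a1)n2 - phi(a2)n1 + [a1, a2]  (n abelian)
    return (M1 @ v2 - M2 @ v1, M1 @ M2 - M2 @ M1)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

rng = np.random.default_rng(0)
x, y, z = [(rng.standard_normal(3), rng.standard_normal((3, 3)))
           for _ in range(3)]

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
j = add(add(bracket(x, bracket(y, z)), bracket(y, bracket(z, x))),
        bracket(z, bracket(x, y)))
assert np.allclose(j[0], 0) and np.allclose(j[1], 0)
print("Jacobi identity holds")
```

Since $\phi$ here is a Lie algebra homomorphism into $\text{Der}\left(𝔫\right)$, the Jacobi identity is guaranteed; the check merely illustrates the bracket formula.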

In the present case, let $𝔞$ be a vector space complement of ${𝔥}^{\prime }$ in $𝔥$: then

$𝔤\left(A\right)={𝔤}^{\prime }\left(A\right)\oplus 𝔞$

with ${𝔤}^{\prime }\left(A\right)$ an ideal (1.9) and $𝔞$ a subalgebra. Hence $𝔤\left(A\right)$ may be constructed as the semidirect product ${𝔤}^{\prime }\left(A\right)⋊𝔞$, with $𝔞$ acting as an (abelian) algebra of derivations.

## The centre of $𝔤\left(A\right)$

(1.10) The algebras $𝔤\left(A\right),{𝔤}^{\prime }\left(A\right)$ have the same centre $𝔠$:

${𝔥}_{0}=𝔠=\bigcap _{i=1}^{n}\text{Ker}\left({\alpha }_{i}\right)\subset {𝔥}^{\prime }$.

We have dim $𝔠=n-l$, hence $𝔠=0$ iff $A$ is nonsingular.

 Proof. Suppose $x\in 𝔤\left(A\right)$ commutes with ${e}_{1},\dots ,{f}_{n}$. Say $x=\sum _{r\in ℤ}{x}_{r}$ (principal grading); then $0=\left[x,{f}_{i}\right]=\sum _{r}\left[{x}_{r},{f}_{i}\right]$, so that $\left[{x}_{r},{f}_{i}\right]=0$ for $1\le i\le n$ and all $r\in ℤ$. By (1.8) it follows that ${x}_{r}=0$ if $r\ge 1$, and similarly ${x}_{r}=0$ for $r\le -1$. Hence $x={x}_{0}\in 𝔥$. But then by (1.2) $0=\left[x,{e}_{i}\right]={\alpha }_{i}\left(x\right){e}_{i}$ so that ${\alpha }_{i}\left(x\right)=0$ $\left(1\le i\le n\right)$, whence $x\in \bigcap _{1}^{n}\text{Ker}\phantom{\rule{0.2em}{0ex}}{\alpha }_{i}$. Conversely, if ${\alpha }_{i}\left(x\right)=0$ for $1\le i\le n$, then by (1.2) we have $\left[x,{e}_{i}\right]=\left[x,{f}_{i}\right]=0$, and of course $\left[x,𝔥\right]=0$. This shows that the centre of $𝔤\left(A\right)$ is $𝔠=\bigcap _{i=1}^{n}\text{Ker}\left({\alpha }_{i}\right)$; since the ${\alpha }_{i}$ are independent linear forms on $𝔥$, we have dim $𝔠=$ dim $𝔥-n=n-l$. Finally, I claim that $𝔠\subset {𝔥}^{\prime }$. For $\begin{array}{ccc}\sum _{1}^{n}{\mu }_{i}{h}_{i}\in 𝔠\cap {𝔥}^{\prime }& ⇔& \sum _{1}^{n}{\mu }_{i}{\alpha }_{j}\left({h}_{i}\right)=0\phantom{\rule{1em}{0ex}}\left(1\le j\le n\right)\\ & ⇔& \sum _{1}^{n}{\mu }_{i}{a}_{ij}=0\phantom{\rule{1em}{0ex}}\left(1\le j\le n\right)\end{array}$ Since $A=\left({a}_{ij}\right)$ has rank $l$, it follows that $𝔠\cap {𝔥}^{\prime }$ has dimension $n-l=\text{dim}\phantom{\rule{0.2em}{0ex}}𝔠$. Hence $𝔠\subset {𝔥}^{\prime }$ as claimed, and therefore $𝔠$ is also the centre of ${𝔤}^{\prime }\left(A\right)$. $\square$
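Concretely, the proof identifies the centre with the left null space of $A$ inside ${𝔥}^{\prime }$. A small numerical sketch (taking $k=ℝ$ and, as an example, the singular matrix $A=\left(\begin{array}{cc}2& -2\\ -2& 2\end{array}\right)$, the affine Cartan matrix of type ${A}_{1}^{\left(1\right)}$):

```python
import numpy as np

# An element sum_i mu_i h_i of h' is central iff sum_i mu_i a_ij = 0 for all j,
# i.e. iff mu lies in the left null space of A.  Example: the affine Cartan
# matrix of type A_1^(1), which is singular of rank 1.
A = np.array([[2.0, -2.0],
              [-2.0, 2.0]])
n = A.shape[0]
l = np.linalg.matrix_rank(A)
print("dim c =", n - l)                     # n - l = 1

# basis of {mu : mu A = 0}, read off from the SVD of A^T
_, s, Vt = np.linalg.svd(A.T)
mu = Vt[np.isclose(s, 0)][0]
assert np.allclose(mu @ A, 0)
print("central element:", mu / mu[0], "· (h_1, h_2)")   # c = k(h_1 + h_2)
```

Here the one-dimensional centre is spanned by ${h}_{1}+{h}_{2}$, the canonical central element of the affine algebra.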

## Decomposability

Let us say that two $n×n$ matrices $A=\left({a}_{ij}\right)$ and ${A}^{\prime }=\left({a}_{ij}^{\prime }\right)$ are equivalent:

$A\sim {A}^{\prime }$

if $\exists w\in {S}_{n}$ such that

${a}_{ij}^{\prime }={a}_{w\left(i\right),w\left(j\right)}\phantom{\rule{2em}{0ex}}\left(1\le i,j\le n\right)$

i.e. if ${A}^{\prime }$ is obtained from $A$ by applying the same permutation to rows and columns. Clearly $A\sim {A}^{\prime }⇒𝔤\left(A\right)\cong 𝔤\left({A}^{\prime }\right)$: we have merely reindexed the generators. Now suppose that $A$ satisfies the condition

$\begin{array}{cc}{a}_{ij}=0\phantom{\rule{1em}{0ex}}⇔\phantom{\rule{1em}{0ex}}{a}_{ji}=0& \left(✶\right)\end{array}$

We associate with $A$ a graph $𝔯\left(A\right)$, as follows: the vertices of $𝔯\left(A\right)$ are the indices $1,2,\dots ,n$, and distinct vertices $i$ and $j$ are joined by an edge iff ${a}_{ij}\ne 0$ or ${a}_{ji}\ne 0$.

(1.11) Assume that $A$ satisfies ($✶$). Then the following conditions on $A$ are equivalent:

1. $A$ is equivalent to a nontrivial diagonal sum $\left(\begin{array}{cc}{A}_{1}& 0\\ 0& {A}_{2}\end{array}\right)$;
2. There exist non-empty complementary subsets $I,J$ of $\left\{1,2,\dots ,n\right\}$ such that ${a}_{ij}=0$ for $i\in I$ and $j\in J$;
3. $𝔯\left(A\right)$ is not connected.

 Proof. Obvious. $\square$

If these equivalent conditions are satisfied, we say that $A$ is decomposable.
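Condition (iii) of (1.11) gives a practical test for indecomposability: build the graph $𝔯\left(A\right)$ and check connectivity. A sketch (assuming, as in (1.11), that the input matrix satisfies ($✶$)):

```python
def is_indecomposable(A):
    """Condition (iii) of (1.11): the graph on the indices 0,...,n-1 with an
    edge i--j iff a_ij != 0 (i != j) is connected.  Assumes a_ij = 0 <=> a_ji = 0."""
    n = len(A)
    seen, stack = {0}, [0]
    while stack:                          # depth-first search from vertex 0
        i = stack.pop()
        for j in range(n):
            if j != i and A[i][j] != 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n

A2 = [[2, -1], [-1, 2]]                   # Cartan matrix of sl_3: connected graph
block = [[2, 0], [0, 2]]                  # nontrivial diagonal sum: no edges
assert is_indecomposable(A2)
assert not is_indecomposable(block)
print("ok")
```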

(1.12) If $A$ is decomposable, say $A\sim \left(\begin{array}{cc}{A}_{1}& 0\\ 0& {A}_{2}\end{array}\right)$, then

$𝔤\left(A\right)\cong 𝔤\left({A}_{1}\right)×𝔤\left({A}_{2}\right)$

(direct product).

 Proof. Consider $𝔤=𝔤\left({A}_{1}\right)×𝔤\left({A}_{2}\right)$, which is a Lie algebra generated by ${e}_{1},\dots ,{e}_{n},{f}_{1},\dots ,{f}_{n}$ and $𝔥={𝔥}_{1}\oplus {𝔥}_{2}$. Check that these generators satisfy the relations (1.2) (for the matrix $A$); hence $𝔤$ is a homomorphic image of $\stackrel{\sim }{𝔤}\left(A\right)$, i.e. we have a surjective homomorphism $\stackrel{\sim }{\phi }:\phantom{\rule{0.5em}{0ex}}\stackrel{\sim }{𝔤}\left(A\right)\to 𝔤$. Then $\stackrel{\sim }{\phi }\left(𝔯\right)=𝔞$ say is an ideal of $𝔤$ such that $𝔞\cap 𝔥=0$. But $𝔤$ is a direct product, hence $𝔞={𝔞}_{1}×{𝔞}_{2}$, where ${𝔞}_{i}$ is an ideal in $𝔤\left({A}_{i}\right)$ which intersects ${𝔥}_{i}$ trivially $\left(i=1,2\right)$. Hence (1.7) ${𝔞}_{1}={𝔞}_{2}=0$ and therefore $𝔞=0$; consequently $\stackrel{\sim }{\phi }$ induces a surjective homomorphism $\phi :\phantom{\rule{0.5em}{0ex}}𝔤\left(A\right)\to 𝔤$. The kernel of $\phi$ is an ideal $𝔟$ such that $𝔟\cap 𝔥=0$ (because $\phi |𝔥$ is injective), hence $𝔟=0$ (1.7). Hence $\phi$ is an isomorphism. $\square$

It follows that if $\alpha \in R$, then Supp($\alpha$) is connected.

## Ideals in $𝔤\left(A\right)$

Assume that $A$ satisfies the condition

${a}_{ij}=0\phantom{\rule{1em}{0ex}}⇔\phantom{\rule{1em}{0ex}}{a}_{ji}=0$.

(1.13)

1. Suppose $A$ is indecomposable. Then every ideal in $𝔤\left(A\right)$ either contains ${𝔤}^{\prime }\left(A\right)$ or is contained in the centre $𝔠$.
2. $𝔤\left(A\right)$ is simple iff $A$ is indecomposable and nonsingular.

 Proof. Let $𝔞$ be an ideal in $𝔤\left(A\right)$. By (1.7)(vi), $𝔞$ is $Q$–graded, hence we may write $𝔞=\underset{r\in ℤ}{\oplus }{𝔞}_{r}$ (principal grading). Suppose first that ${𝔞}_{0}\subset 𝔠$. If ${𝔞}_{1}\ne 0$, then ${e}_{i}\in {𝔞}_{1}$ for some $i$. But then ${h}_{i}=\left[{e}_{i},{f}_{i}\right]\in {𝔞}_{0}$, hence ${h}_{i}\in 𝔠$ and therefore (1.10) ${a}_{ij}={\alpha }_{j}\left({h}_{i}\right)=0$ for $1\le j\le n$. This contradicts the assumption that $A$ is indecomposable. Hence ${𝔞}_{1}=0$. It now follows by induction on $r$ that ${𝔞}_{r}=0$ for all $r\ge 1$. For if $x\in {𝔞}_{r}$ where $r\ge 2$, then $\left[x,{f}_{i}\right]\in {𝔞}_{r-1}=0$ by ind. hyp., hence $x=0$ by (1.8). Likewise ${𝔞}_{r}=0$ for $r\le -1$ and therefore $𝔞={𝔞}_{0}\subset 𝔠$. Now suppose that ${𝔞}_{0}\not\subset 𝔠$. Let $h\in {𝔞}_{0}$, $h\notin 𝔠$. By (1.10) we have ${\alpha }_{i}\left(h\right)\ne 0$ for some $i$, hence ${e}_{i}={\alpha }_{i}{\left(h\right)}^{-1}\left[h,{e}_{i}\right]\in 𝔞$; similarly ${f}_{i}\in 𝔞$ (for this value of $i$), and ${h}_{i}=\left[{e}_{i},{f}_{i}\right]\in 𝔞$. Since $𝔯\left(A\right)$ is connected (1.11), $\exists j\in \left[1,n\right]$ such that ${a}_{ij}\ne 0$; since $\left[{h}_{i},{e}_{j}\right]={a}_{ij}{e}_{j}$, it follows that ${e}_{j}\in 𝔞$, and likewise ${f}_{j}\in 𝔞$. It now follows that ${e}_{j},{f}_{j}\in 𝔞$ for every index $j$ connected to $i$ by a path in the graph $𝔯\left(A\right)$ – i.e. ${e}_{1},\dots ,{f}_{n}\in 𝔞$, hence (1.9) $𝔞\supset {𝔤}^{\prime }\left(A\right)$. $⇒$ If $A$ is decomposable, $𝔤\left(A\right)$ is not simple, by (1.12). Again, if $A$ is singular, i.e. $l<n$, then $𝔠\ne 0$ (1.10) and again $𝔤\left(A\right)$ is not simple. $⇐$ If $A$ is nonsingular ($l=n$) then ${𝔤}^{\prime }\left(A\right)=𝔤\left(A\right)$ and $𝔠=0$. Now use (i). $\square$

Note for later use the following corollary of (1.13):

(1.13$\frac{1}{2}$). Assume $A$ indecomposable. Then the following conditions are equivalent:

1. $𝔤\left(A\right)$ is infinite-dimensional
2. $R$ is infinite
3. For each $\alpha \in {R}^{+}$ there exists $i$ such that $\alpha +{\alpha }_{i}\in {R}^{+}$.

 Proof. (i) $⇒$ (ii) is clear from the root space decomposition $𝔤\left(A\right)=𝔥\oplus \underset{\alpha \in R}{\oplus }{𝔤}_{\alpha }$. (iii) $⇒$ (ii) is clear. (ii) $⇒$ (iii): If (iii) is false, there exists a positive root $\alpha$ such that $\alpha +{\alpha }_{i}\notin R$ $\left(1\le i\le n\right)$. Let $x\in {𝔤}_{\alpha },x\ne 0$. Then we have $\left[x,{e}_{i}\right]=0\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$, from which it follows that $U\left({𝔫}_{+}\right)·x=kx$ and therefore the ideal $𝔞=U\left(𝔤\right)·x$ generated by $x$ in $𝔤\left(A\right)$ is $𝔞=U\left({𝔫}_{-}\right)U\left(𝔥\right)U\left({𝔫}_{+}\right)·x=U\left({𝔫}_{-}\right)·x$. Hence ${𝔞}_{\beta }=0$ unless $\beta \le \alpha$. But by (1.13) we have $𝔞\supset {𝔤}^{\prime }\left(A\right)$ (because clearly $𝔞\not\subset 𝔠$), hence in particular $𝔞\supset {𝔫}_{+}$. It follows that all roots $\beta$ are $\le \alpha$, whence ${R}^{+}$ and therefore $R$ is finite. $\square$

Thus if $R$ is finite there is a unique highest root $\phi$ such that $\alpha \le \phi$ for all $\alpha \in R$.

## The algebra ${\stackrel{‾}{𝔤}}^{\prime }={𝔤}^{\prime }\left(A\right)/𝔠$

We have

${𝔤}^{\prime }\left(A\right)={𝔫}_{-}\oplus {𝔥}^{\prime }\oplus {𝔫}_{+}$

by (1.9), and $𝔠\subset {𝔥}^{\prime }$ (1.10), hence

${\stackrel{‾}{𝔤}}^{\prime }\left(A\right)={𝔫}_{-}\oplus {\stackrel{‾}{𝔥}}^{\prime }\oplus {𝔫}_{+}$

where ${\stackrel{‾}{𝔥}}^{\prime }={𝔥}^{\prime }/𝔠$ (so that dim ${\stackrel{‾}{𝔥}}^{\prime }=n-\left(n-l\right)=l$).

Assume that $A$ is indecomposable (and that ${a}_{ij}=0⇔{a}_{ji}=0$). The proof of (1.13)(i) shows that any $Q$–graded ideal $𝔞$ in ${𝔤}^{\prime }\left(A\right)$ such that ${𝔞}_{0}\phantom{\rule{0.2em}{0ex}}\left(=𝔞\cap {𝔥}^{\prime }\right)\subset 𝔠$ is contained in $𝔠$. Hence ${\stackrel{‾}{𝔤}}^{\prime }\left(A\right)$ has no nontrivial $Q$–graded ideal $\stackrel{‾}{𝔞}$ such that ${\stackrel{‾}{𝔞}}_{0}=0$. From this it follows that (1.8) is valid for the algebra ${\stackrel{‾}{𝔤}}^{\prime }\left(A\right)$.

We shall make use of this remark in the proof of the following proposition:

(1.14) Assume that $A$ is indecomposable and that each root $\alpha$ has a nonzero restriction to ${𝔥}^{\prime }$. Then the algebra ${\stackrel{‾}{𝔤}}^{\prime }\left(A\right)$ is simple.

 Proof. Let $𝔞\ne 0$ be an ideal in ${\stackrel{‾}{𝔤}}^{\prime }\left(A\right)$. Each $x\in 𝔞$ is of the form $x=\sum _{\alpha \in S}{x}_{\alpha }$ where $S$ is some finite subset of $Q$, each ${x}_{\alpha }\ne 0$, ${x}_{\alpha }\in {𝔤}_{\alpha }$ for $\alpha \ne 0$, and ${x}_{0}\in {\stackrel{‾}{𝔥}}^{\prime }$. Call $|S|$ the length of $x$, and the number $\underset{\alpha \in S}{\text{max}}\phantom{\rule{0.2em}{0ex}}\text{ht}\left(\alpha \right)$ the height of $x$. Choose $x\ne 0$ in $𝔞$ of minimal length. Suppose that the chosen $x$ has height $r\ge 1$, so that $x={x}_{\alpha }+\dots$ where $\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\alpha \right)=r$, ${x}_{\alpha }\ne 0$. Since ${x}_{\alpha }\ne 0$ we have $\left[{x}_{\alpha },{f}_{i}\right]\ne 0$ for some $i$, by the remark above; hence $\left[x,{f}_{i}\right]$ is a nonzero element of $𝔞$, of minimal length and height $r-1$. By proceeding in this way we shall obtain an element $y\ne 0$ of $𝔞$ of minimal length and height 0, say $\begin{array}{cc}y={\stackrel{‾}{h}}_{0}+\sum _{\alpha \in {S}^{\prime }}{y}_{\alpha }& \left(1\right)\end{array}$ where ${\stackrel{‾}{h}}_{0}\in {\stackrel{‾}{𝔥}}^{\prime }$ (and $\ne 0$), and $|{S}^{\prime }|=|S|-1$. Similarly, if $x$ has height $<0$, we use the ${e}_{i}$ rather than the ${f}_{i}$ to achieve the same result. From (1) we have, for all $\stackrel{‾}{h}\in {\stackrel{‾}{𝔥}}^{\prime }$, $\left[\stackrel{‾}{h},y\right]=\sum _{\alpha \in {S}^{\prime }}\alpha \left(\stackrel{‾}{h}\right){y}_{\alpha }$ which is an element of $𝔞$ of length $<|S|$, hence is 0. Hence $\alpha \left(\stackrel{‾}{h}\right)=0$ for all $\alpha \in {S}^{\prime }$ and all $\stackrel{‾}{h}\in {\stackrel{‾}{𝔥}}^{\prime }$, i.e. $\alpha \in {S}^{\prime }⇒\alpha |{𝔥}^{\prime }=0$. By hypothesis, therefore, ${S}^{\prime }$ is empty and therefore $y={\stackrel{‾}{h}}_{0}\in {\stackrel{‾}{𝔥}}^{\prime }$. 
Since ${\stackrel{‾}{h}}_{0}\ne 0$ we have ${\alpha }_{i}\left({\stackrel{‾}{h}}_{0}\right)\ne 0$ (by (1.10)) for some $i$, and therefore ${e}_{i},{f}_{i}\in 𝔞$ for this value of $i$ (because ${\alpha }_{i}\left({\stackrel{‾}{h}}_{0}\right){e}_{i}=\left[{\stackrel{‾}{h}}_{0},{e}_{i}\right]\in 𝔞$). But now it follows as in the proof of (1.13) that ${e}_{1},\dots ,{f}_{n}$ all lie in $𝔞$, whence $𝔞={\stackrel{‾}{𝔤}}^{\prime }\left(A\right)$. $\square$

Remark: The converse of (1.14) is true if $A$ is a Cartan matrix (proof later); I do not know whether it is so in general.

I shall conclude this chapter with some properties of $𝔤\left(A\right)$ that are valid only when $A$ is a Cartan matrix. So assume now that the matrix $A$ satisfies the condition $\left(C\right)$:

 $\left(C\right)$ ${a}_{ij}\in ℤ;\phantom{\rule{0.5em}{0ex}}{a}_{ii}=2;\phantom{\rule{0.5em}{0ex}}{a}_{ij}\le 0\phantom{\rule{0.5em}{0ex}}\text{if}\phantom{\rule{0.5em}{0ex}}i\ne j;\phantom{\rule{0.5em}{0ex}}{a}_{ij}=0⇔{a}_{ji}=0$.

For each $i=1,\dots ,n$ let ${s}_{i}$ denote the subspace of $𝔤\left(A\right)$ spanned by ${e}_{i},{f}_{i},{h}_{i}$. From (1.2) we have

$\left[{e}_{i},{f}_{i}\right]={h}_{i},\phantom{\rule{1em}{0ex}}\left[{h}_{i},{e}_{i}\right]=2{e}_{i},\phantom{\rule{1em}{0ex}}\left[{h}_{i},{f}_{i}\right]=-2{f}_{i}$

(1.15) ${s}_{i}$ is a subalgebra of $𝔤\left(A\right)$, isomorphic to ${𝔰𝔩}_{2}\left(k\right)$.

 Proof. The relations just written show that ${s}_{i}$ is a 3-dimensional subalgebra of $𝔤\left(A\right)$. The mapping ${e}_{i}↦\left(\begin{array}{cc}0& 1\\ 0& 0\end{array}\right),\phantom{\rule{1em}{0ex}}{f}_{i}↦\left(\begin{array}{cc}0& 0\\ 1& 0\end{array}\right),\phantom{\rule{1em}{0ex}}{h}_{i}↦\left(\begin{array}{cc}1& 0\\ 0& -1\end{array}\right)$ is an isomorphism of ${s}_{i}$ onto ${𝔰𝔩}_{2}\left(k\right)$. $\square$
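The relations defining ${s}_{i}$ can be checked directly on the $2×2$ matrices of (1.15); a small numerical sketch (over $k=ℝ$):

```python
import numpy as np

e = np.array([[0, 1], [0, 0]])
f = np.array([[0, 0], [1, 0]])
h = np.array([[1, 0], [0, -1]])

def br(x, y):
    """Commutator bracket in gl(2)."""
    return x @ y - y @ x

# The relations of s_i (with a_ii = 2):
assert np.array_equal(br(e, f), h)        # [e, f] = h
assert np.array_equal(br(h, e), 2 * e)    # [h, e] = 2e
assert np.array_equal(br(h, f), -2 * f)   # [h, f] = -2f
print("sl2 relations verified")
```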

Next we require the following lemma:

(1.16) Let $x,y$ be elements of an associative ring $R$. Then for each positive integer $N$ we have

1. ${x}^{N}y=\sum _{r=0}^{N}\phantom{\rule{0.2em}{0ex}}\left(\genfrac{}{}{0}{}{N}{r}\right)\phantom{\rule{0.2em}{0ex}}{\left(\text{ad}\phantom{\rule{0.2em}{0ex}}x\right)}^{r}y{x}^{N-r}$.
2. $x{y}^{N}=\sum _{r=0}^{N}{\left(-1\right)}^{r}\phantom{\rule{0.2em}{0ex}}\left(\genfrac{}{}{0}{}{N}{r}\right)\phantom{\rule{0.2em}{0ex}}{y}^{N-r}{\left(\text{ad}\phantom{\rule{0.2em}{0ex}}y\right)}^{r}x$.

 Proof. We shall prove (ii); the proof of (i) is analogous. Let ${\lambda }_{y},{\rho }_{y}:\phantom{\rule{0.2em}{0ex}}R\to R$ denote respectively left and right multiplication by $y$ in $R$. Since $R$ is associative they commute with each other and hence also with $\text{ad}\phantom{\rule{0.2em}{0ex}}y={\lambda }_{y}-{\rho }_{y}$. Hence $\begin{array}{ccc}x{y}^{N}={\rho }_{y}^{N}\left(x\right)& =& {\left({\lambda }_{y}-\text{ad}\phantom{\rule{0.2em}{0ex}}y\right)}^{N}x\\ & =& \sum _{r=0}^{N}{\left(-1\right)}^{r}\phantom{\rule{0.2em}{0ex}}\left(\genfrac{}{}{0}{}{N}{r}\right)\phantom{\rule{0.2em}{0ex}}{y}^{N-r}{\left(\text{ad}\phantom{\rule{0.2em}{0ex}}y\right)}^{r}x\text{.}\end{array}$ $\square$
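Since (1.16) is an identity in an arbitrary associative ring, it can be spot-checked in, say, the matrix ring ${M}_{3}\left(ℝ\right)$ with random $x,y$; a numerical sketch, here for $N=4$:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
x, y = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

def ad_pow(y, x, r):
    """(ad y)^r x in the associative ring M_3(R)."""
    for _ in range(r):
        x = y @ x - x @ y
    return x

# Identity (ii):  x y^N = sum_r (-1)^r C(N, r) y^{N-r} (ad y)^r x
N = 4
lhs = x @ np.linalg.matrix_power(y, N)
rhs = sum((-1) ** r * comb(N, r)
          * np.linalg.matrix_power(y, N - r) @ ad_pow(y, x, r)
          for r in range(N + 1))
assert np.allclose(lhs, rhs)
print("identity (ii) checked for N = 4")
```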

Let us apply this formula with $x={e}_{i},\phantom{\rule{0.2em}{0ex}}y={f}_{i},\phantom{\rule{0.2em}{0ex}}R=U\left({s}_{i}\right)$: we have

$\begin{array}{c}\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right){e}_{i}=\left[{f}_{i},{e}_{i}\right]=-{h}_{i}\\ {\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{2}{e}_{i}=-\left[{f}_{i},{h}_{i}\right]=-2{f}_{i}\\ {\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{3}{e}_{i}=-2\left[{f}_{i},{f}_{i}\right]=0\end{array}$

and therefore

$\begin{array}{cc}\text{(1.17)}& \begin{array}{ccc}{e}_{i}{f}_{i}^{N}& =& {f}_{i}^{N}{e}_{i}+N{f}_{i}^{N-1}{h}_{i}+\left(\genfrac{}{}{0}{}{N}{2}\right){f}_{i}^{N-2}\left(-2{f}_{i}\right)\\ & =& {f}_{i}^{N}{e}_{i}+N{f}_{i}^{N-1}\left({h}_{i}-N+1\right)\text{.}\end{array}\end{array}$
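Formula (1.17) is an identity in $U\left({s}_{i}\right)$, so it must hold in every representation of ${𝔰𝔩}_{2}$. A sketch checking it in the 5-dimensional irreducible representation (using the standard action $h{v}_{j}=\left(d-1-2j\right){v}_{j}$, $f{v}_{j}={v}_{j+1}$, $e{v}_{j}=j\left(d-j\right){v}_{j-1}$):

```python
import numpy as np

# The d-dimensional irreducible sl_2 module with basis v_0, ..., v_{d-1}:
# h v_j = (d-1-2j) v_j,  f v_j = v_{j+1},  e v_j = j(d-j) v_{j-1}.
d = 5
h = np.diag([d - 1 - 2 * j for j in range(d)]).astype(float)
f = np.diag(np.ones(d - 1), -1)                       # f: v_j -> v_{j+1}
e = np.diag([j * (d - j) for j in range(1, d)], 1)    # e: v_j -> j(d-j) v_{j-1}
assert np.allclose(e @ f - f @ e, h)                  # [e, f] = h

fN = lambda N: np.linalg.matrix_power(f, N)
I = np.eye(d)
for N in range(1, d):
    # (1.17):  e f^N = f^N e + N f^{N-1} (h - N + 1)
    lhs = e @ fN(N)
    rhs = fN(N) @ e + N * fN(N - 1) @ (h - (N - 1) * I)
    assert np.allclose(lhs, rhs), N
print("(1.17) holds in the 5-dimensional representation")
```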

(1.18) In $𝔤\left(A\right)$ we have

${\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right)}^{1-{a}_{ij}}{e}_{j}=0$
${\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{1-{a}_{ij}}{f}_{j}=0$

whenever $i\ne j$.

 Proof. It is enough to prove one of these relations, because the other then follows by applying the involution $\omega$. Let ${f}_{ij}={\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{1-{a}_{ij}}{f}_{j}$. By (1.8), in order to show that ${f}_{ij}=0$ it is enough to show that $\left[{e}_{k},{f}_{ij}\right]=0\phantom{\rule{1em}{0ex}}\left(1\le k\le n\right)$. There are 3 cases to consider: $k\ne i,k\ne j$. Then ${e}_{k}$ commutes with ${f}_{i}$ and ${f}_{j}$ (1.2), hence with ${f}_{ij}$. $k=j,k\ne i$. Then ${e}_{j}$ commutes with ${f}_{i}$, hence $\begin{array}{ccc}\left[{e}_{j},{f}_{ij}\right]& =& {\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{1-{a}_{ij}}\left[{e}_{j},{f}_{j}\right]\\ & =& {\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{-{a}_{ij}}\left[{f}_{i},{h}_{j}\right]\\ & =& {a}_{ji}{\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{-{a}_{ij}}{f}_{i}\text{.}\end{array}$ If ${a}_{ij}\ne 0$ this is zero, whilst if ${a}_{ij}=0$ then ${a}_{ji}=0$ (by (C)), so again it is 0. $k=i,k\ne j$. We have, using the formula (1.17), $\begin{array}{ccc}\left[{e}_{i},{f}_{ij}\right]& =& \left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right){\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{1-{a}_{ij}}{f}_{j}\\ & =& {\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{1-{a}_{ij}}\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}\right){f}_{j}+\left(1-{a}_{ij}\right){\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\right)}^{-{a}_{ij}}\left(\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{h}_{i}\right){f}_{j}+{a}_{ij}{f}_{j}\right)\\ & =& 0\phantom{\rule{0.5em}{0ex}}\text{by (1.2).}\end{array}$ $\square$
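For a concrete instance, take $A$ to be the Cartan matrix of ${𝔰𝔩}_{3}$, so ${a}_{12}={a}_{21}=-1$ and (1.18) asserts ${\left(\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{1}\right)}^{2}{e}_{2}=0$. This can be verified directly with matrix units (a sketch, using the standard Chevalley generators of ${𝔰𝔩}_{3}$):

```python
import numpy as np

def E(i, j, n=3):
    """Matrix unit E_{i+1, j+1} in gl(n)."""
    m = np.zeros((n, n))
    m[i, j] = 1.0
    return m

def br(x, y):
    return x @ y - y @ x

# Chevalley generators of sl_3: e_1 = E_12, e_2 = E_23, f_i = e_i^T.
e1, e2 = E(0, 1), E(1, 2)
f1, f2 = e1.T, e2.T

# a_12 = -1, so (1.18) reads (ad e_1)^2 e_2 = 0 and (ad f_1)^2 f_2 = 0:
assert np.allclose(br(e1, br(e1, e2)), 0)
assert np.allclose(br(f1, br(f1, f2)), 0)
# but (ad e_1) e_2 = E_13 is nonzero, so the exponent 1 - a_12 = 2 is sharp:
assert not np.allclose(br(e1, e2), 0)
print("Serre relations hold in sl_3")
```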

Remark For an arbitrary Cartan matrix $A$, it is still an open question whether the relations (1.2) together with (1.18) are a complete set of defining relations for the algebra $𝔤\left(A\right)$: or, equivalently, whether the left sides of the relations (1.18) generate the ideal $𝔯$ in $\stackrel{\sim }{𝔤}\left(A\right)$. At any rate this is known to be true (proof later, perhaps) if $A$ is symmetrizable.

In general, a derivation $d$ of a Lie algebra $𝔤$ is said to be locally nilpotent if for each $x\in 𝔤$ there exists a positive integer $N\left(x\right)$ such that ${d}^{N\left(x\right)}x=0$: i.e. if each $x\in 𝔤$ is killed by some power of $d$. In that case ${e}^{d}:𝔤\to 𝔤$ is well defined, because the series

${e}^{d}\left(x\right)=\sum _{n\ge 0}\frac{{d}^{n}x}{n!}$

terminates for each $x\in 𝔤$. The Leibniz formula shows that ${e}^{d}\left[x,y\right]=\left[{e}^{d}x,{e}^{d}y\right]$ and hence that ${e}^{d}$ is an automorphism of the Lie algebra $𝔤$ (with inverse ${e}^{-d}$).

(1.19) $\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}$ and $\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}\phantom{\rule{0.5em}{0ex}}\left(1\le i\le n\right)$ are locally nilpotent derivations of $𝔤\left(A\right)$ (and of ${𝔤}^{\prime }\left(A\right)$). (Consequently ${e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}},{e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{f}_{i}}$ are automorphisms of $𝔤\left(A\right)$ and of ${𝔤}^{\prime }\left(A\right)$.)

 Proof. It is enough to consider $\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}=\phi$, say. Let $𝔞$ be the subspace of $𝔤\left(A\right)$ consisting of all $x\in 𝔤\left(A\right)$ killed by some power of $\phi$. Since $\phi$ is a derivation, $𝔞$ is a subalgebra of $𝔤$, by virtue of the Leibniz formula: if ${\phi }^{r}x=0$ and ${\phi }^{s}y=0$, then ${\phi }^{r+s-1}\left[x,y\right]=0$. Hence to show that $𝔞=𝔤\left(A\right)$ it is enough to show that the generators ${e}_{j},{f}_{j},h\in 𝔥$ belong to $𝔞$. For ${e}_{j}$ this follows from (1.18). For $h\in 𝔥$ we have $\phi \left(h\right)=\left[{e}_{i},h\right]=-{\alpha }_{i}\left(h\right){e}_{i}\phantom{\rule{1em}{0ex}}\text{(1.2)}$ whence ${\phi }^{2}\left(h\right)=0$. Finally, for ${f}_{j}$ we have $\phi \left({f}_{j}\right)=\left[{e}_{i},{f}_{j}\right]=0$ if $j\ne i$, and $\phi \left({f}_{i}\right)={h}_{i},\phantom{\rule{0.5em}{0ex}}{\phi }^{2}\left({f}_{i}\right)=\left[{e}_{i},{h}_{i}\right]=-2{e}_{i},\phantom{\rule{0.5em}{0ex}}{\phi }^{3}\left({f}_{i}\right)=-2\left[{e}_{i},{e}_{i}\right]=0$. $\square$
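For ${𝔰𝔩}_{2}$ the series for ${e}^{\text{ad}\phantom{\rule{0.2em}{0ex}}{e}_{i}}$ terminates after the cubic term, so the automorphism property can be checked exactly; a numerical sketch (with $k=ℝ$):

```python
import numpy as np

e = np.array([[0.0, 1.0], [0.0, 0.0]])

def br(x, y):
    return x @ y - y @ x

def exp_ad(d, x, terms=4):
    """e^{ad d} x; four terms suffice since (ad e)^3 = 0 on sl_2."""
    out, t, fact = np.zeros_like(x), x, 1.0
    for n in range(terms):
        out += t / fact
        t = br(d, t)
        fact *= n + 1
    return out

rng = np.random.default_rng(2)
def rand_sl2():
    """A random traceless 2x2 matrix, i.e. an element of sl_2(R)."""
    m = rng.standard_normal((2, 2))
    return m - np.trace(m) / 2 * np.eye(2)

x, y = rand_sl2(), rand_sl2()
lhs = exp_ad(e, br(x, y))
rhs = br(exp_ad(e, x), exp_ad(e, y))
assert np.allclose(lhs, rhs)                      # e^d [x,y] = [e^d x, e^d y]
assert np.allclose(exp_ad(-e, exp_ad(e, x)), x)   # inverse is e^{-d}
print("e^{ad e} is an automorphism of sl_2")
```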

## The Lie algebra defined by a principal submatrix

Let $A={\left({a}_{ij}\right)}_{1\le i,j\le n}$ be any $n×n$ matrix with entries in $k$. For any non-empty subset $J$ of $\left\{1,2,\dots ,n\right\}$ let

${A}_{J}={\left({a}_{ij}\right)}_{i,j\in J}$

be the principal submatrix defined by the subset $J$. Write

${n}_{J}=\text{Card}\left(J\right),\phantom{\rule{0.5em}{0ex}}{l}_{J}=\text{rank}\left({A}_{J}\right)$.

We wish to see how $𝔤\left({A}_{J}\right)$ is related to $𝔤\left(A\right)$.

Let $\left(𝔥,B,{B}^{\vee }\right)$ be a minimal realization of $A$, so that $\text{dim}\phantom{\rule{0.2em}{0ex}}𝔥=2n-l=N$ say. Let

${𝔥}_{J}^{\prime }=\sum _{j\in J}k{h}_{j}$
${𝔠}_{J}=\bigcap _{j\in J}\text{Ker}\phantom{\rule{0.2em}{0ex}}\left({\alpha }_{j}\right)$

which are subspaces of $𝔥$.

(a) Let $V$ be a vector subspace of $𝔥$. Then the restrictions ${\alpha }_{j}|V\phantom{\rule{0.2em}{0ex}}\left(j\in J\right)$ are linearly independent (as linear forms on $V$) iff $V+{𝔠}_{J}=𝔥$.

 Proof. Take annihilators: $V+{𝔠}_{J}=𝔥⇔{V}^{0}\cap {𝔠}_{J}^{0}=0$ (in ${𝔥}^{*}$). But ${𝔠}_{J}^{0}$ is the subspace of ${𝔥}^{*}$ spanned by the ${\alpha }_{j}\phantom{\rule{0.2em}{0ex}}\left(j\in J\right)$, whence the result. Note that ${𝔠}_{J}={⟨{\alpha }_{j}:j\in J⟩}^{0}$. $\square$

(b) Let ${𝔥}_{J}$ be minimal among subspaces $V$ of $𝔥$ satisfying (i) $V\supset {𝔥}_{J}^{\prime }$; (ii) $V+{𝔠}_{J}=𝔥$; and let

${B}_{J}=\left\{{\alpha }_{j}|{𝔥}_{J}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}j\in J\right\},\phantom{\rule{0.5em}{0ex}}{B}_{J}^{\vee }=\left\{{h}_{j}\phantom{\rule{0.2em}{0ex}}:\phantom{\rule{0.2em}{0ex}}j\in J\right\}$

Then $\left({𝔥}_{J},{B}_{J},{B}_{J}^{\vee }\right)$ is a minimal realization of ${A}_{J}$.

 Proof. By (a), the elements of ${B}_{J}$ are linearly independent in ${𝔥}_{J}^{*}$. It remains to show that $\text{dim}\phantom{\rule{0.2em}{0ex}}{𝔥}_{J}=2{n}_{J}-{l}_{J}$. Let $\pi :𝔥\to 𝔥/{𝔥}_{J}^{\prime }$ be the projection. We have $\pi \left({𝔠}_{J}\right)=\left({𝔠}_{J}+{𝔥}_{J}^{\prime }\right)/{𝔥}_{J}^{\prime }\cong {𝔠}_{J}/\left({𝔠}_{J}\cap {𝔥}_{J}^{\prime }\right)$ and $\text{dim}\phantom{\rule{0.2em}{0ex}}\left({𝔠}_{J}\cap {𝔥}_{J}^{\prime }\right)={n}_{J}-{l}_{J}$ just as in the proof of (1.10); also $\text{dim}\phantom{\rule{0.2em}{0ex}}{𝔠}_{J}=N-{n}_{J}$, so that $\text{dim}\phantom{\rule{0.2em}{0ex}}\pi \left({𝔠}_{J}\right)=N-2{n}_{J}+{l}_{J}=N-{N}_{J}$ say, where ${N}_{J}=2{n}_{J}-{l}_{J}$. Clearly ${𝔥}_{J}$ must be such that ${𝔥}_{J}/{𝔥}_{J}^{\prime }=\pi \left({𝔥}_{J}\right)$ is a vector space complement of $\pi \left({𝔠}_{J}\right)$ in $𝔥/{𝔥}_{J}^{\prime }$, and therefore $\begin{array}{ccc}\text{dim}\phantom{\rule{0.2em}{0ex}}\pi \left({𝔥}_{J}\right)& =& \text{dim}\phantom{\rule{0.2em}{0ex}}\left(𝔥/{𝔥}_{J}^{\prime }\right)-\text{dim}\phantom{\rule{0.2em}{0ex}}\pi \left({𝔠}_{J}\right)\\ & =& \left(N-{n}_{J}\right)-\left(N-{N}_{J}\right)={N}_{J}-{n}_{J}\end{array}$ and finally $\text{dim}\phantom{\rule{0.2em}{0ex}}{𝔥}_{J}={n}_{J}+\left({N}_{J}-{n}_{J}\right)={N}_{J}$. $\square$

(c) From (1.3) we have

$\stackrel{\sim }{𝔤}\left({A}_{J}\right)={\stackrel{\sim }{𝔫}}_{J,-}\oplus {𝔥}_{J}\oplus {\stackrel{\sim }{𝔫}}_{J,+}$

where ${\stackrel{\sim }{𝔫}}_{J,+}$ (resp. ${\stackrel{\sim }{𝔫}}_{J,-}$) is the free Lie algebra generated by the ${e}_{j},j\in J$ (resp. by the ${f}_{j},j\in J$). Hence $\stackrel{\sim }{𝔤}\left({A}_{J}\right)$ is a subalgebra of $\stackrel{\sim }{𝔤}\left(A\right)$, and if we put ${Q}_{J}=\sum _{j\in J}ℤ{\alpha }_{j}$ we have

$\stackrel{\sim }{𝔤}\left({A}_{J}\right)={𝔥}_{J}+\sum _{\genfrac{}{}{0}{}{\beta \in {Q}_{J}}{\beta \ne 0}}{\stackrel{\sim }{𝔤}}_{\beta }$

with components ${\stackrel{\sim }{𝔤}}_{\beta }\phantom{\rule{0.2em}{0ex}}\left(\beta \in {Q}_{J},\beta \ne 0\right)$ the same as those in $\stackrel{\sim }{𝔤}\left(A\right)$.

(d) Let ${𝔯}_{J}$ be the unique largest ideal in $\stackrel{\sim }{𝔤}\left({A}_{J}\right)$ satisfying ${𝔯}_{J}\cap {𝔥}_{J}=0$ (so that $𝔤\left({A}_{J}\right)=\stackrel{\sim }{𝔤}\left({A}_{J}\right)/{𝔯}_{J}$). Then

${𝔯}_{J}=\stackrel{\sim }{𝔤}\left({A}_{J}\right)\cap 𝔯$

and hence

${𝔯}_{J}=\underset{\beta \in {Q}_{J}}{\oplus }{𝔯}_{\beta }$

 Proof. Let ${𝔯}_{J}^{\prime }=\stackrel{\sim }{𝔤}\left({A}_{J}\right)\cap 𝔯=\oplus {𝔯}_{\beta }$. This is an ideal in $\stackrel{\sim }{𝔤}\left({A}_{J}\right)$ which intersects ${𝔥}_{J}$ trivially, hence certainly ${𝔯}_{J}^{\prime }\subset {𝔯}_{J}$. Conversely, let $\phi :\phantom{\rule{0.2em}{0ex}}\stackrel{\sim }{𝔤}\left({A}_{J}\right)↪\stackrel{\sim }{𝔤}\left(A\right)\to 𝔤\left(A\right)$, so that $\text{Ker}\phantom{\rule{0.2em}{0ex}}\left(\phi \right)={𝔯}_{J}^{\prime }$. Let $x\in {𝔯}_{J,\beta }$, where $\beta \in {Q}_{J}$; I claim that $\phi \left(x\right)=0$ in $𝔤\left(A\right)$. Suppose for example $\beta >0$, and proceed by induction on $m=\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\beta \right)$. If $m=1$ then $x=0$ (1.6), so certainly $\phi \left(x\right)=0$. If $m>1$, consider $\left[\phi \left(x\right),{f}_{i}\right]$. There are two cases: if $i\in J$, then $\left[\phi \left(x\right),{f}_{i}\right]=\phi \left(\left[x,{f}_{i}\right]\right)=0$ by the inductive hypothesis, because $\left[x,{f}_{i}\right]\in {𝔯}_{J,\beta -{\alpha }_{i}}$ and $\text{ht}\phantom{\rule{0.2em}{0ex}}\left(\beta -{\alpha }_{i}\right)=m-1$; if $i\notin J$, then since $\phi \left(x\right)\in {𝔤}_{\beta }$ we have $\left[\phi \left(x\right),{f}_{i}\right]\in {𝔤}_{\beta -{\alpha }_{i}}$; but $\beta -{\alpha }_{i}$ is not a root, because $\beta =\sum _{j\in J}{m}_{j}{\alpha }_{j}$ (say) and $i\notin J$. Hence $\left[\phi \left(x\right),{f}_{i}\right]=0$ in both cases, and therefore by (1.8) $\phi \left(x\right)=0$. Likewise if $\beta <0$. It follows that ${𝔯}_{J}\subset \text{Ker}\phantom{\rule{0.2em}{0ex}}\left(\phi \right)={𝔯}_{J}^{\prime }$. $\square$

(e) From (d) it follows that the embedding of $\stackrel{\sim }{𝔤}\left({A}_{J}\right)$ in $\stackrel{\sim }{𝔤}\left(A\right)$ induces an embedding of $𝔤\left({A}_{J}\right)$ in $𝔤\left(A\right)$. We have

$𝔤\left({A}_{J}\right)={𝔥}_{J}+\sum _{\genfrac{}{}{0}{}{\beta \in {Q}_{J}}{\beta \ne 0}}{\stackrel{\sim }{𝔤}}_{\beta }/{𝔯}_{\beta }$

from (c) and (d); but ${\stackrel{\sim }{𝔤}}_{\beta }/{𝔯}_{\beta }={𝔤}_{\beta }$ (summand of $𝔤\left(A\right)$). Hence

$\begin{array}{cc}\text{(1.20)}& 𝔤\left({A}_{J}\right)={𝔥}_{J}+\sum _{\genfrac{}{}{0}{}{\beta \in {Q}_{J}}{\beta \ne 0}}{𝔤}_{\beta }\end{array}$

Hence if $R$ (resp. ${R}_{J}$) is the set of roots of $𝔤\left(A\right)$ (resp. $𝔤\left({A}_{J}\right)$) we have ${R}_{J}=R\cap {Q}_{J}$, and the multiplicity of $\beta \in {R}_{J}$ as a root of $𝔤\left({A}_{J}\right)$ is the same as its multiplicity as a root of $𝔤\left(A\right)$.
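As an illustration of (1.20), take $A$ of type ${A}_{3}$ (so $𝔤\left(A\right)={𝔰𝔩}_{4}$) and $J=\left\{1,2\right\}$: intersecting the roots with ${Q}_{J}$ recovers the root system of type ${A}_{2}$. A sketch, representing positive roots by their coefficient vectors on ${\alpha }_{1},\dots ,{\alpha }_{n}$:

```python
# Positive roots of type A_n, written as coefficient vectors on alpha_1..alpha_n:
# they are alpha_i + ... + alpha_j for 1 <= i <= j <= n.
def positive_roots(n):
    return [tuple(1 if i <= k <= j else 0 for k in range(n))
            for i in range(n) for j in range(i, n)]

n, J = 3, {0, 1}                        # A_3 (sl_4), J = {1, 2} (0-indexed here)
R = positive_roots(n)
RJ = [b for b in R if all(b[k] == 0 for k in range(n) if k not in J)]

# R ∩ Q_J is exactly the positive root system of A_J, of type A_2:
assert sorted(RJ) == sorted(b + (0,) for b in positive_roots(2))
print("R ∩ Q_J:", RJ)
```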