Wisconsin Bourbaki Seminar

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au

Last update: 24 March 2014

Notes and References

This is an excerpt from notes of the Wisconsin Bourbaki Seminar, Fall 1993. The notes were written by Oliver Eng, Susan Hollingsworth, Mark Logan, Arun Ram and Louis Solomon.

§3

  1. Let $V$ be a finite dimensional real vector space with an inner product. Let $F$ be a finite subgroup of the orthogonal group of $V$ generated by reflections. Let $\Lambda$ be a discrete subgroup of $V$ stable under $F$. Let $W$ be the group of affine transformations of $V$ generated by $F$ and the translations by vectors in $\Lambda$. Let $\mathcal{H}$ be the set of hyperplanes $H$ in $V$ such that $s_H\in F$ and let $R$ be the set of $r\in\Lambda$ such that there exists a hyperplane $H\in\mathcal{H}$ with $r$ orthogonal to $H$.
    1. Show that $W$ is generated by (affine) reflections if and only if $R$ generates $\Lambda$ as a $\mathbb{Z}$-module.
    2. Consider $\mathbb{R}^2$ with the scalar product $((x,y),(x',y'))\mapsto xx'+yy'$. Let $e_1=(1,0)$, $e_2=\bigl(-\tfrac12,\tfrac{\sqrt3}{2}\bigr)$, $e_3=\bigl(-\tfrac12,-\tfrac{\sqrt3}{2}\bigr)$, $\Delta_i=\mathbb{R}e_i$, and let $F$ be the dihedral group generated by the $s_{\Delta_i}$. Let $\Lambda$ be the discrete subgroup of $\mathbb{R}^2$ generated by the $e_i$. The subgroup $\Lambda$ is stable under $F$. Show that $W$ is not generated by reflections.

    Solution.
    1. $\Leftarrow$: Assume that $R$ generates $\Lambda$. $W$ is generated by $F$ and the translations in $\Lambda$. Translations in $\Lambda$ are generated by translations in $R$. So it is sufficient to show that translations by vectors in $R$ are generated by reflections. Let $r\in R$ and let $t_r$ be the corresponding translation. Suppose that $r$ is orthogonal to $H\in\mathcal{H}$. Then
      a. $t_r=(t_rs_H)s_H$.
      b. $t_rs_H$ has order 2, since $s_Ht_rs_H=t_{s_Hr}=t_{-r}$ and thus $t_rs_H\,t_rs_H=t_rt_{-r}=1$.
      c. $t_rs_H$ fixes the hyperplane $H+\tfrac12r$, since if $x$ is on the hyperplane $H+\tfrac12r$ then $s_Hx$ is a corresponding point on the hyperplane $H-\tfrac12r$ and thus $t_rs_Hx=x$.
      It follows from b) and c) that $t_rs_H$ is the reflection about the hyperplane $H+\tfrac12r$. So $t_r=s_{H+\frac12r}\,s_H$. Thus translations by elements of $R$ are generated by reflections.
      $\Rightarrow$: Suppose that $W$ is generated by reflections. If $\lambda\in V$, let $t_\lambda$ denote the translation by the vector $\lambda$, and let $T=\{t_\lambda\mid\lambda\in V\}$. Then $T$ is normalized by $O(V)$, and the group of affine isometries of $V$ is the semidirect product of $T$ by $O(V)$, since only the identity is both a translation and an element of $O(V)$. Thus, since $\Lambda$ is stable under $F$, every element of $W$ is uniquely expressible as $t_\lambda\cdot g$ for some $g\in F$ and $\lambda\in\Lambda$, and hence $\{t_\lambda\mid\lambda\in\Lambda\}$ is normalized by $F$. Now consider an affine reflection $\varphi\colon V\to V$. The reflection $\varphi$ is determined by its fixed set, which is an affine hyperplane. This affine hyperplane may be written as $H_\lambda=\{\lambda+\mu\mid\mu\in H\}$, where $\lambda\in V$, $H$ is a (linear) hyperplane, and $\lambda$ is perpendicular to $H$. Then $V=\mathbb{R}\lambda\oplus H$ and $\varphi$ acts by $\varphi(a\lambda+\mu)=(2-a)\lambda+\mu$ for $a\in\mathbb{R}$, $\mu\in H$. (To get from $a\lambda+\mu$ to $\lambda+\mu$ we add $(1-a)\lambda$; adding it again gives the mirror image, since the orthogonal projection of $a\lambda+\mu$ onto the affine hyperplane is simply $\lambda+\mu$. See the diagram.) But if we let $s_H\in O(V)$ be the orthogonal reflection in $H$, then for $a\in\mathbb{R}$, $\mu\in H$,
      $$t_{2\lambda}s_H(a\lambda+\mu)=t_{2\lambda}(-a\lambda+\mu)=(2-a)\lambda+\mu=\varphi(a\lambda+\mu).$$
      So it follows that any affine reflection in $W$ is uniquely expressible as $t_\lambda s_H$ where $\lambda\in\Lambda$, $H\in\mathcal{H}$, and $\lambda$ is orthogonal to $H$ (and hence $\lambda\in R$). Now suppose $\lambda\in\Lambda$ is arbitrary. Then $t_\lambda\in W$ may be written as a product of affine reflections in $W$, say
      $$t_\lambda=t_{\mu_1}g_1\,t_{\mu_2}g_2\cdots t_{\mu_r}g_r,\qquad(*)$$
      with $\mu_i\in\Lambda$, $g_i\in F$, and where each $t_{\mu_i}g_i$ is an affine reflection. By the remark above it follows that $\mu_i\in R$ for all $i=1,\ldots,r$. Now note that if $\mu\in V$ and $g\in O(V)$, then $gt_\mu=t_{g\mu}g$. Applying this repeatedly to the right hand side of $(*)$ we get
      $$t_\lambda=t_{\mu_1}t_{g_1\mu_2}t_{g_1g_2\mu_3}\cdots t_{g_1g_2\cdots g_{r-1}\mu_r}\,g_1\cdots g_r=t_\nu\,g_1\cdots g_r,$$
      where $\nu=\mu_1+g_1\mu_2+\cdots+g_1\cdots g_{r-1}\mu_r$. Thus $\lambda=\nu$, and $g_1\cdots g_r=1$ (which is not used). But $\Lambda$ is clearly stable under $F$, $F$ permutes $\mathcal{H}$, and $F$ is orthogonal. Thus it follows from the definition of $R$ that $FR\subseteq R$. Therefore, since $\mu_i\in R$ for $i=1,\ldots,r$, each of $\mu_1,g_1\mu_2,\ldots,g_1\cdots g_{r-1}\mu_r$ lies in $R$. Thus $\lambda=\nu$ lies in the $\mathbb{Z}$-span of elements of $R$, and hence $R$ generates $\Lambda$.
    2. See attached picture.
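    The following is a small numerical check of part 2 (a Python sketch, not part of the original notes; the script and its variable names are only illustrative). For the hexagonal lattice $\Lambda=\mathbb{Z}e_1+\mathbb{Z}e_2$, the reflections of $F$ are exactly the $s_{\Delta_i}$, so $\mathcal{H}=\{\Delta_1,\Delta_2,\Delta_3\}$ and $R$ consists of the lattice vectors orthogonal to one of the three lines $\mathbb{R}e_i$. The script lists the elements of $R$ in a box and finds that their $\mathbb{Z}$-span has index 3 in $\Lambda$; by part 1, $W$ is therefore not generated by reflections.

```python
# Sketch (not from the original notes): check that the Z-span of R has index 3
# in the hexagonal lattice Lambda of Exercise 1 b).
import itertools
import numpy as np

e1 = np.array([1.0, 0.0])
e2 = np.array([-0.5, np.sqrt(3) / 2])
e3 = np.array([-0.5, -np.sqrt(3) / 2])
lines = [e1, e2, e3]      # the reflecting lines Delta_i = R e_i of F

# lattice vectors a*e1 + b*e2 in a box, kept by their integer coordinates (a, b)
R_coords = []
for a, b in itertools.product(range(-4, 5), repeat=2):
    v = a * e1 + b * e2
    if (a, b) != (0, 0) and any(abs(np.dot(v, d)) < 1e-9 for d in lines):
        R_coords.append((a, b))      # v is orthogonal to some reflecting line

# the index of the Z-span of R in Lambda is the gcd of the 2x2 minors
M = np.array(R_coords)
minors = [abs(int(round(np.linalg.det(M[[i, j]]))))
          for i, j in itertools.combinations(range(len(M)), 2)]
print("index of the Z-span of R in Lambda:",
      np.gcd.reduce([m for m in minors if m != 0]))   # prints 3
```

    Enlarging the search box does not change the answer, since every element of $R$ is an integer multiple of one of the shortest vectors found along the three orthogonal directions.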

  2. Let $V$ be a finite dimensional vector space over $\mathbb{R}$, $W$ a finite subgroup of $GL(V)$ generated by reflections. Show that every element of order 2 in $W$ is a product of pairwise commuting reflections belonging to $W$.

    Solution. The proof is by induction on the dimension of $V$. If $\dim V=1$ then there is only one element of order 2 in $W$ and this is certainly a reflection. Assume $\dim V>1$ and let $w\in W$ be an element of order 2. We may choose a basis of $V$ such that the matrix of $w$ with respect to this basis is diagonal with diagonal entries $(-1,-1,\ldots,-1,1,1,\ldots,1)$. There are two cases:
    Case 1. w fixes a nonzero subspace E of V pointwise.
    Let $\mathcal{H}$ be the set of hyperplanes such that $W$ is generated by the reflections in these hyperplanes. Define $W(E)=\{g\in W\mid g\ \text{fixes}\ E\ \text{pointwise}\}$. Then, by Proposition 2, Chapt. V §3, $W(E)$ is generated by the reflections in the hyperplanes in the set $\mathcal{H}_E=\{H\in\mathcal{H}\mid H\supseteq E\}$. Since $w$ fixes $E$, $w$ is an element of $W(E)$, and $W(E)$ acts on $E^{\perp}$ (the orthogonal complement is taken with respect to a $W$-invariant inner product on $V$) with $\dim E^{\perp}<\dim V$. By induction we have that $w|_{E^{\perp}}=t_1t_2\cdots t_r$ where the $t_i$ are pairwise commuting reflections. If we let $s_i$ be the reflection which acts on $E$ by the identity and on $E^{\perp}$ as $t_i$ then $w=s_1s_2\cdots s_r$. It is clear that since the reflections $t_1,t_2,\ldots,t_r$ are pairwise commuting so are the reflections $s_1,s_2,\ldots,s_r$. The reflections $s_i$, $1\le i\le r$, are elements of $W$ since they are reflections in hyperplanes in $\mathcal{H}_E$.
    Case 2. The only element which $w$ fixes is $0\in V$.
    Then $wv=-v$ for all $v\in V$. Since $-1$ is in the center of $GL(V)$, $w$ is in the center of $W$. Since $W$ is generated by reflections we can write $w=s_1\cdots s_r$ as a product of reflections. Since $w$ is in the center, $w=s_rws_r^{-1}=s_rs_1\cdots s_{r-1}$. Let $w'=s_1\cdots s_{r-1}$. Then $(w')^2=s_1\cdots s_{r-1}s_r^2s_1\cdots s_{r-1}=w^2=1$. (If $w'=1$ then $w=s_r$ is itself a reflection and we are done.) Since $w'\ne w$, we have $w'\ne-1$, so $w'$ fixes a nonzero vector, and by Case 1 we may write $w'$ as a product of pairwise commuting reflections, $w'=t_1t_2\cdots t_m$. Since $w$ is central and $t_i$ commutes with $t_1\cdots t_m$, we have $w=t_iwt_i=t_i(t_1\cdots t_ms_r)t_i=(t_1\cdots t_m)(t_is_rt_i)$; comparing with $w=w's_r=(t_1\cdots t_m)s_r$ gives $t_is_rt_i=s_r$, and so $s_rt_i=t_is_r$ for each $i$. So $w=t_1\cdots t_ms_r$ can be written as a product of pairwise commuting reflections.

    Notes. Recall that the longest element $w_0$ of the Weyl group is an element of order 2. It is interesting to write these elements as products of pairwise commuting reflections.
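    As a small illustration of Exercise 2 and of this remark (a Python sketch, not part of the original notes), take $W=S_4$, the Weyl group of type $A_3$, whose reflections are the transpositions. The script lists every element of order 2 in $S_4$ together with its decomposition into disjoint, hence pairwise commuting, transpositions; the longest element $w_0$, which reverses $1,2,3,4$, appears as $(1\,4)(2\,3)$.

```python
# Sketch (not from the original notes): every involution in S_4 is a product of
# disjoint (hence pairwise commuting) transpositions -- its cycle decomposition.
from itertools import permutations

def two_cycles(p):
    """Return the 2-cycles of an involution p of {0,...,n-1} given as a tuple."""
    seen, out = set(), []
    for i in range(len(p)):
        if i not in seen and p[i] != i:
            seen.update({i, p[i]})
            out.append((i, p[i]))
    return out

n = 4
identity = tuple(range(n))
for p in permutations(range(n)):
    square = tuple(p[p[i]] for i in range(n))
    if p != identity and square == identity:          # p has order 2
        decomposition = two_cycles(p)
        print(p, "=", " ".join(f"({a + 1} {b + 1})" for a, b in decomposition))
# The longest element w0 = (3, 2, 1, 0), i.e. i -> 5 - i, prints as (1 4) (2 3).
```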

  3. Let $V$ be a finite dimensional real vector space and let $W$ be a finite subgroup of $GL(V)$ generated by reflections. Let $w\in W$. Suppose that $V'$ is a subspace of $V$ stable under $W$ and let $k$ be the order of the restriction $w|_{V'}$ of $w$ to $V'$. Show that there exists $x\in W$ of order $k$, leaving $V'$ stable and such that $x|_{V'}=w|_{V'}$.

    Solution. Let $W'=\{g\in W\mid gv=v\ \text{for all}\ v\in V'\}$. By Proposition 2 §3.3, $W'$ is generated by reflections. Let $C$ be a chamber of $W'$. Since $w$ normalizes $W'$ (because $V'$ is stable under $w$), $C'=w^{-1}C$ is another chamber of $W'$. By Lemma 2 §3.1, there is an $h\in W'$ such that $hC=C'$. So $whC=C$. Then
    1. $wh$ stabilizes $V'$ since both $w$ and $h$ stabilize $V'$.
    2. $wh|_{V'}=w|_{V'}$ since $h$ fixes $V'$ pointwise.
    3. It remains to show that wh has order k.
    Suppose that $(wh)^{\ell}=1$. Then, since $wh|_{V'}=w|_{V'}$, we have that $(wh)^{\ell}|_{V'}=w^{\ell}|_{V'}=1|_{V'}$. Since the order of $w|_{V'}$ is $k$ it follows that $k\le\ell$. We will show that $(wh)^k=1$. Since $wh$ stabilizes a chamber of $W'$ so does $(wh)^k$. Since $wh|_{V'}=w|_{V'}$ and $w^k$ is the identity on $V'$, it follows that $(wh)^k$ is the identity on $V'$. So $(wh)^k\in W'$. Since $(wh)^k\in W'$ and $(wh)^k$ stabilizes a chamber of $W'$, it follows from Theorem 1 §3.2 that $(wh)^k$ is the identity in $W'$. Therefore $(wh)^k$ is the identity in $W$, so $wh$ has order exactly $k$ and $x=wh$ satisfies the requirements.
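    A toy numerical instance of this exercise (a Python sketch, not from the original notes; the choice of group and subspace is only illustrative): take $W=S_3\times S_2$ acting on $\mathbb{R}^5=\mathbb{R}^3\oplus\mathbb{R}^2$ by block permutation matrices and $V'=\mathbb{R}^3$, the first three coordinates. For every $w\in W$ the script finds an $x\in W$ whose order equals the order $k$ of $w|_{V'}$ and whose restriction to $V'$ agrees with $w|_{V'}$. For a direct product the element $x=(w|_{V'},1)$ already works; the exercise asserts that such an $x$ exists for any stable subspace.

```python
# Sketch (not from the original notes): brute-force check of Exercise 3 for the
# reducible reflection group W = S_3 x S_2, with V' = the first three coordinates.
from itertools import permutations
from math import lcm

def order(p):
    """Order of a permutation p of {0,...,n-1} given as a tuple."""
    ident, s, k = tuple(range(len(p))), p, 1
    while s != ident:
        s = tuple(p[s[i]] for i in range(len(p)))
        k += 1
    return k

# elements of W are pairs (p, q); the restriction of (p, q) to V' is p
W = [(p, q) for p in permutations(range(3)) for q in permutations(range(2))]
for p, q in W:
    k = order(p)                                   # order of w restricted to V'
    # look for x = (p2, q2) in W with x|_{V'} = w|_{V'} and order exactly k
    assert any(p2 == p and lcm(order(p2), order(q2)) == k for p2, q2 in W)
print("Exercise 3 verified for all", len(W), "elements of W = S_3 x S_2")
```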
    a. Let $K$ be a commutative field and let $V$ be an $n$ dimensional vector space over $K$. Let $\phi$ be a symmetric bilinear form on $V$ and let $N=\{n\in V\mid\phi(n,v)=0\ \text{for all}\ v\in V\}$ be the null space of $\phi$. Suppose that $\dim N=1$. Show that the null space of the extension of $\phi$ to $\bigwedge^{n-1}V$ has dimension $n-1$.
    b. Suppose that $K=\mathbb{R}$ and that $\phi$ is positive. Let $(e_1,\ldots,e_n)$ be a basis of $V$, and let $a_{ij}=\phi(e_i,e_j)$. Suppose that $a_{ij}\le0$ for $i\ne j$. Suppose that $\{1,2,\ldots,n\}$ does not admit a partition $I\cup J$ such that $a_{ij}=0$ for all $i\in I$ and $j\in J$. Let $A_{ij}$ be the cofactor of $a_{ij}$ in the matrix $(a_{ij})$. Show that $A_{ij}>0$ for all $i,j$.
    c. Let $\eta_1e_1+\cdots+\eta_ne_n$ be a vector with all coordinates $>0$ which generates the null space $N$. Show that $\eta_1,\ldots,\eta_n$ are proportional to $\sqrt{A_{11}},\ldots,\sqrt{A_{nn}}$.

    Solution.
    a. The extension of the bilinear form $\phi$ on $V$ to $\bigwedge^kV$ is given by (Alg. Chapt III §11.5 formula 30)
    $$\phi\bigl(v_{i_1}\wedge\cdots\wedge v_{i_k},\,v_{j_1}\wedge\cdots\wedge v_{j_k}\bigr)=\det\bigl(\phi(v_{i_a},v_{j_b})\bigr)_{1\le a,b\le k}.$$
    Let $v_1\in N$ and complete this to a basis $v_1,\ldots,v_n$ of $V$. The set of vectors $\{v_1\wedge\cdots\wedge\hat{v}_i\wedge\cdots\wedge v_n\mid1\le i\le n\}$ is a basis of $\bigwedge^{n-1}V$. Let $M$ be the $n\times n$ matrix given by $M=(\phi(v_k,v_\ell))$ and let $M_{ij}$ denote the matrix $M$ with the $i$th row and the $j$th column removed. Then for any $i,j$ such that $i\ne1$,
    $$\phi\bigl(v_1\wedge\cdots\wedge\hat{v}_i\wedge\cdots\wedge v_n,\,v_1\wedge\cdots\wedge\hat{v}_j\wedge\cdots\wedge v_n\bigr)=\det(M_{ij})=0,$$
    since $M_{ij}$ is a matrix with top row containing all zeros. It follows that the $n-1$ basis vectors $v_1\wedge\cdots\wedge\hat{v}_i\wedge\cdots\wedge v_n$ with $i\ne1$ are in the null space of the form on $\bigwedge^{n-1}V$. Furthermore, since $\operatorname{rank}M=n-\dim N=n-1$ and $M$ is a matrix such that the first row and the first column are zero,
    $$\phi(v_2\wedge\cdots\wedge v_n,\,v_2\wedge\cdots\wedge v_n)=\det(M_{11})\ne0.$$
    Thus the vector $v_2\wedge\cdots\wedge v_n$ is not an element of the null space of the form on $\bigwedge^{n-1}V$, and the null space has dimension exactly $n-1$.
    b. Let $M=(\phi(e_k,e_j))=(a_{kj})$ and let $M_{ij}$ be the matrices given in the proof of part a). Then $A_{ij}=(-1)^{i+j}\det(M_{ij})$. For any basis vector $e_k$ we have that
    $$\phi\Bigl(e_k,\sum_jA_{ij}e_j\Bigr)=\sum_jA_{ij}\phi(e_k,e_j)=\sum_jA_{ij}a_{jk}=\delta_{ik}\det(M),$$
    by Cramer's rule. Since $\det(M)=0$ we have that $\phi(e_k,\sum_jA_{ij}e_j)=0$ for all $k$. It follows that $\sum_jA_{ij}e_j$ is an element of $N$. By Lemma 4 §3.5 it follows that $N$ is spanned by a vector $\eta_1e_1+\cdots+\eta_ne_n$ such that $\eta_i>0$ for all $i$. Since $\sum_jA_{ij}e_j\in N$ it follows that $A_{ij}=r_i\eta_j$ for some constants $r_i$. A similar argument shows that $A_{ij}=c_j\eta_i$ for some constants $c_j$. Since $A_{ii}=r_i\eta_i=c_i\eta_i$ and $\eta_i>0$, $r_i=c_i$ for all $i$. Then since $A_{ij}=r_i\eta_j$ and $A_{ij}=c_j\eta_i=r_j\eta_i$,
    $$\frac{r_i}{\eta_i}=\frac{r_j}{\eta_j}\qquad\text{for all }i,j.$$
    Setting $\mu=\frac{r_1}{\eta_1}$ we have that
    $$A_{ij}=r_i\eta_j=\frac{r_i}{\eta_i}\eta_i\eta_j=\mu\eta_i\eta_j,$$
    for all $i$ and $j$. Since $\phi$ is positive semidefinite we know that $A_{11}=\det(M_{11})\ge0$. We have already seen in part a) that $\det(M_{11})\ne0$. So $A_{11}=\mu\eta_1^2>0$ and we have that $\mu>0$. It follows that $A_{ij}=\mu\eta_i\eta_j>0$ for all $i,j$.
    c. From the proof of part b) we have that $A_{ii}=\mu\eta_i^2$ where $\mu>0$. It follows that $\eta_i=\frac{1}{\sqrt{\mu}}\sqrt{A_{ii}}$. So $\eta_i$ is proportional to $\sqrt{A_{ii}}$.
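    A concrete check of parts b) and c) (a Python sketch using numpy, not part of the original notes; the matrix below is an illustrative choice satisfying the hypotheses): for the positive degenerate matrix $A=[[2,-1,-1],[-1,2,-1],[-1,-1,2]]$, whose null space is spanned by $\eta=(1,1,1)$, every cofactor $A_{ij}$ equals $3$, so $A_{ij}=\mu\eta_i\eta_j>0$ with $\mu=3$, and $\eta_i$ is proportional to $\sqrt{A_{ii}}$.

```python
# Sketch (not from the original notes): verify A_ij = mu * eta_i * eta_j > 0 for a
# positive degenerate matrix whose null space is spanned by eta = (1, 1, 1).
import numpy as np

A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
eta = np.array([1., 1., 1.])
assert np.allclose(A @ eta, 0)              # eta spans the null space
assert np.isclose(np.linalg.det(A), 0)      # the form is degenerate

n = A.shape[0]
cof = np.zeros_like(A)
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        cof[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

print(cof)                                  # every cofactor equals 3 > 0
mu = cof[0, 0] / eta[0] ** 2
assert np.allclose(cof, mu * np.outer(eta, eta))            # A_ij = mu eta_i eta_j
print("eta_i / sqrt(A_ii) =", eta / np.sqrt(np.diag(cof)))  # constant, = 1/sqrt(mu)
```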

  4. Let $q(\xi_1,\ldots,\xi_n)=\sum_{i,j}a_{ij}\xi_i\xi_j$ (with $a_{ij}=a_{ji}$) be a positive degenerate quadratic form on $\mathbb{R}^n$, such that $a_{ij}\le0$ for $i\ne j$. Suppose that $\{1,2,\ldots,n\}$ does not admit a partition $I\cup J$ such that $a_{ij}=0$ for $i\in I$, $j\in J$.
    a. Show that, if one puts $\xi_i=0$, one gets a positive nondegenerate form by restricting to the coordinates $\xi_1,\ldots,\xi_{i-1},\xi_{i+1},\ldots,\xi_n$.
    b. Show that $a_{ii}>0$ for all $i$.
    c. Show that if one replaces one of the $a_{ij}$ ($i\ne j$) with a value $a'_{ij}<a_{ij}$, the new form is not positive.

    Solution.
    a. By Lemma 4 §3.5, we have that the subspace of isotropic vectors has dimension 1 and that this space is generated by a vector $\eta=(\eta_1,\ldots,\eta_n)$ such that $\eta_i>0$ for all $i$. If the form on $\mathbb{R}^{n-1}$ given by $q(\xi_1,\ldots,\xi_{i-1},0,\xi_{i+1},\ldots,\xi_n)$ were degenerate then there would be a nonzero vector $\xi=(\xi_1,\ldots,\xi_{i-1},0,\xi_{i+1},\ldots,\xi_n)$ such that $q(\xi)=0$. The vector $\xi$ would then be a nonzero multiple of $\eta$, which is impossible since every nonzero multiple of $\eta$ has all coordinates nonzero while $\xi$ has $i$th coordinate $0$. Since the restriction of $q$ is clearly positive, it is positive nondegenerate.
    b. Since $q$ is positive we have that, for each $i$, $0\le q(\varepsilon_i)=a_{ii}$, where $\varepsilon_i$ is the vector with $i$th coordinate $1$ and all other coordinates $0$. If $a_{ii}=q(\varepsilon_i)=0$ then, since $\varepsilon_i$ lies in a coordinate subspace $\{\xi_j=0\}$ for some $j\ne i$, on which the form is positive nondegenerate by a), $\varepsilon_i$ would be an isotropic, hence null, vector of a nondegenerate form, so $\varepsilon_i=0$. This is a contradiction. So $a_{ii}>0$ for all $i$.
    c. Let $\eta=(\eta_1,\ldots,\eta_n)$ be a vector such that $\eta_i>0$ for all $i$ and such that $q(\eta)=0$. Then, if the new form (with coefficients $a'_{ij}$) were positive we would have
    $$0=\sum_{i,j}a_{ij}\eta_i\eta_j>\sum_{i,j}a'_{ij}\eta_i\eta_j\ge0,$$
    since $\eta_i\eta_j>0$ and one of the coefficients has been strictly decreased; this is clearly a contradiction.
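    The matrix used after Exercise 3 c) above also illustrates all three parts of this exercise numerically (a Python sketch, not from the original notes): deleting the $i$th row and column leaves a positive definite form, the diagonal entries are positive, and lowering a single off-diagonal coefficient produces a form with a negative eigenvalue, hence not positive.

```python
# Sketch (not from the original notes): numerical illustration of Exercise 4 with
# the positive degenerate form given by A below (null vector (1, 1, 1)).
import numpy as np

A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])

# a) setting xi_i = 0: the principal submatrix omitting row and column i is positive definite
for i in range(3):
    sub = np.delete(np.delete(A, i, axis=0), i, axis=1)
    assert np.all(np.linalg.eigvalsh(sub) > 1e-12)

# b) the diagonal entries are positive
assert np.all(np.diag(A) > 0)

# c) lowering a_12 (and a_21, to keep the matrix symmetric) destroys positivity
B = A.copy()
B[0, 1] = B[1, 0] = -1.5
print("smallest eigenvalue after lowering a_12:", np.linalg.eigvalsh(B).min())  # < 0
```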

    Notes. The null space of a form $\phi$ is defined to be $N=\{n\in V\mid\phi(n,v)=0\ \text{for all}\ v\in V\}$. Let $I$ be the set of isotropic vectors for $\phi$, $I=\{v\in V\mid\phi(v,v)=0\}$. It is clear that if $n\in N$ then $n\in I$ since $\phi(n,n)=0$. So $N\subseteq I$. Suppose that the form $\phi$ is positive and let $n\in I$. Let $v\in V$ and let $\lambda\in\mathbb{R}$. Then
    $$0\le\phi(\lambda n+v,\lambda n+v)=\lambda^2\phi(n,n)+2\lambda\phi(n,v)+\phi(v,v)=0+2\lambda\phi(n,v)+\phi(v,v).$$
    If $\phi(n,v)\ne0$ then we may choose $\lambda$ such that $2\lambda\phi(n,v)<-\phi(v,v)$, a contradiction. So $\phi(n,v)=0$. It follows that $n\in N$. So $I=N$ if $\phi$ is positive.

  5. Let $(a_{ij})$ be a real symmetric matrix with $n$ rows and $n$ columns.
    a. Put $s_k=\sum_{i=1}^na_{ik}$. For all $\xi_1,\ldots,\xi_n$, one has that
    $$\sum_{i,k}a_{ik}\xi_i\xi_k=\sum_ks_k\xi_k^2-\frac12\sum_{i,k}a_{ik}(\xi_i-\xi_k)^2.$$
    b. Let $\eta_1,\ldots,\eta_n\in\mathbb{R}^*$. Set $\sum_i\eta_ia_{ik}=t_k$. Then
    $$\sum_{i,k}a_{ik}\xi_i\xi_k=\sum_k\frac{t_k\xi_k^2}{\eta_k}-\frac12\sum_{i,k}\eta_i\eta_ka_{ik}\Bigl(\frac{\xi_i}{\eta_i}-\frac{\xi_k}{\eta_k}\Bigr)^2.$$
    c. If there exist numbers $\eta_1,\ldots,\eta_n>0$ such that $\sum_i\eta_ia_{ik}=0$ ($k=1,2,\ldots,n$), and if $a_{ij}\le0$ for $i\ne j$, then the quadratic form $\sum_{i,k}a_{ik}\xi_i\xi_k$ is positive degenerate.
    d. Let $\sum_{i,j}q_{ij}\xi_i\xi_j$ be a quadratic form on $\mathbb{R}^n$ such that $q_{ij}\le0$ for $i\ne j$. Suppose that $\{1,2,\ldots,n\}$ does not admit a partition $I\cup J$ such that $q_{ij}=0$ for $i\in I$ and $j\in J$. Show that this form is positive degenerate if and only if there exist $\eta_1>0,\ldots,\eta_n>0$ such that $\sum_i\eta_iq_{ik}=0$ ($k=1,\ldots,n$).

    Solution.
    a. Begin with the right hand side:
    $$\begin{aligned}
    \sum_ks_k\xi_k^2-\frac12\sum_{i,k}a_{ik}(\xi_i-\xi_k)^2
    &=\sum_{i,k}a_{ik}\xi_k^2-\frac12\Bigl(\sum_{i,k}a_{ik}\bigl(\xi_i^2-2\xi_i\xi_k+\xi_k^2\bigr)\Bigr)\\
    &=\sum_{i,k}a_{ik}\xi_k^2-\frac12\sum_{i,k}a_{ik}\xi_i^2+\sum_{i,k}a_{ik}\xi_i\xi_k-\frac12\sum_{i,k}a_{ik}\xi_k^2\\
    &=\frac12\sum_{i,k}a_{ik}\xi_k^2-\frac12\sum_{i,k}a_{ik}\xi_i^2+\sum_{i,k}a_{ik}\xi_i\xi_k\\
    &=\frac12\sum_{i,k}(a_{ik}-a_{ki})\xi_k^2+\sum_{i,k}a_{ik}\xi_i\xi_k\\
    &=\sum_{i,k}a_{ik}\xi_i\xi_k,
    \end{aligned}$$
    since $(a_{ij})$ is symmetric.
    b. Replace $\xi_i$ by $\xi_i/\eta_i$ and $a_{ik}$ by $\eta_i\eta_ka_{ik}$ in a). Then
    $$s_k=\sum_i\eta_i\eta_ka_{ik}=\eta_k\sum_i\eta_ia_{ik}=\eta_kt_k.$$
    Substituting, the formula in a) becomes
    $$\sum_{i,k}\eta_i\eta_ka_{ik}\frac{\xi_i}{\eta_i}\frac{\xi_k}{\eta_k}=\sum_k\eta_kt_k\frac{\xi_k^2}{\eta_k^2}-\frac12\sum_{i,k}\eta_i\eta_ka_{ik}\Bigl(\frac{\xi_i}{\eta_i}-\frac{\xi_k}{\eta_k}\Bigr)^2.$$
    The trivial cancellations give the desired identity.
    c. Since $t_k=\sum_i\eta_ia_{ik}=0$, the identity in b) reduces to
    $$\sum_{i,k}a_{ik}\xi_i\xi_k=-\frac12\sum_{i,k}\eta_i\eta_ka_{ik}\Bigl(\frac{\xi_i}{\eta_i}-\frac{\xi_k}{\eta_k}\Bigr)^2=-\frac12\sum_{i\ne k}\eta_i\eta_ka_{ik}\Bigl(\frac{\xi_i}{\eta_i}-\frac{\xi_k}{\eta_k}\Bigr)^2.$$
    Since $a_{ik}\le0$ for $i\ne k$ and $\eta_i>0$ for all $i$, the right hand side is $\ge0$. It follows that the form is positive. The form is degenerate since
    $$\sum_{i,k}\eta_k\eta_ia_{ik}=\sum_k\eta_k\sum_i\eta_ia_{ik}=0.$$
    d. $\Leftarrow$: This follows immediately from c).
      $\Rightarrow$: In view of Lemma 4 §3.5 it is sufficient to show that if $\eta=(\eta_1,\ldots,\eta_n)$ is a vector such that $q(\eta)=0$ and $\eta_i>0$ for all $i$, then $\sum_i\eta_iq_{ik}=0$ ($k=1,\ldots,n$). Let $Q=(q_{ij})$, let $\lambda\in\mathbb{R}$ and let $x=(x_1,\ldots,x_n)\in\mathbb{R}^n$. Then, since the form $q$ is positive,
      $$0\le\sum_{i,k}(x_i+\lambda\eta_i)(x_k+\lambda\eta_k)q_{ik}=\sum_{i,k}x_ix_kq_{ik}+2\lambda\sum_{i,k}x_k\eta_iq_{ik}+0$$
      (the $\lambda^2$ term vanishes because $q(\eta)=0$) for arbitrary values of the $x_i$ and $\lambda$. This is only possible if $\sum_{i,k}x_k\eta_iq_{ik}=0$ for all values of $x$. So $\sum_i\eta_iq_{ik}=0$ for each $k$.
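    The identities in a) and b) are easy to confirm numerically (a Python sketch, not part of the original notes; the random matrix and vectors are only for illustration): both sides agree to machine precision for a random symmetric matrix $(a_{ik})$, a random $\xi$, and a random $\eta$ with nonzero entries.

```python
# Sketch (not from the original notes): numerical check of the identities in a) and b).
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
A = A + A.T                          # a symmetric matrix (a_ik)
xi = rng.normal(size=n)
eta = rng.normal(size=n) + 2.0       # entries bounded away from 0 for this seed

q = xi @ A @ xi                      # sum_{i,k} a_ik xi_i xi_k

# identity a): s_k = sum_i a_ik
s = A.sum(axis=0)
rhs_a = (s * xi ** 2).sum() - 0.5 * sum(A[i, k] * (xi[i] - xi[k]) ** 2
                                        for i in range(n) for k in range(n))
assert np.isclose(q, rhs_a)

# identity b): t_k = sum_i eta_i a_ik
t = eta @ A
rhs_b = (t * xi ** 2 / eta).sum() - 0.5 * sum(eta[i] * eta[k] * A[i, k] *
                                              (xi[i] / eta[i] - xi[k] / eta[k]) ** 2
                                              for i in range(n) for k in range(n))
assert np.isclose(q, rhs_b)
print("identities a) and b) hold:", q, "=", rhs_a, "=", rhs_b)
```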

    Notes. Parts 8a) and 8b) are known as Crosby's Lemma and are proved on pp. 177-178 of the book Regular Polytopes by H.S.M. Coxeter. See also the historical remarks on p. 185.
