Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au
Last update: 16 December 2013
Lecture 6: February 26, 1997
(The following is the beginning of Lecture 4 given on Feb. 19).
Schubert Cells in
Recall that a closed subgroup of is called a standard parabolic subgroup if
Let be a standard parabolic subgroup. Then subset
s.t.
where
is the subgroup of by Set
Thus is the set of minimum representatives of the coset space
We have
?????
????? is called the Schubert Cell corresponding to
Each is and
takes into a
?????
????? is a complex projective variety called the Schubert variety ????? have
????? let
?????
Set
Schubert Basis for and
Fact:
is a basis for
Notation: The dual basis of
dual to is denoted by
Remark:
(Here starts Lecture 6)
Schubert Basis for and
Definition: For put
Then
is a basis for
There is then a unique basis
of (over
s.t.
Both and
are called Schubert basis.
????? basis
of is characterized by ????? properties:
(1)
(2)
Under evaluation at
we have
(3)
if
?????, we look at
The action of on
in the basis
The action of on
in the basis
The ring of ????? characteristic operators expressed
in terms of the on
The Hopf algebroid structure on
Another set of elements in
For consider the map
set
Of course
if We think of
as localizing at the pt
Warning:
is NOT an for
because
Remark: Expressing as a linear combination over of the
we get the
in Kostant-Kumar. Will do this later.
Properties: Consider the map
Then
Action of on in the basis
Proposition 1:
Proof.
Let
Recall that
From the fact stated at the end of last lecture,
where
It follows that (?)
?????ave: for
Action of on in the basis
Proposition 2: For
Proof.
Let's first check that
From the previous Proposition 1, if
But
by definition, so by letting
and we get
or
otherwise follows.
Remark: Recall that
We can identify
by
Then this is an identification of and from Proposition 2,
ie.
Thus by Proposition 1, we see that under the identification
the (left)
on
becomes the (left) action on
by
where, recall from lecture 2, that
(The in Lecture 2 is defined to be
and
The ring of characteristic operators again
Set
So
Proposition:
(1)
Every characteristic operator
can be uniquely written as
In fact,
(Recall
(2)
is compactly supported iff only finitely many occur in the sum. (ie. at
only finitely many are ?????
Proof.
(1) For any write
Then Thus
to show it is enough to show that
for any
(See Lecture 5). Since both and are
it is enough to show that
for all Now
?????e Corollary 3 in Lecture 5).
Uniqueness is clear.
If has compact support, since any compact subset of is contained in some
where
if
[red], we see that there are only finitely many involved in the expression
Note: The following remark was crossed out in the scanned notes.
Remark: We can think of as
or the ????? of via
the pairing:
Let's check then that action on
becomes the on
by left multiplications: For use
to denote the element given by
For we want to check
The Hopf Algebroid Structure on
Recall from Lecture 5 that is a Hopf
algebroid over We now express the structure maps for this Hopf algebroid in the basis
First, recall that we have ring homomorphisms
This gives two structures on
The map is nothing but the characteristic homomorphism in Lecture 5. The map
is a little more mysterious. It gives the 2nd structure on
in Lecture 4.
Proposition: The elements
is also a basis for the second on
defined by
Remark: I (Lu) suspect that has a lot to do with the Bruhat-Poisson structure on
The next theorem expresses the structure maps for the Hopf algebroid structure on
in the basis
Theorem: (Recall notation from Lecture 5):
1)
For
2)
3)
[red] means ?????
4)
5)
For any and
Proof.
We first prove 5). 5) follows from the general fact that if an algebra acts on a space, then, using a basis
of
and the dual basis of
the co-module map is nothing but
????? in our example, we are identifying
with ????? the pairing
????? this pairing; we have
as a basis for ????? dual basis in
is
(see page 6-8). Thus for any
Peterson gave the following proof in class:
Since
is a basis for we know
for some
for each Need to show
To do this, let and calculate We have
This finishes the proof of 5)
Remark: What is quoted as Corollary 1 in Lecture 5 is the fact that the action of
on is obtained by the comodule map
by
and
is the pairing between and
This is just like in the Hopf algebra case.
We now prove 3). This is just a special case of 5) for
Indeed, ????? 5), we get
But
?????
This finishes the proof of 3).
2) is clear from the definition since
It remains to prove 1) and 4).
To prove 1), we need the following Lemma:
Lemma: For any
Proof.
Write
for some for each
Using
and
we get
This proves the Lemma.
Remark: In proving the Lemma, we used the fact that ?????
the pairing between and
defined by
satisfies
and
It says that is
not only an for the first structure on
(defined by
but also for the 2nd
structure defined by
Is this really true? Recall that
is the pullback of the map
It is not clear why is
Now we prove 1): By Lemma
But
Remark: This is an interesting formula. Understand what this says for Kostant's Harmonic form later.
It remains to prove 4), ie.
The following is the proof given by Peterson. It is kind of strange.
We first prove that
We'll determine the sign later. For let
Then
(Why? This is saying that we do not distinguish and its Bott-Samelson resolution?)
Recall that
So
But
We know
Now show that
OK.
OK.
For assume sign
for Since
where
we get, from
that
But
must have
This proves 4).
This completes the proof of the theorem.
Integrable actions of
Definition: Let be an affine scheme over with
structure homomorphism
An structure on
is said to be integrable if for all
and
1)
2)
and
are both maps
3)
For each
for all but finitely many
Example: as a scheme over
with structure homomorphism (?) Is this an example? Maybe not, because
we use
to define the structure on the first copy of
(OK. Because in the multiplication of
even the on the first copy is defined by
One way:
If
is an action, have
Then for define
The other way, given on
define
This is the map giving the action
Next, we look at the 2nd action of on
Notation: The action of on
that we have been talking about all along will from now on be denoted by ; the second
action that we will now introduce will be denoted by
The second action of on
Define a second action of on by
Properties:
1)
2)
3)
for and
and
Thus, in the basis
Any can be written as
Any can be written as
(Recall:
see page 6-8).
Lecture 7: March 4, 1997
Recall formulas from last time:
For any
?????
?????
Given
of degree for each s.t.
moreover
Proof.
Induction on
Assume
Assume
Then
Since
shows that
for any
Moreover,
Assume
Then
Remark: Sara Billey's formula gives an expression for each
Will come back to this later.
Corollary:
1)
2)
3)
is reduced, ie. the only nilpotent element
4)
is also reduced.
Proof.
1) follows from
2) If
then
Since the matrix is upper-triangular it is invertible
But is a basis for
If is
s.t. for some then for each
But
ie.
Clear.
Proposition: The action of
on descends to an action on
via the map
where the structure on
is defined by
Proof.
This is because the action defined by commutes with
for any
Remark: The induced action of on
is by the BGG-operators.
?????ne Constants" for the multiplication on
For
define by
cocommutative
Proposition:
Proof.
We know that
Then
and
Special properties of the
(1)
unless
Proof.
This is seen from the definition:
so clear from induction on
Proposition: is a homogeneous polynomial of degree
Proof.
Proposition: For
where, recall are defined by
?????e
Proof.
Write
But
On the other hand,
By Su1w=dw,wdu1,w
and dw,w≠0 get
du1,w=awu1,w
□
(Very strange proof).
Proposition: For w ∈ W,
∑_{w=uv [red]} ϵ(u) σB(u^{-1}) σB(v) = δ_{w,id}    (1)
∑_{w=uv [red]} σB(u) ϵ(v) σB(v) = δ_{w,id}    (2)
Remark: This will also be true for quantum cohomology.
Remark: Fix e0∈Eu. Define
i:K/T⟶Eu/T:kT⟼e0kT.
Then
i×i:K/T×K/T⟶Eu(2)/T×T.
Consequently,
(i×i)*:HT(K/T)⟶HT(K/T)⊗ℤHT(K/T).
We have
(i×i)*σB(w) = ∑_{w=uv [red]} ϵ(u) σB(u^{-1}) ⊗ σB(v).
The Finite Case
Proposition: In the finite case, we have
A_L = End_{A_R}(HT(K/T)),  A_R = End_{A_L}(HT(K/T)).  HT(K/T) is a free
A_L (as well as
A_R) module with one generator
σB(w0),
where w0 is the longest element in W. If
ϕ∈EndA_L(HT(K/T))
then ∃a∈A_ s.t.
ϕ(σB(w0))=aR·σB(w0).
Claim:∀z∈HT(K/T),ϕ(z)=aR·z.
Proof.
For any z∈HT(K/T),∃b∈A_ s.t.
z = bL·σB(w0), so
ϕ(z) = ϕ(bL·σB(w0)) = bL·ϕ(σB(w0))   (ϕ ∈ End_{A_L})
     = bL·aR·σB(w0) = aR·bL·σB(w0) = aR·z.
□
The space HT(K) with K acting on K by conjugations
Consider now K as a K-space by conjugations. The map
p:K⟶K/T
is T-equivariant (but not K-equivariant). Thus
p*:H(K/T)⟶HT(K)
is an S-module map:
p*(πL(s)z)=π(s)p*(z)
where
π=[k→pt]*:S⟶HT(K).
Now A_ acts on both HT(K) and
HT(K/T) by characteristic operations. But since p
is not a K-map,p* does not intertwine the
A_-actions on HT(K)
and on HT(K/T). We have, nevertheless,
the following:
Proposition: For a∈A_ with
Δa=a(1)⊗a(2),
and for all z ∈ HT(K/T),
a·p*(z) = p*(a(1)L a(2)R · z).
In particular, for s ∈ S and w ∈ W:
π(s) p*(z) = p*(πL(s) z) = p*(πR(s) z),
w·p*(z) = p*(wL wR · z),
Aw·p*(z) = p*(∑_{u≤w, v≤w} πL(a^w_{u,v}) AuL AvR · z).
Proposition: For any K-spaceX with action map
μX:K×X⟶X
the pullback
μX*:HT(X)⟶HT(K×X)
is the composition
HT(X) ⟶(ΔX) HT(K/T) ⊗S HT(X) ⟶(p*⊗id) HT(K) ⊗S HT(X) ≃ HT(K×X).
The Pontryagin action of the ring H*(K):
μK:K×K⟶K:(k1,k2)⟼k1k2
gives a map
μK*:H*(K)⊗H*(K)⟶H*(K).
This defines a ring structure on H*(K). Now for any
K-spaceX with
μX:K×X⟶X
get
μX*:H*(K)⊗H*(X)⟶H*(X)
which defines an action of H*(K) on
H*(X).
Look at the special case X = K/T with
μX = μK/T : K × K/T ⟶ K/T.  A_R acts on
H*(K/T), and this action
commutes with the Pontryagin action of H*(K) on
H*(K/T).
Define a ring structure on H*(K/T) by
σv σw = σvw if ℓ(v) + ℓ(w) = ℓ(vw), and 0 otherwise.
Then
μK/T*(σ × σ′) = p*(σ) σ′   for σ ∈ H*(K), σ′ ∈ H*(K/T).
Consequently,
p*:H*(K)⟶H*(K/T)
is a ring homomorphism.
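As a small illustration of the product just defined (our own minimal sketch, not from the notes; here W = S3 is realized as permutations and length is counted by inversions):

    def length(w):
        # Coxeter length of a permutation = number of inversions
        return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

    def compose(v, w):
        # (vw)(i) = v(w(i)); permutations are tuples in one-line notation, 0-based
        return tuple(v[w[i]] for i in range(len(w)))

    def sigma_product(v, w):
        # sigma_v sigma_w = sigma_{vw} if l(v) + l(w) = l(vw), and 0 otherwise
        vw = compose(v, w)
        return vw if length(v) + length(w) == length(vw) else None  # None stands for 0

    s1, s2 = (1, 0, 2), (0, 2, 1)
    print(sigma_product(s1, s2))   # lengths add: returns (1, 2, 0), the element s1 s2
    print(sigma_product(s1, s1))   # s1 s1 = id but lengths do not add: None, i.e. 0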
Theorem (Peterson-Kac): Over any field 𝔽
1)
p*(H*(K/T; 𝔽))
is a Hopf subalgebra of H*(K; 𝔽).
If mij = ∞ for all
i ≠ j, then
p*(H*(K/T; ℚ)) ≃
the dual of a tensor algebra, as a Hopf algebra.
Poincare Duality in the finite case
Define A_-module homomorphism
PD : HT(G/P) ⟶ HomS(HT(G/P), S),   PD(z)(y) = ∫_[G/P] y z ∈ S.
Consider the case P = B:  ∫_[G/B] = ε ∘ (Aw0)R.
In general,
∫_[G/P] σP(w) = δ_{w, w0wP}
where wP is the longest element in W_P,
so w0wP is the longest element in W^P.
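For instance (a small check added here, not in the original notes): for W = S3 = 〈s1, s2〉 and P the parabolic with W_P = {1, s1}, one has wP = s1, w0 = s1s2s1, and w0wP = s1s2, which is indeed the longest of the minimal coset representatives {1, s2, s1s2} of W/W_P.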
Recall that (from Lecture 2)
ΔAw0 = ∑_{w∈W} Aw ⊗ w0·A_{w0w},
PD(σP(w)) = w0·σ(w0wwP)P.
Also
w0Lw0R·σB(w)=ϵ(w)σB(w0ww0).
It follows that PD is an S-module isomorphism.
The Euler Class:
For z∈HT(G/P), consider the
operator Mz on HT(G/P)
by y⟼zy. The Euler Class
χG/P∈HT(G/P)
is defined by the property:
traceMz=∫[G/P]χG/P·z.
Proposition:χG/P=∑w∈WPσP(w)[w0·σP(w0wwP)].
Proof.
By the definition of trace and using the "dual basis" {σ(w)P}
of {σP(w)}, we have
Mz=∑w∈WP(σ(w)P,zσP(w))σ(w)P=PD(w0·σP(w0wwP))
?????
?????Mz=∑w∈WP(PD(w0·σP(w0wwP)),z∈σP(w))=∑w∈WP∫[G/P]zσP(w)(w0·σP(w0wWP))
?????
χG/P=∑w∈WPσP(w)(w0·σP(w0wwP)).
□
We will use PD to denote its inverse as well.
Lemma: For v,w∈WPσP(v)PD(σ(w)P)=0
unless v≤w.
So χG/P is the trace of a rank 1 upper triangular matrix. Also
σP(w)PD(σ(w)P)=w·PD(σ(id)P).
Facts:
1)
χG/P has image ∏α>0w0wP>0αR·1
in HT(K/T)
2)
χG/P is W-invariant under the left action
3)
Image of χG/P in
H*(G/P) is
|WP|σPw0wP
Some facts on the classifying spaces
H*(BT)↪πRHT(G/B)⤣H*(BT)wP↪HT(G/B)(wP)R≅HT(G/P)
????? ℚ, we have
H*(BT)wP≡H*(BK∩P)≃HT(G/P).
In fact
In what sense does the diagonal map
K⟶K×K:k⟼(k,k)
correspond to the co-product
Δ:A_⟶A_⊗SA_
Given a homomorphism K1 → K2 with T1 → T2 and N1 → N2, one can easily calculate
HT2(K2/T2)⟶HT1(K1/T1)
(2)
Conjecture: For each u, v, w ∈ W,
ϵ(uvw) a^w_{u,v}
is a polynomial in the αi's, i ∈ I, with
ℤ+-coefficients.
True for:
(1)
ℓ(u)+ℓ(v)=ℓ(w) -
Kumar
(2)
v=w or u=w - Sara Billey.
(3)
Similar models for K-theory (done?). Cobordism:
HT(G/P) ⟶ KT(G/P);  BGG operators ⟶ Demazure operators.
(4)
Find combinatorial interpretation of the coefficients of ϵ(uvw)awu,v
(5)
Find combinatorial interpretation of the structure constants of HS1(Grass(k,n))
with S1 acting by exp(tρ∨).
(6)
Prove the Littlewood-Richardson Rule for σ, where σ is a diagram automorphism of a finite-dimensional G
and σ is admissible, ie. 〈ασk(i),αi∨〉 ≠ 0 ⇒ σk(i) = i.
(In this case Gσ has the structure of a Kac-Moody group. λ∈hℤ*σ(λ)=λλ minuscule
α∈Δ+⇒0≤〈λ,α∨〉≤1⇒H*(G/Pλ)→H*(Gσ/(Gσ∩P))
?)
(7)
Study more of the Bruhat Graph: (G/B)^T ⟷ W.
Vertices: W;  edges w → w rα, α > 0  (the T-stable curves (≅ ℙ¹) in G/P).
Full subgraphs correspond to XwP with vertices v ≤ w;  v → v rα iff v, v rα ≤ w.
(8)
Theorem (Carrell-Peterson): The Kazhdan-Lusztig polynomial Pv,w = 1 ⇔ for this graph, the same number of edges emanates from each point.
(9)
Study directed Bruhat graphs:
w ⟶(α∨) w rα   if   w < w rα.
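A toy version of the directed graph in (9) (our own sketch, not from the notes; W = S3 is realized as permutations, the reflections rα are the transpositions, and the condition w < w rα is detected by the length going up, which for a reflection is equivalent to the Bruhat condition):

    from itertools import permutations, combinations

    def length(w):
        # number of inversions = Coxeter length
        return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

    def transposition(n, i, j):
        t = list(range(n))
        t[i], t[j] = j, i
        return tuple(t)

    def compose(v, w):
        # (vw)(k) = v(w(k))
        return tuple(v[w[k]] for k in range(len(w)))

    n = 3
    W = list(permutations(range(n)))
    reflections = [transposition(n, i, j) for i, j in combinations(range(n), 2)]
    # directed edge w -> w r_alpha whenever the length goes up, i.e. w < w r_alpha
    edges = [(w, compose(w, t)) for w in W for t in reflections
             if length(compose(w, t)) > length(w)]
    for w, wt in sorted(edges):
        print(w, "->", wt)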
Lecture 8: March 11, 1997
Recall picture for the next two lectures
Let
K: compact simple Lie group
ΩK: base preserving algebraic loops in K
Then T⊂K acts on ΩK
by conjugation:
(t·k)(z)=tk(z)t-1.
Roughly, the diagonal embedding
ΩK⟶ΩK×ΩK
gives a co-product
HT(ΩK)⟶HT(ΩK)⊗SHT(ΩK)
and the multiplication map for the group structure on ΩK:ΩK×ΩK⟶ΩK
gives a product
HT(ΩK)⊗SHT(ΩK)⟶HT(ΩK)
In fact, HT(ΩK)
is a commutative and cocommutative Hopf algebra over S. We will identify this Hopf algebra structure using
A_af. In fact, we have a map
ΩK⟶Gaf/Baf
which gives
HT(ΩK)⟶HT(Gaf/Baf)=A_af.
Under this, we will identify
HT(ΩK)≃Zaf(S)(centralizer ofSinA_af)
and describe Zaf(S) using the affine Weyl group
Waf.
Notation: For a variety X over ℂ, use
X∼=Mor(ℂ×,X).
Let G be a finite dimensional connected simple algebraic group over ℂ. We then have the
finite root datum
I,  αi ∈ hℤ*,  αi∨ ∈ hℤ,  Δ+, Π, W, g_, h_, b_, …
Let θ be the highest root. From these we form the following Kac-Moody root datum:
Corresponding to this root datum, we have the following Kac-Moody Lie algebra 𝔤_af:
𝔤_af = 𝔤_ ⊗ℂ ℂ[t, t^{-1}] = 𝔤_∼,
ei = ei⊗1,  fi = fi⊗1,  e0 = e_{-θ}⊗t,  f0 = fθ⊗t^{-1}
⇒ [e0, f0] = [e_{-θ}⊗1, eθ⊗1] = -?????
Roots are in Qaf. They are all those in Qaf
of the form
α + nδ,  n ∈ ℤ,  α ∈ Δ or α = 0.  The root spaces are
(𝔤_af)_{α+nδ} = 𝔤α⊗t^n if α ∈ Δ, n ∈ ℤ,  and = h_⊗t^n if α = 0, n ∈ ℤ,
so
Δre = {α + nδ : α ∈ Δ, n ∈ ℤ}
and all nδ'sn∈ℤ, are
"imaginary roots". They have multiplicity =dimℂh_.
The positive roots are
(Δaf)+={α+nδ:n>0orn≥0α∈Δ+},(Δaf)+re={α+nδ:n≥0α∈Δ+}.
The affine Weyl group Waf:
By definition,
Waf=W⋉Γ
the semi-direct product, where Γ≃Q∨ with
Q∨ ⟶ Γ : h ⟼ th.   w th w^{-1} = t_{w·h},   th th′ = t_{h+h′}
The reason why this is the same as the group generated by the reflections r0,ri,i∈I is because
tθ∨=r0rθ
For w ∈ W,  w·(α+nδ) = w·α + nδ   (⇒ wδ = δ);
th·(α+nδ) = α + nδ − 〈α,h〉δ   (so th·α = α − 〈α,h〉δ and th·(nδ) = nδ).
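For later use in the length computations of Lecture 8, the two displayed formulas combine into the following one-line consequence (spelled out here, not stated separately in the notes):
\[ (w\,t_h)\cdot(\alpha+n\delta) \;=\; w\cdot\bigl(\alpha+(n-\langle\alpha,h\rangle)\delta\bigr) \;=\; w\alpha+\bigl(n-\langle\alpha,h\rangle\bigr)\delta . \]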
The Kac-Moody group:
Gaf = G∼ = Mor(ℂ×, G)   (Laurent series in t)
set
P0 = Mor(ℂ, G)   (power series in t)
Baf = {g ∈ Mor(ℂ, G) : g(0) ∈ B} ⊂ P0
Uaf+ = {g ∈ Mor(ℂ, G) : g(0) ∈ U+}
Kaf = {g ∈ Gaf : g(S1) ⊂ K}
ΩK = {k ∈ Kaf : k(1) = id}
Taf = T,  G ≃ const. loops ⊂ Gaf
Kaf acts on ΩK by
k·k′=kk′k(1)-1
Then
iΩ:ΩK⟶Gaf/P0:k⟼k·**=P0
is a Kaf-equivariant map. This map is also a homeomorphism because
Gaf = Kaf Baf = (ΩK) K Baf = (ΩK) P0,   Kaf ∩ Baf = T
The compact involution on Gaf:
(wKaf)(g)(t)=wK(g(t‾-1))g∈Mor(ℂ×,G)=Gaf
where wk:G→G is the compact involution on
G corresponding to K.
The normalizer Naf of Haf=H
in Gaf is
Naf=N∼=Mor(ℂ×,N)
where, recall, N is the normalizer of H in G, so also have
Naf = semi-direct product of N and ΩT, where ΩT = {g ∈ ΩK : g(S1) ⊂ T}
so g∈ΩT must be a homomorphism from
S1 to T.
Thus
ΩT≃Γ≃Q∨
where
Q∨⟶∼ΩT:h⟼hˆ:hˆ(z)=zh,z∈ℂ×.
This way we also see
W⋉Γ⟶∼Waf:(w,th)⟼(W,hˆ-1H)∈Naf/H.
The nil-Hecke rings A_ and A_af:
Since (hℤ)af=hℤ,
we have Saf=S. Let A_
be the nil-Hecke ring defined by W. Let A_af
be the nil-Hecke ring defined by Waf. Then we have the embedding
A_ ↪ A_af :  s ⟼ s,  Ai ⟼ Ai,  i ∈ I (i ≠ 0).
Recall that if β = w·αi ∈ Δre
with i∈I, then we define
Aβ∨ = w Aαi w^{-1} = w Ai w^{-1},   rβ = 1 − β Aβ∨
(It is not obvious how to write Aβ∨ in terms of the
Ai's).
Define a ring homomorphism
ev : A_af ⟶ A_ :  ev|S = id,  ev(Aβ∨) = Aβ‾∨,  ev(w th) = w
where if β = α + nδ, then β‾ = α.
This is well-defined.
The embedding A_↪A_af
is a section of ev.
Now identify
ΩK⟶iΩ∼Gaf/P0
we see that ΩK is a Kac-Moody G/P,
so we have all we discussed before, namely:
set Waf-=WafP0
For each x∈Waf-, have Schubert variety
X_‾xΩ and
inclusion
ixΩ:X_‾xΩ⟶ΩK
so have Schubert basis
σxΩ ∈ H2ℓ(x)(ΩK),  σΩx ∈ H2ℓ(x)(ΩK),  σ(x)Ω ∈ HomS(HT(ΩK), S),  σΩ(x) ∈ HT(ΩK)
Also for x∈Waf, have
ψxΩ∈HomS(HT(ΩK),S).
(It is possible that ψxΩ=ψyΩ for
x≠y).
Have A_af-module
structures on HT(ΩK)
and HomS(HT(ΩK),S).
In the Schubert basis
Ax·σ(y)Ω = σ(xy)Ω if xy ∈ Waf- and ℓ(xy) = ℓ(x) + ℓ(y), and 0 otherwise.
Define
HT(ΩK) = S-span of {σ(x)Ω : x ∈ Waf-} ⊂ HomS(HT(ΩK), S)
In our special case at hand, not only do we have Gaf/Baf → Gaf/P0,
but also: ΩK↪Gaf/Baf.
Thus have
HT(ΩK)⟶A_af.
Next time, write the images of σ(x)Ω,
for x∈Waf-, in
A_af under the above embedding and identify
HT(ΩK) as a subalgebra of
A_af.
About Waf- and Waf/W
Recall that Waf-=WafP0 is the set
of minimal representatives of the coset space Waf/WP0 = Waf/W  (WP0 = W).
x ∈ Waf- ⇔ x < x ri ∀ i ∈ I (i ≠ 0) ⇔ x·αi > 0 ∀ i ∈ I.
Write x=wt-h. Then
x·αi = w·t-h·αi = w·(αi + 〈h,αi〉δ) = wαi + 〈h,αi〉δ, so
x ∈ Waf- ⟺ wαi + 〈h,αi〉δ > 0 ∀ i ∈ I
⟺ 〈h,αi〉 ≥ 0, and when 〈h,αi〉 = 0 must have wαi > 0
⟺ h is dominant, and when 〈h,αi〉 = 0 must have w < w ri.
Now for h dominant, set
Wh = the subgroup of W generated by 〈ri : 〈h,αi〉 = 0〉 = {w ∈ W : w·h = h}.
Set Ph = B Wh B ⊃ B, parabolic. Then
Wh = WPh. Let
W^h = W^{Ph} be the set of minimal representatives of the coset
space W/Wh, ie.
w ∈ W^h ⇔ w < w ri ∀ ri ∈ Wh
so
w ∈ W^h ⇔ for each i with 〈h,αi〉 = 0, have w < w ri.
Thus we have proved
Waf- = {w t-h : h dominant (ie. 〈h,αi〉 ≥ 0 ∀ i ∈ I) and w ∈ W^h}
     = {w t-h : h dominant, and if 〈h,αi〉 = 0 for i ∈ I must have wαi > 0}.
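For instance (a small example added here, not in the notes), in type $A_1$ with $W=\{1,s\}$ and $h=m\alpha^\vee$ this description reads
\[ W_{\mathrm{af}}^- \;=\; \{\,t_{-m\alpha^\vee} : m\ge 0\,\}\ \cup\ \{\,s\,t_{-m\alpha^\vee} : m\ge 1\,\}, \]
since dominance means $m\ge 0$, and when $\langle h,\alpha\rangle = 2m = 0$ we must have $w\alpha>0$, forcing $w=1$.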
The map
Waf-⟼Waf/Wwt-h⟼wt-h/W
is of course a bijection.
Now another model for Waf/W is
Γ≃Q∨:Γ⟶∼Waf/Wth⟼th/W.
In other words, each coset Waf/W has a unique translation element
t-h in it, namely
wt-h/W=wt-hw-1/W=t-w·h/W.
Thus:
(1)
each coset in Waf/W has a unique minimal representative.
(2)
each coset in Waf/W has a unique translation element as a representative.
(3)
Let x∈Waf-. Then x
is the minimal representative for the coset xW. We know that x must be of the form
x=wt-h where h
is dominant and w∈Wh. The translation element in this coset is
t-w-h, so
wt-h≤t-w·h.
(4)
When h is dominant and regular, we have
wt-h∈Waf-
for all w∈W. So for different w1,w2∈W,
the two elements w1t-h and
w2t-h lie in two different cosets in
Waf/W.
(5)
A special case is when
x∈Waf-∩Γ.
This is the case iff the minimal representative for xW, namely x itself, coincides with the
translational representative of xW. Write
x=wt-h where h is dominant and
w∈Wh. Then
x=t-w·h⟺wt-h=t-w·h⟺w=1
so
Waf- ∩ Γ = {t-h : h is dominant}.
(6)
Let's now calculate the length ℓ(t-h) when h
is dominant. Recall that α+nδ>0⇔ either
n>0 or n=0,α>0.
Now we need to see for α+nδ>0, when do we have
t-h·(α+nδ)<0.
Now
t-h·(α+nδ)=α+(n+〈h,α〉)δ
If α < 0, say α = -β with β > 0, then
t-h·(α+nδ) < 0
for n = 0, 1, …, 〈h,β〉-1.
If α = 0 and n > 0, then
t-h·(nδ) = nδ > 0.
If α > 0 and n > 0, then
t-h·(α+nδ) > 0.
If α > 0 and n = 0, then
t-h·(α+nδ) > 0.
Then the only case when α+nδ > 0 and
t-h·(α+nδ) < 0
is when α = -β < 0 (so β > 0) and n = 0, 1, …, 〈h,β〉-1.
The number of such elements is ∑_{β>0} 〈h,β〉 = 〈h,2ρ〉.
Hence
ℓ(t-h)=〈h,2ρ〉=∑β>0〈h,β〉
for h dominant. Let's notice that
the sum of all {α+nδ > 0 : t-h·(α+nδ) < 0}
= ∑_{β>0} ( -β + (-β+δ) + (-β+2δ) + … + (-β + (〈h,β〉-1)δ) )
= ∑_{β>0} ( -〈h,β〉β + ½〈h,β〉(〈h,β〉-1)δ ).
(7)
For any x=wt-h∈Waf-,t=t-h1∈Γ-=Waf-∩Γ
we have xt=wt-(h+h1)∈Waf-
and ℓ(xt)=ℓ(x)+ℓ(t).
(8)
Can prove that for x = w t-h ∈ Waf- and α+nδ > 0:
x·(α+nδ) = wα + (n+〈h,α〉)δ < 0 ⟺ either α < 0, wα > 0 and
n = 1, 2, …, -〈α,h〉-1,  or
α < 0, wα < 0 and
n = 1, 2, …, -〈α,h〉.
In other words
{α+nδ>0:wt-h·(α+nδ)<0}={-β+nδ:β>0,wβ<0,n=1,…,〈β,h〉-1}∪{-β+nδ:β>0,wβ>0,n=1,2,…〈β,h〉}.
Consequently,
ℓ(wt-h)=〈2ρ,h〉-ℓ(w).
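The two length formulas above can be checked numerically; here is a minimal sketch (ours, not from the notes), in type A1 only, where the simple root is α, h = m·α∨ with 〈α, α∨〉 = 2 and 〈2ρ, h〉 = 2m, and an affine root cα + nδ is encoded as the pair (c, n):

    def is_positive(c, n):
        # c*alpha + n*delta > 0  iff  n > 0, or n = 0 and c > 0
        return n > 0 or (n == 0 and c > 0)

    def act(eps, k, c, n):
        # the element w t_{k alpha^vee}, with w = id (eps = +1) or w = s (eps = -1), acts by
        #   t_h.(alpha + n delta) = alpha + n delta - <alpha, h> delta,   w.(alpha + n delta) = w alpha + n delta,
        # so (w t_{k alpha^vee}).(c alpha + n delta) = eps*c alpha + (n - 2*c*k) delta
        return (eps * c, n - 2 * c * k)

    def ell(eps, k, n_max=100):
        # length = number of positive affine roots sent to negative ones
        # (for the elements tested below all such roots have n <= n_max, so the cutoff is harmless)
        return sum(1 for n in range(n_max + 1) for c in (1, -1)
                   if is_positive(c, n) and not is_positive(*act(eps, k, c, n)))

    for m in range(1, 6):                   # h = m alpha^vee, dominant
        assert ell(+1, -m) == 2 * m         # l(t_{-h}) = <h, 2 rho> = 2m
        assert ell(-1, -m) == 2 * m - 1     # l(s t_{-h}) = <2 rho, h> - l(s) = 2m - 1
    print("length formulas check out for m = 1, ..., 5")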
Lecture 9: March 12, 1997
Recall the A_af-action on
HomS(HT(ΩK), S):
Ax·σ(y)Ω = σ(xy)Ω if xy ∈ Waf- and ℓ(x)+ℓ(y) = ℓ(xy), and 0 otherwise;
w·ψt = ψwt,  t′·ψt = ψt′t,  for t, t′ ∈ Γ, w ∈ W.
Define
HT(ΩK) = ∑_{x∈Waf-} S σ(x)Ω
as the A_af-submodule of
HomS(HT(ΩK),S)
spanned over S by {σ(x)Ω:x∈Waf-}.
For x∈Waf-, set
Fx = ∑_{y∈Waf-, y≤x} S σ(y)Ω.
Then
ixΩ:X_‾xΩ⟶ΩK
gives
HomS(HT(X_‾xΩ), S) ⟶∼ Fx,   HomS(Fx, S) ⟶∼ HT(X_‾xΩ).
Structure on Fx
(1)
{1⊗ψt:t∈Γ,t≤xw0}
is a free S-basis for Frac(S)⊗SFx
where
Frac(S) = the fraction field of S.
x ∈ Γ- ⇔ the minimal rep. of xW coincides with the translational representative of xW.
(2)
Set
Γ- = Γ ∩ Waf- = {t-h : h ∈ h_ℤ dominant}   (see end of Lecture 8 on Waf- ≃ Waf/W ≃ Γ).
Then:
X_‾tΩ
is K-stable, so Ft is an
A_-submodule of
HT(ΩK),
σ(t)Ω∈[HT(ΩK)]A_
ie. σ(t)Ω is
A_-invariant.
Proof.
To show that X_‾tΩ
is K-stable, it is enough to show
P0t·P0⊂X_‾tΩ⟺t-1B-t∈P0
But for any α ∈ Δ+,  t-h·α = α + 〈h,α〉δ ∈ Δ(P0/baf)  (ie. a root for P0)
⇒ t^{-1} B^- t ⊂ P0 ⇒ X_‾tΩ is K-stable ⇒ Ft is an A_-submod. of HT(ΩK).
Next, we need to show that ∀i∈I,∀ℓ(rit)<ℓ(t)+1,Ai·σ(t)Ω=0∀ ?????. But Ai·σ(t)Ω=0
unless rit∈Waf-.
So just need to show that rit∉Waf-.
for any i∈I.
This is not possible. Suppose rit∈Waf- for
some i. Then ri must satisfy
"〈h,αj〉=0
for some j∈I⇒riαj>0".
Since riαi<0, must have
〈h,αi〉>0.
If ℓ(rit)=ℓ(t)+1,
then t<rit or t-1<t-1ri⇒t-1αi>0.
But t-1·αi=th·αi=αi-〈h,αi〉δ,
Since 〈h,αi〉>0⇒th·αi<0.
Contradiction. Hence Ai·σ(t)Ω = 0 ∀ i ∈ I.
□
Hopf algebra structure on HT(ΩK)
Proposition:HT(ΩK)
is a Hopf algebra over S, commutative and cocommutative.
Proof (outline) and structure maps.
The T-equivariant multiplication map
m:ΩK×ΩK⟶ΩK
induces the product map:
μ:HT(ΩK)⊗HT(ΩK)⟶HT(ΩK).
Since
m(X_‾xΩ×X_‾tΩ)⊂X_‾xtΩ
we actually have
μ:Fx⊗Ft⟶Fxt.
The diagonal imbedding
ΩK⟶ΩK×ΩK
induces the co-product:
Δ:HT(ΩK)⟶HT(ΩK)⊗HT(ΩK).
Clearly
ΔFx⊂Fx⊗Fx
Co-commutativity is clear. As for commutativity of μ, one can give a couple of reasons. One reason is that over
Frac(S),Fx has a basis
{1⊗ψt:t∈Γ,t≤xw0}
and ψtψt′=ψt′ψt=ψt′t.
Another reason is because ΩK is a double loop space so its (at least ordinary) homology is commutative.
unit:ψid
antipode:c(Ft)=Fω(t)
where ω is the diagram automorphism defined by
ω·αi=-αω(i),i≠0,ω(0)=0(⇒ω(w)=w0ww0
for w∈W and ω(th)=t-w0·h).
In terms of the ψt's, the Hopf algebra structure is easier to express.
ε(ψt) = 1,  c(ψt) = ψ_{t^{-1}},  Δψt = ψt ⊗ ψt,  ψt ψt′ = ψtt′,  ψid = 1.
□
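As a quick consistency check (added here, not in the notes), the antipode axiom holds on each $\psi_t$ directly from the relations just listed:
\[ m\,(c\otimes \mathrm{id})\,\Delta\,\psi_t \;=\; \psi_{t^{-1}}\,\psi_t \;=\; \psi_{\mathrm{id}} \;=\; \varepsilon(\psi_t)\cdot 1 . \]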
In the following, we describe a model for HT(ΩK).
The map j:HT(ΩK)⟶A_af
First, we have the general fact that if X is a T-space and
ϕ:ΩK×X⟶X
is a T-equivariant map (with T acting on
ΩK by conjugations and on
ΩK×X by the diagonal
action), then each σ∈HT(ΩK)⊂HomS(HT(ΩK),S)
defines the following composition map
HT(X) ⟶(ϕ*) HT(ΩK) ⊗S HT(X) ⟶(c(σ)⊗id) S ⊗S HT(X) ≃ HT(X).
If ϕ defines an action of ΩK on
X, then these composition maps define an
HT(ΩK)-module
structure on HT(X).
Now assume that X is a Kaf-space. By restriction to
T and ΩK, it is both a
T-space and an ΩK-space
and the action map
ϕ:ΩK×X⟶X
is T-equivariant. Thus each σ∈HT(ΩK)
defines an operator on HT(X). This is functorial
in X, so we get a characteristic operator. In other words, we have a map
j:HT(ΩK)⟶Aˆ_af.
A calculation shows that j(ψt)=t.
Thus j(σ) is compactly supported (?), ie.
j(σ)∈A_af.
It is obvious that j is a ring homomorphism. Since HT(ΩK)
is commutative and since j is an S-map (?), we have
j(HT(ΩK)) ⊂ ZA_af(S), the centralizer of S in A_af   (t ∈ Waf ⊂ A_af commutes with S).
Set
A_Ω = ZA_af(S).
It is a commutative S-algebra. Thus we have an S-algebra homomorphism
j:HT(ΩK)⟶A_Ω=ZA_af(S)
Will show that it is in fact an isomorphism.
Connection between j:HT(ΩK)→A_af and jΩ:ΩK→Gaf/Baf:k↦kBaf.
Have commutative diagram
HT(ΩK)   ⟶j   A_af
    ↓                                     ↓  a ⟼ ε·aR = ε·c(a)L
HomS(HT(ΩK),S)   ⟵(jΩ)*   HomS(HT(Gaf/Baf),S)
Before we find j(σ(x)Ω),
we collect some facts about the action of HT(ΩK)
on HT(X) for a
Kaf-spaceX.
Note: The following Lemma 1 had a large cross through it in the scanned copy.
Lemma 1: For any Kaf-spaceX,
the action of A_af on
HT(X) factors through A_
via the map (Is this right?)
ev:A_af⟶A_
where, recall,
ev|S=id,ev|Aβ∨=Aβ‾∨,ev|wth=w.
Lemma 2: For σ∈HT(ΩK),(id⊗ev)Δ·j(σ)=j(σ)⊗1.?
Proof.
This is roughly due to the fact that
ΩK↪Kaf:k⟼(k,1).
□
Now for any A_af-moduleM and A_-moduleN, set
M*SN=M⊗Sev*N,
an A_af-module. Then by Lemma 2,
j(σ)·(m⊗n)=j(σ)·m⊗n.
Apply this to the action map
F:HT(ΩK)⊗SHT(X)⟶HT(X).
Proposition: The above action map is an A_af-module map.
Proof.
For σ∈HT(ΩK) and
z∈HT(X), we know
F(σ⊗z)=j(σ)·z?
so for w∈Ww·F(σ⊗z)=w·j(σ)·z.
In particular
w·F(ψt⊗z)=w·t·z=wtw-1·w·z=(w·t)·(w·z).
On the other hand
F(w·(ψt⊗z))=F(w·ψt⊗w·z)=F(w·(ψt⊗z)).
Also
t′·F(ψt⊗z) = t′t·z = F(ψt′t⊗z) = F(t′·(ψt⊗z)).
□
Proposition: The multiplication map
HT(ΩK)⊗SHT(ΩK)⟶HT(ΩK)
is an A_af-map.
Proof.
This is because
σσ′=j(σ)·σ′.
□
More generally, for any A_af-moduleM, the map
ϕ:HT(ΩK)⊗SM⟶Mσ⊗m⟼j(σ)·m
is always an A_af-module map.
We now look at j(σ(x)Ω),
Introduce the ideal I⊂A_af:
(left ideal)
I = ∑_{w∈W, w≠id} A_af Aw.
This is the ideal of annihilators of 1∈HT(ΩK)
for the action of A_af on
HT(ΩK).
Corollary 1: Axw0 = j(σ(x)Ω) Aw0
where w0= longest in W.
Proof.
j(σ(x)Ω) Aw0 = (Ax + a) Aw0 = Ax Aw0 = Axw0   (a ∈ I).
□
Corollary 2: For any x ∈ Waf- and t ∈ Γ-,  σ(x)Ω σ(t)Ω = σ(xt)Ω and Fx Ft = Fxt.
(ℓ(x) + ℓ(w0) = ℓ(xw0) holds for all x ∈ Waf-.) This is due to the following general fact:
For any parabolic P,∀x∈WP,y∈WP,ℓ(xy)=ℓ(x)+ℓ(y).
Proof.
Since σ(t)Ω∈[HT(ΩK)]A_,
have
σ(x)Ω σ(t)Ω = j(σ(x)Ω)·σ(t)Ω = (Ax + a)·σ(t)Ω  (a ∈ I) = Ax·σ(t)Ω  (since a·σ(t)Ω = 0) = σ(xt)Ω.
(We are saying ℓ(x)+ℓ(t)=ℓ(xt)? automatically?)
□
Proposition: HT(ΩK) ⊗S A_ ⟶ A_af : σ⊗a ⟼ j(σ)a
is an A_af-module isomorphism,
where A_af acts on A_
via ev:A_af→A_.
Proof.
□
?????or:j:HT(ΩK)→∼A_Ω
is an isomorphism.
Thus we have a direct sum decomposition
A_af≃A_Ω+I
as an A_Ω-module.
Structures on A_Ω
First, by identifying
A_Ω≅A_af/I
we get an A_af-module
structure on A_Ω,
ie., for a∈A_af and
a′∈A_Ω,a·a′∈A_Ω
is the unique element of A_Ω st
a·a′-aa′∈I.
By definition, Z(A_af) ⊂ A_Ω ≃ ZA_af(S),
and the action of A_af on A_Ω
is Z(A_af)-linear.
For each x ∈ Waf-, j(σ(x)Ω)
is the unique element in A_Ω such that
j(σ(x)Ω)∈Ax+I.
In other words,
j(σ(x)Ω)=Ax·1
for the action of A_af on
A_Ω.
We can calculate the action of A_af on
A_Ω as follows.
Proposition: For s∈S,a∈A_Ω,w∈W,t∈Γ and
β ∈ Δre,
s·a = sa = as,
wt·a = wtaw^{-1},
Aβ∨·a = Aβ∨ a − rβ a Aβ‾∨   (β = α+nδ, β‾ = α)
      = Aβ∨ a rβ‾ + a Aβ‾∨   (for Aα0∨:  α‾0∨ = -θ∨).
The proof of this proposition is not trivial. Need calculat?????
Theorem: The map
j:HT(ΩK)⟶A_Ω
is an isomorphism of both A_af-modules
and Hopf algebra modules.
Lecture 10: March 19, 1997
Ω-integrableA_af-modules
We first recall the definition of the integrable A_-modules
where A_ is A_af
or A_finite, that was given at the end of
Lecture 6:
An integrable A_-module is an
A_-module structure on
𝒪(X), where X is an affine scheme over
h_=SpecS with structure
homomorphism πX:S→𝒪(X)
such that
(1)
s·p=πX(s)p∀s∈S,p∈𝒪(X).
(2)
πX:S→𝒪(X)
is an A_-module map.
(3)
m:𝒪(X)⊗S𝒪(X)→𝒪(X)
is an A_-module map.
(4)
For each p∈𝒪(X),Aw·p=0 for all but finitely many w∈W.
Now back to our notation where A_ denotes the nil-Hecke ring for the finite Weyl group
W. Then condition (4) is not needed.
Definition: An Ω-integrableA_af-module is by definition
an affine scheme X over h_=SpecS,
with structure homomorphism πX:S→𝒪(X),
and an A_af-module structure on
𝒪(X) such that
(1)
X is an integrable A_-module by restricting
the action of A_af to A_;
(2)
m:𝒪(X)*S𝒪(X)→𝒪(X)
is an A_af-map.
(Part of the requirement for (1) is in (2) as well).
Question: Is (2) weaker than asking m:𝒪(X)⊗S𝒪(X)→𝒪(X)
being an A_af-map? This seems to be just a
different requirement. So the notion of Ω-integrableA_af-module seems different from that of
an integrable A_af-module.
Set 𝒜=SpecHT(ΩK).
Then 𝒜 is an integrable A_-module.
We know from Lecture 9 (page 9-9) that m:HT(ΩK)*SHT(ΩK)→HT(ΩK)
is an A_af-module map, so
𝒜 is an Ω-integrableA_af-module.
Proposition: An Ω-integrableA_af-module structure on
𝒪(X) is equivalent to
(1)
an integrable A_-module structure
𝒪(X); and
(2)
an A_-module map
g:HT(ΩK)→𝒪(X).
More explicitly, given an A_af-module
structure on 𝒪(X), by restriction to
A_ we get an integrable
A_-module structure on
𝒪(X), and the map
g:HT(ΩK)⟶𝒪(X):g(σ)=j(σ)·1.
Conversely, given (1) and (2), the A_af-module
structure on 𝒪(X) is defined by
(j(σ)a)·p=g(σ)(a·p).
Proof.
Assume that the A_af-module structure
on 𝒪(X) is given. We need to show that the map g is an
A_-map, i.e., for a∈A_
and σ∈HT(ΩK),
need to show
g(a·σ)=a·g(σ).
Now
g(a·σ)=j(a·σ)·1,a·g(σ)=a·j(σ)·1=(aj(σ))·1.
Thus we need to show
(j(a·σ)-aj(σ))·1=0∈𝒪(X).
But we know that the action of A_ on
HT(ΩK) is characterised by the fact that
j(a·σ)-aj(σ)∈I=∑w∈Ww≠idA_afAw.
Since for any i∈I,Ai·1=Ai·πX(1)=πX(Ai·1)=0∈𝒪(X)
we see that b·1=0 for any b∈I.
Thus
(j(a·σ)-aj(σ))·1=0
or g:HT(ΩK)→𝒪(X)
is an A_-map.
Conversely, assume that we are given an integrable A_-module
structure on 𝒪(X) and an A_-mapg:HT(ΩK)→𝒪(X).
Define, for σ∈HT(ΩK) and
a ∈ A_, p ∈ 𝒪(X):  (j(σ)a)·p = g(σ)(a·p).
Need to show that this gives an Ω-integrableA_af-module structure on
𝒪(X). First we need to show that this is indeed an action of
A_af. This must follow from the fact that
HT(ΩK)*SA_⟶A_af:σ⊗a⟼j(σ)a
is an A_af-module map. (?) In order to show
m:𝒪(X)⊗S𝒪(X)⟶𝒪(X)
is an A_af-module map, only need to show
m(j(σ)·(p1⊗p2))=j(σ)·(p1p2).
But
j(σ)·(p1p2)=g(σ)p1p2
and (Remark after Lemma 2 in Lecture 9 on page 9-7)
m(j(σ)·(p1⊗p2))=m(j(σ)·p1⊗p2)=m(g(σ)p1⊗p2)=g(σ)p1p2
so
m(j(σ)·(p1⊗p2))=j(σ)·(p1p2).
□
Need to fill in the proof of why (j(σ)a)·p=defg(σ)(a·p)
defines an A_af-action.
In more geometrical terms, let
𝒰=SpecHT(K/T).
We said in Lecture 6 that an integrable A_-module
should be thought of as an action ϕ:𝒰×A_X→X.
In this language, an Ω-integrableA_af-module
structure on 𝒪(X)⟺ pairs
(ϕ,f) where ϕ is an action of
𝒰 on X and f:X→𝒜 is a
𝒰-equivariant map.
The polynomials jxy,x∈Waf-,y∈Waf
For x∈Waf-, introduce
jxy∈S,y∈Waf, by
j(σ(x)Ω)=∑y∈WafjxyAy.
In terms of the map
jΩ:Ωx⟶Gaf/Baf
we have
jΩ*σGaf/Baf(y)=∑x∈Waf-jxyσΩ(x).
Immediate properties of the polynomial jxy's,x∈Waf-,y∈Waf:
Property 1:  deg jxy = 2(ℓ(y) − ℓ(x)).
This is because
deg(σ(x)Ω) = -2ℓ(x),   deg Ay = -2ℓ(y).
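Spelled out (a one-line degree count added here): comparing degrees term by term in $j(\sigma^{(x)}_\Omega)=\sum_y j_x^y A_y$ gives
\[ -2\ell(x) \;=\; \deg j_x^y \;-\; 2\ell(y), \qquad\text{so}\qquad \deg j_x^y \;=\; 2\bigl(\ell(y)-\ell(x)\bigr). \]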
Since
jΩ(X_‾xΩ)⊂πp-1(X_‾xGaf/Baf)=X_‾xw0Gaf/Baf
and since jΩ*(ψt)=ψt
by definition, we have jΩ*(z)=0
in HTX_‾xΩ
if ψt(z)=0 for all
t∈Γ with t ?????. Property 3 now follows from this.
□
Proposition: For x, z ∈ Waf-,
σ(x)Ω σ(z)Ω = ∑_{y∈Waf, yz∈Waf-, ℓ(y)+ℓ(z)=ℓ(yz)} jxy σ(yz)Ω.
Conjecture: The jxy's are polynomials in the
αi's with coefficients in
ℤ+={0,1,2,…}.
Remark 1: Can show jxy∈ℤ+ when
ℓ(y)=ℓ(x)
by making connection with quantum cohomology: these are the Gromov-Witten invariants.
Remark 2: We proved last time that ∀x∈Waf-
and t∈Γ-=Waf-∩Γ,σ(x)Ωσ(t)Ω=σ(xt)Ω.
On the other hand, since HT(ΩK)
is commutative, we have
σ(x)Ωσ(t)Ω=σ(t)Ωσ(x)Ω=j(σ(t)Ω)·σ(x)Ω.
It follows that, for h dominant
j(σ(t-h)Ω)=∑w∈WAt-w·h
since σ(t-h)Ω
is A_-invariant, we know that
j(σ(t-h)Ω)
is in the center of A_af.
An integral formula
Define
ev1 : Kaf/T ⟶ K/T,   ev1(kT) = k(1)T.
Proposition: For x,y∈Waf-
and w ∈ W,
jxyω(w-1) = 〈σGaf/Baf(yw0)·ev1*(w0L·σG/B(w)), σ(xw0)Gaf/Baf〉
         = ∫_[X_‾Ωyw0 ∩ X_‾xw0Ω] ev1*(w0L·σG/B(w))
where
X_‾Ωyw0=Baf-yw0·Baf‾,X_‾xw0Ω=Bafxw0·Baf‾
and
ω(w)=w0ww0-1
is the diagram automorphism.
Remarks:
1.
w0L·σG/B(w)
restricts to σG/Bw under the restriction map
HT(K/T)⟶H(K/T).
2.
The formula for ℓ(w)=1 will be used later to show that
H*(ΩK)≃qH*(G/B).
Proof.
The proof uses various formulas we have proved so far.
〈σGaf/Baf(yw0)ev1*(w0L·σG/B(w)),σ(xw0)Gaf/Baf〉=ε((Axw0)R·(σGaf/Baf(yw0)ev1*(w0L·σG/B(w))))(definition of〈〉)=ε(j(σ(x)Ω)R·Aw0R·(σGaf/Baf(yw0)ev1*(w0L·σG/B(w))))(Axw0=j(σ(x)Ω)Aw0from Lecture?????)=ε(j(σ(x)Ω)R·(∑v∈W((Aw,v)R·σGaf/Baf(yw0))((w0Av)R·ev1*?????)))(Δλw0=∑v∈WAw0v⊗w0Avfrom Lecture?????)=ε(j(σ(x)Ω)R·(∑v∈WσGaf/Baf(yw0v-1w0)ev1*(w0RAvRw0L·σG/B(w))))ℓ(yw0v-1w0)+ℓ(w0v)=ℓ(yw0)⇕ℓ(w-v-1w0)+ℓ(w0v)=ℓ(w0)automatically satisfied.(Formula forAw0vR·from Lecture 6 andbeginning of Lecture 7),ev1*comm?????=ε(j(σ(x)Ω)R·∑v∈Wℓ(wv-1)+ℓ(v)=ℓ(w)σGaf/Baf(yω(v)-1)ev1*(w0Rw0L·σG/B(wv-1)))=ε(∑v∈Wℓ(wv-1)+ℓ(v)=ℓ(w)(j(σ(x)Ω)R·σGaf/Baf(yω(v)-1))ev1*(w0Rw0L·σG/B(wv-1)))((id⊗ev)Δ·j(σ)=j(σ)⊗1in Lecture 9)=∑v∈Wℓ(wv-1)+ℓ(v)=ℓ(w)ε(jσ(x)ΩR·σGaf/Baf(yω(v)-1))ε(ev1*(w0Rw0L·σG/B(wv-1)))(εis a homomorphism).=∑v∈Wℓ(wv-1)+ℓ(v)=ℓ(w)ε(jσ(x)ΩR·σGaf/Baf(yω(v)-1))δv,w(ε(ev1*(w0Rw0K·σG/B(wv-1)))=ε(σG/B(wv-1))=δv,wWhy?)=ε(j(σ(x)Ω)R·σGaf/Baf(yω(w)-1))=〈j(σ(x)Ω),σGaf/Baf(yω(w)-1)〉=jxyω(w)-1.
The fact that this is then equal to the integral is almost by definition of the Schubert basis and of the pairing 〈〉.
□
Remark:X_‾xΩ
is rational and irreducible (?).
The basis {σ[x]:x∈Waf-} for HT(ΩK)
For x∈Waf-, set
σ[x]=ϵ(x)c(σ(x)Ω)∈HT(ΩK).
This is an S-basis for HT(ΩK).
The automorphism ν of A_af
is used to obtain properties for this basis:
ν|Δ=id|Δ,ν|A_Ω=c.
Can check that
ν(a) = (-1)^{(1/2) deg a} w0 ω(a) w0,   a ∈ Waf
where, recall, ω(w)=w0ww0,ω(th)=?tω(h)=?????
Also have
ν(a)·c(σ)=c(a·σ).
Fact 2: For x ∈ Waf and y ∈ Waf-,
ν(Ax)·σ[y] = ϵ(x)σ[xy] if xy ∈ Waf- and ℓ(x)+ℓ(y) = ℓ(xy), and 0 otherwise.
Proof.
Follows from σ[x]=ϵ(x)ν(Ax)·1.
□
Fact 3: For t ∈ Γ-,  σ[t] = σ(ω(t))Ω.
Fact 4: For x, z ∈ Waf-,
σ[x]σ[z] = ∑_{y∈Waf, yz∈Waf-, ℓ(y)+ℓ(z)=ℓ(yz)} ϵ(xy) jxy σ[yz].
Ideals in HT(ΩK) and A_af
Proposition: If M is an A_af-submodule
of HT(ΩK), then
1)
M is an ideal of HT(ΩK)
which is stable under A_;
2)
j(M)A_=A_j(M)
is a 2-sided ideal of A_af.
Proof.
Assume that M is an A_af-submodule
of HT(ΩK).
Then it is automatically A_-stable. If
σ∈HT(ΩK)
and m∈M, we have
σm=j(σ)·m.
Since M is A_af-stable,⇒j(σ)·m∈M⇒σm∈M. Hence
M⊂HT(ΩK) is
an ideal. Now for i ∈ I and m ∈ M,  Ai j(m) = j(m) Ai + j(Ai·m) ri,  so A_ j(m) ⊂ j(M) A_.
Also have
j(m) Ai = Ai j(m) − ri j(Ai·m)  ⇒  j(M) A_ ⊆ A_ j(M)  ⇒  j(M) A_ = A_ j(M).
Thus j(M) is stable under both left and right multiplications by elements in both
j(HT(ΩK))
and A_. Hence
j(M) is a 2-sided
ideal of A_af.
□
Examples of ideals of HT(ΩK):
For β∈Δ+re, let
K(β) = ∑_{x∈Waf-, x·β<0} S σ[x].
Since ℓ(zx)=ℓ(z)+ℓ(x)
and x·β<0⇒(zx)·β<0,
the formula in Fact 2 implies that K(β) is an
A_af-stable submodule of
HT(ΩK).
Hence it is an A_-stable ideal of
HT(ΩK).
The sum of these things will be the kernel of the map from HT(ΩK)
to qH(G/B).
Future Lectures:
Compare H*(ΩK) and
qH*(G/B).
Compare moduli spaces and intersection of Schubert varieties; the stable Bruhat order.
Compare qH*(G/B) and
qH*(G/P).
Compare: σG/Bri*
in qH*(G/B),σ[rit-h]·
in HT(ΩK),σG/B[ri]·
in HT(G/B).
Notes and references
This is a typed version of Lecture Notes for the course Quantum Cohomology of G/P by Dale Peterson. The course was taught at MIT in the Spring of 1997.