## Notes on Schubert Polynomials, Chapter 5

Last update: 2 July 2013

## Orthogonality

Recall that

$P_n = \mathbb{Z}[x_1,\dots,x_n], \qquad \Lambda_n = \mathbb{Z}[x_1,\dots,x_n]^{S_n},$

where ${x}_{1},\dots ,{x}_{n}$ are independent indeterminates.

(5.1) ${P}_{n}$ is a free ${\Lambda }_{n}\text{-module}$ of rank $n!$ with basis

$B_n = \{ x^\alpha : 0 \le \alpha_i \le i-1,\ 1 \le i \le n \}.$

Proof. By induction on $n$. The result is trivially true when $n=1$, so assume that $n>1$ and that $P_{n-1}$ is a free $\Lambda_{n-1}$-module with basis $B_{n-1}$. Since $P_n = P_{n-1}[x_n]$, it follows that $P_n$ is a free $\Lambda_{n-1}[x_n]$-module with basis $B_{n-1}$. Now

$\Lambda_{n-1}[x_n] = \Lambda_n[x_n],$

because the identities

$e_r(x_1,\dots,x_{n-1}) = \sum_{s=0}^{r} (-x_n)^s\, e_{r-s}(x_1,\dots,x_n)$

show that $\Lambda_{n-1} \subset \Lambda_n[x_n]$, and on the other hand it is clear that $\Lambda_n \subset \Lambda_{n-1}[x_n]$. Hence $P_n$ is a free $\Lambda_n[x_n]$-module with basis $B_{n-1}$. To complete the proof it remains to show that $\Lambda_n[x_n]$ is a free $\Lambda_n$-module with basis $1, x_n, \dots, x_n^{n-1}$. Since $\prod_{i=1}^{n}(x_n - x_i) = 0$, we have

$x_n^n = e_1 x_n^{n-1} - e_2 x_n^{n-2} + \dots + (-1)^{n-1} e_n,$

from which it follows that the $x_n^{n-i}$ $(1 \le i \le n)$ generate $\Lambda_n[x_n]$ as a $\Lambda_n$-module. On the other hand, if we have a relation of linear dependence

$\sum_{i=1}^{n} f_i\, x_n^{n-i} = 0$

with coefficients $f_i \in \Lambda_n$, then we have also

$\sum_{i=1}^{n} f_i\, x_j^{n-i} = 0$

for $j = 1, 2, \dots, n$, and since

$\det(x_j^{\,n-i}) = \prod_{i<j}(x_i - x_j) \ne 0$

it follows that $f_1 = \dots = f_n = 0$. $\square$
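Both identities used in this proof are easy to spot-check numerically. The following Python sketch (an illustration added here, not part of Macdonald's text; all names are ad hoc) verifies them for $n = 3$ at random integer points:

```python
from itertools import combinations
from math import prod
from random import randint

def e(r, xs):
    """Elementary symmetric polynomial e_r evaluated at the numbers xs (e_0 = 1)."""
    return sum(prod(c) for c in combinations(xs, r))

n = 3
for _ in range(20):
    xs = [randint(-9, 9) for _ in range(n)]
    xn = xs[-1]
    # e_r(x_1,...,x_{n-1}) = sum_{s=0}^{r} (-x_n)^s e_{r-s}(x_1,...,x_n),
    # which shows Lambda_{n-1} lies inside Lambda_n[x_n]
    for r in range(n):
        assert e(r, xs[:-1]) == sum((-xn) ** s * e(r - s, xs) for s in range(r + 1))
    # expanding prod_{i=1}^{n}(x_n - x_i) = 0 gives
    # x_n^n = e_1 x_n^{n-1} - e_2 x_n^{n-2} + ... + (-1)^{n-1} e_n
    assert xn ** n == sum((-1) ** (k - 1) * e(k, xs) * xn ** (n - k)
                          for k in range(1, n + 1))
```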

As before, let $\delta =\left(n-1,n-2,\dots ,1,0\right)\text{.}$ By reversing the order of ${x}_{1},\dots ,{x}_{n}$ in (5.1) it follows that

(5.1') The monomials ${x}^{\alpha },\alpha \subset \delta$ (i.e., $0\le {\alpha }_{i}\le n-i$ for $1\le i\le n\text{)}$ form a ${\Lambda }_{n}\text{-basis}$ of ${P}_{n}\text{.}$

We define a scalar product on ${P}_{n},$ with values in ${\Lambda }_{n},$ by the rule

$(5.2) \qquad \langle f, g \rangle = \partial_{w_0}(fg) \qquad (f, g \in P_n)$

where ${w}_{0}$ is the longest element of ${S}_{n}\text{.}$ Since ${\partial }_{{w}_{0}}$ is ${\Lambda }_{n}\text{-linear,}$ so is the scalar product.

(5.3) Let $w\in {S}_{n}$ and $f,g\in {P}_{n}\text{.}$ Then

(i) $\langle \partial_w f, g\rangle = \langle f, \partial_{w^{-1}} g\rangle$; (ii) $\langle wf, g\rangle = \varepsilon(w)\langle f, w^{-1}g\rangle$,

where $\varepsilon(w) = (-1)^{\ell(w)}$ is the sign of $w$.

Proof. (i) It is enough to show that $\langle \partial_i f, g\rangle = \langle f, \partial_i g\rangle$ for $1 \le i \le n-1$. Since $\ell(w_0 s_i) = \ell(w_0) - 1$ we have $\partial_{w_0} = \partial_{w_0 s_i}\partial_i$, and hence

$\langle \partial_i f, g\rangle = \partial_{w_0}((\partial_i f)g) = \partial_{w_0 s_i}\partial_i((\partial_i f)g) = \partial_{w_0 s_i}((\partial_i f)(\partial_i g))$

because $\partial_i f$ is symmetrical in $x_i$ and $x_{i+1}$. The last expression is symmetrical in $f$ and $g$, hence $\langle \partial_i f, g\rangle = \langle \partial_i g, f\rangle = \langle f, \partial_i g\rangle$ as required.

(ii) Again it is enough to show that $\langle s_i f, g\rangle = -\langle f, s_i g\rangle$. We have

$\langle s_i f, g\rangle = \partial_{w_0}((s_i f)g) = \partial_{w_0 s_i}\partial_i s_i(f\,(s_i g))$

and since $\partial_i s_i = -\partial_i$ this is equal to

$-\partial_{w_0 s_i}\partial_i(f\,(s_i g)) = -\partial_{w_0}(f\,(s_i g)) = -\langle f, s_i g\rangle.$ $\square$
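The scalar product (5.2) is straightforward to implement exactly. The sketch below (illustrative only; the dict-of-exponent-tuples representation and the helper names are my own) encodes polynomials of $P_3$ as integer-coefficient dictionaries, applies $\partial_i$ monomial by monomial, and checks $\langle x^\delta, 1\rangle = 1$, an instance of (5.3)(i), and $\Lambda_n$-linearity:

```python
# polynomials in Z[x1, x2, x3] as dicts {exponent tuple: coefficient}

def add_term(poly, mono, coeff):
    c = poly.get(mono, 0) + coeff
    if c:
        poly[mono] = c
    else:
        poly.pop(mono, None)

def mul(f, g):
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            add_term(out, tuple(a + b for a, b in zip(m1, m2)), c1 * c2)
    return out

def divdiff(i, f):
    """partial_i f = (f - s_i f)/(x_i - x_{i+1}); here i is 0-based."""
    out = {}
    for m, c in f.items():
        a, b = m[i], m[i + 1]
        if a == b:
            continue  # s_i-symmetric monomial, killed by partial_i
        lo, hi, sign = (b, a, 1) if a > b else (a, b, -1)
        # (u^a v^b - u^b v^a)/(u - v) = sign * sum_{j=lo}^{hi-1} u^j v^{lo+hi-1-j}
        for j in range(lo, hi):
            mm = list(m)
            mm[i], mm[i + 1] = j, lo + hi - 1 - j
            add_term(out, tuple(mm), sign * c)
    return out

def scalar(f, g):
    """<f, g> = partial_{w0}(f g), using the reduced word w0 = s1 s2 s1."""
    h = mul(f, g)
    for i in (0, 1, 0):
        h = divdiff(i, h)
    return h

one = {(0, 0, 0): 1}
xdelta = {(2, 1, 0): 1}                               # x^delta = x1^2 x2
e1 = {(1, 0, 0): 1, (0, 1, 0): 1, (0, 0, 1): 1}       # e_1 = x1 + x2 + x3

assert scalar(xdelta, one) == one                     # <x^delta, 1> = 1
f, g = {(2, 0, 0): 1}, {(0, 1, 1): 1}                 # f = x1^2, g = x2 x3
assert scalar(divdiff(0, f), g) == scalar(f, divdiff(0, g))     # (5.3)(i), i = 1
f1, g1 = {(1, 0, 0): 1}, {(1, 1, 0): 1}               # x1 and x1 x2
assert scalar(mul(e1, f1), g1) == mul(e1, scalar(f1, g1))       # Lambda_n-linearity
```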

(5.4) Let $u, v \in S_n$ be such that $\ell(u) + \ell(v) = \binom{n}{2}$. Then

$\langle 𝔖_u, 𝔖_v \rangle = \begin{cases} 1 & \text{if } v = w_0 u, \\ 0 & \text{otherwise.} \end{cases}$

Proof. We have

$\langle 𝔖_u, 𝔖_v \rangle = \langle \partial_{u^{-1} w_0} x^\delta, 𝔖_v \rangle = \langle x^\delta, \partial_{w_0 u}\, 𝔖_v \rangle$

by (5.3). Also $\ell(w_0 u) = \ell(w_0) - \ell(u) = \ell(v)$, hence

$\partial_{w_0 u}\, 𝔖_v = \begin{cases} 1 & \text{if } v = w_0 u, \\ 0 & \text{otherwise.} \end{cases}$

It follows that

$\langle 𝔖_u, 𝔖_v \rangle = \begin{cases} 0 & \text{if } v \ne w_0 u, \\ \langle x^\delta, 1 \rangle = \partial_{w_0}(x^\delta) = 1 & \text{if } v = w_0 u. \end{cases}$ $\square$
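For $n = 3$ the orthogonality (5.4) can be verified exhaustively. This sketch (illustrative only; helper names are ad hoc) builds all six Schubert polynomials from $𝔖_{w_0} = x^\delta$ by applying $\partial_i$ at descents, then checks every pair of complementary lengths:

```python
from itertools import permutations

# polynomials in Z[x1, x2, x3] as dicts {exponent tuple: coefficient}

def add_term(poly, mono, coeff):
    c = poly.get(mono, 0) + coeff
    if c:
        poly[mono] = c
    else:
        poly.pop(mono, None)

def mul(f, g):
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            add_term(out, tuple(a + b for a, b in zip(m1, m2)), c1 * c2)
    return out

def divdiff(i, f):
    """partial_i f = (f - s_i f)/(x_i - x_{i+1}); i is 0-based."""
    out = {}
    for m, c in f.items():
        a, b = m[i], m[i + 1]
        if a == b:
            continue
        lo, hi, sign = (b, a, 1) if a > b else (a, b, -1)
        for j in range(lo, hi):
            mm = list(m)
            mm[i], mm[i + 1] = j, lo + hi - 1 - j
            add_term(out, tuple(mm), sign * c)
    return out

def scalar(f, g):
    """<f, g> = partial_{w0}(f g), reduced word w0 = s1 s2 s1."""
    h = mul(f, g)
    for i in (0, 1, 0):
        h = divdiff(i, h)
    return h

def length(w):
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

# S_{w0} = x^delta = x1^2 x2; peel descents: S_{w s_i} = partial_i S_w
w0 = (3, 2, 1)
schub = {w0: {(2, 1, 0): 1}}
for ln in (3, 2, 1):
    for w in [u for u in schub if length(u) == ln]:
        for i in range(2):
            if w[i] > w[i + 1]:
                v = list(w)
                v[i], v[i + 1] = v[i + 1], v[i]
                schub[tuple(v)] = divdiff(i, schub[w])

# (5.4): <S_u, S_v> = 1 if v = w0 u, else 0, whenever l(u) + l(v) = 3
one = {(0, 0, 0): 1}
for u in permutations((1, 2, 3)):
    for v in permutations((1, 2, 3)):
        if length(u) + length(v) == 3:
            expected = one if v == tuple(4 - a for a in u) else {}
            assert scalar(schub[u], schub[v]) == expected
```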

(5.5) Let $u,v\in {S}_{n}\text{.}$ Then

$\langle w_0 𝔖_u, 𝔖_{v w_0} \rangle = \varepsilon(v)\, \delta_{uv}.$

Proof. We have

$\langle w_0 𝔖_u, 𝔖_{v w_0} \rangle = \langle w_0 𝔖_u, \partial_{w_0 v^{-1} w_0}\, x^\delta \rangle = \langle \partial_{w_0 v w_0}(w_0 𝔖_u), x^\delta \rangle = \varepsilon(v) \langle w_0 \partial_v 𝔖_u, x^\delta \rangle$

by (5.3) and (2.12). By (4.2) the scalar product is therefore zero unless $\ell(u) - \ell(v) = \ell(u v^{-1})$, and then it is equal to $\varepsilon(v) \langle w_0 𝔖_{u v^{-1}}, x^\delta \rangle$. Now $𝔖_{u v^{-1}}$ is a linear combination of monomials $x^\alpha$ such that $\alpha \subset \delta$ and $|\alpha| = \ell(u) - \ell(v)$. Hence $w_0(𝔖_{u v^{-1}})\, x^\delta$ is a sum of monomials $x^\beta$ where

$\beta = w_0 \alpha + \delta \subset w_0 \delta + \delta = (n-1, \dots, n-1).$

Now $\partial_{w_0} x^\beta = 0$ unless all the components $\beta_i$ of $\beta$ are distinct; since $0 \le \beta_i \le n-1$ for each $i$, it follows that $\partial_{w_0} x^\beta = 0$ unless $\beta = w\delta$ for some $w \in S_n$, and in that case

$w_0 \alpha = \beta - \delta = w\delta - \delta$

must have all its components $\ge 0$. So the only possibility that gives a non-zero scalar product is $w = 1$, $\alpha = 0$, $u = v$, and in that case

$\langle w_0 𝔖_u, 𝔖_{v w_0} \rangle = \varepsilon(v)\langle 1, x^\delta \rangle = \varepsilon(v)\, \partial_{w_0}(x^\delta) = \varepsilon(v).$ $\square$

(5.6) The Schubert polynomials $𝔖_w$, $w \in S_n$, form a $\Lambda_n$-basis of $P_n$.

Proof. Let $u, v \in S_n$ and let

$(1) \qquad w_0 𝔖_u = \sum_{\alpha \subset \delta} a_{u\alpha}\, x^\alpha,$

$(2) \qquad \varepsilon(v)\, 𝔖_{v w_0} = \sum_{\beta \subset \delta} b_{v\beta}\, x^\beta,$

with coefficients $a_{u\alpha}, b_{v\beta} \in \Lambda_n$. Let $c_{\alpha\beta} = \langle x^\alpha, x^\beta \rangle$. Then from (5.5) we have

$\sum_{\alpha,\beta} a_{u\alpha}\, c_{\alpha\beta}\, b_{v\beta} = \delta_{uv},$

or in matrix terms

$(3) \qquad A C B^t = 1,$

where $A = (a_{u\alpha})$, $B = (b_{v\beta})$ and $C = (c_{\alpha\beta})$ are square matrices of size $n!$, with coefficients in $\Lambda_n$. From (3) it follows that each of $A, B, C$ has determinant $\pm 1$; hence the equations (2) can be solved for the $x^\beta$, $\beta \subset \delta$, as $\Lambda_n$-linear combinations of the Schubert polynomials $𝔖_w$, $w \in S_n$. Since by (5.1') the $x^\beta$ form a $\Lambda_n$-basis of $P_n$, so also do the $𝔖_w$. $\square$

We have

$(5.7) \qquad \langle f, g \rangle = \sum_{w \in S_n} \varepsilon(w)\, \partial_w(w_0 f)\, \partial_{w w_0}(g)$

for all $f, g \in P_n$.

Proof. Let $\Phi(f,g)$ denote the right-hand side of (5.7). We claim first that

$(1) \qquad \Phi(f,g) \in \Lambda_n.$

For this it is enough to show that $\partial_i \Phi = 0$ for $1 \le i \le n-1$. Let

$A_i = \{ w \in S_n : \ell(s_i w) > \ell(w) \};$

then $S_n$ is the disjoint union of $A_i$ and $s_i A_i$, and $s_i A_i = A_i w_0$. Hence

$\Phi(f,g) = \sum_{w \in A_i} \varepsilon(w) \left\{ \partial_w(w_0 f)\, \partial_i(\partial_{s_i w w_0}\, g) - \partial_i \partial_w(w_0 f)\, \partial_{s_i w w_0}(g) \right\}.$

Since for all $\varphi, \psi \in P_n$ we have

$\partial_i(\varphi\, \partial_i \psi - (\partial_i \varphi)\psi) = (\partial_i \varphi)(\partial_i \psi) - (\partial_i \varphi)(\partial_i \psi) = 0,$

it follows that $\partial_i \Phi(f,g) = 0$ for all $i$, as required. Next, since each operator $\partial_w$ is $\Lambda_n$-linear, it follows that $\Phi(f,g)$ is $\Lambda_n$-linear in each argument. By (5.6) it is therefore enough to verify (5.7) when $f = w_0 𝔖_u$ and $g = 𝔖_{v w_0}$, where $u, v \in S_n$. We have then

$\Phi(w_0 𝔖_u, 𝔖_{v w_0}) = \sum_{w \in S_n} \varepsilon(w)\, \partial_{w^{-1}}(𝔖_u)\, \partial_{w^{-1} w_0}(𝔖_{v w_0}),$

which by (4.2) is equal to

$(2) \qquad \sum_w \varepsilon(w)\, 𝔖_{uw}\, 𝔖_{vw}$

summed over $w \in S_n$ such that

$\ell(uw) = \ell(u) - \ell(w^{-1}) = \ell(u) - \ell(w)$

and

$\ell(vw) = \ell(v w_0) - \ell(w^{-1} w_0) = \ell(w) - \ell(v).$

Hence the polynomial (2) is (i) symmetric in $x_1, \dots, x_n$ (by (1) above), (ii) independent of $x_n$, (iii) homogeneous of degree $\ell(u) - \ell(v)$. By (i) and (ii) it is constant, since a symmetric polynomial in $x_1, \dots, x_n$ which does not involve $x_n$ is necessarily constant. Hence it vanishes unless $\ell(u) = \ell(v)$ and $u = w^{-1} = v$, in which case it is equal to $\varepsilon(w) = \varepsilon(v)$. Hence

$\Phi(w_0 𝔖_u, 𝔖_{v w_0}) = \varepsilon(v)\, \delta_{uv} = \langle w_0 𝔖_u, 𝔖_{v w_0} \rangle$

by (5.5). This completes the proof of (5.7). $\square$

Now let $x=\left({x}_{1},\dots ,{x}_{n}\right)$ and $y=\left({y}_{1},\dots ,{y}_{n}\right)$ be two sequences of independent variables, and let

$(5.8) \qquad \Delta = \Delta(x,y) = \prod_{i+j \le n} (x_i - y_j)$

(the "semiresultant"). We have

$(5.9) \qquad \Delta(wx, x) = \begin{cases} 0 & \text{if } w \ne w_0, \\ \varepsilon(w_0)\, a_\delta(x) & \text{if } w = w_0. \end{cases}$

For

$\Delta(wx, x) = \prod_{i+j \le n} (x_{w(i)} - x_j)$

is non-zero if and only if $w(i) \ne j$ whenever $i+j \le n$, that is to say if and only if $w(i) > n-i$ for all $i$, which forces $w = w_0$; and

$\Delta(w_0 x, x) = \prod_{i+j \le n} (x_{n+1-i} - x_j) = \prod_{j < k} (x_k - x_j) = \varepsilon(w_0)\, a_\delta(x).$
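Since (5.9) is an identity of polynomials, it can be spot-checked by substituting distinct numbers for $x_1, \dots, x_n$; a quick Python sketch for $n = 3$ (illustrative only, not part of the original text):

```python
from itertools import permutations
from math import prod

n = 3
xs = [5, 7, 11]        # distinct sample values for x1, x2, x3

def Delta(u, v):
    """Delta(u, v) = product of (u_i - v_j) over 1-based i, j with i + j <= n."""
    return prod(u[i - 1] - v[j - 1]
                for i in range(1, n + 1) for j in range(1, n + 1) if i + j <= n)

# a_delta = prod_{i<j} (x_i - x_j), the Vandermonde determinant
a_delta = prod(xs[i] - xs[j] for i in range(n) for j in range(i + 1, n))

w0 = (3, 2, 1)
for w in permutations((1, 2, 3)):
    wx = [xs[w[i] - 1] for i in range(n)]    # (x_{w(1)}, x_{w(2)}, x_{w(3)})
    if w == w0:
        assert Delta(wx, xs) == -a_delta     # epsilon(w0) = (-1)^3 = -1 for n = 3
    else:
        assert Delta(wx, xs) == 0            # some factor x_{w(i)} - x_j vanishes
```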

The polynomial $\Delta \left(x,y\right)$ is a linear combination of the monomials ${x}^{\alpha },\alpha \subset \delta ,$ with coefficients in $ℤ\left[{y}_{1},\dots ,{y}_{n}\right]={P}_{n}\left(y\right),$ hence by (4.11) can be written uniquely in the form

$\Delta(x,y) = \sum_{w \in S_n} 𝔖_w(x)\, T_w(y)$

with ${T}_{w}\left(y\right)\in {P}_{n}\left(y\right)\text{.}$ By (5.5) we have

$T_w(y) = \langle \Delta(x,y),\, w_0 𝔖_{w w_0}(-x) \rangle_x$

where the suffix $x$ means that the scalar product is taken in the $x$ variables. Hence

$(1) \qquad T_w(y) = \partial_{w_0}\big( \Delta(x,y)\, w_0(𝔖_{w w_0}(-x)) \big) = a_\delta(x)^{-1} \sum_{v \in S_n} \varepsilon(v)\, \Delta(vx, y)\, v w_0(𝔖_{w w_0}(-x))$

by (2.10), where $v\in {S}_{n}$ acts by permuting the ${x}_{i}\text{.}$

Now this expression (1) must be independent of ${x}_{1},\dots ,{x}_{n}\text{.}$ Hence we may set ${x}_{i}={y}_{i}$ $\left(1\le i\le n\right)\text{.}$ But then (5.9) shows that the only non-zero term in the sum (1) is that corresponding to $v={w}_{0},$ and we obtain

$T_w(y) = 𝔖_{w w_0}(-y).$

Hence we have proved

(5.10) ("Cauchy formula")

$\Delta(x,y) = \sum_{w \in S_n} 𝔖_w(x)\, 𝔖_{w w_0}(-y).$
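One can sanity-check (5.10) for $n = 3$ by evaluating both sides at integer points. The sketch below (illustrative only; the dict-based polynomial representation and names are ad hoc) rebuilds the Schubert polynomials and compares the two sides:

```python
from math import prod

def add_term(poly, mono, coeff):
    c = poly.get(mono, 0) + coeff
    if c:
        poly[mono] = c
    else:
        poly.pop(mono, None)

def divdiff(i, f):
    """partial_i f = (f - s_i f)/(x_i - x_{i+1}); i is 0-based."""
    out = {}
    for m, c in f.items():
        a, b = m[i], m[i + 1]
        if a == b:
            continue
        lo, hi, sign = (b, a, 1) if a > b else (a, b, -1)
        for j in range(lo, hi):
            mm = list(m)
            mm[i], mm[i + 1] = j, lo + hi - 1 - j
            add_term(out, tuple(mm), sign * c)
    return out

def length(w):
    return sum(1 for i in range(3) for j in range(i + 1, 3) if w[i] > w[j])

# build all Schubert polynomials of S_3 from S_{w0} = x^delta
w0 = (3, 2, 1)
schub = {w0: {(2, 1, 0): 1}}
for ln in (3, 2, 1):
    for w in [u for u in schub if length(u) == ln]:
        for i in range(2):
            if w[i] > w[i + 1]:
                v = list(w)
                v[i], v[i + 1] = v[i + 1], v[i]
                schub[tuple(v)] = divdiff(i, schub[w])

def evalp(f, vals):
    """Evaluate a dict-polynomial at the numbers vals."""
    return sum(c * prod(v ** e for v, e in zip(vals, m)) for m, c in f.items())

xs, ys = (2, 3, 5), (7, 11, 13)
# left side: Delta(x, y) = prod over i + j <= 3 of (x_i - y_j)
lhs = prod(xs[i - 1] - ys[j - 1]
           for i in range(1, 4) for j in range(1, 4) if i + j <= 3)
# right side: sum over w of S_w(x) S_{w w0}(-y); w w0 reverses one-line notation
rhs = sum(evalp(schub[w], xs) * evalp(schub[w[::-1]], tuple(-y for y in ys))
          for w in schub)
assert lhs == rhs
```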

Remark. Let $n=r+s$ where $r,s\ge 1,$ and regard ${S}_{r}×{S}_{s}$ as a subgroup of ${S}_{n},$ with ${S}_{r}$ permuting $1,2,\dots ,r$ and ${S}_{s}$ permuting $r+1,\dots ,r+s\text{.}$ Let ${w}_{0}^{\left(r\right)},{w}_{0}^{\left(s\right)}$ be the longest elements of ${S}_{r},{S}_{s}$ respectively, and let $u={w}_{0}^{\left(r\right)}×{w}_{0}^{\left(s\right)}\text{.}$ If $w\in {S}_{n},$ we have ${\partial }_{u}{𝔖}_{w}={𝔖}_{wu}$ if $\ell \left(wu\right)=\ell \left(w\right)-\ell \left(u\right),$ that is to say if $wu$ is Grassmannian (with its only descent at $r\text{),}$ and ${\partial }_{u}{𝔖}_{w}=0$ otherwise. Hence by applying ${\partial }_{u}$ to the $x\text{-variables}$ in (5.10) we obtain

$\partial_u \Delta(x,y) = \sum_{v \in G_{r,s}} 𝔖_v(x)\, 𝔖_{v u w_0}(-y)$

where ${G}_{r,s}\subset {S}_{n}$ is the set of Grassmannian permutations $v$ with descent at $r$ (i.e. $v(i) < v(i+1)$ if $i \ne r$). On the other hand, it is easily verified that

$\partial_u \Delta(x,y) = \prod_{i=1}^{r} \prod_{j=1}^{s} (x_i - y_j)$

and that $v\prime =vu{w}_{0}$ is the permutation

$( v(r+1), \dots, v(r+s),\, v(1), \dots, v(r) )$

hence is also Grassmannian, with descent at $s\text{.}$

The shape of $v$ is

$\lambda = \lambda(v) = ( v(r) - r, \dots, v(2) - 2, v(1) - 1 )$

and the shape of $v\prime$ is say

$\mu' = \lambda(v') = ( v(r+s) - s, \dots, v(r+2) - 2, v(r+1) - 1 ).$

The relation between these two partitions is

$\mu_i = s - \lambda_{r+1-i} \qquad (1 \le i \le r),$

that is to say $\lambda$ is the complement, say $\stackrel{ˆ}{\mu },$ of $\mu$ in the rectangle $\left({s}^{r}\right)$ with $r$ rows and $s$ columns. Hence, replacing each ${y}_{j}$ by $-{y}_{j},$ we obtain from (5.10) by operating with ${\partial }_{u}$ on both sides and using (4.8)

$(5.11) \qquad \prod_{i=1}^{r} \prod_{j=1}^{s} (x_i + y_j) = \sum_\mu s_{\hat\mu}(x)\, s_{\mu'}(y)$

summed over all $\mu \subset \left({s}^{r}\right),$ where $\stackrel{ˆ}{\mu }$ is the complement of $\mu$ in $\left({s}^{r}\right)\text{.}$ This is one version of the usual Cauchy identity [Mac1979, Chapter I, (4.3)'].
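A quick numerical check of (5.11) for $r = s = 2$, computing the Schur polynomials by the bialternant formula $s_\lambda = \det(x_j^{\lambda_i + n - i}) / \det(x_j^{n - i})$ with exact rational arithmetic (an illustrative sketch, not part of the original text):

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def det(mat):
    """Leibniz-formula determinant (fine for the tiny matrices used here)."""
    n = len(mat)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        term = Fraction(1)
        for i in range(n):
            term *= mat[i][p[i]]
        total += sign * term
    return total

def schur(lam, xs):
    """s_lambda(xs) via the bialternant formula (xs must be distinct)."""
    n = len(xs)
    lam = tuple(lam) + (0,) * (n - len(lam))
    num = det([[Fraction(x) ** (lam[i] + n - 1 - i) for x in xs] for i in range(n)])
    den = det([[Fraction(x) ** (n - 1 - i) for x in xs] for i in range(n)])
    return num / den

r = s = 2
box = [(), (1,), (2,), (1, 1), (2, 1), (2, 2)]   # partitions mu inside (s^r)

def complement(mu):
    """mu-hat: the complement of mu in the r x s rectangle."""
    full = tuple(mu) + (0,) * (r - len(mu))
    return tuple(s - full[r - 1 - i] for i in range(r))

def conjugate(mu):
    if not mu:
        return ()
    return tuple(sum(1 for p in mu if p >= i) for i in range(1, mu[0] + 1))

xs, ys = (2, 3), (5, 7)
lhs = prod(x + y for x in xs for y in ys)
rhs = sum(schur(complement(mu), xs) * schur(conjugate(mu), ys) for mu in box)
assert lhs == rhs
```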

Let $(𝔖^w)_{w \in S_n}$ be the $\Lambda_n$-basis of $P_n$ dual to the basis $(𝔖_w)$ relative to the scalar product (5.2). By (5.3) and (5.5) we have

$\langle 𝔖_u, w_0 𝔖_{v w_0} \rangle = \varepsilon(v w_0)\, \delta_{uv},$

or equivalently

$\langle 𝔖_u(x),\, w_0 𝔖_{v w_0}(-x) \rangle = \delta_{uv},$

which shows that

$(5.12) \qquad 𝔖^w(x) = w_0 𝔖_{w w_0}(-x)$

for all $w\in {S}_{n}\text{.}$ From (5.10) it follows that

$\Delta(x,y) = \sum_{w \in S_n} 𝔖_w(x)\, w_0 𝔖^w(y)$

or equivalently

$(5.13) \qquad \prod_{1 \le i < j \le n} (x_i - y_j) = \sum_{w \in S_n} 𝔖_w(x)\, 𝔖^w(y).$

Let ${\left({x}_{\beta }\right)}_{\beta \subset \delta }$ be the basis dual to ${\left({x}^{\alpha }\right)}_{\alpha \subset \delta }\text{.}$ If

$𝔖_u = \sum_\alpha a_{u\alpha}\, x^\alpha, \qquad 𝔖^v = \sum_\beta b_{v\beta}\, x_\beta,$

then by taking scalar products we have

$\sum_\alpha a_{u\alpha}\, b_{v\alpha} = \delta_{uv}$

and therefore also

$\sum_w a_{w\alpha}\, b_{w\beta} = \delta_{\alpha\beta},$

so that

$\sum_{w \in S_n} 𝔖_w(x)\, 𝔖^w(y) = \sum_{\alpha,\beta} \Big( \sum_w a_{w\alpha}\, b_{w\beta} \Big) x^\alpha y_\beta = \sum_\alpha x^\alpha y_\alpha.$

From (5.13) it follows that $y_\alpha$ is the coefficient of $x^\alpha$ in $\prod_{1 \le i < j \le n} (x_i - y_j)$, and hence we find

$(5.14) \qquad x_\alpha = (-1)^{|\beta|} \prod_{i=1}^{n-1} e_{\beta_i}(x_{i+1}, \dots, x_n)$

where $\beta =\delta -\alpha \text{.}$
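The duality underlying (5.14) can be checked exhaustively for $n = 3$, where $\delta = (2,1,0)$ and there are six monomials $x^\alpha$ with $\alpha \subset \delta$. An illustrative sketch (names ad hoc, not part of the original text):

```python
from itertools import combinations

def add_term(poly, mono, coeff):
    c = poly.get(mono, 0) + coeff
    if c:
        poly[mono] = c
    else:
        poly.pop(mono, None)

def mul(f, g):
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            add_term(out, tuple(a + b for a, b in zip(m1, m2)), c1 * c2)
    return out

def divdiff(i, f):
    """partial_i f = (f - s_i f)/(x_i - x_{i+1}); i is 0-based."""
    out = {}
    for m, c in f.items():
        a, b = m[i], m[i + 1]
        if a == b:
            continue
        lo, hi, sign = (b, a, 1) if a > b else (a, b, -1)
        for j in range(lo, hi):
            mm = list(m)
            mm[i], mm[i + 1] = j, lo + hi - 1 - j
            add_term(out, tuple(mm), sign * c)
    return out

def scalar(f, g):
    """<f, g> = partial_{w0}(f g), reduced word w0 = s1 s2 s1."""
    h = mul(f, g)
    for i in (0, 1, 0):
        h = divdiff(i, h)
    return h

delta = (2, 1, 0)

def elem(r, positions):
    """e_r in the variables x_k for k in positions (0-based), as a dict-polynomial."""
    out = {}
    for comb in combinations(positions, r):
        m = [0, 0, 0]
        for k in comb:
            m[k] = 1
        add_term(out, tuple(m), 1)
    return out

def dual(alpha):
    """x_alpha = (-1)^{|beta|} e_{beta_1}(x2, x3) e_{beta_2}(x3), beta = delta - alpha."""
    beta = [d - a for d, a in zip(delta, alpha)]
    out = {(0, 0, 0): (-1) ** sum(beta)}
    for i in range(2):
        out = mul(out, elem(beta[i], range(i + 1, 3)))
    return out

monos = [(a1, a2, 0) for a1 in range(3) for a2 in range(2)]  # alpha contained in delta
one = {(0, 0, 0): 1}
for a in monos:
    for b in monos:
        assert scalar({a: 1}, dual(b)) == (one if a == b else {})
```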

Let

$C(x,y) = \varepsilon(w_0)\, \Delta(w_0 x, y) = \prod_{1 \le i < j \le n} (y_i - x_j).$

If $f\left(x\right)\in {H}_{n}$ (4.11), let $f\left(y\right)$ denote the polynomial in ${y}_{1},\dots ,{y}_{n}$ obtained by replacing each ${x}_{i}$ by ${y}_{i}\text{.}$ Then we have

$(5.15) \qquad \langle f(x),\, C(x,y) \rangle_x = f(y),$

where as before the suffix $x$ means that the scalar product is taken in the $x$ variables. In other words, $C(x,y)$ is a "reproducing kernel" for the scalar product.

Proof. From (5.10) we have

$C(x,y) = \sum_{w \in S_n} \varepsilon(w_0)\, 𝔖_w(w_0 x)\, 𝔖_{w w_0}(-y).$

Hence by (5.5)

$\langle C(x,y),\, 𝔖_{w w_0}(x) \rangle_x = \varepsilon(w w_0)\, 𝔖_{w w_0}(-y) = 𝔖_{w w_0}(y).$

Hence (5.15) is true for all Schubert polynomials $𝔖_u$, $u \in S_n$. Since the scalar product is $\Lambda_n$-linear, it follows from (5.6) that (5.15) is true for all $f \in H_n$. $\square$
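The reproducing property (5.15) can be tested directly for $n = 3$ by adjoining $y_1, y_2, y_3$ as inert variables that the $\partial_i$ do not touch. An illustrative sketch (names ad hoc, not part of the original text):

```python
# polynomials in x1,x2,x3,y1,y2,y3 as dicts over exponent tuples of length 6
# (positions 0-2 hold the x exponents, 3-5 the y exponents);
# divided differences act on the x positions only

def add_term(poly, mono, coeff):
    c = poly.get(mono, 0) + coeff
    if c:
        poly[mono] = c
    else:
        poly.pop(mono, None)

def mul(f, g):
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            add_term(out, tuple(a + b for a, b in zip(m1, m2)), c1 * c2)
    return out

def divdiff(i, f):
    """partial_i f = (f - s_i f)/(x_i - x_{i+1}); i in {0, 1}."""
    out = {}
    for m, c in f.items():
        a, b = m[i], m[i + 1]
        if a == b:
            continue
        lo, hi, sign = (b, a, 1) if a > b else (a, b, -1)
        for j in range(lo, hi):
            mm = list(m)
            mm[i], mm[i + 1] = j, lo + hi - 1 - j
            add_term(out, tuple(mm), sign * c)
    return out

def scalar_x(f, g):
    """<f, g>_x = partial_{w0}(f g) in the x variables (reduced word s1 s2 s1)."""
    h = mul(f, g)
    for i in (0, 1, 0):
        h = divdiff(i, h)
    return h

def var(k):
    m = [0] * 6
    m[k] = 1
    return tuple(m)

def diff(k, l):
    """The linear polynomial (variable k) - (variable l)."""
    return {var(k): 1, var(l): -1}

# C(x, y) = prod_{i<j} (y_i - x_j) = (y1 - x2)(y1 - x3)(y2 - x3) for n = 3
C = mul(mul(diff(3, 1), diff(3, 2)), diff(4, 2))

def to_y(f):
    """f(y): move the x exponents into the y slots."""
    return {(0, 0, 0) + m[:3]: c for m, c in f.items()}

# check <f(x), C(x, y)>_x = f(y) on the Schubert basis of H_3
basis = [
    {(0, 0, 0, 0, 0, 0): 1},                         # 1
    {(1, 0, 0, 0, 0, 0): 1},                         # x1
    {(1, 0, 0, 0, 0, 0): 1, (0, 1, 0, 0, 0, 0): 1},  # x1 + x2
    {(1, 1, 0, 0, 0, 0): 1},                         # x1 x2
    {(2, 0, 0, 0, 0, 0): 1},                         # x1^2
    {(2, 1, 0, 0, 0, 0): 1},                         # x1^2 x2
]
for f in basis:
    assert scalar_x(f, C) == to_y(f)
```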

Let ${\theta }_{yx}$ be the homomorphism that replaces each ${y}_{i}$ by ${x}_{i}\text{.}$ Then (5.15) can be restated in the form

$(5.15') \qquad \theta_{yx} \langle f(x),\, C(x,y) \rangle_x = f(x)$

for all $f\in {H}_{n}\text{.}$

Now let $z = (z_1, \dots, z_n)$ be a third set of variables and consider

$(1) \qquad \langle C(x,y),\, \partial_u v^{-1} C(x,z) \rangle_x$

for $u,v\in {S}_{n},$ where ${\partial }_{u}$ and ${v}^{-1}$ act on the $x$ variables. By (5.3) this is equal to

$(2) \qquad \varepsilon(v) \langle C(x,z),\, v\, \partial_{u^{-1}} C(x,y) \rangle_x,$

and by (5.15') we have

$(3) \qquad \theta_{yx} \langle C(x,y),\, \partial_u v^{-1} C(x,z) \rangle_x = \partial_u v^{-1} C(x,z),$

$(4) \qquad \theta_{zx} \langle C(x,z),\, v\, \partial_{u^{-1}} C(x,y) \rangle_x = v\, \partial_{u^{-1}} C(x,y).$

Since ${\theta }_{yx}$ and ${\theta }_{zx}$ commute, it follows from (1)-(4) that

$\theta_{yx}\, v\, \partial_{u^{-1}} C(x,y) = \varepsilon(v)\, \theta_{zx}\, \partial_u v^{-1} C(x,z) = \varepsilon(v)\, \theta_{yx}\, \partial_u v^{-1} C(x,y).$

Hence we have

$(5.16) \qquad \theta( v\, \partial_{u^{-1}}\, w_0 \Delta ) = \varepsilon(v)\, \theta( \partial_u\, v^{-1} w_0 \Delta )$

for all $u,v\in {S}_{n},$ where $\Delta =\Delta \left(x,y\right)$ and $\theta ={\theta }_{yx}\text{.}$

Let ${E}_{n}$ denote the algebra of operators $\varphi$ of the form

$\varphi = \sum_{w \in S_n} \varphi_w\, w,$

with coefficients $\varphi_w \in Q_n = \mathbb{Q}(x_1, \dots, x_n)$. For such a $\varphi$ we have

$(5.17) \qquad \varphi_w = \varepsilon(w_0)\, a_\delta^{-1}\, \theta(\varphi(w^{-1} w_0 \Delta))$

for all $w\in {S}_{n},$ where $\varphi$ and ${w}^{-1}{w}_{0}$ act on the $x$ variables in $\Delta \text{.}$

For $\theta(\varphi(w^{-1} w_0 \Delta)) = \sum_{u \in S_n} \varphi_u\, \theta(u w^{-1} w_0 \Delta)$, and by (5.8) $\theta(u w^{-1} w_0 \Delta) = \Delta(u w^{-1} w_0 x, x)$, which by (5.9) is zero if $u \ne w$, and is equal to $\varepsilon(w_0)\, a_\delta$ if $u = w$.

Let $u\in {S}_{n},$ and let $\left({a}_{1},\dots ,{a}_{p}\right)$ be a reduced word for $u,$ so that ${\partial }_{u}={\partial }_{{a}_{1}}\dots {\partial }_{{a}_{p}}\text{.}$ Since ${\partial }_{a}={\left({x}_{a}-{x}_{a+1}\right)}^{-1}\left(1-{s}_{a}\right)$ for each $a\ge 1,$ it follows that we may write

$(5.18) \qquad \partial_u = \varepsilon(w_0)\, a_\delta^{-1} \sum_{v \le u} \alpha_{uv}\, v,$

where $v\le u$ means that $v$ is of the form ${s}_{{b}_{1}}\dots {s}_{{b}_{q}},$ where $\left({b}_{1},\dots ,{b}_{q}\right)$ is a subword of $\left({a}_{1},\dots ,{a}_{p}\right)\text{.}$

The coefficients ${\alpha }_{uv}$ in (5.18) are polynomials, for it follows from (5.16) and (5.17) that

$(5.19) \qquad \alpha_{uv} = \theta(\partial_u(v^{-1} w_0 \Delta)) = \varepsilon(v)\, \theta(v\, \partial_{u^{-1}}\, w_0 \Delta).$

(5.20) For all $f\in {P}_{n}$ we have

$\theta(\partial_u(\Delta f)) = \begin{cases} w_0 f & \text{if } u = w_0, \\ 0 & \text{otherwise.} \end{cases}$

Proof. From (5.18) we have

$\theta(\partial_u(\Delta f)) = \varepsilon(w_0)\, a_\delta^{-1} \sum_{v \le u} \alpha_{uv}\, v(f)\, \theta(v\Delta).$

By (5.9) this is zero if $u \ne w_0$, and if $u = w_0$ then by (2.10)

$\theta(\partial_{w_0}(\Delta f)) = a_\delta^{-1} \sum_{w \in S_n} \varepsilon(w)\, w(f)\, \theta(w\Delta) = a_\delta^{-1}\, \varepsilon(w_0)\, w_0(f)\, \varepsilon(w_0)\, a_\delta = w_0(f)$

by (5.9) again. $\square$
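A direct check of (5.20) for $n = 3$, with $\theta$ implemented as the substitution $y_i = x_i$ (an illustrative sketch; names ad hoc, not part of the original text):

```python
# polynomials in x1,x2,x3,y1,y2,y3 over exponent tuples of length 6
# (positions 0-2 are the x exponents, 3-5 the y exponents)

def add_term(poly, mono, coeff):
    c = poly.get(mono, 0) + coeff
    if c:
        poly[mono] = c
    else:
        poly.pop(mono, None)

def mul(f, g):
    out = {}
    for m1, c1 in f.items():
        for m2, c2 in g.items():
            add_term(out, tuple(a + b for a, b in zip(m1, m2)), c1 * c2)
    return out

def divdiff(i, f):
    """partial_i f = (f - s_i f)/(x_i - x_{i+1}); i in {0, 1}, acting on the x's."""
    out = {}
    for m, c in f.items():
        a, b = m[i], m[i + 1]
        if a == b:
            continue
        lo, hi, sign = (b, a, 1) if a > b else (a, b, -1)
        for j in range(lo, hi):
            mm = list(m)
            mm[i], mm[i + 1] = j, lo + hi - 1 - j
            add_term(out, tuple(mm), sign * c)
    return out

def dop(word, f):
    """Apply partial_{a1} ... partial_{ap} (0-based word; rightmost factor first)."""
    for i in reversed(word):
        f = divdiff(i, f)
    return f

def theta(f):
    """theta: substitute y_i = x_i (fold the y exponents onto the x exponents)."""
    out = {}
    for m, c in f.items():
        add_term(out, (m[0] + m[3], m[1] + m[4], m[2] + m[5], 0, 0, 0), c)
    return out

def w0_x(f):
    """w0 acting on the x variables (reverse the x exponents)."""
    return {(m[2], m[1], m[0]) + m[3:]: c for m, c in f.items()}

def var(k):
    m = [0] * 6
    m[k] = 1
    return tuple(m)

def diff(k, l):
    return {var(k): 1, var(l): -1}

# Delta(x, y) = (x1 - y1)(x1 - y2)(x2 - y1) for n = 3
Delta = mul(mul(diff(0, 3), diff(0, 4)), diff(1, 3))

f = {(2, 0, 0, 0, 0, 0): 1}        # f = x1^2, say
full = mul(Delta, f)
assert theta(dop((0, 1, 0), full)) == w0_x(f)   # u = w0 (word s1 s2 s1): w0 f = x3^2
assert theta(dop((0, 1), full)) == {}           # u = s1 s2 != w0: zero
assert theta(dop((1, 0), full)) == {}           # u = s2 s1 != w0: zero
assert theta(full) == {}                        # u = identity: theta(Delta f) = 0
```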

The matrix of coefficients $\left({\alpha }_{uv}\right)$ in (5.18) is triangular with respect to the ordering $\le ,$ and one sees easily that the diagonal entries ${\alpha }_{uu}$ are non-zero (they are products in which each factor is of the form ${x}_{i}-{x}_{j}\text{).}$ Hence we may invert the equations (5.18), say

$(5.21) \qquad u = \sum_{v \le u} \beta_{uv}\, \partial_v,$

and thus we can express any $\varphi \in {E}_{n}$ as a linear combination of the operators ${\partial }_{w}\text{.}$ Explicitly, we have

$(5.22) \qquad \varphi = \sum_{w \in S_n} \theta(\varphi(\partial_{w^{-1} w_0} \Delta))\, \partial_w.$

Proof. By linearity we may assume that $\varphi = f \partial_u$ with $f \in Q_n$. Then

$\theta(\varphi(\partial_{w^{-1} w_0} \Delta)) = f\, \theta(\partial_u \partial_{w^{-1} w_0} \Delta).$

Now by (4.2) $\partial_u \partial_{w^{-1} w_0}$ is either zero or equal to $\partial_{u w^{-1} w_0}$, and by (5.20) $\theta(\partial_{u w^{-1} w_0} \Delta)$ is zero if $w \ne u$, and is equal to $1$ if $w = u$. Hence the right-hand side of (5.22) is equal to $f \partial_u = \varphi$, as required. $\square$

In particular, it follows from (5.22) and (5.21) that

$(5.23) \qquad \beta_{uv} = \theta(u\, \partial_{v^{-1} w_0} \Delta),$

hence is a polynomial.

The coefficients ${\alpha }_{uv},{\beta }_{uv}$ in (5.18) and (5.23) satisfy the following relations:

(5.24)
(i) $\beta_{uv} = \varepsilon(uv)\, \alpha_{v w_0, u w_0}$; (ii) $\alpha_{u^{-1}, v^{-1}} = v^{-1}(\alpha_{uv})$; (iii) $\alpha_{\bar{u}, \bar{v}} = \varepsilon(u w_0)\, w_0(\alpha_{uv})$;

for all $u, v \in S_n$, where $\bar{u} = w_0 u w_0$, $\bar{v} = w_0 v w_0$.

Proof. (i) By (5.23) and (2.12) we have

$\beta_{uv} = \varepsilon(v^{-1} w_0)\, \theta(u w_0\, \partial_{w_0 v^{-1}}\, w_0 \Delta),$

which by (5.16) is equal to

$\varepsilon(v^{-1} w_0)\, \varepsilon(u w_0)\, \theta(\partial_{v w_0}\, w_0 u^{-1} w_0\, \Delta) = \varepsilon(uv)\, \alpha_{v w_0, u w_0}$

by (5.19).

(ii) From (5.18) we have

$\theta(v\, \partial_{u^{-1}}\, w_0 \Delta) = \varepsilon(w_0)\, v(a_\delta^{-1}) \sum_w v(\alpha_{u^{-1}, w^{-1}})\, \theta(v w^{-1} w_0 \Delta) = \varepsilon(v)\, v(\alpha_{u^{-1}, v^{-1}})$

by (5.9), and likewise

$\theta(\partial_u\, v^{-1} w_0 \Delta) = \varepsilon(w_0)\, a_\delta^{-1} \sum_w \alpha_{uw}\, \theta(w v^{-1} w_0 \Delta) = \alpha_{uv},$

again by (5.9). Hence (ii) follows from (5.16).

(iii) Since $\partial_{\bar{u}} = \varepsilon(u)\, w_0\, \partial_u\, w_0$ (2.12), we have

$\sum_v \alpha_{\bar{u}, \bar{v}}\, \bar{v} = \varepsilon(u w_0)\, w_0 \Big( \sum_v \alpha_{uv}\, v \Big) w_0 = \varepsilon(u w_0) \sum_v w_0(\alpha_{uv})\, \bar{v},$

and hence $\alpha_{\bar{u}, \bar{v}} = \varepsilon(u w_0)\, w_0(\alpha_{uv})$. $\square$

(5.25) Let $E_n'$ be the subalgebra of operators $\varphi \in E_n$ such that $\varphi(P_n) \subset P_n$. Then $E_n'$ is a free $P_n$-module with basis $(\partial_w)_{w \in S_n}$.

Proof. If $\varphi = \sum_{w \in S_n} \varphi_w \partial_w \in E_n'$, then by (5.22)

$\varphi_w = \theta(\varphi(\partial_{w^{-1} w_0} \Delta)) \in P_n.$

On the other hand, the $\partial_w$ are a $Q_n$-basis of $E_n$, and hence are linearly independent over $P_n$. $\square$

## Notes and References

This is a typed excerpt of the book Notes on Schubert Polynomials by I. G. Macdonald.