The Schubert Calculus and Eigenvalue Inequalities for Sums of Hermitian Matrices

Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia

Last updated: 14 October 2014

The Symmetric Algebra and Schur Functions

The formulae and rules making up the Schubert calculus appear in numerous other settings, many of which provide additional computational rules. These other settings include:

1) Representation theory of the symmetric group;
2) Representation theory of the full linear group;
3) Multiplication of Schur functions in the algebra of symmetric polynomials.

In this section we consider topic 3), show the relationships between it and the Schubert calculus, and derive some new results which give another characterization of the set $V_d^n$.

Partitions, Young Diagrams

By a partition we mean a sequence of integers $\lambda=(\lambda_1,\lambda_2,\ldots)$ satisfying $\lambda_1\ge\lambda_2\ge\cdots\ge 0$. We say $\lambda$ is a partition of $w$, or has weight $w$, if $\sum_{i\ge 1}\lambda_i=w$. The length of $\lambda$ is the number of nonzero integers among $\lambda_1,\lambda_2,\ldots$. If $\lambda$ has length $r$, we write $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_r)$. We let $\Pi$ denote the set of all partitions, and $\Pi_d$ the set of partitions of length at most $d$. The set $\Pi$ may be given a partial ordering $\le$, defined as follows: $\mu\le\lambda \iff \mu_i\le\lambda_i$ for $i=1,2,3,\ldots$.

The diagram $Y(\lambda)$ associated with a partition $\lambda$ is a left-justified array of squares with $\lambda_i$ squares in the $i$th row.

Example: if $\lambda=(4,2,1)$, then $Y(\lambda)$ has rows of 4, 2, and 1 squares. [Diagram omitted.] If $\mu$ and $\lambda$ are partitions with $\mu\le\lambda$, then the skew-diagram $Y(\lambda-\mu)$ is obtained by removing $Y(\mu)$ from $Y(\lambda)$. For example, if $\lambda=(5,4,3,1,1)$ and $\mu=(4,3,1,1,0)$, then $Y(\lambda-\mu)$ consists of the squares of $Y(\lambda)$ not in $Y(\mu)$. [Diagram omitted.]

Lastly, the partition dual to $\lambda$, denoted by $\lambda'$, is the partition whose diagram is obtained from $Y(\lambda)$ by interchanging rows and columns. The "jump" function $\delta_x$ (with $\delta_x(j)=1$ if $j\le x$ and $0$ otherwise) can be used to express the components of $\lambda'$ in terms of the components of $\lambda$: $\lambda'_j=\sum_{i\ge 1}\delta_{\lambda_i}(j)$.
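The dual partition is easy to compute mechanically; the following minimal Python sketch (the function name `conjugate` is ours) implements $\lambda'_j=\sum_{i\ge 1}\delta_{\lambda_i}(j)$ by counting, for each $j$, the parts of size at least $j$:

```python
def conjugate(la):
    """Dual (conjugate) partition: lambda'_j = #{i : la_i >= j},
    i.e. the column lengths of the diagram Y(la)."""
    if not la:
        return []
    return [sum(1 for part in la if part >= j) for j in range(1, la[0] + 1)]

print(conjugate([4, 2, 1]))             # [3, 2, 1, 1]
print(conjugate(conjugate([4, 2, 1])))  # conjugation is an involution
```

Note that applying `conjugate` twice returns the original partition, reflecting the fact that transposing the diagram is an involution.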

Symmetric functions

Let $A$ denote the ring of symmetric polynomials (also called symmetric functions) in an infinite number of variables $x_1,x_2,\ldots$ with integer coefficients (properly called symmetric formal power series). We review some basic results about this ring (see [Sta1976] and [Lid1950]).

1) Let $a_r=\sum_{\sigma\in Q_{r,\infty}}\prod_{i=1}^r x_{\sigma(i)}$, where $Q_{r,\infty}$ denotes the set of all strictly increasing sequences of $r$ integers chosen from $1,2,\ldots$; $a_r$ is called the $r$th elementary symmetric function. The "fundamental theorem of symmetric functions" states that the elementary symmetric functions are algebraically independent and that $A=\mathbb{Z}[a_1,a_2,\ldots]$, i.e. every symmetric function in $A$ may be expressed uniquely as a polynomial in $a_1,a_2,\ldots$ with integer coefficients.
2) Let $G_{r,\infty}$ denote the set of all non-decreasing sequences of $r$ integers chosen from $1,2,\ldots$. Define $h_r=\sum_{\sigma\in G_{r,\infty}}\prod_{i=1}^r x_{\sigma(i)}$; $h_r$ is called the $r$th complete homogeneous product sum. It is the sum of all monomials in $A$ of degree $r$.
3) Let $S_r=\sum_{i\ge 1}x_i^r$; $S_r$ is called the $r$th power sum symmetric function. By convention, $a_0=h_0=S_0=1$, and $a_r=h_r=S_r=0$ for $r<0$.

The ar and Sr are connected by the classical

Newton's identities:

$S_1-a_1=0,$
$S_2-S_1a_1+2a_2=0,$
$S_3-S_2a_1+S_1a_2-3a_3=0,$
$\ \ \vdots$
$S_N-S_{N-1}a_1+\cdots+(-1)^N N a_N=0.$

If one solves for the $a_r$'s in terms of the $S_r$'s, one obtains $r!\cdot a_r=\det(Z_r)$, where
$$Z_r=\begin{pmatrix} S_1 & 1 & 0 & \cdots & 0 \\ S_2 & S_1 & 2 & \cdots & 0 \\ S_3 & S_2 & S_1 & 3 & \vdots \\ \vdots & & & \ddots & r-1 \\ S_r & S_{r-1} & \cdots & & S_1 \end{pmatrix} \qquad (*)$$
The $S_r$'s may be expressed in a similar fashion in terms of the $a_r$'s; hence the power sums $S_1,S_2,\ldots$ are also algebraically independent and $A=\mathbb{Z}[S_1,S_2,\ldots]$.
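Since these are polynomial identities, they can be checked numerically by specializing the variables. The sketch below (helper names are our own) verifies Newton's identities and the determinant formula $(*)$ for a sample specialization:

```python
import math
from itertools import combinations, permutations

x = [2, 3, 5, 7]   # sample values; the identities hold for any specialization

def S(r):          # r-th power sum S_r
    return sum(t ** r for t in x)

def a(r):          # r-th elementary symmetric function a_r
    return sum(math.prod(c) for c in combinations(x, r))

def det(M):        # Leibniz expansion; fine for tiny matrices
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * math.prod(M[i][p[i]] for i in range(n))
    return total

def Z(r):          # the matrix Z_r of (*): S's below the diagonal, 1..r-1 above
    return [[(S(i - j + 1) if j <= i else (i + 1 if j == i + 1 else 0))
             for j in range(r)] for i in range(r)]

# Newton's identities: S_N - S_{N-1} a_1 + ... + (-1)^N N a_N = 0
for N in range(1, 5):
    assert sum((-1) ** k * S(N - k) * a(k) for k in range(N)) \
           + (-1) ** N * N * a(N) == 0

# r! * a_r = det(Z_r)
for r in range(1, 5):
    assert math.factorial(r) * a(r) == det(Z(r))
print("Newton's identities verified")
```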

The following relations (due to Brioschi) connect the $S_r$'s and the $h_r$'s:
$S_1-h_1=0,$
$S_2+S_1h_1-2h_2=0,$
$S_3+S_2h_1+S_1h_2-3h_3=0,$
$\ \ \vdots$
$S_N+S_{N-1}h_1+\cdots+S_1h_{N-1}-Nh_N=0.$

If one solves for the $h_r$'s in terms of the $S_r$'s, one obtains
$$r!\cdot h_r=\det\begin{pmatrix} S_1 & -1 & 0 & \cdots & 0 \\ S_2 & S_1 & -2 & \cdots & 0 \\ S_3 & S_2 & S_1 & -3 & \vdots \\ \vdots & & & \ddots & -(r-1) \\ S_r & S_{r-1} & \cdots & & S_1 \end{pmatrix}=\operatorname{per}(Z_r). \qquad (**)$$
Again, the complete homogeneous product sums are algebraically independent, and $A=\mathbb{Z}[h_1,h_2,\ldots]$.
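A similar numeric check covers the Brioschi relations and the permanent formula $(**)$; `per` below is a brute-force permanent, and all helper names are our own:

```python
import math
from itertools import combinations_with_replacement, permutations

x = [2, 3, 5]      # sample values; these are polynomial identities

def S(r):          # power sums
    return sum(t ** r for t in x)

def h(r):          # complete homogeneous product sums
    return sum(math.prod(c) for c in combinations_with_replacement(x, r))

def Z(r):          # the matrix Z_r of (*)
    return [[(S(i - j + 1) if j <= i else (i + 1 if j == i + 1 else 0))
             for j in range(r)] for i in range(r)]

def per(M):        # permanent: the Leibniz sum without signs
    n = len(M)
    return sum(math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Brioschi: S_N + S_{N-1} h_1 + ... + S_1 h_{N-1} - N h_N = 0
for N in range(1, 5):
    assert sum(S(N - k) * h(k) for k in range(N)) - N * h(N) == 0

# r! * h_r = per(Z_r)
for r in range(1, 5):
    assert math.factorial(r) * h(r) == per(Z(r))
print("Brioschi relations verified")
```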

Another class of symmetric functions, the Schur symmetric functions, is also of particular interest. Let $\lambda$ be a partition of weight $r$. The Schur symmetric function $e_\lambda$ may be defined in numerous ways. We give the definition originally used by Schur. Let $S_r$ denote the symmetric group on $r$ symbols, and let $k^{(\lambda)}$ be the character of $S_r$ corresponding to $\lambda$, as given in Young diagram theory. The $\lambda$-immanant of an $r$-by-$r$ matrix $A=(A_{ij})$, denoted by $A^{(\lambda)}$, is defined as follows: $A^{(\lambda)}=\sum_{\sigma\in S_r}k^{(\lambda)}(\sigma)\cdot\prod_{i=1}^r A_{i,\sigma(i)}$. The Schur function $e_\lambda$ is then defined by $r!\cdot e_\lambda=Z_r^{(\lambda)}$, where $Z_r$ is the matrix of power sums defined previously. We immediately note two special cases:

1) If $\lambda=(1,1,\ldots,1)$ has weight $r$, then $A^{(\lambda)}=\det(A)$, so $e_{(1,1,\ldots,1)}=\det(Z_r)/r!=a_r$.
2) If $\lambda=(r,0,0,\ldots,0)$, then $A^{(\lambda)}=\operatorname{per}(A)$, so $e_{(r,0,\ldots,0)}=\operatorname{per}(Z_r)/r!=h_r$.
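For a small concrete instance of Schur's definition beyond these two extremes, take $r=3$ and $\lambda=(2,1)$: the character $k^{(2,1)}$ of $S_3$ is $\sigma\mapsto(\text{number of fixed points of }\sigma)-1$, and the $\lambda$-immanant of $Z_3$ divided by $3!$ should agree with the evaluation $e_{(2,1)}=h_2h_1-h_3$ of identity 4.2. A Python sketch with sample values (helper names ours):

```python
import math
from itertools import combinations_with_replacement, permutations

x = [2, 3, 5]                      # sample specialization

def S(r):                          # power sums
    return sum(t ** r for t in x)

def Z(r):                          # the matrix Z_r of (*)
    return [[(S(i - j + 1) if j <= i else (i + 1 if j == i + 1 else 0))
             for j in range(r)] for i in range(r)]

def h(r):                          # complete homogeneous product sums
    return sum(math.prod(c) for c in combinations_with_replacement(x, r))

def immanant(M, chi):
    n = len(M)
    return sum(chi(p) * math.prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# character of S_3 for lambda = (2,1): chi(sigma) = (#fixed points) - 1
chi_21 = lambda p: sum(1 for i, v in enumerate(p) if i == v) - 1

e_21 = immanant(Z(3), chi_21) // math.factorial(3)
assert e_21 == h(2) * h(1) - h(3)  # agrees with the Jacobi-Trudi evaluation
print(e_21)
```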

We list some classical theorems concerning Schur functions.

(4.1) The set $\{e_\lambda \mid \lambda\in\Pi\}$ forms a $\mathbb{Z}$-basis for the $\mathbb{Z}$-module $A$.

(4.2) (Jacobi-Trudi) If $\lambda$ is a partition of length $r$, then $e_\lambda=\det(H_{ij})$, where $(H_{ij})$ is the $r$-by-$r$ matrix given by $H_{ij}=h_{\lambda_i-i+j}$.

(4.3) (Naegelsbach, Aitken) If $\lambda$ is a partition whose dual $\lambda'$ has length $r$, then $e_\lambda=\det(A_{ij})$, where $(A_{ij})$ is the $r$-by-$r$ matrix given by $A_{ij}=a_{\lambda'_i-i+j}$, and $\lambda'$ is the dual of $\lambda$.

As the reader might infer, identities 4.2 and 4.3 were proved for a definition of Schur functions other than that given by Schur. Classically, these results were obtained in the ring $A_N$ of symmetric polynomials in $N$ variables $x_1,x_2,\ldots,x_N$. In the ring $A_N$, definitions exist for the elementary symmetric functions, complete homogeneous symmetric functions, power sums, and Schur functions which are analogous to the definitions of $a_r,h_r,S_r$, and $e_\lambda$ in the ring $A$. We will use the symbols $a_r,h_r,S_r$, and $e_\lambda$ to denote elements of $A_N$ as well as $A$.

In the ring $A_N$, another definition of $e_\lambda$ is possible (and in fact was the one used in the classical proofs of 4.2 and 4.3). Let $\lambda$ be a partition of length at most $N$, and define $\Delta_\lambda=\det(X_{ij})$, where $X_{ij}=x_j^{\lambda_i+N-i}$. We then have

(4.4) (Jacobi) $e_\lambda=\dfrac{\Delta_\lambda}{\Delta_{(0,0,\ldots,0)}}$.

The quotients $\Delta_\lambda/\Delta_{(0,0,\ldots,0)}$ are referred to by Muir as bi-alternants.
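Jacobi's bi-alternant can likewise be checked against identity 4.2 for a small case, say $N=3$ and $\lambda=(2,1,0)$, where $e_{(2,1)}=h_2h_1-h_3$. A sketch with sample values (helper names ours); note $\Delta_{(0,\ldots,0)}$ is the Vandermonde determinant, so the division is exact:

```python
import math
from itertools import combinations_with_replacement, permutations

x = [2, 3, 7]                      # distinct sample values x_1, x_2, x_3
N = len(x)
lam = (2, 1, 0)

def det(M):
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        sign = 1
        for a in range(n):
            for b in range(a + 1, n):
                if p[a] > p[b]:
                    sign = -sign
        total += sign * math.prod(M[i][p[i]] for i in range(n))
    return total

def Delta(la):                     # Delta_lambda = det(x_j ** (la_i + N - i))
    return det([[x[j] ** (la[i] + N - (i + 1)) for j in range(N)]
                for i in range(N)])

def h(r):                          # complete homogeneous product sums
    return sum(math.prod(c) for c in combinations_with_replacement(x, r))

assert Delta(lam) % Delta((0, 0, 0)) == 0      # the bi-alternant is a polynomial
assert Delta(lam) // Delta((0, 0, 0)) == h(2) * h(1) - h(3)
print(Delta(lam) // Delta((0, 0, 0)))
```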

Identity 4.4 holds in the ring $A_N$ but has no meaning for $A$. Identities 4.2 and 4.3 hold in $A_N$ as well as $A$. Statement 4.1 holds as stated for $A$; but in $A_N$, $e_\lambda=0$ if $\lambda$ has length more than $N$. However, the set $\{e_\lambda \mid \text{length}(\lambda)\le N\}$ is a basis for $A_N$.

(4.5) The mapping $h_r\mapsto a_r$ extends to a ring automorphism $D$ of $A$ for which $D(e_\lambda)=e_{\lambda'}$.


Since $A=\mathbb{Z}[h_1,h_2,\ldots]=\mathbb{Z}[a_1,a_2,\ldots]$, any bijection from $\{h_1,h_2,\ldots\}$ to $\{a_1,a_2,\ldots\}$ extends to an automorphism of $A$. From identities 4.2 and 4.3,
$$D(e_\lambda)=D(\det(h_{\lambda_i-i+j}))=\det(D(h_{\lambda_i-i+j}))=\det(a_{\lambda_i-i+j})=e_{\lambda'}.$$

The Littlewood-Richardson Rule

The Schur functions constitute a $\mathbb{Z}$-basis for $A$; hence any product $e_\lambda\cdot e_\mu$ may be expressed as a sum $\sum_{\nu\in\Pi}c_{\lambda\mu\nu}\cdot e_\nu$. There is a certain well-studied rule for determining the integers $c_{\lambda\mu\nu}$. Before describing the rule, we make a definition.

Let the sequence consisting of $\mu_1$ 1's, $\mu_2$ 2's, $\ldots$, $\mu_k$ $k$'s be denoted by $(1^{\mu_1},2^{\mu_2},\ldots,k^{\mu_k})$. Suppose $(\alpha_1,\alpha_2,\ldots)$ is a rearrangement of the sequence $(1^{\mu_1},2^{\mu_2},\ldots)$ such that for any pair of indices $i,i+1$ and any $j$, the number of $i$'s appearing among $\alpha_1,\alpha_2,\ldots,\alpha_j$ is not less than the number of $(i+1)$'s appearing among $\alpha_1,\alpha_2,\ldots,\alpha_j$. Then $(\alpha_1,\alpha_2,\ldots)$ is called a lattice permutation of $(1^{\mu_1},2^{\mu_2},\ldots)$.

Example: $(1,2,3,1,4,2)$ is a lattice permutation of $(1^2,2^2,3,4)$, but $(1,2,3,2,1,4)$ is not.
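The prefix condition is straightforward to test mechanically; a minimal sketch (the name `is_lattice` is ours):

```python
def is_lattice(seq):
    """A sequence is a lattice permutation if in every prefix each symbol i
    occurs at least as often as i+1."""
    counts = {}
    for s in seq:
        counts[s] = counts.get(s, 0) + 1
        if s > 1 and counts[s] > counts.get(s - 1, 0):
            return False
    return True

print(is_lattice([1, 2, 3, 1, 4, 2]))   # True
print(is_lattice([1, 2, 3, 2, 1, 4]))   # False
```

The second sequence fails because after its fourth entry the symbol 2 has appeared twice but the symbol 1 only once.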

(4.7) (The Littlewood-Richardson (LR) rule) Let $\lambda,\mu$, and $\nu$ be partitions. The coefficient of $e_\nu$ in the product $e_\lambda\cdot e_\mu$ is equal to zero unless $\lambda\le\nu$. If $\lambda\le\nu$, it is equal to the number of ways of filling the squares of the skew-diagram $Y(\nu-\lambda)$ with $\mu_1$ 1's, $\mu_2$ 2's, $\ldots$, subject to the following two conditions:

LR1: The inserted numbers are non-decreasing in each row and strictly increasing in each column.
LR2: The sequence $(\alpha_1,\alpha_2,\ldots)$ obtained by reading from right to left across the first row, next right to left across the second row, etc., must be a lattice permutation of $(1^{\mu_1},2^{\mu_2},\ldots)$.

Example: Using the LR rule, one can compute that $e_{(2,1)}\cdot e_{(2,1)}=e_{(4,2)}+e_{(4,1,1)}+e_{(3,3)}+2\,e_{(3,2,1)}+e_{(3,1,1,1)}+e_{(2,2,2)}+e_{(2,2,1,1)}$.

The LR rule was first stated in 1934 by D. E. Littlewood and A. R. Richardson ([Ll-R (Not included in Bibliography)]). The first proof is attributed to G. de B. Robinson in 1938, but that presentation is difficult to follow. Littlewood has offered proofs in [L1 (Not included in Bibliography)] and [L2 (Not included in Bibliography)] which seem nearly correct, but McConnell [McC1975] has pointed out problems in those efforts. Schützenberger [Sch1976] has developed elegant (but non-elementary) combinatorial machinery which yields the LR rule as a corollary of a more general result. G. D. James has recently obtained the LR rule as a generalization of a well-known result in the representation theory of the symmetric group [Jam1973]. The methods to be developed later in this section can be used to "fix" Littlewood's original argument and provide a fairly simple proof of the LR rule.

As a consequence of 4.7, we have the following

Let $\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_k)$ be a partition of weight $N$. Then $e_\lambda\cdot h_r=\sum_\nu e_\nu$, where $\nu$ ranges over all partitions of weight $N+r$ and length at most $k+1$ which satisfy
$$\nu_1\ge\lambda_1\ge\nu_2\ge\lambda_2\ge\cdots\ge\nu_k\ge\lambda_k\ge\nu_{k+1}, \qquad (*)$$
i.e. $\lambda$ must interlace $\nu$.


Recall that $h_r=e_{(r,0,\ldots,0)}$. By the LR rule, $e_\lambda\cdot h_r=\sum c_\nu e_\nu$, where $c_\nu$ equals the number of ways of inserting $r$ 1's into the skew diagram $Y(\nu-\lambda)$, subject to the LR conditions. We must show that $c_\nu=1$ if $\nu$ is of the form $(*)$ and zero otherwise.

i) If $\lambda\not\le\nu$ or $\text{weight}(\nu)\ne N+r$, then $c_\nu=0$ is automatic.
ii) If the skew diagram $Y(\nu-\lambda)$ has more than one square in any column, then any placement of $r$ 1's into $Y(\nu-\lambda)$ must violate condition LR1. This is the case if $\nu$ has length greater than $k+1$ or if $\lambda$ does not interlace $\nu$; hence in these cases, $c_\nu=0$.
iii) If cases i) and ii) are excluded, then $\lambda\le\nu$, $\text{weight}(\nu)=N+r$, and $\lambda$ interlaces $\nu$. Then the obvious (and only) placement of $r$ 1's into $Y(\nu-\lambda)$ satisfies LR1 because all columns have one square only, and this placement satisfies LR2 because only one symbol ("1") is involved. Hence $c_\nu=1$.
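The corollary above gives a mechanical expansion of $e_\lambda\cdot h_r$; the sketch below (function name ours) enumerates the interlacing partitions directly:

```python
from itertools import product

def pieri_terms(la, r):
    """Partitions nu with e_la * h_r = sum_nu e_nu (each coefficient 1):
    weight(nu) = weight(la) + r, length <= len(la) + 1, and
    nu_1 >= la_1 >= nu_2 >= la_2 >= ... >= la_k >= nu_{k+1}."""
    k = len(la)
    bounds = ([(la[0], la[0] + r)]
              + [(la[i], la[i - 1]) for i in range(1, k)]
              + [(0, la[k - 1])])
    target = sum(la) + r
    return [tuple(v for v in nu if v)
            for nu in product(*[range(lo, hi + 1) for lo, hi in bounds])
            if sum(nu) == target]

print(pieri_terms((2, 1), 2))   # [(2, 2, 1), (3, 1, 1), (3, 2), (4, 1)]
```

So $e_{(2,1)}\cdot h_2=e_{(2,2,1)}+e_{(3,1,1)}+e_{(3,2)}+e_{(4,1)}$.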

We are ready to present a theorem which connects the multiplication of Schubert cycles in $A(G_{d,n})$ with the multiplication of Schur functions in $A$. The connection is made with the help of a certain ideal in $A$. Given $1\le d<n$, let $p$ be the partition of $(n-d)d$ with $d$ parts equal to $n-d$, and let $I_d^n$ be the $\mathbb{Z}$-submodule of $A$ generated by all $e_\mu$ for which $\mu\not\le p$; in other words, either $\mu_1>n-d$ or $\mu$ has more than $d$ parts. It is easy to verify using the LR rule that $I_d^n$ is an ideal in $A$. The quotient $A/I_d^n$ is generated freely as a $\mathbb{Z}$-module, with basis $\{e_\lambda \mid \lambda\le p\}$. Products in $A/I_d^n$ may be computed using the LR rule and ignoring terms involving those $e_\mu$ in $I_d^n$. The reader may now anticipate that $A(G_{d,n})$ and $A/I_d^n$ are isomorphic. We can describe an isomorphism using the map $\theta_{d,n}:\Pi\to Q_{d,n}$ defined by
$$\theta_{d,n}(\lambda)=\begin{cases}\emptyset, & \text{if }\lambda\not\le p,\\ (n-d+1-\lambda_1,\ n-d+2-\lambda_2,\ \ldots,\ n-\lambda_d), & \text{otherwise.}\end{cases}$$
We note that in this instance $\emptyset$ refers to the empty sequence belonging to $Q_{d,n}$. By convention, we shorten $\theta_{d,n}$ to $\theta$ where no ambiguity exists.

(4.9) The rings $A/I_d^n$ and $A(G_{d,n})$ are isomorphic. An isomorphism is given by $e_\lambda+I_d^n\mapsto\Omega(\theta(\lambda))$ and linear extension.


Let $T$ be the map from $A$ to $A(G_{d,n})$ defined by $T(e_\lambda)=\Omega(\theta(\lambda))$ and linear extension. We need only show that $T$ is a ring homomorphism onto $A(G_{d,n})$ with kernel $I_d^n$. Since $\theta$ is a map onto $Q_{d,n}$, $T$ maps onto $A(G_{d,n})$. It is easy to see that the kernel of $T$ is $I_d^n$. Since the special Schur functions $h_1,h_2,\ldots$ freely generate $A$ as a ring, $T$ is multiplicative if for each pair $i,j$ we have $T(h_i\cdot h_j)=T(h_i)\cdot T(h_j)$. Let
$$\Pi_{ij}=\{\nu\in\Pi \mid \text{1) length}(\nu)\le 2,\ \text{2) }\nu_1\ge i\ge\nu_2,\ \text{3) weight}(\nu)=i+j\}.$$
We have
$$T(h_i\cdot h_j)=T\Big(\sum_{\nu\in\Pi_{ij}}e_\nu\Big)=\sum_{\nu\in\Pi_{ij}}\Omega(\theta(\nu))=\sum_{a\in\theta(\Pi_{ij})}\Omega(a)=\sigma(i)\cdot\sigma(j)=T(h_i)\cdot T(h_j).$$

The connection described in Theorem 4.9 was first noticed by Lesieur [Le (Not included in Bibliography)] in 1947.

(4.10) The rings $A(G_{d,n})$ and $A(G_{n-d,n})$ are isomorphic. An isomorphism is given by $\Omega(a)\mapsto\Omega(a')$ and linear extension, where $a'\in Q_{n-d,n}$ denotes the sequence dual to $a$.


This follows from Theorem 4.5 and the fact that $\theta_{n-d,n}(\lambda')=\theta_{d,n}(\lambda)'$.

(Duality for $V_d^n$) $(a,b,c)\in V_d^n \iff (a',b',c')\in V_{n-d}^n$.


This follows from Theorems 4.10 and 3.9.

Array formulation of the LR Rule

The Littlewood-Richardson rule, as originally stated, requires deft combinatorial reasoning in applications. In this section we present a new formulation of the LR rule which facilitates a more geometrical type of argument.

(4.12) Let $e_\lambda\cdot e_\mu=\sum c_{\lambda\mu\nu}e_\nu$ and set $d=\max(\text{length}(\lambda),\text{length}(\mu),\text{length}(\nu))$. Then $c_{\lambda\mu\nu}$ equals the number of $d$-by-$d$ integer matrices $[n_{ij}]$ satisfying the following system of linear inequalities:

LRA1: $n_{ij}\ge 0$ for $1\le i,j\le d$.
LRA2: $\sum_{j=1}^d n_{ij}=\mu_i$, $i=1,2,\ldots,d$.
LRA3: $\sum_{i=1}^d n_{ij}=\nu_j-\lambda_j$, $j=1,2,\ldots,d$.
LRA4: $\lambda_j+\sum_{i=1}^t n_{ij}\ \ge\ \lambda_{j+1}+\sum_{i=1}^{t+1} n_{i,j+1}$, for $1\le j\le d-1$, $0\le t\le d-1$.
LRA5: $\sum_{j=1}^t n_{ij}\ \ge\ \sum_{j=1}^{t+1} n_{i+1,j}$, for $1\le i\le d-1$, $0\le t\le d-1$.
[Figure: the $d$-by-$d$ array $[n_{ij}]$, displayed with the margins $\lambda_1,\ldots,\lambda_d$, row sums $\mu_1,\ldots,\mu_d$, and column totals $\nu_1,\ldots,\nu_d$.]


First we dispose of the case $\lambda\not\le\nu$. By the LR rule, $c_{\lambda\mu\nu}=0$; this agrees with the theorem, since then $\nu_j-\lambda_j<0$ for some $j$, and no integer matrices could satisfy LRA1 and LRA3.

Next suppose $\text{weight}(\lambda)+\text{weight}(\mu)\ne\text{weight}(\nu)$. By the LR rule, $c_{\lambda\mu\nu}=0$; this agrees with the theorem, since summing LRA2 over $i$ and LRA3 over $j$ shows that any matrix satisfying those conditions forces $\text{weight}(\lambda)+\text{weight}(\mu)=\text{weight}(\nu)$.

Now suppose $\lambda\le\nu$ and $\text{weight}(\lambda)+\text{weight}(\mu)=\text{weight}(\nu)$. The theorem will be verified if we can establish a one-to-one correspondence between the ways of filling the squares of $Y(\nu-\lambda)$ with $\mu_1$ 1's, $\mu_2$ 2's, $\ldots$, which conform to the LR restrictions, and the integer matrices $[n_{ij}]$ satisfying LRA1 through LRA5. The following is such a correspondence.

Given a way of filling $Y(\nu-\lambda)$, let $n_{ij}$ be the number of symbols "$i$" placed in row $j$ of $Y(\nu-\lambda)$. Clearly this defines a one-to-one correspondence between the set of ways of filling $Y(\nu-\lambda)$ and the set of integer arrays satisfying LRA1, LRA2, and LRA3. It remains to show that if an arbitrary way of filling $Y(\nu-\lambda)$ satisfies the two LR conditions, then the corresponding array satisfies LRA4 and LRA5, and conversely.

Suppose that $[n_{ij}]$ corresponds to a way of filling $Y(\nu-\lambda)$ which satisfies LR1 and LR2. Then $[n_{ij}]$ must satisfy LRA4 and LRA5, as the following two arguments show.

1) Suppose that LRA4 fails to hold for $[n_{ij}]$. Then there is some $j_0$ such that $\lambda_{j_0}+\sum_{i=1}^{t_0} n_{ij_0} < \lambda_{j_0+1}+\sum_{i=1}^{t_0+1} n_{i,j_0+1}$ for some $t_0$. We assume that $j_0$ is the minimal such index, and that $t_0$ is the smallest possible index corresponding to $j_0$.

Let $k_1=\lambda_{j_0}+\sum_{i=1}^{t_0} n_{ij_0}$ and $k_2=\lambda_{j_0+1}+\sum_{i=1}^{t_0+1} n_{i,j_0+1}$. By assumption, $k_1<k_2$. Now in row $j_0$ of $Y(\nu-\lambda)$, the placement of 1's, 2's, $\ldots$, $t_0$'s extends out to column $k_1$ at most; hence the square in row $j_0$, column $k_2$ of $Y(\nu-\lambda)$ contains a symbol greater than or equal to $t_0+1$. However, the square in row $j_0+1$, column $k_2$ contains precisely the symbol $t_0+1$ (because $n_{t_0+1,j_0+1}$ cannot be zero, by the minimality of $j_0$ and $t_0$). But then this way of filling $Y(\nu-\lambda)$ does not satisfy LR1 ("symbols in a given column must be strictly increasing"), a contradiction.

2) Suppose LRA5 fails to hold for $[n_{ij}]$. Then there exist indices $i_0,t_0$ such that $\sum_{j=1}^{t_0} n_{i_0 j} < \sum_{j=1}^{t_0+1} n_{i_0+1,j}$. We may assume $i_0$ is the least such index, and $t_0$ the least possible index corresponding to $i_0$. In this case we must have $n_{i_0+1,t_0+1}\ne 0$. Let $k_1=\sum_{j=1}^{t_0} n_{i_0 j}$ and $k_2=\sum_{j=1}^{t_0+1} n_{i_0+1,j}$. Now for the way of filling $Y(\nu-\lambda)$ given by $[n_{ij}]$, let the sequence $(\alpha_1,\alpha_2,\ldots)$ be obtained by reading the symbols of $Y(\nu-\lambda)$ from right to left in the first, second, $\ldots$, $d$th rows. Since $n_{i_0+1,t_0+1}\ne 0$, the symbol $i_0+1$ occurs in row $t_0+1$ of $Y(\nu-\lambda)$. Let the leftmost occurrence of $i_0+1$ in row $t_0+1$ appear in the sequence $(\alpha_1,\alpha_2,\ldots)$ as $\alpha_N$. Then the number of occurrences of $i_0+1$ among $\alpha_1,\alpha_2,\ldots,\alpha_N$ is $k_2$, while the number of occurrences of $i_0$ among $\alpha_1,\alpha_2,\ldots,\alpha_N$ is $k_1$. But $k_1<k_2$ contradicts the assumption that the given way of filling $Y(\nu-\lambda)$ satisfies LR2.

Arguments 1) and 2) may be more readily understood by working out a few examples. A reversal of the arguments shows that for any integer matrix satisfying LRA1–LRA5, the corresponding way of filling Y(νλ) satisfies LR1 and LR2. This establishes the one-to-one correspondence and completes the proof of Theorem 4.12.
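Theorem 4.12 turns the computation of $c_{\lambda\mu\nu}$ into counting integer matrices, which is easy to do by brute force for small cases. The following sketch (our own, and deliberately naive: it enumerates every candidate matrix, so it is practical only for small $d$) reproduces coefficients from the example $e_{(2,1)}\cdot e_{(2,1)}$:

```python
from itertools import product

def lr_coefficient(la, mu, nu):
    """Count (la,mu,nu)-Littlewood-Richardson arrays by brute force."""
    d = max(len(la), len(mu), len(nu))
    la = tuple(la) + (0,) * (d - len(la))
    mu = tuple(mu) + (0,) * (d - len(mu))
    nu = tuple(nu) + (0,) * (d - len(nu))
    if any(nu[j] < la[j] for j in range(d)):   # LRA1 + LRA3 unsatisfiable
        return 0
    bound = max(mu)
    count = 0
    for flat in product(range(bound + 1), repeat=d * d):
        n = [flat[i * d:(i + 1) * d] for i in range(d)]
        if any(sum(n[i]) != mu[i] for i in range(d)):                 # LRA2
            continue
        if any(sum(n[i][j] for i in range(d)) != nu[j] - la[j]
               for j in range(d)):                                    # LRA3
            continue
        ok = all(la[j] + sum(n[i][j] for i in range(t))
                 >= la[j + 1] + sum(n[i][j + 1] for i in range(t + 1))
                 for j in range(d - 1) for t in range(d))             # LRA4
        ok = ok and all(sum(n[i][:t]) >= sum(n[i + 1][:t + 1])
                        for i in range(d - 1) for t in range(d))      # LRA5
        if ok:
            count += 1
    return count

print(lr_coefficient((2, 1), (2, 1), (3, 2, 1)))   # 2
print(lr_coefficient((2, 1), (2, 1), (4, 2)))      # 1
```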

An integer matrix $[n_{ij}]$ satisfying conditions LRA1 through LRA5, with respect to partitions $\lambda,\mu$, and $\nu$, will be called a $(\lambda,\mu,\nu)$ Littlewood-Richardson array ($(\lambda,\mu,\nu)$-LRA). Theorem 4.12 shows that if a $(\lambda,\mu,\nu)$-LRA exists, then the coefficient $c_{\lambda\mu\nu}$ is non-zero. In order to determine more about the existence of LRA's for a given triple $(\lambda,\mu,\nu)$, we consider a slightly more general object.

Definition: An element $(\lambda,\mu,\nu,N)$ belonging to $\mathbb{R}^d\times\mathbb{R}^d\times\mathbb{R}^d\times\mathbb{R}^{d\times d}$ will be called a Littlewood-Richardson design (LRD) if the following conditions hold:

1) $\lambda_1\ge\lambda_2\ge\cdots\ge\lambda_d\ge 0$;
2) $N=[n_{ij}]$ satisfies conditions LRA1-LRA5.


i) If $(\lambda,\mu,\nu,N)$ is an LRD which is integral, then $N$ is a $(\lambda,\mu,\nu)$-LRA.
ii) If $N$ is a $(\lambda,\mu,\nu)$-LRA, then $(\lambda,\mu,\nu,N)$ is an integral LRD.

We now make a useful observation.

(4.13) The set of Littlewood-Richardson designs in $\mathbb{R}^d\times\mathbb{R}^d\times\mathbb{R}^d\times\mathbb{R}^{d\times d}$ is a closed convex cone.


The theorem follows from the fact that the set of LRD's is defined by a system of homogeneous linear inequalities.

The following theorem is fundamental.

(4.14) Let $\lambda,\mu,\nu$ be partitions of length at most $d$, where $d\le 4$. If there exists a $(\lambda,\mu,\nu)$-LRD, then there exists a $(\lambda,\mu,\nu)$-LRA.


Let (λ,μ,ν,N) be a (λ,μ,ν)-LRD; we show how to successively "perturb" N so that at each step we have a (λ,μ,ν)-LRD and the final perturbed matrix is integral.

Let $N=(n_{ij})$, $1\le i,j\le d$. Condition LRA5 guarantees that $N$ is upper triangular. This makes the result trivial if $d=1$ or $d=2$. For $d=3$, the only non-integral elements of $N$ must lie in the submatrix $N[1,2\,|\,2,3]$. The presence of fractions means that there is some "play" in the inequalities LRA4 and LRA5. If we "rotate" the submatrix $N[1,2\,|\,2,3]$ (i.e. add $\xi$ to $n_{12}$, subtract $\xi$ from $n_{13}$, add $\xi$ to $n_{23}$, and subtract $\xi$ from $n_{22}$) the right amount, we obtain an integral matrix, and LRA1-LRA5 still hold.

Now consider the case $d=4$. Since $N$ is upper triangular, $n_{11}$ and $n_{dd}$ are integral. We perturb $N$ until $n_{33}$ is integral as follows: decrease $n_{22}$ and $n_{33}$ by $\xi$; increase $n_{12}$, $n_{23}$, and $n_{34}$ by $\xi$; and decrease $n_{14}$ by $\xi$, where $\xi=\min\{n_{14},\ n_{33}-[n_{33}]\}$ and $[\,\cdot\,]$ denotes the integer part. As a result, either $n_{33}$ is integral or $n_{14}$ is zero.

Case 1. n14=0
Perturb $N$ further by decreasing $n_{22}$ and $n_{33}$ by $\xi$; increasing $n_{12}$, $n_{23}$, and $n_{34}$ by $\xi$ and then increasing $n_{23}$ by another $\xi$; and decreasing $n_{13}$ and $n_{24}$ by $\xi$, where this time $\xi=\min\{n_{13},\ n_{24},\ n_{33}-[n_{33}]\}$. One possible result is that $n_{13}=0$. Then $n_{12}$ and $n_{22}$ are integral, so a rotation of $N[2,3\,|\,3,4]$ yields a $(\lambda,\mu,\nu)$-LRA. The other possible result is that $n_{24}=0$ or $n_{33}$ is integral; in either case $n_{33}$, $n_{34}$, $n_{24}$, and $n_{14}$ are all integral, so a simple rotation of $N[1,2\,|\,2,3]$ yields a $(\lambda,\mu,\nu)$-LRA.

Case 2. n33 is integral.
Then n34 is also integral, so we rotate N[1,2|3,4] until an integral element results, and still have a (λ,μ,ν)-LRD. The last non-integral elements all lie in some 2-by-2 submatrix; we again rotate to remove them, and wind up with a (λ,μ,ν)-LRA. This ends the proof.

Theorem 4.14 removes a difficult constraint in verifying the existence of a (λ,μ,ν)-LRA; i.e. the existence of any real matrix satisfying conditions LRA1-LRA5 is enough to guarantee the existence of an integral matrix satisfying those conditions. Relaxing this constraint allows the use of standard arguments from convexity theory.

Additional Characterizations of Vdn

We can use several of the preceding results to formulate new characterizations for the set $V_d^n$, introduced in section II. Theorem 3.9 states that
$$V_d^n=\{(a,b,c) \mid \text{1) } a,b,c\in Q_{d,n},\ \text{2) } \Omega(a)\cdot\Omega(b)\cdot\Omega(c)\ne 0\}.$$
Here $\Omega(a),\Omega(b)$, and $\Omega(c)$ are Schubert cycles (elements of the ring $A(G_{d,n})$). This definition of $V_d^n$ is based on a multiplication statement in the ring $A(G_{d,n})$. We know from Theorem 4.9 that $A(G_{d,n})$ is ring isomorphic to $A/I_d^n$. Using this isomorphism, we can translate the condition $\Omega(a)\cdot\Omega(b)\cdot\Omega(c)\ne 0$ in $A(G_{d,n})$ to an equivalent condition in $A/I_d^n$. Application of the Littlewood-Richardson rule then gives information about elements of $V_d^n$.

(4.15) Suppose $\Omega(a)\cdot\Omega(b)\cdot\Omega(c)=n\cdot\Omega(1,2,\ldots,d)$ and $\Omega(a)\cdot\Omega(b)=\sum_{s\in Q_{d,n}} n_s\cdot\Omega(s)$. Then $n_c=n$.


Easy consequence of Theorem 3.7.

Recall the definition of the map $\theta_{d,n}$ which maps the set of partitions $\Pi_d$ onto the set $Q_{d,n}$:
$$\theta_{d,n}(\lambda)=\begin{cases}\emptyset, & \text{if }\lambda\not\le p,\\ (n-d+1-\lambda_1,\ n-d+2-\lambda_2,\ \ldots,\ n-\lambda_d), & \text{otherwise.}\end{cases}$$
We can define a one-sided inverse for $\theta_{d,n}$ which maps elements of $Q_{d,n}$ into the set of partitions $\Pi$. We will call this one-sided inverse $\theta_{d,n}^{-1}$ and define it by
$$\theta_{d,n}^{-1}(a)=(n-d+1-a_1,\ n-d+2-a_2,\ \ldots,\ n-a_d).$$
By convention, $\theta^{-1}=\theta_{d,n}^{-1}$.
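The maps $\theta_{d,n}$ and $\theta_{d,n}^{-1}$ are simple index shifts; a Python sketch (function names ours) with a round-trip check:

```python
def theta(la, d, n):
    """theta_{d,n}: partitions of length <= d with la_1 <= n-d  ->  Q_{d,n};
    returns the empty tuple when la does not fit in the d x (n-d) box."""
    la = tuple(la) + (0,) * (d - len(la))
    if any(part > n - d for part in la):
        return ()
    return tuple(n - d + i - part for i, part in enumerate(la, start=1))

def theta_inv(a, d, n):
    """theta_{d,n}^{-1}: Q_{d,n} -> partitions, a |-> (n - d + i - a_i)_i."""
    return tuple(n - d + i - ai for i, ai in enumerate(a, start=1))

print(theta((2, 1), 2, 4))      # (1, 3)
print(theta_inv((1, 3), 2, 4))  # (2, 1)
```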

(4.16) Let $\Omega(a)\cdot\Omega(b)=\sum_{s\in Q_{d,n}} n_s\cdot\Omega(s)$ and $e_{\theta^{-1}(a)}\cdot e_{\theta^{-1}(b)}=\sum_\nu m_\nu\cdot e_\nu$. Then $n_s=m_{\theta^{-1}(s)}$.


The isomorphism from $A/I_d^n$ to $A(G_{d,n})$ given by $e_\lambda+I_d^n\mapsto\Omega(\theta(\lambda))$ has an inverse given by $\Omega(a)\mapsto e_{\theta^{-1}(a)}+I_d^n$. We are given that $\Omega(a)\cdot\Omega(b)=\sum_s n_s\cdot\Omega(s)$ in $A(G_{d,n})$. Applying the inverse isomorphism to each side yields
$$(e_{\theta^{-1}(a)}+I_d^n)(e_{\theta^{-1}(b)}+I_d^n)=\sum_s n_s\cdot(e_{\theta^{-1}(s)}+I_d^n),$$
or
$$e_{\theta^{-1}(a)}\cdot e_{\theta^{-1}(b)}+I_d^n=\sum_{s\in Q_{d,n}} n_s\cdot e_{\theta^{-1}(s)}+I_d^n. \qquad (*)$$
In the ring $A$, we have
$$e_{\theta^{-1}(a)}\cdot e_{\theta^{-1}(b)}=\sum_{\nu\in\Pi}m_\nu e_\nu=\sum_{\nu\le p}m_\nu e_\nu+\sum_{\nu\not\le p}m_\nu e_\nu.$$
Mapping this equation into $A/I_d^n$ yields
$$e_{\theta^{-1}(a)}\cdot e_{\theta^{-1}(b)}+I_d^n=\sum_{\nu\le p}m_\nu e_\nu+I_d^n,$$
or
$$e_{\theta^{-1}(a)}\cdot e_{\theta^{-1}(b)}+I_d^n=\sum_{s\in Q_{d,n}}m_{\theta^{-1}(s)}\cdot(e_{\theta^{-1}(s)}+I_d^n). \qquad (**)$$
Comparing $(*)$ to $(**)$ shows that $n_s=m_{\theta^{-1}(s)}$.

(4.17) $\Omega(a)\cdot\Omega(b)\cdot\Omega(c)=n\cdot\Omega(1,2,\ldots,d)$ for some non-zero $n$ $\iff$ there exists a $(\theta^{-1}(a),\theta^{-1}(b),\theta^{-1}(c))$-LRA.


The theorem follows in a straightforward manner from lemmas 4.15 and 4.16 and Theorem 4.12.

We now have two methods of testing whether a product Ω(a)·Ω(b)·Ω(c), known to be a multiple of Ω(1,2,,d), is non-zero:

1) Compute (using the determinantal formula and Pieri's formula) $\Omega(a)\cdot\Omega(b)\cdot\Omega(c)$ in the ring $A(G_{d,n})$;
2) Determine (using the LR rule) whether a $(\theta^{-1}(a),\theta^{-1}(b),\theta^{-1}(c))$-LRA exists.

Note that method 1 yields more information than we seek: it identifies the multiple exactly. Method 2 can also be used to find the multiple exactly, but with less work can simply tell whether or not the multiple is non-zero.

We introduce some notation relevant to the next theorem: for $a\in Q_{d,n}$ and $a'\in Q_{d,n'}$, define $a*a'$ by $(a*a')(i)=a_i+a'_i-i$, so that $a*a'\in Q_{d,\,n+n'-d}$.
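The operation $*$, and the identity used in the proof below — that $\theta^{-1}$ turns $*$ into coordinatewise addition of partitions — can be sketched as follows (function names ours):

```python
def star(a, ap):
    """(a * a')(i) = a_i + a'_i - i (1-indexed); for a in Q_{d,n} and
    a' in Q_{d,n'}, the result lies in Q_{d, n+n'-d}."""
    return tuple(u + v - i for i, (u, v) in enumerate(zip(a, ap), start=1))

def theta_inv(a, d, n):
    """theta_{d,n}^{-1}: a |-> (n - d + i - a_i)_i."""
    return tuple(n - d + i - ai for i, ai in enumerate(a, start=1))

# theta^{-1}_{d,n}(a) + theta^{-1}_{d,n'}(a') = theta^{-1}_{d,n+n'-d}(a * a')
a, ap, d, n, n2 = (1, 3), (2, 3), 2, 4, 3
lhs = tuple(u + v for u, v in zip(theta_inv(a, d, n), theta_inv(ap, d, n2)))
assert lhs == theta_inv(star(a, ap), d, n + n2 - d)
print(star(a, ap))   # (2, 4)
```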

("generalized pushing lemma") If (a,b,c)Vdn and (a,b,c)Vdn, then (a,b,c) (a,b,c) Vdn+n-d.


The proof is done by translating from sequences in $Q_{d,n}$, $Q_{d,n'}$, and $Q_{d,\,n+n'-d}$ to partitions in $\Pi_d$. Let $(a,b,c)\in V_d^n$ and $(a',b',c')\in V_d^{n'}$. Then
$$\Omega(a)\cdot\Omega(b)\cdot\Omega(c)\ne 0 \text{ in } A(G_{d,n}), \qquad \Omega(a')\cdot\Omega(b')\cdot\Omega(c')\ne 0 \text{ in } A(G_{d,n'}).$$
First assume both these products are non-zero multiples of $\Omega(1,2,\ldots,d)$ in $A(G_{d,n})$ and $A(G_{d,n'})$, respectively. From Theorem 4.17, there exists a $(\theta_{d,n}^{-1}(a),\theta_{d,n}^{-1}(b),\theta_{d,n}^{-1}(c))$-LRA (which we will denote by $N$) and a $(\theta_{d,n'}^{-1}(a'),\theta_{d,n'}^{-1}(b'),\theta_{d,n'}^{-1}(c'))$-LRA (which we will denote by $N'$). From Theorem 4.13, $N+N'$ is an LRA.

Let
$$\lambda=\theta_{d,\,n+n'-d}^{-1}(a*a'), \qquad \mu=\theta_{d,\,n+n'-d}^{-1}(b*b'), \qquad \nu=\theta_{d,\,n+n'-d}^{-1}(c*c').$$
Again by Theorem 4.17, $(a*a',\ b*b',\ c*c')\in V_d^{\,n+n'-d}$ if there exists a $(\lambda,\mu,\nu)$-LRA, and this will establish the theorem. We claim that $N+N'$ is a $(\lambda,\mu,\nu)$-LRA. The verification amounts to the following exercise:
$$[\theta_{d,n}^{-1}(a)](i)+[\theta_{d,n'}^{-1}(a')](i)=(n-d+i-a_i)+(n'-d+i-a'_i)=(n+n'-d)-d+i-(a_i+a'_i-i)=(n+n'-d)-d+i-[a*a'](i)=[\theta_{d,\,n+n'-d}^{-1}(a*a')](i).$$
This identity, applied to $b$ and $c$ as well, easily establishes the claim.

The theorem is now established in the case where the products of Schubert cycles are multiples of $\Omega(1,2,\ldots,d)$. If this is not the case, then by Theorem 3.10 we can find $(x,y,z)\le(a,b,c)$ and $(x',y',z')\le(a',b',c')$ such that the products $\Omega(x)\cdot\Omega(y)\cdot\Omega(z)$ and $\Omega(x')\cdot\Omega(y')\cdot\Omega(z')$ are non-zero multiples of $\Omega(1,2,\ldots,d)$. Hence $(x*x',\ y*y',\ z*z')\in V_d^{\,n+n'-d}$ by our previous result. Since $(x*x',\ y*y',\ z*z')\le(a*a',\ b*b',\ c*c')$, Theorem 2.11 guarantees the desired result.

In the next section, we will show that Theorem 4.18 is indeed a generalization of Horn's original "pushing" lemma.

Notes and References

This is an excerpt from Steven Andrew Johnson's 1979 dissertation The Schubert Calculus and Eigenvalue Inequalities for Sums of Hermitian Matrices.
