Last updated: 14 October 2014
The formulae and rules making up the Schubert Calculus appear in numerous other settings, many of which provide additional computational rules. These other settings include:
1) Representation theory of the symmetric group;
2) Representation theory of the full linear group;
3) Multiplication of Schur functions in the algebra of symmetric polynomials.
In this section we consider topic 3), show the relationships between it and the Schubert calculus, and derive some new results which give another characterization of the set introduced in section II.
By a partition we mean a sequence of integers $\lambda = (\lambda_1, \lambda_2, \lambda_3, \ldots)$ satisfying $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \cdots \ge 0$. We say $\lambda$ is a partition of $n$, or has weight $n$, if $\sum_i \lambda_i = n$. The length of $\lambda$ is the number of nonzero integers among $\lambda_1, \lambda_2, \ldots$; if $\lambda$ has length $\ell$ we write $\ell(\lambda) = \ell$. We let $P$ denote the set of all partitions, and $P_r$ the set of partitions of length at most $r$. The set $P$ may be given a partial ordering $\subseteq$ defined as follows: $\mu \subseteq \lambda$ if and only if $\mu_i \le \lambda_i$ for every $i$.
The diagram associated with a partition $\lambda$ is a left-justified array of squares with $\lambda_i$ squares in the $i$-th row.
If $\lambda$ and $\mu$ are partitions with $\mu \subseteq \lambda$, then the skew-diagram $\lambda/\mu$ is obtained by removing the squares of the diagram of $\mu$ from the diagram of $\lambda$.
Lastly, the partition dual to $\lambda$, denoted by $\lambda'$, is the partition whose diagram is obtained from that of $\lambda$ by interchanging rows and columns. The "jump" function can be used to express the components of $\lambda'$ in terms of the components of $\lambda$.
Let $\Lambda$ denote the ring of symmetric polynomials (also called symmetric functions) in an infinite number of variables $x_1, x_2, x_3, \ldots$ with integer coefficients (properly called symmetric formal power series). We review some basic results about this ring (see [Sta1976] and [Lid1950]).
1) Let $e_k = \sum_{i_1 < i_2 < \cdots < i_k} x_{i_1} x_{i_2} \cdots x_{i_k}$; $e_k$ is called the $k$-th elementary symmetric function. The "fundamental theorem of symmetric functions" states that the elementary symmetric functions are algebraically independent and that $\Lambda = \mathbb{Z}[e_1, e_2, \ldots]$, i.e. every symmetric function in $\Lambda$ may be expressed uniquely as a polynomial in $e_1, e_2, \ldots$ with integer coefficients.
2) Let $N_k$ denote the set of all non-decreasing sequences $(i_1, i_2, \ldots, i_k)$ of integers chosen from $\{1, 2, 3, \ldots\}$. Define $h_k = \sum_{(i_1, \ldots, i_k) \in N_k} x_{i_1} x_{i_2} \cdots x_{i_k}$; $h_k$ is called the $k$-th complete homogeneous product sum. It is the sum of all monomials in $x_1, x_2, \ldots$ of degree $k$.
3) Let $p_k = \sum_i x_i^k$; $p_k$ is called the $k$-th power symmetric function. By convention, $e_0 = h_0 = p_0 = 1$, and $e_k = h_k = p_k = 0$ for $k < 0$.
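To make the three families concrete, here is a small sketch (function names are ours) evaluating $e_k$, $h_k$, and $p_k$ numerically in finitely many variables:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    """k-th elementary symmetric function: sum over strictly increasing index sets."""
    return sum(prod(c) for c in combinations(xs, k))

def h(k, xs):
    """k-th complete homogeneous product sum: sum of all monomials of degree k."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def p(k, xs):
    """k-th power symmetric function."""
    return sum(x ** k for x in xs)

xs = (1, 2, 3)
print(e(2, xs), h(2, xs), p(2, xs))  # 11 25 14
```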
The $e_k$ and $p_k$ are connected by the classical Newton's identities:
$$p_k - e_1 p_{k-1} + e_2 p_{k-2} - \cdots + (-1)^{k-1} e_{k-1} p_1 + (-1)^k k e_k = 0, \qquad k \ge 1.$$
If one solves for the $e_k$ in terms of the $p_k$, one obtains
$$e_k = \frac{1}{k!} \det \begin{pmatrix} p_1 & 1 & 0 & \cdots & 0 \\ p_2 & p_1 & 2 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ p_{k-1} & p_{k-2} & \cdots & p_1 & k-1 \\ p_k & p_{k-1} & \cdots & p_2 & p_1 \end{pmatrix}.$$
The $p_k$ may be expressed in a similar fashion in terms of the $e_k$; hence the power sums are also algebraically independent, and they generate $\Lambda \otimes \mathbb{Q}$ as a $\mathbb{Q}$-algebra.
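The identities can be checked numerically; the following sketch (helper names are ours) verifies the recurrence for small $k$:

```python
from itertools import combinations
from math import prod

def e(k, xs):
    """k-th elementary symmetric function."""
    return sum(prod(c) for c in combinations(xs, k))

def p(k, xs):
    """k-th power sum."""
    return sum(x ** k for x in xs)

xs = (1, 2, 3, 4)
for k in range(1, 5):
    # Newton's identity, rearranged to solve for p_k
    rhs = sum((-1) ** (i - 1) * e(i, xs) * p(k - i, xs) for i in range(1, k)) \
        + (-1) ** (k - 1) * k * e(k, xs)
    assert p(k, xs) == rhs
print("Newton's identities hold for k = 1..4")
```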
The following relations (due to Brioschi) connect the $h_k$ and the $p_k$:
$$k h_k = \sum_{i=1}^{k} p_i h_{k-i}, \qquad k \ge 1.$$
If one solves for the $h_k$ in terms of the $p_k$, one obtains
$$h_k = \frac{1}{k!} \det \begin{pmatrix} p_1 & -1 & 0 & \cdots & 0 \\ p_2 & p_1 & -2 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ p_{k-1} & p_{k-2} & \cdots & p_1 & -(k-1) \\ p_k & p_{k-1} & \cdots & p_2 & p_1 \end{pmatrix}.$$
Again, the complete homogeneous product sums are algebraically independent, and $\Lambda = \mathbb{Z}[h_1, h_2, \ldots]$.
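The Brioschi relations can likewise be verified numerically (a short sketch with our own helper names):

```python
from itertools import combinations_with_replacement
from math import prod

def h(k, xs):
    """k-th complete homogeneous product sum."""
    return sum(prod(c) for c in combinations_with_replacement(xs, k))

def p(k, xs):
    """k-th power sum."""
    return sum(x ** k for x in xs)

xs = (1, 2, 3)
for k in range(1, 5):
    # Brioschi: k * h_k = sum_{i=1}^{k} p_i h_{k-i}
    assert k * h(k, xs) == sum(p(i, xs) * h(k - i, xs) for i in range(1, k + 1))
print("Brioschi relations hold for k = 1..4")
```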
Another class of symmetric functions, the Schur symmetric functions, is also of particular interest. Let $\lambda$ be a partition of $n$. The Schur symmetric function $s_\lambda$ may be defined in numerous ways. We give the definition originally used by Schur. Let $S_n$ denote the symmetric group on $n$ symbols, and let $\chi^\lambda$ be the character of $S_n$ corresponding to $\lambda$ as given in Young diagram theory. The immanant of an $n \times n$ matrix $A = (a_{ij})$, denoted by $\operatorname{Imm}_\lambda(A)$, is defined as follows:
$$\operatorname{Imm}_\lambda(A) = \sum_{\sigma \in S_n} \chi^\lambda(\sigma) \prod_{i=1}^{n} a_{i\sigma(i)}.$$
The Schur function is then defined by $s_\lambda = \frac{1}{n!} \operatorname{Imm}_\lambda(P_n)$, where $P_n$ is the matrix of power sums defined previously. We immediately note two special cases:
1) If $\lambda = (1, 1, \ldots, 1)$ has weight $n$, then $\chi^\lambda$ is the sign character, so $\operatorname{Imm}_\lambda$ is the determinant and $s_\lambda = e_n$.
2) If $\lambda = (n)$, then $\chi^\lambda$ is the trivial character, so $\operatorname{Imm}_\lambda$ is the permanent and $s_\lambda = h_n$.
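A direct sketch of the immanant (our own implementation) illustrates the two special cases: the trivial character gives the permanent and the sign character gives the determinant.

```python
from itertools import permutations

def immanant(matrix, chi):
    """Sum over permutations sigma of chi(sigma) times the diagonal product."""
    n = len(matrix)
    return sum(chi(sigma) * prod_diag(matrix, sigma)
               for sigma in permutations(range(n)))

def prod_diag(matrix, sigma):
    """Product of entries matrix[i][sigma[i]]."""
    out = 1
    for i, j in enumerate(sigma):
        out *= matrix[i][j]
    return out

def sign(sigma):
    """Sign of a permutation, computed from its cycle lengths."""
    s, seen = 1, set()
    for i in range(len(sigma)):
        if i in seen:
            continue
        length, j = 0, i
        while j not in seen:
            seen.add(j)
            j = sigma[j]
            length += 1
        if length % 2 == 0:
            s = -s
    return s

M = [[1, 2], [3, 4]]
print(immanant(M, lambda s: 1))  # permanent: 1*4 + 2*3 = 10
print(immanant(M, sign))         # determinant: 1*4 - 2*3 = -2
```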
We list some classical theorems concerning Schur functions.
The set $\{ s_\lambda \}$, as $\lambda$ ranges over all partitions, forms a $\mathbb{Z}$-basis for the ring $\Lambda$.
(Jacobi-Trudi) If $\lambda$ is a partition of length at most $\ell$, then $s_\lambda = \det(H)$, where $H$ is the $\ell \times \ell$ matrix given by $H_{ij} = h_{\lambda_i - i + j}$.
(Naegelsbach, Aitken) If $\lambda$ is a partition with $\lambda_1 \le m$, then $s_\lambda = \det(E)$, where $E$ is the $m \times m$ matrix given by $E_{ij} = e_{\lambda'_i - i + j}$ and $\lambda'$ is the dual of $\lambda$.
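The determinantal expressions can be compared numerically. The sketch below (our own helpers; exact integer arithmetic) evaluates $s_{(2,1)}$ in three variables via the Jacobi-Trudi determinant, the dual determinant, and the bialternant quotient discussed next; all three agree.

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(k, xs):
    return sum(prod(c) for c in combinations(xs, k)) if k >= 0 else 0

def h(k, xs):
    return sum(prod(c) for c in combinations_with_replacement(xs, k)) if k >= 0 else 0

def det(m):
    """Determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

xs = (2, 3, 5)
lam, dual = (2, 1), (2, 1)   # the partition (2,1) is self-dual
n, m = len(lam), len(xs)
jt   = det([[h(lam[i] - i + j, xs) for j in range(n)] for i in range(n)])
naeg = det([[e(dual[i] - i + j, xs) for j in range(n)] for i in range(n)])
# bialternant: det(x_j^(lam_i + m - i)) divided by the Vandermonde determinant
alt  = det([[x ** (lam[i] + m - 1 - i) if i < n else x ** (m - 1 - i) for x in xs]
            for i in range(m)])
vand = det([[x ** (m - 1 - i) for x in xs] for i in range(m)])
print(jt, naeg, alt // vand)  # 280 280 280
```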
As the reader might infer, identities 4.2 and 4.3 were proved for a definition of Schur functions other than that given by Schur. Classically, these results were obtained in the ring $\Lambda_m$ of symmetric polynomials in $m$ variables $x_1, \ldots, x_m$. In the ring $\Lambda_m$, definitions exist for the elementary symmetric functions, complete homogeneous symmetric functions, power sums, and Schur functions which are analogous to the definitions of $e_k$, $h_k$, $p_k$, and $s_\lambda$ in the ring $\Lambda$. We will use the symbols $e_k$, $h_k$, $p_k$, and $s_\lambda$ to denote elements of $\Lambda_m$ as well as $\Lambda$.
In the ring $\Lambda_m$ another definition of $s_\lambda$ is possible (and in fact was the one used in the classical proofs of 4.2 and 4.3). Let $\lambda$ be a partition of length at most $m$ and define
$$a_\lambda = \det\left( x_j^{\lambda_i + m - i} \right)_{1 \le i, j \le m},$$
where $\lambda_i = 0$ for $i$ greater than the length of $\lambda$. We then have
$$s_\lambda = \frac{a_\lambda}{a_0},$$
where $a_0 = \det(x_j^{m-i}) = \prod_{i < j} (x_i - x_j)$ is the Vandermonde determinant.
The quotients $a_\lambda / a_0$ are referred to by Muir as bi-alternants.
Identity 4.4 holds in the ring $\Lambda_m$ but has no meaning in $\Lambda$. Identities 4.2 and 4.3 hold in $\Lambda_m$ as well as in $\Lambda$. Statement 4.1 holds as stated for $\Lambda$, but in $\Lambda_m$ we have $s_\lambda = 0$ if $\lambda$ has length more than $m$. However, the set $\{ s_\lambda : \lambda \text{ has length at most } m \}$ is a basis for $\Lambda_m$.
The mapping $e_k \mapsto h_k$ extends to a ring automorphism $\omega$ of $\Lambda$ for which $\omega(s_\lambda) = s_{\lambda'}$.
Since $\Lambda = \mathbb{Z}[e_1, e_2, \ldots] = \mathbb{Z}[h_1, h_2, \ldots]$, any bijection from $\{e_1, e_2, \ldots\}$ to $\{h_1, h_2, \ldots\}$ extends to an automorphism of $\Lambda$. From identities 4.2 and 4.3, $\omega(s_\lambda) = \omega\bigl(\det(h_{\lambda_i - i + j})\bigr) = \det(e_{\lambda_i - i + j}) = s_{\lambda'}$.
The Schur functions constitute a basis for $\Lambda$; hence any product $s_\mu s_\nu$ may be expressed as a sum $s_\mu s_\nu = \sum_\lambda c^\lambda_{\mu\nu} s_\lambda$. There is a certain well-studied rule for determining the integers $c^\lambda_{\mu\nu}$. Before describing the rule, we make a definition.
Let the sequence consisting of $\mu_1$ 1's, followed by $\mu_2$ 2's, etc., be denoted by $1^{\mu_1} 2^{\mu_2} \cdots$. Suppose $a_1 a_2 \cdots a_n$ is a rearrangement of the sequence $1^{\mu_1} 2^{\mu_2} \cdots$ such that for any pair of indices $(k, i)$, the number of $i$'s appearing among $a_1, a_2, \ldots, a_k$ is not less than the number of $(i+1)$'s appearing among $a_1, a_2, \ldots, a_k$. Then $a_1 a_2 \cdots a_n$ is called a lattice permutation of $1^{\mu_1} 2^{\mu_2} \cdots$.
Example: $112321$ is a lattice permutation of $1^3 2^2 3$, but $123211$ is not.
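The prefix-counting condition translates directly into a short check (a sketch; the function name is ours):

```python
def is_lattice_permutation(seq):
    """In every prefix, each symbol i must occur at least as often as i + 1."""
    counts = {}
    for a in seq:
        counts[a] = counts.get(a, 0) + 1
        if a > 1 and counts[a] > counts.get(a - 1, 0):
            return False
    return True

print(is_lattice_permutation([1, 1, 2, 3, 2, 1]))  # True
print(is_lattice_permutation([1, 2, 3, 2, 1, 1]))  # False
```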
(The Littlewood-Richardson (LR) rule) Let $\lambda$, $\mu$, and $\nu$ be partitions. The coefficient $c^\lambda_{\mu\nu}$ of $s_\lambda$ in the product $s_\mu s_\nu$ is equal to zero unless $\mu \subseteq \lambda$ and the weight of $\lambda$ equals the sum of the weights of $\mu$ and $\nu$. If $\mu \subseteq \lambda$, it is equal to the number of ways of filling the squares of the skew-diagram $\lambda/\mu$ with $\nu_1$ 1's, $\nu_2$ 2's, etc., subject to the following two conditions:
LR1: The inserted numbers are non-decreasing in each row and strictly increasing in each column.
LR2: The sequence obtained by reading from right to left across the first row, next right to left across the second row, etc., must be a lattice permutation of $1^{\nu_1} 2^{\nu_2} \cdots$.
Example: Using the LR rule, one can compute that $s_{(2,1)} \cdot s_{(1)} = s_{(3,1)} + s_{(2,2)} + s_{(2,1,1)}$.
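The rule can be turned directly into a brute-force count (a sketch; function names are ours, and the enumeration is only practical for small partitions):

```python
from itertools import combinations_with_replacement, product

def is_lattice(word):
    """Lattice permutation check: each prefix has at least as many i's as (i+1)'s."""
    counts = {}
    for a in word:
        counts[a] = counts.get(a, 0) + 1
        if a > 1 and counts[a] > counts.get(a - 1, 0):
            return False
    return True

def lr_coefficient(lam, mu, nu):
    """Count fillings of the skew diagram lam/mu with content nu obeying
    LR1 (rows weakly increase, columns strictly increase) and
    LR2 (the right-to-left reading word is a lattice permutation)."""
    mu = tuple(mu) + (0,) * (len(lam) - len(mu))
    if any(m > l for l, m in zip(lam, mu)) or sum(lam) != sum(mu) + sum(nu):
        return 0
    symbols = range(1, len(nu) + 1)
    row_choices = [list(combinations_with_replacement(symbols, l - m))
                   for l, m in zip(lam, mu)]
    count = 0
    for rows in product(*row_choices):
        if any(sum(r.count(s) for r in rows) != nu[s - 1] for s in symbols):
            continue  # wrong content
        if any(c >= mu[j] and rows[j][c - mu[j]] >= rows[j + 1][c - mu[j + 1]]
               for j in range(len(lam) - 1)
               for c in range(mu[j + 1], mu[j + 1] + len(rows[j + 1]))):
            continue  # a column fails to increase strictly
        if is_lattice([s for r in rows for s in reversed(r)]):
            count += 1
    return count

# s_{(2,1)} * s_{(1)} = s_{(3,1)} + s_{(2,2)} + s_{(2,1,1)}
print([lr_coefficient(lam, (2, 1), (1,))
       for lam in [(3, 1), (2, 2), (2, 1, 1)]])  # [1, 1, 1]
```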
The LR rule was first stated in 1934 by D. E. Littlewood and A. R. Richardson ([Ll-R (Not included in Bibliography)]). The first proof is attributed to G. de B. Robinson in 1938, but that presentation is difficult to follow. Littlewood has offered proofs in [L1 (Not included in Bibliography)] and [L2 (Not included in Bibliography)] which seem nearly correct, but McConnell [McC1975] has pointed out problems in those efforts. Schutzenberger [Sch1976] has developed elegant (but non-elementary) combinatorial machinery which yields the LR rule as a corollary of a more general result. G. D. James has recently obtained the LR rule as a generalization of a well known result in the representation theory of the symmetric group [Jam1973]. The methods to be developed later in this section can be used to "fix" Littlewood's original argument and provide a fairly simple proof of the LR rule.
As a consequence of 4.7, we have the following
Let $\lambda$ be a partition of weight $n$ and let $k$ be a positive integer. Then $s_\lambda h_k = \sum_\mu s_\mu$, where $\mu$ ranges over all partitions of weight $n + k$ and length at most one more than the length of $\lambda$ which satisfy
$$\mu_1 \ge \lambda_1 \ge \mu_2 \ge \lambda_2 \ge \mu_3 \ge \cdots, \qquad (*)$$
i.e. $\mu$ must interlace $\lambda$.
Recall that $h_k = s_{(k)}$. By the LR rule, $s_\lambda h_k = \sum_\mu c^\mu_{\lambda (k)} s_\mu$, where $c^\mu_{\lambda (k)}$ equals the number of ways of inserting $k$ 1's into the skew diagram $\mu/\lambda$ subject to the LR conditions. Since only 1's are inserted, condition LR1 forces $\mu/\lambda$ to contain at most one square in each column, and such a filling, when it exists, is unique. We must show that $c^\mu_{\lambda (k)} = 1$ if $\mu$ is of the form (*) and zero otherwise.
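As a worked instance of the preceding formula (our own choice of values): taking $\lambda = (2,1)$ and $k = 2$, the partitions of weight 5 that interlace $(2,1)$ are $(4,1)$, $(3,2)$, $(3,1,1)$, and $(2,2,1)$, so

```latex
s_{(2,1)}\, h_2 \;=\; s_{(4,1)} + s_{(3,2)} + s_{(3,1,1)} + s_{(2,2,1)}.
```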
We are ready to present a theorem which connects the multiplication of Schubert cycles with the multiplication of Schur functions in $\Lambda$. The connection is made with the help of a certain ideal in $\Lambda$. Given integers $0 < r < n$, let $\square$ be the partition of $r(n-r)$ with $r$ parts equal to $n-r$, and let $I$ be the ideal of $\Lambda$ generated by all $s_\lambda$ for which $\lambda \not\subseteq \square$; in other words, either $\lambda_1 > n-r$ or $\lambda$ has more than $r$ parts. It is easy to verify using the LR rule that $I$ is an ideal in $\Lambda$. The quotient $\Lambda/I$ is generated freely as a $\mathbb{Z}$-module with basis $\{ s_\lambda + I : \lambda \subseteq \square \}$. Products in $\Lambda/I$ may be computed using the LR rule, ignoring terms involving those $s_\lambda$ in $I$. The reader may now anticipate that $\Lambda/I$ and the ring of Schubert cycles are isomorphic. We can describe an isomorphism using the map which assigns to each partition $\lambda \subseteq \square$ the corresponding Schubert cycle; the empty partition corresponds to the empty sequence. By convention, we shorten $s_\lambda + I$ to $s_\lambda$ where no ambiguity exists.
The rings $\Lambda/I$ and the ring of Schubert cycles are isomorphic. An isomorphism is given by mapping each basis element $s_\lambda + I$ to the corresponding Schubert cycle, and linear extension.
Let $\theta$ be the map from $\Lambda$ to the ring of Schubert cycles defined by sending each Schur function $s_\lambda$ to the corresponding Schubert cycle when $\lambda$ indexes one, to zero otherwise, and linear extension. We need only show that $\theta$ is a ring homomorphism onto with kernel $I$. Since $\theta$ maps the basis $\{ s_\lambda \}$ onto the set of Schubert cycles, $\theta$ maps onto the whole ring. It is easy to see that the kernel of $\theta$ is $I$. Since the special Schur functions $h_k = s_{(k)}$ freely generate $\Lambda$ as a ring, $\theta$ is multiplicative if $\theta(h_j h_k) = \theta(h_j)\,\theta(h_k)$ for each pair $j, k$; this follows by comparing Pieri's formula with the corresponding multiplication formula for Schubert cycles.
The connection described in Theorem 4.9 was first noticed by Lesieur [Le (Not included in Bibliography)] in 1947.
The rings and are isomorphic. An isomorphism is given by and linear extension.
This follows from Theorem 4.5 and the fact that $\omega(s_\lambda) = s_{\lambda'}$.
This follows from Theorems 4.10 and 3.9.
The Littlewood-Richardson rule, as originally stated, requires deft combinatorial reasoning in applications. In this section we present a new formulation of the LR rule which facilitates a more geometrical type of argument.
Let $\lambda$, $\mu$, and $\nu$ be partitions, let $\ell$ be the length of $\lambda$ (with $\mu_i = 0$ and $\nu_i = 0$ for indices beyond the respective lengths), and set $c$ equal to the coefficient of $s_\lambda$ in $s_\mu s_\nu$. Then $c$ equals the number of $\ell \times \ell$ integer matrices $A = (a_{ij})$ satisfying the following system of linear inequalities:
LRA1: $\sum_i a_{ij} = \lambda_j - \mu_j$ for each $j$;
LRA2: $\sum_j a_{ij} = \nu_i$ for each $i$;
LRA3: $a_{ij} \ge 0$ for all $i$ and $j$;
LRA4: $\mu_{j+1} + \sum_{i=1}^{k+1} a_{i,j+1} \le \mu_j + \sum_{i=1}^{k} a_{ij}$ for all $j \ge 1$ and $k \ge 0$;
LRA5: $\sum_{j'=1}^{j-1} a_{ij'} \ge \sum_{j'=1}^{j} a_{i+1,j'}$ for all $i$ and $j$.
First we dispose of the case $\mu \not\subseteq \lambda$. By the LR rule, $c = 0$. This agrees with the theorem, since if $\mu_j > \lambda_j$ for some $j$, there are no integer matrices which could satisfy LRA1 and LRA3.
Next suppose $\mu \subseteq \lambda$ but the weight of $\lambda$ differs from the sum of the weights of $\mu$ and $\nu$. By the LR rule, $c = 0$. This agrees with the theorem, since there are no matrices satisfying conditions LRA2 and LRA3 for which LRA1 also holds: summing LRA1 over $j$ and LRA2 over $i$ would force the weights to agree.
Now suppose $\mu \subseteq \lambda$ and $|\lambda| = |\mu| + |\nu|$. The theorem will be verified if we can establish a one-to-one correspondence between the ways of filling the squares of $\lambda/\mu$ with $\nu_1$ 1's, $\nu_2$ 2's, etc., which conform to the LR restrictions, and the integer matrices satisfying LRA1 through LRA5. The following is such a correspondence.
Given a way of filling, let $a_{ij}$ equal the number of symbols $i$ inserted into row $j$ of $\lambda/\mu$; since the symbols within a row may be taken in non-decreasing order, these counts determine the filling. Clearly this defines a one-to-one correspondence between the set of ways of filling and the set of integer arrays satisfying LRA1, LRA2, and LRA3. It remains to show that if an arbitrary way of filling satisfies the two LR conditions, then the corresponding array satisfies LRA4 and LRA5, and conversely.
Suppose that $A = (a_{ij})$ corresponds to a way of filling which satisfies LR1 and LR2. Then $A$ must satisfy LRA4 and LRA5, as the following two arguments show.
1) Suppose that LRA4 fails to hold for $A$. Then there is some $j$ such that $\mu_{j+1} + \sum_{i \le k+1} a_{i,j+1} > \mu_j + \sum_{i \le k} a_{ij}$ for some $k$. We assume that $j$ is the minimal such index, and that $k$ is the smallest possible index corresponding to $j$.

Let $b = \mu_j + \sum_{i \le k} a_{ij}$ and $c = \mu_{j+1} + \sum_{i \le k+1} a_{i,j+1}$. By assumption, $c > b$. Now in row $j$, the placement of 1's, 2's, \ldots, $k$'s extends out to column $b$ at most; hence the square in row $j$, column $c$ of $\lambda/\mu$ contains a symbol greater than or equal to $k+1$. However, the square in row $j+1$, column $c$ contains precisely the symbol $k+1$ (because $a_{k+1,j+1}$ can not be zero, by the minimality of $j$ and $k$). But then this way of filling does not satisfy LR1 ("symbols in a given column must be strictly increasing"), a contradiction.
2) Suppose LRA5 fails to hold for $A$. Then there exist indices $i$ and $j$ such that $\sum_{j' \le j-1} a_{ij'} < \sum_{j' \le j} a_{i+1,j'}$. We may assume $j$ is the least such index, and $i$ the least possible index corresponding to $j$. In this case we must have $a_{i+1,j} > 0$. Let $r = \sum_{j' \le j-1} a_{ij'}$ and $s = \sum_{j' \le j} a_{i+1,j'}$, so that $r < s$. Now for the way of filling given by $A$, let the sequence $w_1 w_2 \cdots$ be obtained by reading the symbols from right to left in the first, second, \ldots rows. Since $a_{i+1,j} > 0$, the symbol $i+1$ occurs in row $j$; let its leftmost occurrence in row $j$ appear in the sequence as $w_t$. Then the number of occurrences of $i$ among $w_1, \ldots, w_t$ is $r$, while the number of occurrences of $i+1$ among $w_1, \ldots, w_t$ is $s$. But $r < s$ contradicts the assumption that the given way of filling satisfies LR2.
Arguments 1) and 2) may be more readily understood by working out a few examples. A reversal of the arguments shows that for any integer matrix satisfying LRA1–LRA5, the corresponding way of filling satisfies LR1 and LR2. This establishes the one-to-one correspondence and completes the proof of Theorem 4.12.
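The count in Theorem 4.12 can be checked by brute force. The sketch below enumerates small integer matrices against a system of the LRA1-LRA5 type (our formulation and indexing: $a_{ij}$ is the number of symbols $i$ assigned to row $j$); for $\lambda = (3,2,1)$, $\mu = (2,1)$, $\nu = (2,1)$ it finds 2 matrices, matching the LR filling count.

```python
from itertools import product

def lra_count(lam, mu, nu):
    """Count integer matrices a[i][j] (i = symbol, j = row, both 0-indexed)
    satisfying a system of the LRA1-LRA5 type (our formulation)."""
    n = len(lam)
    mu = tuple(mu) + (0,) * (n - len(mu))
    nu = tuple(nu) + (0,) * (n - len(nu))
    cells = [(i, j) for i in range(n) for j in range(n)]
    count = 0
    for vals in product(*[range(nu[i] + 1) for i, _ in cells]):
        a = [[0] * n for _ in range(n)]
        for (i, j), v in zip(cells, vals):
            a[i][j] = v
        ok = all(sum(a[i][j] for i in range(n)) == lam[j] - mu[j]
                 for j in range(n))                                   # LRA1
        ok = ok and all(sum(a[i]) == nu[i] for i in range(n))         # LRA2
        # LRA3 (non-negativity) holds by construction of the ranges
        ok = ok and all(mu[j + 1] + sum(a[i][j + 1] for i in range(k + 1))
                        <= mu[j] + sum(a[i][j] for i in range(k))
                        for j in range(n - 1) for k in range(n))      # LRA4
        ok = ok and all(sum(a[i][:j]) >= sum(a[i + 1][:j + 1])
                        for i in range(n - 1) for j in range(n))      # LRA5
        count += ok
    return count

print(lra_count((3, 2, 1), (2, 1), (2, 1)))  # 2
```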
An integer matrix satisfying conditions LRA1 through LRA5, with respect to partitions $\lambda$, $\mu$, and $\nu$, will be called a Littlewood-Richardson array (LRA). Theorem 4.12 shows that if an LRA exists, then the coefficient $c$ is non-zero. In order to determine more about the existence of LRA's for a given triple $(\lambda, \mu, \nu)$, we consider a slightly more general object.
Definition: An element $A$ belonging to the set of real $\ell \times \ell$ matrices will be called a Littlewood-Richardson design (LRD), with respect to $\lambda$, $\mu$, and $\nu$, if $A$ satisfies conditions LRA1-LRA5. We note:
i) If $A$ is an LRD which is integral, then $A$ is an LRA;
ii) If $A$ is an LRA, then $A$ is an integral LRD.
We now make a useful observation.
The set of Littlewood-Richardson designs, regarded as tuples $(A, \lambda, \mu, \nu)$, is a closed convex cone.
The theorem follows from the fact that the set of LRD's is defined by a system of homogeneous linear inequalities.
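The cone property can be illustrated directly: the defining constraints are linear and homogeneous in the tuple $(A, \lambda, \mu, \nu)$, so non-negative scalings (and sums) of designs are designs. A small sketch, using our formulation and indexing of LRA1-LRA5:

```python
def is_lrd(a, lam, mu, nu, tol=1e-9):
    """Check a real matrix against our formulation of LRA1-LRA5
    (a[i][j] = amount of symbol i assigned to row j)."""
    n = len(lam)
    if any(abs(sum(a[i][j] for i in range(n)) - (lam[j] - mu[j])) > tol
           for j in range(n)):
        return False                                        # LRA1
    if any(abs(sum(a[i]) - nu[i]) > tol for i in range(n)):
        return False                                        # LRA2
    if any(x < -tol for row in a for x in row):
        return False                                        # LRA3
    if any(mu[j + 1] + sum(a[i][j + 1] for i in range(k + 1))
           > mu[j] + sum(a[i][j] for i in range(k)) + tol
           for j in range(n - 1) for k in range(n)):
        return False                                        # LRA4
    if any(sum(a[i][:j]) < sum(a[i + 1][:j + 1]) - tol
           for i in range(n - 1) for j in range(n)):
        return False                                        # LRA5
    return True

A = [[1.0, 0.0], [0.0, 1.0]]       # an LRA for lam=(2,1), mu=(1,0), nu=(1,1)
lam, mu, nu = (2, 1), (1, 0), (1, 1)
print(is_lrd(A, lam, mu, nu))  # True

c = 0.75                           # scale the whole tuple by any c >= 0
print(is_lrd([[c * x for x in row] for row in A],
             [c * v for v in lam], [c * v for v in mu], [c * v for v in nu]))  # True
```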
The following theorem is fundamental.
Let $\lambda$, $\mu$, and $\nu$ be partitions of length at most $\ell$. If there exists an LRD with respect to $\lambda$, $\mu$, and $\nu$, then there exists an LRA.
Let $A$ be an LRD; we show how to successively "perturb" $A$ so that at each step we have an LRD, and the final perturbed matrix is integral.
Condition LRA5 guarantees that $A$ is upper triangular. This makes the result trivial if $\ell = 1$ or $\ell = 2$. For $\ell = 3$, the only non-integral elements of $A$ must lie in the $2 \times 2$ submatrix $\begin{pmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{pmatrix}$. The presence of fractions means that there is some "play" in the inequalities LRA4 and LRA5. If we "rotate" the submatrix (i.e. add to $a_{12}$, subtract from $a_{13}$, add to $a_{23}$, and subtract from $a_{22}$) the right amount, we obtain an integral matrix, and LRA1-LRA5 still hold.
Now consider the case $\ell = 4$. Since $A$ is upper triangular, $a_{11}$ and $a_{44}$ are integral. We perturb $A$ by rotating a suitable $2 \times 2$ submatrix, as above, by an amount chosen as large as the inequalities LRA4 and LRA5 allow. As a result, either one more entry becomes integral or one more entry becomes zero; in either case, repeating the process reduces to a smaller problem and eventually yields an integral matrix.
Theorem 4.14 removes a difficult constraint in verifying the existence of an LRA; i.e. the existence of any real matrix satisfying conditions LRA1-LRA5 is enough to guarantee the existence of an integral matrix satisfying those conditions. Relaxing this constraint allows the use of standard arguments from convexity theory.
We can use several of the preceding results to formulate new characterizations for the set introduced in section II. Theorem 3.9 characterizes that set by the non-vanishing of certain products of Schubert cycles; this definition is thus based on a multiplication statement in the ring of Schubert cycles. We know from Theorem 4.9 that this ring is isomorphic to $\Lambda/I$. Using the isomorphism, we can translate the multiplication condition into an equivalent condition in $\Lambda/I$. Application of the Littlewood-Richardson rule then gives information about elements of the set.
Suppose and Then
Easy consequence of Theorem 3.7.
Recall the definition of the map which maps the set of partitions onto the set We can define a one-sided inverse for which maps elements of into the set of partitions We will call this one-sided inverse and define it by By convention,
Let and Then
The isomorphism from to given by has an inverse given by We are given that in Applying to each side yields or In the ring we have Mapping this equation into yields or Comparing (*) to (**) shows that
The theorem follows in a straightforward manner from lemmas 4.15 and 4.16 and Theorem 4.12.
We now have two methods of testing whether a product known to be a multiple of is non-zero:
1) Compute the product (using the determinantal formula and Pieri's formula) in the ring of Schubert cycles;
2) Determine (using the LR rule) whether an LRA exists.
Note that method 1 yields more information than we seek: it identifies the multiple exactly. Method 2 can also be used to find the multiple exactly, but with less work it can simply tell whether or not the multiple is non-zero.
We introduce some notation relevant to the next theorem: For and define by
("generalized pushing lemma") If and then
The proof is done by translating from sequences to partitions. First assume both products of Schubert cycles are non-zero multiples of the appropriate top cycle in their respective rings. From Theorem 4.17, there exists an LRA for the first product (which we will denote by $A$) and an LRA for the second (which we will denote by $B$). From Theorem 4.13, $A + B$ is an LRD; since it is integral, it is an LRA.
Let Again by Theorem 4.17, we can show that if there exists a and this will establish the theorem. We claim that is a The verification amounts to the following exercise: This identity, together with the fact that easily establishes the claim.
The theorem is now established in the case where the products of Schubert cycles are multiples of If this is not the case, then by Theorem 3.10 we can find such that the products and are non-zero multiples of Hence by our previous result. Since Theorem 2.11 guarantees the desired result.
In the next section, we will show that Theorem 4.18 is indeed a generalization of Horn's original "pushing" lemma.
This is an excerpt from Steven Andrew Johnson's 1979 dissertation The Schubert Calculus and Eigenvalue Inequalities for Sums of Hermitian Matrices.