Unipotent Hecke algebras: the structure, representation theory, and combinatorics
Arun Ram
Department of Mathematics and Statistics
University of Melbourne
Parkville, VIC 3010 Australia
aram@unimelb.edu.au
Last updated: 26 March 2015
This is an excerpt of the PhD thesis Unipotent Hecke algebras: the structure, representation theory, and combinatorics by F. Nathaniel Edgar Thiem.
A basis with multiplication in the case $G = GL_n(\mathbb{F}_q)$
Unipotent Hecke algebras
The group $GL_n(\mathbb{F}_q)$
Let
$$G = GL_n(\mathbb{F}_q)$$
be the general linear group over the finite field $\mathbb{F}_q$ with $q$ elements. Define the subgroups
$$T = \{\text{diagonal matrices}\}, \qquad U = \{\text{unipotent upper triangular matrices}\}, \qquad N = \{\text{monomial matrices}\},$$
where a monomial matrix is a matrix with exactly one nonzero entry in each row and column, and a unipotent upper triangular matrix has ones on the diagonal and zeroes below it. If necessary, specify the size of the matrices by a subscript, such as $G_n$, $T_n$, $U_n$, etc. If $a$ and $b$ are matrices, then let
$$a \oplus b$$
be the block diagonal matrix $\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix}$.
Let
$$x_{ij}(t) = \mathrm{Id}_n + t\, e_{ij}$$
be the matrix with $t$ in position $(i,j)$, ones on the diagonal and zeroes elsewhere; write $x_i(t) = x_{i,i+1}(t)$. Let
$$h_{\varepsilon_i}(t)$$
denote the diagonal matrix with $t$ in the $i$th slot and ones elsewhere, and let
$$s_{ij}$$
be the identity matrix with the $i$th and $j$th columns interchanged. That is,
$$s_i = s_{i,i+1} = \mathrm{Id}_{i-1} \oplus \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \oplus \mathrm{Id}_{n-i-1},$$
where $\mathrm{Id}_k$ is the $k \times k$ identity matrix. Then $G$ is generated by the elements $x_{ij}(t)$, $h_{\varepsilon_i}(t)$ and $s_i$.
The Chevalley group relations for these generators are (see also Section 3.3.1)
$$\begin{aligned}
x_{ij}(a)\, x_{ij}(b) &= x_{ij}(a+b), \\
h_{\varepsilon_i}(a)\, h_{\varepsilon_i}(b) &= h_{\varepsilon_i}(ab), \qquad h_{\varepsilon_i}(a)\, h_{\varepsilon_j}(b) = h_{\varepsilon_j}(b)\, h_{\varepsilon_i}(a), \\
h_{\varepsilon_i}(t)\, x_{jk}(a) &= x_{jk}\bigl(t^{\delta_{ij} - \delta_{ik}}\, a\bigr)\, h_{\varepsilon_i}(t), \\
s_{ij}\, x_{kl}(a)\, s_{ij} &= x_{\sigma(k)\sigma(l)}(a), \qquad s_{ij}\, h_{\varepsilon_k}(t)\, s_{ij} = h_{\varepsilon_{\sigma(k)}}(t), \qquad \sigma = (i\,j),
\end{aligned}$$
where $\delta_{ij}$ is the Kronecker delta.
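These relations can be spot-checked by direct matrix arithmetic over $\mathbb{F}_q$. The sketch below (illustrative values $q = 5$, $n = 3$; the helper names are not from the text) verifies the additivity relation for $x_{ij}$ and the conjugation of $x_{jk}$ by a diagonal $h_{\varepsilon_i}$:

```python
# Spot-check two standard Chevalley relations in GL_3(F_5).
# q, n and the chosen indices are illustrative assumptions.
q = 5

def mat_mul(A, B):
    """Multiply two square matrices, entries reduced mod q."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % q
             for j in range(n)] for i in range(n)]

def identity(n=3):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def x(i, j, t, n=3):
    """x_ij(t): ones on the diagonal and t in position (i, j), 1-indexed."""
    M = identity(n)
    M[i - 1][j - 1] = t % q
    return M

def h(i, t, n=3):
    """h_{eps_i}(t): diagonal matrix with t in the i-th slot, ones elsewhere."""
    M = identity(n)
    M[i - 1][i - 1] = t % q
    return M

# x_ij(a) x_ij(b) = x_ij(a + b):
assert mat_mul(x(1, 2, 3), x(1, 2, 4)) == x(1, 2, 3 + 4)

# h_{eps_i}(t) x_jk(a) h_{eps_i}(t)^{-1} = x_jk(t^(d_ij - d_ik) a);
# here i = j = 1, k = 2, so the exponent is +1:
t, t_inv = 2, 3                      # 2 * 3 = 6 = 1 (mod 5)
lhs = mat_mul(mat_mul(h(1, t), x(1, 2, 4)), h(1, t_inv))
assert lhs == x(1, 2, t * 4)
```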
A pictorial version of $GL_n(\mathbb{F}_q)$
For the results that follow, it will be useful to view these elements of $G$ as braid-like diagrams instead of matrices. Consider the following depictions of elements by diagrams with vertices, strands between the vertices, and various objects that slide around on the strands. Each diagram has two rows of $n$ vertices. Multiplication in $G$ corresponds to the concatenation of two diagrams. In the following Chevalley relations, curved strands indicate longer strands, so for example (UN1) indicates that the decorations slide along the strands they are on (no matter how long). The Chevalley relations are
The unipotent Hecke algebra $H_\mu$
Fix a nontrivial group homomorphism $\psi \colon \mathbb{F}_q^{+} \to \mathbb{C}^{*}$ and a map $\mu \colon \{1, 2, \ldots, n-1\} \to \mathbb{F}_q$. Then
$$\begin{array}{rccc} \psi_\mu \colon & U & \longrightarrow & \mathbb{C}^{*} \\ & u & \longmapsto & \psi(\mu_1 u_{12} + \mu_2 u_{23} + \cdots + \mu_{n-1} u_{n-1,n}) \end{array}$$
is a group homomorphism (a linear character of $U$); note that $\psi_\mu(x_i(t)) = \psi(\mu_i t)$ for all $1 \leq i \leq n-1$ and $t \in \mathbb{F}_q$. Write
$$e_\mu = \frac{1}{|U|} \sum_{u \in U} \psi_\mu(u^{-1})\, u.$$
The unipotent Hecke algebra of the triple $(G, U, \psi_\mu)$ is
$$H_\mu = e_\mu\, \mathbb{C}G\, e_\mu \cong \operatorname{End}_{\mathbb{C}G}\bigl(\operatorname{Ind}_U^G(\psi_\mu)\bigr).$$
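Writing $e_\mu = \tfrac{1}{|U|} \sum_{u \in U} \psi_\mu(u^{-1})\, u$ for the idempotent attached to a linear character (a standard construction; the notation here is an assumption), a short check that $e_\mu^2 = e_\mu$, using only that $\psi_\mu$ is a homomorphism:

```latex
e_\mu^2
  = \frac{1}{|U|^2} \sum_{u, v \in U} \psi_\mu(u^{-1})\, \psi_\mu(v^{-1})\, uv
  = \frac{1}{|U|^2} \sum_{w \in U} \Bigl( \sum_{u \in U}
        \psi_\mu(u^{-1})\, \psi_\mu(w^{-1} u) \Bigr)\, w
  = \frac{1}{|U|} \sum_{w \in U} \psi_\mu(w^{-1})\, w
  = e_\mu,
```

since $\psi_\mu(u^{-1})\, \psi_\mu(w^{-1} u) = \psi_\mu(w^{-1})$ (substitute $v = u^{-1} w$, so $v^{-1} = w^{-1} u$). This is why $e_\mu \mathbb{C}G e_\mu$ is an algebra with identity $e_\mu$.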
The type of a linear character
is the composition such that
Let $\nu$ be a composition, and suppose we are given two type $\nu$ linear characters of $U$. Then
Proof.
Note that for
Thus, normalizes
and acts on linear characters of by
This action preserves the type of the character. The map
is an isomorphism, so
It therefore suffices to prove that acts transitively on the linear characters of type
By Proposition 3.2 (c), the number of linear characters of the given type is
Therefore acts transitively on the type linear characters of
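The type-preservation used in this proof can be made explicit with the generators above: conjugating by a diagonal element rescales the parameters of the character by nonzero scalars. For $t = h_{\varepsilon_1}(t_1) \cdots h_{\varepsilon_n}(t_n) \in T$ (with the left/right conjugation convention an assumption here):

```latex
t^{-1}\, x_i(a)\, t = x_i\bigl(t_i^{-1} t_{i+1}\, a\bigr)
\quad\Longrightarrow\quad
(t \cdot \psi_\mu)\bigl(x_i(a)\bigr)
  = \psi_\mu\bigl(t^{-1} x_i(a)\, t\bigr)
  = \psi\bigl(t_i^{-1} t_{i+1}\, \mu_i\, a\bigr),
```

so $t \cdot \psi_\mu = \psi_{\mu'}$ with $\mu_i' = t_i^{-1} t_{i+1}\, \mu_i$. Since $t_i^{-1} t_{i+1} \in \mathbb{F}_q^{*}$, the set of indices with $\mu_i = 0$ is unchanged, and hence so is the type.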
Proposition 4.1 implies that given any fixed character
it suffices to specify the type of the linear character
Note that the map (given by example)
is a bijection. In the following sections identify the compositions
with the map
using this bijection.
The classical examples of unipotent Hecke algebras are the Yokonuma algebra
[Yok1969-2] and the Gelfand-Graev Hecke algebra [Ste1967].
An indexing for the standard basis of $H_\mu$
The group $G$ has a double-coset decomposition
$$G = \bigsqcup_{v \in N} U v U,$$
so if
$$N_\mu = \{\, v \in N \mid e_\mu v e_\mu \neq 0 \,\},$$
then the isomorphism in (4.9) implies that the set
$$\{\, e_\mu v e_\mu \mid v \in N_\mu \,\}$$
is a basis for the unipotent Hecke algebra $H_\mu = e_\mu \mathbb{C}G e_\mu$ [CRe1981, Prop. 11.30].
Suppose $a$ is an $n \times n$ matrix with polynomial entries. Let $d_{ij}$ be the degree of the polynomial in position $(i,j)$. Define the degree row sums and the degree column sums of $a$ to be the compositions whose parts are the sums of the $d_{ij}$ along the rows and down the columns of $a$, respectively.
Let
For example,
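As an illustration, with each polynomial represented by its coefficient list (lowest degree first) — a representation and helper names chosen here for the sketch, not the thesis's notation — the degree row and column sums can be computed directly:

```python
# Degree row sums and degree column sums of a matrix of polynomials.
# Each polynomial is a coefficient list, lowest degree first.

def degree(poly):
    """Degree of a nonzero polynomial given as a coefficient list."""
    for k in range(len(poly) - 1, -1, -1):
        if poly[k] != 0:
            return k
    raise ValueError("zero polynomial has no degree")

def degree_sums(a):
    """Return (row sums, column sums) of the entry degrees of a."""
    degs = [[degree(p) for p in row] for row in a]
    row_sums = [sum(row) for row in degs]
    col_sums = [sum(col) for col in zip(*degs)]
    return row_sums, col_sums

# Example: the matrix [[X + 1, 1], [2, X^2 + X]]
a = [[[1, 1], [1]],
     [[2], [0, 1, 1]]]
assert degree_sums(a) == ([1, 2], [1, 2])
```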
Suppose
is a matrix with
Let
where
and by convention is the empty strand (not a strand). For example,
is
For each define a matrix
pictorially by
where is the
entry of and the top vertex associated with
goes to the bottom vertex associated with Note that if
then both its strand and corresponding vertices vanish.
The fact that guarantees that we are left with two rows of exactly vertices.
For example, if is as in then
is
Viewed as a matrix,
Let be as in (4.11) and be as in (4.12). The map
given by (4.14) is a bijection.
Remark. When this theorem says that the map
of (4.13) is a bijection between and
Proof.
Using the remark following the theorem, it is straightforward to reconstruct from
Therefore the map is invertible, and it suffices to show
(a)
the map is well-defined
(b)
the map is surjective.
To show (a) and (b), we investigate the diagrams of elements in
Suppose Let
so that
are the labels above the vertices of and
is the permutation determined by the diagram (ignoring the labels). By (4.8),
Recall that if and only if
implies
That is, if and only if for all
such that
Compare (A) and (B) to obtain that if and only if for all
such that
(i)
If and
then
(ii)
If and
then
(iii)
If
then if and only if
(iv)
If
and
then
We can visualize the implications of the conditions in the following way. Partition the
vertices of by For example,
partitions
according to
Suppose the top vertex is not next to a dotted line and the
bottom vertex is immediately to the left of a dotted line.
Then condition (i) implies that
so
Similarly, condition (ii) implies
and conditions (iii) and (iv) imply
In the case condition (III) implies that every
is of the form
where
and
is as in (4.13). In fact, this
observation proves that the map
is a bijection between and
(mentioned in the remark).
Note that since the diagrams satisfy (I), (II) and (III),
proving (a). On the other hand, (I), (II) and (III) imply that each
must be of the form for some
proving (b).
Remark. This bijection will prove useful in developing a generalization of the RSK correspondence in Chapter 5.
Let
and for let
Let Then
Proof.
Theorem 4.2 gives
We can obtain a matrix
from a matrix by selecting for each
a monic polynomial with nonzero constant term and degree
Conversely, every
arises uniquely out of such a construction.
For a fixed the total number of ways to choose appropriate
polynomials is
The result follows by summing over all
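The count underlying this argument is that over $\mathbb{F}_q$ there are $(q-1)\,q^{d-1}$ monic polynomials of degree $d \geq 1$ with nonzero constant term, and exactly one of degree $0$. This can be checked by brute force (the values of $q$ and $d$ below are illustrative):

```python
# Count monic polynomials of degree d over F_q with nonzero constant
# term, and compare with the closed form (q - 1) * q**(d - 1).
from itertools import product

def count_monic_nonzero_const(q, d):
    """Brute-force count over all coefficient tuples (c_0, ..., c_{d-1});
    the leading coefficient is fixed to 1 (monic)."""
    count = 0
    for coeffs in product(range(q), repeat=d):
        if d == 0 or coeffs[0] != 0:   # c_0 is the constant term
            count += 1
    return count

assert count_monic_nonzero_const(5, 3) == (5 - 1) * 5 ** 2
assert count_monic_nonzero_const(3, 1) == 3 - 1
assert count_monic_nonzero_const(7, 0) == 1
```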
Multiplication in
This section examines the implications of the unipotent Hecke algebra multiplication relations from Chapter 3 in the case
The final goal is the algorithm given in Theorem 4.5.
Pictorial versions of
Note that the map
is a surjective group homomorphism. Since adjust the decomposition of (3.4) as follows. Let
with
for minimal. Then has a unique decomposition
For write
Fix a composition The decomposition
implies
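Independently of the pictorial conventions, the underlying matrix fact is that a monomial matrix factors uniquely as a permutation matrix times a diagonal matrix. A minimal sketch (the list-of-lists representation and helper name are illustrative):

```python
# Factor a monomial matrix v as v = pi * h with pi a permutation
# matrix and h diagonal; the factorization is unique.

def decompose_monomial(v):
    n = len(v)
    perm = [None] * n   # perm[i] = column of the nonzero entry in row i
    diag = [None] * n
    for i in range(n):
        nonzero = [(j, t) for j, t in enumerate(v[i]) if t != 0]
        assert len(nonzero) == 1, "matrix is not monomial"
        j, t = nonzero[0]
        perm[i] = j
        diag[j] = t     # since (pi * h)[i][j] = h[j][j] when perm[i] == j
    pi = [[int(perm[i] == j) for j in range(n)] for i in range(n)]
    h = [[diag[i] if i == j else 0 for j in range(n)] for i in range(n)]
    return pi, h

v = [[0, 3, 0], [0, 0, 2], [5, 0, 0]]
pi, h = decompose_monomial(v)
prod = [[sum(pi[i][k] * h[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == v    # v = pi * h
```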
Pictorially, let
Examples:1.
If
decomposes according to
with
then
and
2.
If then (4.17) implies
The elements also interact with
and as follows (see also Section 3.3.1)
or pictorially,
and for
Suppose
with
Then using (4.17), (E3), (E1) and (E2), we can rewrite as
where
is given by
for
if the crossing in crosses the strands coming from the
and top vertices.
Example (continued). Suppose
decomposes according to
and
(as in Example 1 above). Consider the ordering
or
Then
where by (4.17),
Therefore, by renormalizing
where
Pictorially, by sliding the down along
the strands until they get stuck, this computation gives
where
(as in (4.23)), since
etc.
Relations for multiplying basis elements.
Let
decompose according to a minimal expression in as in (4.16). Let and use
(N3) and (N4) to write
for some (see (4.7)). Then use (4.22) to write
(This form corresponds to of Corollary 3.9).
Example (continued). If is as in (4.24) and
then
Consider the crossing in (4.25) corresponding to
There are two possibilities.
Case 1 the strands that cross at do not cross again as they go up to the top of the diagram
Case 2 the strands that cross at cross once on the way up to the top of the diagram
In the first case,
so (UN1), (UN2) and (E4) imply
where
In the second case,
Use (UN3) and (N1) to split the sum into two parts corresponding to and
where
Now use (UN1), (UN2), (U2), (U1) and (E4) on the second sum to get
where
and is defined by
Remarks:(a)
We could have applied these steps for any and so we can iterate the process with each sum.
(b)
The most complex step in these computations is determining The following section
will develop an efficient algorithm for computing the right-hand side of
Computing via painting, paths and sinks.
Painting algorithm
Suppose
decomposes according to
Paint flows down strands (by gravity). Each step is illustrated
with the example
decomposed according to
(1)
Paint the left [respectively right] strand exiting red [blue] all the way to the bottom
of the diagram.
(2)
For each crossing that the red [blue] strand passes through, paint the right [left] strand (if possible) red [blue] until that strand either reaches
the bottom or crosses the blue [red] strand of (1).
Let
Sinks and paths. The diagram has a crossed sink at
if is a crossing between a red strand
and a blue one, or
Note that since is decomposed according to a minimal expression in
there will be no crossings of the form
The diagram has a bottom sink at
if a red strand enters bottom vertex and a blue strand enters the
bottom vertex, or
A red [respectively blue] path from a sink (either crossed or bottom) in
is an increasing sequence
such that in
(a)
is directly connected (no intervening crossings) to
by a red [blue] strand,
(b)
if is a crossed sink, then
(b')
if is a bottom sink, then
•
in a red path, the bottom vertex connects to the crossing
with a red strand.
•
in a blue path, the
bottom vertex connects to the crossing with a blue strand.
Let
The weight of a path is
Each sink in the diagram (either crossed or bottom) has an associated polynomial, given by
Example (continued). If
decomposes according to
(as in (4.28)), then
with
and
The corresponding polynomial is
Let
and be as in (4.27) and
suppose is painted as above. Then
Proof.
In the painting,
and
Substitutions due to crossed sinks correspond to the normalizations in relation (U2), and the sum over bottom sinks comes from applications of relation (E4).
Example (continued). Recall
Then at
and
The only bottom sink is at Therefore,
(for example, was computed in (4.33)).
A multiplication algorithm
(The algorithm). Let
and An algorithm for multiplying
and is
(1)
Decompose
according to some minimal expression in (as in (4.16)).
(2)
Put
into the form specified by (4.25), with
(3)
Complete the following
(a)
If
then apply relation (4.26).
(b)
If
then apply relation (4.27), using
and Lemma 4.4 to compute
(4)
If then reapply (3) to each sum with
and with
(a)
after using (3a) or using (3b), in the first sum,
(b)
after using (3b), in the second sum.
(5)
Set all diagrams not in to zero.
Sample computation. Suppose and
for all (i.e., the Gelfand-Graev case). Then
Suppose
1. Theorem 4.5 (1): Let
decompose according to
with
2. Theorem 4.5 (2): By (4.25)
with
(so and
(as in (4.23)).
3. Theorem 4.5 (3b): Since
paint
to get
(as in (4.29)),
Now apply (4.27),
where
and by Lemma 4.4,
4. Theorem 4.5 (4): Set with
in the first sum and in the second sum.
5. Theorem 4.5 (3a) (3b): In the first sum,
so paint
to get
In the second sum,
so apply (4.26),
where
Now apply (4.27) to the first sum,
where
and
6. Theorem 4.5 (4): Set with
in the first sum, in the second sum, and
in the third sum.
7. Theorem 4.5 (3a) (3a) (3b): In the first sum
so apply (4.26); in the second sum
so apply (4.26); in the third sum,
so paint to get
where
and
Now apply (4.27) to the third sum
where and
8. Theorem 4.5 (5): The first sum contains no elements of so set it to zero.
The second sum contains elements of when
so set The third sum contains
elements of when
so set
All the terms in the fourth sum are basis elements.