Algebras

Let $R$ be an integral domain and let $A_R$ be an algebra over $R$ with $R$-basis $\{b_1, \dots, b_d\}$, so that
$$A_R = R\text{-span}\{b_1, \dots, b_d\} \qquad\text{and}\qquad b_i b_j = \sum_{k=1}^d r_{ij}^k b_k, \quad\text{with } r_{ij}^k \in R,$$
making $A_R$ a ring with identity. Let $\mathbb{F}$ be the field of fractions of $R$, let $\overline{\mathbb{F}}$ be the algebraic closure of $\mathbb{F}$, and set
$$A = \overline{\mathbb{F}} \otimes_R A_R = \overline{\mathbb{F}}\text{-span}\{b_1, \dots, b_d\},$$
with multiplication determined by the multiplication in $A_R$. Then $A$ is an algebra over $\overline{\mathbb{F}}$.
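As a concrete illustration (my example, not from the text), the structure constants $r_{ij}^k$ can be stored as a $d \times d \times d$ array; here is a small NumPy sketch for the group algebra $R[\mathbb{Z}/3\mathbb{Z}]$ with basis $\{1, g, g^2\}$:

```python
import numpy as np

# Hypothetical example: structure constants r[i][j][k] for the group
# algebra R[Z/3] with basis {b_1, b_2, b_3} = {1, g, g^2}, so that
# b_i b_j = sum_k r[i][j][k] b_k.
d = 3
r = np.zeros((d, d, d), dtype=int)
for i in range(d):
    for j in range(d):
        r[i][j][(i + j) % d] = 1  # b_i b_j = b_{(i+j) mod 3}

def mult(x, y):
    # Multiply two elements given by coordinate vectors in the basis.
    return np.einsum('i,j,ijk->k', x, y, r)

one = np.array([1, 0, 0])   # the identity b_1 = 1
g   = np.array([0, 1, 0])
print(mult(g, mult(g, g)))  # g^3 = 1, i.e. [1 0 0]
```

Any multiplication in the algebra reduces to this contraction of coordinates against the structure-constant tensor.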

A trace on $A$ is a linear map $\vec{t}\colon A \to \overline{\mathbb{F}}$ such that
$$\vec{t}(a_1 a_2) = \vec{t}(a_2 a_1), \qquad\text{for all } a_1, a_2 \in A.$$
A trace $\vec{t}$ on $A$ is nondegenerate if for each $b \in A$, $b \ne 0$, there is an $a \in A$ such that $\vec{t}(ba) \ne 0$.
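A standard example (my illustration, not from the text) is the ordinary matrix trace on $A = M_2(\mathbb{F})$, which is a nondegenerate trace in this sense:

```python
import numpy as np

# Sketch: the matrix trace on A = M_2 satisfies t(a1 a2) = t(a2 a1).
rng = np.random.default_rng(0)
a1, a2 = rng.random((2, 2)), rng.random((2, 2))
assert np.isclose(np.trace(a1 @ a2), np.trace(a2 @ a1))

# Nondegeneracy: for a matrix unit b = E_ij, take a = E_ji; then
# t(ba) = t(E_ii) = 1 != 0.
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]  # E(i,j) has a 1 in entry (i,j)
print(np.trace(E(0, 1) @ E(1, 0)))  # 1.0
```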

Let $A$ be a finite dimensional algebra over a field $\mathbb{F}$, and let $\vec{t}$ be a trace on $A$. Define a symmetric bilinear form $\langle\,,\,\rangle\colon A \times A \to \mathbb{F}$ on $A$ by $\langle a_1, a_2\rangle = \vec{t}(a_1 a_2)$, for all $a_1, a_2 \in A$. Let $B$ be a basis of $A$ and let $G = (\langle b, b'\rangle)_{b, b' \in B}$ be the matrix of the form $\langle\,,\,\rangle$ with respect to $B$. The following are then equivalent.

1. The trace $\vec{t}$ is nondegenerate.
2. $\det G \ne 0$.
3. The dual basis ${B}^{*}$ to the basis $B$ with respect to the form $⟨\phantom{\rule{.5em}{0ex}},\phantom{\rule{.5em}{0ex}}⟩$ exists.

Proof. (2) $\Leftrightarrow$ (1): The trace $\vec{t}$ is degenerate if and only if there is an element $a \in A$, $a \ne 0$, such that $\vec{t}(ac) = 0$ for all $c \in B$. If $a_b \in \mathbb{F}$ are such that
$$a = \sum_{b \in B} a_b b, \qquad\text{then}\qquad 0 = \langle a, c\rangle = \sum_{b \in B} a_b \langle b, c\rangle$$
for all $c \in B$. So such an $a$ exists if and only if the rows of $G$ are linearly dependent, i.e. if and only if $G$ is not invertible.

(3) $\Leftrightarrow$ (2): Let $B^* = \{b^* \mid b \in B\}$ be the dual basis to $B$ with respect to $\langle\,,\,\rangle$ and let $P$ be the change of basis matrix from $B$ to $B^*$. Then
$$d^* = \sum_{b \in B} P_{db}\, b, \qquad\text{and}\qquad \delta_{cd} = \langle c, d^*\rangle = \sum_{b \in B} P_{db} \langle c, b\rangle = (G P^t)_{cd}.$$
So $P^t$, the transpose of $P$, is the inverse of the matrix $G$. So the dual basis to $B$ exists if and only if $G$ is invertible, i.e. if and only if $\det G \ne 0$. $\square$
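A small numerical check of the equivalence (my example, not from the text), again with $A = M_2$, the matrix-unit basis $B = \{E_{11}, E_{12}, E_{21}, E_{22}\}$, and the matrix trace. The dual basis is computed from the proof's relation $P^t = G^{-1}$:

```python
import numpy as np

# Gram matrix G = (<b, b'>) for A = M_2 with the matrix-unit basis
# and <a1, a2> = trace(a1 a2).
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]
B = [E(i, j) for i in range(2) for j in range(2)]

G = np.array([[np.trace(b @ c) for c in B] for b in B])
print(np.linalg.det(G))  # nonzero, so the trace is nondegenerate

# Dual basis, following the proof: P^t = G^{-1}, so d* = sum_b (G^{-1})_{bd} b.
Ginv = np.linalg.inv(G)
Bstar = [sum(Ginv[b, d] * B[b] for b in range(4)) for d in range(4)]

# Check <b_c, b_d*> = delta_{cd}:
pairing = np.array([[np.trace(B[c] @ Bstar[d]) for d in range(4)] for c in range(4)])
assert np.allclose(pairing, np.eye(4))
```

Here the dual basis turns out to be $E_{ij}^* = E_{ji}$, since $\vec{t}(E_{ij} E_{kl}) = \delta_{jk}\delta_{il}$.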

Let $A$ be an algebra and let $\vec{t}$ be a nondegenerate trace on $A$. Define a symmetric bilinear form $\langle\,,\,\rangle\colon A \times A \to \overline{\mathbb{F}}$ on $A$ by $\langle a_1, a_2\rangle = \vec{t}(a_1 a_2)$, for all $a_1, a_2 \in A$. Let $B$ be a basis of $A$ and let $B^*$ be the dual basis to $B$ with respect to $\langle\,,\,\rangle$.

1. Let $a \in A$. Then
$$[a] = \sum_{b \in B} b\, a\, b^*$$
is an element of the center $Z(A)$ of $A$, and $[a]$ does not depend on the choice of the basis $B$.
2. Let $M$ and $N$ be $A$-modules, let $\phi \in \mathrm{Hom}_{\overline{\mathbb{F}}}(M, N)$, and define
$$[\phi] = \sum_{b \in B} b\, \phi\, b^*.$$
Then $[\phi] \in \mathrm{Hom}_A(M, N)$ and $[\phi]$ does not depend on the choice of the basis $B$.
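A quick numerical sanity check of part 2 (my illustration, not from the text): take $A = M_2$ acting on $M = N = \mathbb{F}^2$, use the matrix-unit basis with dual $E_{ij}^* = E_{ji}$, and average an arbitrary linear map $\phi$:

```python
import numpy as np

# Sketch: averaging phi into an A-module map [phi] = sum_b b phi b*,
# for A = M_2 acting on F^2, with the matrix-unit basis and its dual.
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]
pairs = [(E(i, j), E(j, i)) for i in range(2) for j in range(2)]  # (b, b*)

phi = np.array([[1.0, 5.0], [2.0, 3.0]])    # not an A-module map itself
avg = sum(b @ phi @ bs for b, bs in pairs)  # [phi]

# [phi] commutes with the A-action: [phi](a v) = a ([phi] v) for all a.
a = np.array([[2.0, 1.0], [0.0, 3.0]])
assert np.allclose(avg @ a, a @ avg)
```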

Proof. (1): Let $c \in A$. Then
$$c[a] = \sum_{b \in B} c\, b\, a\, b^* = \sum_{b \in B} \sum_{d \in B} \langle cb, d^*\rangle\, d\, a\, b^* = \sum_{d \in B} d\, a \sum_{b \in B} \langle d^* c, b\rangle\, b^* = \sum_{d \in B} d\, a\, d^* c = [a]\, c,$$
since $\langle cb, d^*\rangle = \vec{t}(cbd^*) = \vec{t}(d^*cb) = \langle d^*c, b\rangle$. So $[a] \in Z(A)$.

Let $D$ be another basis of $A$ and let $D^*$ be the dual basis to $D$ with respect to $\langle\,,\,\rangle$. Let $P = (P_{db})$ be the transition matrix from $D$ to $B$ and let $P^{-1}$ be the inverse of $P$. Then
$$d = \sum_{b \in B} P_{db}\, b \qquad\text{and}\qquad d^* = \sum_{\tilde{b} \in B} (P^{-1})_{\tilde{b}d}\, \tilde{b}^*,$$
since
$$\langle d, \tilde{d}^*\rangle = \sum_{b \in B} P_{db} \sum_{\tilde{b} \in B} (P^{-1})_{\tilde{b}\tilde{d}} \langle b, \tilde{b}^*\rangle = \sum_{b, \tilde{b} \in B} P_{db} (P^{-1})_{\tilde{b}\tilde{d}}\, \delta_{b\tilde{b}} = (PP^{-1})_{d\tilde{d}} = \delta_{d\tilde{d}}.$$
So
$$\sum_{d \in D} d\, a\, d^* = \sum_{d \in D} \sum_{b \in B} P_{db}\, b\, a \sum_{\tilde{b} \in B} (P^{-1})_{\tilde{b}d}\, \tilde{b}^* = \sum_{b, \tilde{b} \in B} \Big(\sum_{d \in D} (P^{-1})_{\tilde{b}d} P_{db}\Big)\, b\, a\, \tilde{b}^* = \sum_{b, \tilde{b} \in B} \delta_{\tilde{b}b}\, b\, a\, \tilde{b}^* = \sum_{b \in B} b\, a\, b^*.$$
So $[a]$ does not depend on the choice of the basis $B$.

(2): The proof of part (2) is the same as the proof of part (1) with $a$ replaced by $\phi$. $\square$
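For part 1, the same $M_2$ example (mine, not from the text) makes the centrality concrete: with the matrix-unit basis and its dual $E_{ij}^* = E_{ji}$, the element $[a] = \sum_{i,j} E_{ij}\, a\, E_{ji}$ collapses to a scalar matrix, which is visibly central:

```python
import numpy as np

# Sketch: [a] = sum_b b a b* for A = M_2, with B the matrix units
# and B* = {E_ji} their dual under <a1, a2> = trace(a1 a2).
E = lambda i, j: np.eye(2)[:, [i]] @ np.eye(2)[[j], :]
B     = [E(i, j) for i in range(2) for j in range(2)]
Bstar = [E(j, i) for i in range(2) for j in range(2)]  # dual to B

a = np.array([[1.0, 2.0], [3.0, 4.0]])
bracket_a = sum(b @ a @ bs for b, bs in zip(B, Bstar))
print(bracket_a)  # trace(a) times the identity, an element of Z(M_2)

# [a] commutes with every c in A:
c = np.array([[0.0, 1.0], [5.0, 2.0]])
assert np.allclose(c @ bracket_a, bracket_a @ c)
```

Indeed $\sum_{i,j} E_{ij}\, a\, E_{ji} = \vec{t}(a)\, I$, consistent with $Z(M_2)$ consisting of the scalar matrices.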
