## Artin groups and Coxeter groups

Last update: 26 January 2014

## The Division Algorithm

Let $U,$ $V$ and $W$ be positive words. Say $U$ divides $W$ (on the left) if $W\equiv UV$ (if working in ${F}^{+}$) or $W⩦UV$ (if working in ${G}_{M}^{+}$) for some positive word $V,$ and write $U|W$ (interpreted in the context of ${F}^{+}$ or ${G}_{M}^{+}\text{).}$

We present an algorithm which is used later in the theory of divisibility in ${G}_{M}^{+}\text{.}$ For example the algorithm can be used to decide whether a given letter divides a positive word, and to determine the smallest common multiple of a letter and a word if it exists.
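As a concrete illustration (not part of the original text), left divisibility in ${G}_{M}^{+}$ can be tested by brute force for short words: enumerate the positive equivalence class of $W$ by applying the defining relations ${⟨ab⟩}^{{m}_{ab}}={⟨ba⟩}^{{m}_{ab}}$ as substring rewrites, and check whether some representative begins with $U\text{.}$ The Coxeter matrix below is a hypothetical example, and the method is exponential; it is only a sketch to make the definitions concrete.

```python
# Hypothetical example Coxeter matrix on letters a, b, c, d.
M = {}
for (x, y), m in {("a", "b"): 3, ("b", "c"): 3, ("c", "d"): 4,
                  ("a", "c"): 2, ("a", "d"): 2, ("b", "d"): 2}.items():
    M[x, y] = M[y, x] = m

def alt(b, a, q):
    """The alternating word <ba>^q = bab... of length q."""
    return "".join([b, a][i % 2] for i in range(q))

def pos_equiv_class(w):
    """All positive words equivalent to w: apply each defining relation
    <xy>^m = <yx>^m as a substring rewrite until closure (BFS)."""
    seen, todo = {w}, [w]
    while todo:
        v = todo.pop()
        for (x, y), m in M.items():
            lhs, rhs = alt(x, y, m), alt(y, x, m)
            i = v.find(lhs)
            while i != -1:
                u = v[:i] + rhs + v[i + m:]
                if u not in seen:
                    seen.add(u)
                    todo.append(u)
                i = v.find(lhs, i + 1)
    return seen

def divides(u, w):
    """u | w on the left iff some word positive equivalent to w starts with u."""
    return any(v.startswith(u) for v in pos_equiv_class(w))
```

For instance `divides("a", "bab")` holds, since $bab⩦aba,$ while `divides("a", "ba")` fails: $ba$ is an elementary $a\text{-chain,}$ which (by 3.1 below) its source cannot divide.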

3.1. Let $a\in I$ be a letter. The simplest positive words which are not multiples of $a$ are clearly those in which $a$ does not appear. Further, the words of the form ${⟨ba⟩}^{q}$ with $q<{m}_{ab}$ and ${m}_{ab}\ne 2$ are also not divisible by $a\text{.}$ Of course many other quite simple words have this property, for example concatenations of the previous types of words in a specific order, called $a\text{-chains,}$ which we will define shortly. At the same time we will define what we mean by the source and target of an $a\text{-chain.}$

Definition.

 (i) A primitive $a\text{-chain}$ is a positive word $W$ such that ${m}_{ab}=2$ for all letters $b$ in $W$ (so $b\not\equiv a$ and $ab=ba\text{).}$ We call $a$ the source and target of $W\text{.}$ (Note that, vacuously, the empty word is a primitive $a\text{-chain.)}$
 (ii) An elementary $a\text{-chain}$ is a positive word of the form ${⟨ba⟩}^{q}$ with ${m}_{ab}>2$ and $0<q<{m}_{ab}\text{.}$ The source is $a,$ and the target is $b$ if $q$ is even, and $a$ if $q$ is odd.
 (iii) An $a\text{-chain}$ is a product $C\equiv {C}_{1}\cdots {C}_{k}$ where, for each $i=1,\dots ,k,$ ${C}_{i}$ is a primitive or elementary ${a}_{i}\text{-chain}$ for some ${a}_{i}\in I,$ such that ${a}_{1}=a$ and the target of ${C}_{i}$ is the source of ${C}_{i+1}\text{.}$

[Ed: This may be expressed as: $a\equiv {a}_{1}\overset{{C}_{1}}{⟶}{a}_{2}\overset{{C}_{2}}{⟶}{a}_{3}⟶\cdots \overset{{C}_{k-1}}{⟶}{a}_{k}\overset{{C}_{k}}{⟶}{a}_{k+1}\equiv b\text{.]}$

The source of $C$ is $a$ and the target of $C$ is the target of ${C}_{k}\text{.}$ If this target is $b$ we say: $C$ is a chain from $a$ to $b\text{.}$

[Example. $I=\left\{a,b,c,d\right\},$ ${m}_{ab}={m}_{bc}=3,$ ${m}_{cd}=4,$ ${m}_{ac}={m}_{ad}={m}_{bd}=2\text{.}$
$c,$ $cd,$ $dcd,$ ${c}^{2}dc{d}^{7}$ are primitive $a\text{-chains.}$
$b,$ $ba$ are elementary $a\text{-chains.}$
$a,$ $ab,$ $c,$ $cb$ are elementary $b\text{-chains.}$
$dcd,$ $dc,$ $d$ are elementary $c\text{-chains.}$
$\underbrace{aba}_{{C}_{1}}\underbrace{cd}_{{C}_{2}}\underbrace{bc}_{{C}_{3}}\underbrace{ab}_{{C}_{4}}\underbrace{d{c}^{2}{d}^{2}}_{{C}_{5}}\underbrace{ba}_{{C}_{6}}$ is a $d\text{-chain:}$ $d\underset{\text{prim}}{\overset{{C}_{1}}{⟶}}d\underset{\text{el.}}{\overset{{C}_{2}}{⟶}}c\underset{\text{el.}}{\overset{{C}_{3}}{⟶}}b\underset{\text{el.}}{\overset{{C}_{4}}{⟶}}a\underset{\text{prim}}{\overset{{C}_{5}}{⟶}}a\underset{\text{el.}}{\overset{{C}_{6}}{⟶}}b\text{.]}$
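The classifications in such an example can be checked mechanically. The following sketch (assuming the hypothetical Coxeter matrix with ${m}_{ab}={m}_{bc}=3,$ ${m}_{cd}=4$ and all other entries $2,$ as the example's chains require) tests whether a word is a primitive or an elementary $a\text{-chain:}$

```python
# Hypothetical example Coxeter matrix (m_ab = m_bc = 3, m_cd = 4, rest 2).
M = {}
for (x, y), m in {("a", "b"): 3, ("b", "c"): 3, ("c", "d"): 4,
                  ("a", "c"): 2, ("a", "d"): 2, ("b", "d"): 2}.items():
    M[x, y] = M[y, x] = m

def is_primitive(a, w):
    """w is a primitive a-chain: every letter of w commutes with a."""
    return all(b != a and M[a, b] == 2 for b in w)

def elementary_target(a, w):
    """If w is an elementary a-chain <ba>^q (m_ab > 2, 0 < q < m_ab),
    return its target (b for even q, a for odd q); otherwise None."""
    if not w or w[0] == a or M[a, w[0]] <= 2:
        return None
    b, q = w[0], len(w)
    if q >= M[a, b] or any(w[i] != (b, a)[i % 2] for i in range(q)):
        return None
    return b if q % 2 == 0 else a
```

E.g. `is_primitive("a", "dcd")` holds, and `elementary_target("c", "dcd")` returns `"c"`, since ${⟨dc⟩}^{3}$ has odd $q\text{.}$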

 (iv) There is a unique decomposition of a given $a\text{-chain}$ into primitive and elementary factors if one demands that the primitive factors are as large as possible. The number of elementary factors is the length of the chain.

Remark. If $C$ is a chain from $a$ to $b$ then $\text{rev} C$ is a chain from $b$ to $a\text{.}$

Let $C\equiv {C}_{1}\cdots {C}_{k}$ be a chain from $a$ to $b$ (where ${C}_{i}$ is a primitive or elementary chain from ${a}_{i}$ to ${a}_{i+1}$ for $i=1,\dots ,k\text{)}$ and $D$ a positive word such that $a$ divides $CD\text{.}$ Then $b$ divides $D,$ and in particular $a$ does not divide $C\text{.}$

 Proof. The last claim follows from the first by putting $D$ equal to the empty word. We prove the first claim by induction on $k\text{.}$ Suppose $k=1\text{.}$ (a) Suppose $C\equiv {x}_{1}\cdots {x}_{m}$ is primitive, so ${m}_{a{x}_{i}}=2$ for all $i\text{.}$ Then ${x}_{1}\cdots {x}_{m}D⩦aV$ for some positive word $V\text{.}$ [Recall the Reduction Lemma (2.1): If $X,Y$ are positive words and $a,b\in I$ such that $aX⩦bY$ then there exists a positive word $W$ such that $X⩦{⟨ba⟩}^{{m}_{ab}-1}W$ and $Y⩦{⟨ab⟩}^{{m}_{ab}-1}W\text{.]}$ By (2.1), ${x}_{2}\cdots {x}_{m}D⩦{⟨a{x}_{1}⟩}^{{m}_{a{x}_{1}}-1}W\equiv aW$ for some positive word $W\text{.}$ Continuing in this fashion yields that $a$ divides $D$ and we are done (since $a$ is the target of $C\text{).}$ (b) Suppose $C\equiv {⟨ba⟩}^{q}$ is elementary, where ${m}_{ab}>2$ and $0<q<{m}_{ab}\text{.}$ Then $CD\equiv {⟨ba⟩}^{q}D⩦aV$ for some positive word $V\text{.}$ By (2.1), ${⟨ab⟩}^{q-1}D⩦{⟨ab⟩}^{{m}_{ab}-1}W$ for some positive word $W\text{.}$ So by cancellation (a special case of (2.1)), $$D⩦\begin{cases}{⟨ab⟩}^{{m}_{ab}-q}W&\text{if }q\text{ is odd, or}\\ {⟨ba⟩}^{{m}_{ab}-q}W&\text{if }q\text{ is even.}\end{cases}$$ So $D$ is divisible by $a$ if $q$ is odd, and by $b$ if $q$ is even, which is in each case the target of $C,$ and we are done. This begins the induction. Suppose now $k>1\text{.}$ By the inductive hypothesis ${a}_{k}$ divides ${C}_{k}D,$ and by (a) and (b), $b\equiv {a}_{k+1}$ divides $D$ and we are done. $\square$

3.2. For all positive words $W$ and letters $a\in I$ we will define a recursively calculable positive word ${T}_{a}\left(W\right)$ such that $W⩦{T}_{a}\left(W\right)$ and either ${T}_{a}\left(W\right)$ begins with $a$ or ${T}_{a}\left(W\right)$ is an $a\text{-chain.}$ To simplify the description we need other operations on positive words:

(i) Every positive word $W$ has a unique factorization $W\equiv {C}_{a}\left(W\right){D}_{a}\left(W\right)$ where ${C}_{a}\left(W\right)\equiv {C}_{0}$ or ${C}_{a}\left(W\right)\equiv {C}_{0}{C}_{1},$ with ${C}_{0}$ a primitive $a\text{-chain}$ and ${C}_{1}$ an elementary $a\text{-chain,}$ such that the word length of ${C}_{a}\left(W\right)$ is as large as possible.

(ii) If $C$ is a primitive $a\text{-chain}$ put ${C}^{+}\equiv aC,$ which is positive equivalent to $Ca$ by commutativity.

If $C\equiv {C}_{0}{⟨ba⟩}^{q}$ where ${C}_{0}$ is primitive, put $${C}^{+}\equiv \begin{cases}aC&\text{if }q={m}_{ab}-1\text{, or}\\ {C}_{0}{⟨ba⟩}^{q+1}&\text{otherwise.}\end{cases}$$ In each case ${C}^{+}⩦Cc$ where $c$ is the target of $C\text{.}$

[Ed: Note that, if $q={m}_{ab}-1,$ $${C}^{+}\equiv aC⩦{C}_{0}a{⟨ba⟩}^{q}\equiv {C}_{0}{⟨ab⟩}^{{m}_{ab}}⩦{C}_{0}{⟨ba⟩}^{{m}_{ab}}\equiv Cc,$$ where $c$ is the target of $C\text{.}$

N.B.: The point of this definition is to construct a word ${C}^{+}$ which is positive equivalent to $Cc$ $\text{(}C\equiv {C}_{0}{C}_{1}$ a chain from $a$ to $c\text{)}$ and such that either ${C}^{+}$ starts with $a,$ or ${C}^{+}\equiv {C}_{0}{C}_{1}^{+}$ where ${C}_{1}^{+}$ is also elementary, but longer than ${C}_{1}\text{.]}$

(iii) If $D$ is any nonempty positive word denote by ${D}^{-}$ the word obtained by deleting the first letter of $D\text{.}$ [Ed: Thus, if $D$ begins with the target of $C,$ $CD⩦{C}^{+}{D}^{-}\text{.]}$

Definition. For $W$ empty or beginning with $a$ put ${T}_{a}\left(W\right)\equiv W\text{.}$ For all other words define ${T}_{a}\left(W\right)$ recursively: let $L\left(W\right)=\ell$ and suppose the $a\text{-chain}$ ${C}_{a}\left(W\right)$ has target $c\text{.}$ Then put $${S}_{a}\left(W\right)\equiv \begin{cases}{C}_{a}\left(W\right){T}_{c}\left({D}_{a}\left(W\right)\right)&\text{if }c\text{ is not the first letter of }{T}_{c}\left({D}_{a}\left(W\right)\right)\\ {C}_{a}{\left(W\right)}^{+}{T}_{c}{\left({D}_{a}\left(W\right)\right)}^{-}&\text{otherwise,}\end{cases}$$ and ${T}_{a}\left(W\right)\equiv {S}_{a}^{\ell }\left(W\right)$ (the result of applying ${S}_{a}$ $\ell$ times).

[Observations. Let $W$ be positive and $a\in I\text{.}$ Then

 (A) ${T}_{a}\left(W\right)\equiv {S}_{a}\left(W\right)\equiv W$ if $W$ is an $a\text{-chain}$ or begins with $a,$ (B) ${T}_{a}\left(W\right)⩦{S}_{a}\left(W\right)⩦W\text{.}$

 Proof. (A) The result is clear if $W$ begins with $a$ or is empty. Suppose $W$ is a nonempty $a\text{-chain,}$ so $W\equiv {C}_{a}\left(W\right){D}_{a}\left(W\right)$ where ${C}_{a}\left(W\right)$ is a nonempty chain from $a$ to $c,$ say, and ${D}_{a}\left(W\right)$ is a $c\text{-chain.}$ By an inductive hypothesis, since $L\left({D}_{a}\left(W\right)\right)<L\left(W\right),$ ${T}_{c}\left({D}_{a}\left(W\right)\right)\equiv {S}_{c}\left({D}_{a}\left(W\right)\right)\equiv {D}_{a}\left(W\right),$ so that ${S}_{a}\left(W\right)\equiv {C}_{a}\left(W\right){D}_{a}\left(W\right)\equiv W,$ noting that $c$ cannot be the first letter of ${D}_{a}\left(W\right)\text{.}$ Whence ${T}_{a}\left(W\right)\equiv {S}_{a}^{\ell }\left(W\right)\equiv W,$ and (A) is proved. (B) Again the result is clear if $W$ begins with $a$ or is empty. Otherwise, we may suppose that ${C}_{a}\left(W\right)$ is nonempty. Thus $L\left({D}_{a}\left(W\right)\right)<L\left(W\right)$ and by an inductive hypothesis ${T}_{c}\left({D}_{a}\left(W\right)\right)⩦{D}_{a}\left(W\right)\text{.}$ Since ${C}^{+}{D}^{-}⩦CD$ it is clear (either way) that $${S}_{a}\left(W\right)⩦{C}_{a}\left(W\right){T}_{c}\left({D}_{a}\left(W\right)\right)⩦{C}_{a}\left(W\right){D}_{a}\left(W\right)\equiv W\text{.}$$ Now since $L\left({S}_{a}\left(W\right)\right)=L\left(W\right)$ we may repeat this step $\ell$ times to show that ${T}_{a}\left(W\right)⩦{S}_{a}^{\ell }\left(W\right)⩦W\text{.}$ $\square \phantom{\rule{1em}{0ex}}\text{]}$

Let $W$ be positive and $a\in I\text{.}$ Then

 (i) ${T}_{a}\left(W\right)$ is an $a\text{-chain}$ or begins with $a,$ (ii) ${T}_{a}\left(W\right)\equiv W$ if and only if $W$ is an $a\text{-chain}$ or begins with $a,$ (iii) ${T}_{a}\left(W\right)⩦W\text{.}$

 Proof. The proof for all three parts follows by induction on the word length, where the induction basis is trivial. [Ed: Namely, when $L\left(W\right)=0,$ ${T}_{a}\left(W\right)\equiv W\text{.}$ More concretely, if $L\left(W\right)=1,$ then $W$ is $a$ or an $a\text{-chain,}$ and the algorithm again leaves $W$ unchanged.] (i) [We may suppose that $L\left(W\right)>0\text{.]}$ From the definition of ${S}_{a}\left(W\right)$ and the induction hypothesis the following is true. If ${S}_{a}\left(W\right)$ is neither an $a\text{-chain}$ nor a word beginning with $a,$ then ${C}_{a}\left({S}_{a}\left(W\right)\right)$ has length strictly greater than that of ${C}_{a}\left(W\right)\text{.}$ Hence the result ${S}_{a}^{k}\left(W\right)$ of $k$ successive applications of ${S}_{a}$ must eventually, for $k=\ell$ at least, be either an $a\text{-chain}$ or a word beginning with $a\text{.}$ [Note: If $W$ is an $a\text{-chain}$ or begins with $a$ then the first statement is clear since, by Observation (A), ${S}_{a}\left(W\right)\equiv W\text{.}$ Otherwise, ${C}_{a}\left(W\right)$ is non-empty. Then, by the induction hypothesis, if $c$ is not its first letter then ${T}_{c}\left({D}_{a}\left(W\right)\right)$ is a $c\text{-chain}$ and composes with ${C}_{a}\left(W\right)$ to make ${S}_{a}\left(W\right)$ an $a\text{-chain.}$ Otherwise ${S}_{a}\left(W\right)\equiv {C}_{a}{\left(W\right)}^{+}{T}_{c}{\left({D}_{a}\left(W\right)\right)}^{-}$ and, by the N.B. above, ${C}_{a}{\left(W\right)}^{+}$ either starts with $a$ or is of such a form that it must divide ${C}_{a}\left({S}_{a}\left(W\right)\right)\text{.}$ This last implies that $$L\left({C}_{a}\left({S}_{a}\left(W\right)\right)\right)\ge L\left({C}_{a}{\left(W\right)}^{+}\right)=L\left({C}_{a}\left(W\right)\right)+1\text{.}$$ Note that, by Observation (A), each further application of ${S}_{a}$ either leaves the word unchanged (when it is already either an $a\text{-chain}$ or a word beginning with $a\text{)}$ or strictly increases the prefix ${C}_{a}\text{.}$ Thus, after $k$ successive applications, either ${S}_{a}^{k}\left(W\right)$ is an $a\text{-chain}$ or starts with $a,$ or $L\left({C}_{a}\left({S}_{a}^{k}\left(W\right)\right)\right)\ge L\left({C}_{a}\left(W\right)\right)+k>k\text{.}$ This last case gives an obvious contradiction once $k=L\left(W\right)=\ell ,$ since ${S}_{a}$ preserves the total word length, so that $L\left({C}_{a}\left({S}_{a}^{k}\left(W\right)\right)\right)\le L\left(W\right)\text{.}$ Hence ${T}_{a}\left(W\right)$ must either be an $a\text{-chain}$ or start with $a\text{.}$ (One may like to observe that, once $k>L\left({D}_{a}\left(W\right)\right),$ the contradiction is already reached.) Claims (ii) and (iii) are immediate from Observations (A) and (B) respectively. The original text reads as follows.] (ii) From the definition of ${S}_{a}\left(W\right)$ and the induction hypothesis it follows that, for an $a\text{-chain}$ or word $W$ beginning with $a,$ ${S}_{a}\left(W\right)\equiv W$ and thence that ${T}_{a}\left(W\right)\equiv W\text{.}$ The converse follows by (i). (iii) The proof by induction is trivial, given that ${C}^{+}⩦Cc\text{.}$ $\square$
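The recursion for ${S}_{a}$ and ${T}_{a}$ transcribes directly into code. The sketch below (again over the hypothetical example Coxeter matrix; `split_chain` computes the factorization $W\equiv {C}_{a}\left(W\right){D}_{a}\left(W\right)$ together with the target of ${C}_{a}\left(W\right)\text{)}$ is an illustration under those assumptions, not the original authors' implementation:

```python
# Hypothetical example Coxeter matrix (m_ab = m_bc = 3, m_cd = 4, rest 2).
M = {}
for (x, y), m in {("a", "b"): 3, ("b", "c"): 3, ("c", "d"): 4,
                  ("a", "c"): 2, ("a", "d"): 2, ("b", "d"): 2}.items():
    M[x, y] = M[y, x] = m

def split_chain(a, w):
    """Factor w as C_a(w) D_a(w); return (C_a(w), target of C_a(w), D_a(w))."""
    i = 0
    while i < len(w) and w[i] != a and M[a, w[i]] == 2:
        i += 1                                   # C0: maximal primitive prefix
    c0, rest, target, q = w[:i], w[i:], a, 0
    if rest and rest[0] != a and M[a, rest[0]] > 2:
        b, m = rest[0], M[a, rest[0]]
        while q < min(len(rest), m - 1) and rest[q] == (b, a)[q % 2]:
            q += 1                               # C1: maximal elementary prefix
        target = b if q % 2 == 0 else a
    return c0 + rest[:q], target, rest[q:]

def S(a, w):
    """One step S_a of the recursion defining T_a."""
    if not w or w[0] == a:
        return w
    C, c, D = split_chain(a, w)
    t = T(c, D)
    if not t or t[0] != c:
        return C + t             # c is not the first letter of T_c(D_a(w))
    i = 0                        # otherwise use C+ and strip the leading c of t
    while i < len(C) and C[i] != a and M[a, C[i]] == 2:
        i += 1
    c0, c1 = C[:i], C[i:]        # C = C0 C1 with C1 = <ba>^q (possibly empty)
    if not c1 or len(c1) == M[a, c1[0]] - 1:
        Cp = a + C               # C+ = aC  (C primitive, or q = m_ab - 1)
    else:
        Cp = c0 + c1 + (c1[0], a)[len(c1) % 2]   # extend <ba>^q to <ba>^{q+1}
    return Cp + t[1:]

def T(a, w):
    """T_a(w): apply S_a L(w) times; result begins with a or is an a-chain."""
    for _ in range(len(w)):
        w = S(a, w)
    return w
```

For example `T("a", "bab")` yields `"aba"` (which begins with $a\text{),}$ while `T("a", "ba")` leaves the $a\text{-chain}$ `"ba"` unchanged, as Observation (A) predicts.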

3.3. With the help of the projection operators ${T}_{a}$ defined in 3.2 one gets a simple algorithm for producing a common multiple of a letter $a$ and a word $W,$ if one exists. If a positive word $V$ begins with $a,$ put ${R}_{a}\left(V\right)\equiv V\text{.}$ If $V$ is a chain from $a$ to $b$ then put ${R}_{a}\left(V\right)\equiv {T}_{a}\left(Vb\right)\text{.}$

Definition. Let $W$ be a positive word and $a\in I\text{.}$ Then the $a\text{-sequence}$ of $W$ is the sequence of positive words ${W}_{i},$ for $i=1,2,\dots ,$ where ${W}_{1}\equiv {T}_{a}\left(W\right)$ and ${W}_{i+1}\equiv {R}_{a}\left({W}_{i}\right)$ for all $i\ge 1\text{.}$

[Ed: Note that the following lemma anticipates the following section (§4) by effectively demonstrating that if a common multiple exists then the $a\text{-sequence}$ of $W$ terminates in a word $W\prime$ which is a least common multiple for $a$ and $W\text{.}$ That is both $a$ and $W$ divide $W\prime ,$ and $W\prime$ divides any other common multiple of $a$ and $W\text{.}$

The original text is translated below, but we give here an expanded version as follows:

 (i) If $V$ is a common multiple of $a$ and $W$ then ${W}_{i}$ divides $V$ for all $i>0,$ and ${W}_{j}\equiv {W}_{j+1}$ for $j>L\left(V\right)-L\left(W\right)$ (the sequence terminates). (ii) Conversely, if ${W}_{j}\equiv {W}_{j+1}$ for some $j,$ then ${W}_{j}$ is a common multiple of $a$ and $W$ (and, by (i), a least common multiple).

 Proof. (ii) Suppose ${W}_{j}\equiv {W}_{j+1}\text{.}$ By definition ${W}_{j}$ begins with $a\text{.}$ Certainly $W⩦{T}_{a}\left(W\right)\equiv {W}_{1}$ (by Lemma 3.2). Suppose $W$ divides ${W}_{i}\text{.}$ Now $${W}_{i+1}\equiv \begin{cases}{W}_{i}&\text{if }{W}_{i}\text{ begins with }a\\ {T}_{a}\left({W}_{i}b\right)&\text{if }{W}_{i}\text{ is a chain from }a\text{ to }b\text{.}\end{cases}$$ But ${T}_{a}\left({W}_{i}b\right)⩦{W}_{i}b,$ which is divisible by ${W}_{i}$ and hence by $W\text{.}$ By induction $W$ divides ${W}_{j},$ so ${W}_{j}$ is a common multiple of $a$ and $W\text{.}$ (i) Suppose now that $V$ is a common multiple of $a$ and $W\text{.}$ Since $W⩦{T}_{a}\left(W\right),$ ${W}_{1}$ divides $V\text{.}$ Suppose ${W}_{i}$ divides $V\text{.}$ Either ${W}_{i}$ begins with $a,$ in which case ${W}_{i+1}\equiv {W}_{i}$ and certainly ${W}_{i+1}$ divides $V,$ or ${W}_{i}$ is an $a\text{-chain}$ and ${W}_{i+1}\equiv {R}_{a}\left({W}_{i}\right)\equiv {T}_{a}\left({W}_{i}b\right)$ where $b$ is the target of ${W}_{i}\text{.}$ But $V⩦aY⩦{W}_{i}Z$ for some positive words $Y$ and $Z,$ so by (3.1) $$V⩦{W}_{i}bZ\prime ⩦{T}_{a}\left({W}_{i}b\right)Z\prime \equiv {W}_{i+1}Z\prime$$ for some positive word $Z\prime \text{.}$ By induction this shows ${W}_{i}$ divides $V$ for all $i\text{.}$ If ${W}_{i}\not\equiv {W}_{i+1}$ then $L\left({W}_{i+1}\right)=L\left(W\right)+i\text{.}$ Since $L\left({W}_{i+1}\right)\le L\left(V\right),$ we must then have $i\le L\left(V\right)-L\left(W\right)\text{.}$ Thus ${W}_{j}\equiv {W}_{j+1}$ for all $j>L\left(V\right)-L\left(W\right)\text{.}$ $\square \phantom{\rule{1em}{0ex}}\text{]}$

Let $W$ be a positive word, $a$ a letter, and ${W}_{i},$ $i=1,2,\dots ,$ the $a\text{-sequence}$ of $W\text{.}$ Then $a$ and $W$ have a common multiple $V$ precisely when the $a\text{-sequence}$ becomes stationary, in which case ${W}_{j}\equiv {W}_{j+1}$ for all $j>L\left(V\right)-L\left(W\right)\text{.}$

 Proof. When ${W}_{j}\equiv {W}_{j+1},$ it follows by the definition of the $a\text{-sequence}$ that $a$ divides ${W}_{j}\text{.}$ Because each ${W}_{i}$ clearly divides ${W}_{i+1}$ [Ed: and because ${W}_{1}⩦W$], every such ${W}_{j}$ is a common multiple of $a$ and $W\text{.}$ Conversely, if $V$ is a common multiple, then it follows from the definition of ${W}_{i}$ together with (3.1) and (3.2) that $V$ is divisible by every ${W}_{i}\text{.}$ From ${W}_{j}\not\equiv {W}_{j+1}$ it follows that $L\left({W}_{j+1}\right)=L\left(W\right)+j\text{.}$ From ${W}_{j+1}|V$ it then follows that $L\left(W\right)+j\le L\left(V\right)\text{.}$ Thus $j>L\left(V\right)-L\left(W\right)$ implies that ${W}_{j}\equiv {W}_{j+1}\text{.}$ $\square$
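The $a\text{-sequence}$ is easy to run in code. The sketch below reuses the ${T}_{a}$ recursion (hypothetical example Coxeter matrix again), computes the target of a chain by repeated factorization, and caps the word length so that the loop gives up when no common multiple appears:

```python
# Hypothetical example Coxeter matrix (m_ab = m_bc = 3, m_cd = 4, rest 2).
M = {}
for (x, y), m in {("a", "b"): 3, ("b", "c"): 3, ("c", "d"): 4,
                  ("a", "c"): 2, ("a", "d"): 2, ("b", "d"): 2}.items():
    M[x, y] = M[y, x] = m

def split_chain(a, w):
    """Factor w as C_a(w) D_a(w); return (C_a(w), target of C_a(w), D_a(w))."""
    i = 0
    while i < len(w) and w[i] != a and M[a, w[i]] == 2:
        i += 1                                   # C0: maximal primitive prefix
    c0, rest, target, q = w[:i], w[i:], a, 0
    if rest and rest[0] != a and M[a, rest[0]] > 2:
        b, m = rest[0], M[a, rest[0]]
        while q < min(len(rest), m - 1) and rest[q] == (b, a)[q % 2]:
            q += 1                               # C1: maximal elementary prefix
        target = b if q % 2 == 0 else a
    return c0 + rest[:q], target, rest[q:]

def S(a, w):
    """One step S_a of the recursion defining T_a."""
    if not w or w[0] == a:
        return w
    C, c, D = split_chain(a, w)
    t = T(c, D)
    if not t or t[0] != c:
        return C + t             # c is not the first letter of T_c(D_a(w))
    i = 0                        # otherwise use C+ and strip the leading c of t
    while i < len(C) and C[i] != a and M[a, C[i]] == 2:
        i += 1
    c0, c1 = C[:i], C[i:]        # C = C0 C1 with C1 = <ba>^q (possibly empty)
    if not c1 or len(c1) == M[a, c1[0]] - 1:
        Cp = a + C               # C+ = aC  (C primitive, or q = m_ab - 1)
    else:
        Cp = c0 + c1 + (c1[0], a)[len(c1) % 2]   # extend <ba>^q to <ba>^{q+1}
    return Cp + t[1:]

def T(a, w):
    """T_a(w): apply S_a L(w) times."""
    for _ in range(len(w)):
        w = S(a, w)
    return w

def chain_target(a, w):
    """Target of the a-chain w (w is assumed to be an a-chain)."""
    cur = a
    while w:
        C, cur, w = split_chain(cur, w)
        assert C, "not a chain"
    return cur

def common_multiple(a, w, max_len=40):
    """Run the a-sequence W1 = T_a(w), W_{i+1} = R_a(W_i); return the least
    common multiple of a and w, or None if it grows past max_len."""
    W = T(a, w)
    while not W.startswith(a):   # R_a fixes W once it begins with a
        if len(W) > max_len:
            return None
        W = T(a, W + chain_target(a, W))
    return W
```

For instance `common_multiple("a", "b")` returns `"aba"` and `common_multiple("c", "d")` returns `"cdcd"`, the expected least common multiples ${⟨ab⟩}^{3}$ and ${⟨cd⟩}^{4}$ up to positive equivalence.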

3.4. When a positive word $W$ is of the form $W\equiv UaaV,$ where $U$ and $V$ are positive words and $a$ is a letter, then we say $W$ has a quadratic factor. A word $W$ is square-free relative to a Coxeter matrix $M$ when $W$ is not positive equivalent to a word with a quadratic factor. The image of a square-free word in ${G}_{M}^{+}$ is called square-free.

Let $W$ be a square-free positive word and $a$ a letter such that $aW$ is not square free. Then $a$ divides $W\text{.}$

 Proof. First we prove the following lemma.

Lemma. Let $V$ be a positive word which is divisible by $a$ and contains a square. Then there is a positive word $\stackrel{\sim }{V}$ with $\stackrel{\sim }{V}⩦V$ which contains a square and which begins with $a\text{.}$

The proof of the Lemma is by induction on the length of $V\text{.}$ Decompose $V,$ as in 3.2, in the form $V\equiv {C}_{a}\left(V\right){D}_{a}\left(V\right)\text{.}$ Without loss of generality we may assume that $V$ is a representative of its positive equivalence class which contains a square and is such that $L\left({C}_{a}\left(V\right)\right)$ is maximal.

When ${C}_{a}\left(V\right)$ is the empty word, $\stackrel{\sim }{V}\equiv V$ satisfies the conditions for $\stackrel{\sim }{V}\text{.}$ For nonempty ${C}_{a}\left(V\right)$ we have three cases:

 (i) ${C}_{a}\left(V\right)$ contains a square. Then $\stackrel{\sim }{V}\equiv {T}_{a}\left(V\right)$ satisfies the conditions for $\stackrel{\sim }{V}\text{.}$ (ii) ${D}_{a}\left(V\right)$ contains a square. By the induction assumption, one can assume, without loss of generality, that ${D}_{a}\left(V\right)$ begins with the target of the $a\text{-chain}$ ${C}_{a}\left(V\right)\text{.}$ Thus, since the length of ${C}_{a}\left(V\right)$ is maximal, ${C}_{a}\left(V\right)$ is of the form ${C}_{0}{⟨ba⟩}^{{m}_{ab}-1},$ where ${C}_{0}$ is a primitive $a\text{-chain.}$ From this it follows that when ${D}_{a}{\left(V\right)}^{-}$ contains a square then $\stackrel{\sim }{V}\equiv a{C}_{a}\left(V\right){D}_{a}{\left(V\right)}^{-}$ satisfies the conditions for $\stackrel{\sim }{V},$ and otherwise $\stackrel{\sim }{V}\equiv {a}^{2}{C}_{a}\left(V\right){D}_{a}{\left(V\right)}^{--}$ does. (iii) Neither ${C}_{a}\left(V\right)$ nor ${D}_{a}\left(V\right)$ contains a square. Then $V$ is of the form $V\equiv {C}_{0}{⟨ba⟩}^{q}{D}_{a}\left(V\right)$ where $q\ge 1,$ and ${D}_{a}\left(V\right)$ begins with $a$ if $q$ is even, and $b$ if $q$ is odd. Then, from the fact that $a$ divides $V,$ the reduction lemma is applicable and the relations imply that there exists $E$ such that ${D}_{a}\left(V\right)⩦{⟨ba⟩}^{{m}_{ab}}E\text{.}$ Then $$\stackrel{\sim }{V}\equiv \begin{cases}a{C}_{0}{⟨ba⟩}^{{m}_{ab}-1}{⟨ba⟩}^{q}E&\text{if }{m}_{ab}\text{ is even}\\ a{C}_{0}{⟨ba⟩}^{{m}_{ab}-1}{⟨ab⟩}^{q}E&\text{if }{m}_{ab}\text{ is odd}\end{cases}$$ satisfies the conditions.

This finishes the proof of the lemma.

 Proof of 3.4. Since $aW$ is not square-free, it is positive equivalent to a word containing a square, and that word is divisible by $a\text{;}$ so by the Lemma there exists a positive word $U$ such that $aU$ contains a square and $aW⩦aU\text{.}$ It follows from the reduction lemma that $U⩦W$ and, since $W$ is square-free, that $U$ does not contain a square. So $U$ begins with $a$ and $W$ is divisible by $a\text{.}$ $\square$
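Square-freeness, like divisibility, can be checked by brute force on short words: a word is square-free iff no member of its positive equivalence class contains a quadratic factor. A sketch, over the same hypothetical example Coxeter matrix:

```python
# Hypothetical example Coxeter matrix (m_ab = m_bc = 3, m_cd = 4, rest 2).
M = {}
for (x, y), m in {("a", "b"): 3, ("b", "c"): 3, ("c", "d"): 4,
                  ("a", "c"): 2, ("a", "d"): 2, ("b", "d"): 2}.items():
    M[x, y] = M[y, x] = m

def alt(b, a, q):
    """The alternating word <ba>^q = bab... of length q."""
    return "".join([b, a][i % 2] for i in range(q))

def pos_equiv_class(w):
    """All positive words equivalent to w (BFS over defining relations)."""
    seen, todo = {w}, [w]
    while todo:
        v = todo.pop()
        for (x, y), m in M.items():
            lhs, rhs = alt(x, y, m), alt(y, x, m)
            i = v.find(lhs)
            while i != -1:
                u = v[:i] + rhs + v[i + m:]
                if u not in seen:
                    seen.add(u)
                    todo.append(u)
                i = v.find(lhs, i + 1)
    return seen

def square_free(w):
    """w is square-free iff no positive equivalent word has a quadratic factor."""
    return not any(v[i] == v[i + 1]
                   for v in pos_equiv_class(w)
                   for i in range(len(v) - 1))
```

For instance `square_free("abab")` fails, since $abab⩦babb$ contains the square $bb\text{;}$ whereas $aba$ is square-free.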

3.5. By applying 3.4 we get the following lemma which will be needed later.

If $W$ is a square-free positive word and $a$ is a letter, then the terms ${W}_{i}$ of the $a\text{-sequence}$ of $W$ are also square-free.

 Proof. ${W}_{1}$ is square-free since ${W}_{1}⩦W\text{.}$ Assume ${W}_{i}$ is square-free. Then either ${W}_{i+1}\equiv {W}_{i}$ or ${W}_{i+1}⩦{W}_{i}{b}_{i},$ where ${b}_{i}$ is the target of the chain ${W}_{i}\text{.}$ If ${W}_{i}{b}_{i}$ is not square-free then neither is $\text{rev}\left({W}_{i}{b}_{i}\right)\equiv {b}_{i}\phantom{\rule{0.2em}{0ex}}\text{rev}\phantom{\rule{0.2em}{0ex}}{W}_{i},$ and so by 3.4 the ${b}_{i}\text{-chain}$ $\text{rev}\phantom{\rule{0.2em}{0ex}}{W}_{i}$ is divisible by ${b}_{i},$ in contradiction to 3.1. $\square$

3.6. Using the operators ${T}_{a}$ we can give a division algorithm, which, when given positive words $V$ and $W$ such that $W$ divides $V,$ constructs a positive word $V:W$ such that $V⩦W·\left(V:W\right)\text{.}$

Definition. For $W\equiv {a}_{1}\cdots {a}_{k},$ $V:W$ is the word $$V:W\equiv {T}_{{a}_{k}}{\left({T}_{{a}_{k-1}}{\left(\cdots {\left({T}_{{a}_{2}}{\left({T}_{{a}_{1}}\left(V\right)\right)}^{-}\right)}^{-}\cdots \right)}^{-}\right)}^{-}\text{.}$$
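The division algorithm transcribes directly: for each letter of $W$ in turn, apply ${T}_{{a}_{i}}$ and strip the leading letter. Again a sketch over the hypothetical example Coxeter matrix, reusing the ${T}_{a}$ recursion of 3.2:

```python
# Hypothetical example Coxeter matrix (m_ab = m_bc = 3, m_cd = 4, rest 2).
M = {}
for (x, y), m in {("a", "b"): 3, ("b", "c"): 3, ("c", "d"): 4,
                  ("a", "c"): 2, ("a", "d"): 2, ("b", "d"): 2}.items():
    M[x, y] = M[y, x] = m

def split_chain(a, w):
    """Factor w as C_a(w) D_a(w); return (C_a(w), target of C_a(w), D_a(w))."""
    i = 0
    while i < len(w) and w[i] != a and M[a, w[i]] == 2:
        i += 1                                   # C0: maximal primitive prefix
    c0, rest, target, q = w[:i], w[i:], a, 0
    if rest and rest[0] != a and M[a, rest[0]] > 2:
        b, m = rest[0], M[a, rest[0]]
        while q < min(len(rest), m - 1) and rest[q] == (b, a)[q % 2]:
            q += 1                               # C1: maximal elementary prefix
        target = b if q % 2 == 0 else a
    return c0 + rest[:q], target, rest[q:]

def S(a, w):
    """One step S_a of the recursion defining T_a."""
    if not w or w[0] == a:
        return w
    C, c, D = split_chain(a, w)
    t = T(c, D)
    if not t or t[0] != c:
        return C + t             # c is not the first letter of T_c(D_a(w))
    i = 0                        # otherwise use C+ and strip the leading c of t
    while i < len(C) and C[i] != a and M[a, C[i]] == 2:
        i += 1
    c0, c1 = C[:i], C[i:]        # C = C0 C1 with C1 = <ba>^q (possibly empty)
    if not c1 or len(c1) == M[a, c1[0]] - 1:
        Cp = a + C               # C+ = aC  (C primitive, or q = m_ab - 1)
    else:
        Cp = c0 + c1 + (c1[0], a)[len(c1) % 2]   # extend <ba>^q to <ba>^{q+1}
    return Cp + t[1:]

def T(a, w):
    """T_a(w): apply S_a L(w) times."""
    for _ in range(len(w)):
        w = S(a, w)
    return w

def divide(v, w):
    """Assuming w = a1...ak divides v, return v:w, so that v ⩦ w·(v:w)."""
    for a in w:
        v = T(a, v)
        assert v.startswith(a), "w does not divide v"
        v = v[1:]
    return v
```

For example `divide("bab", "ab")` returns `"a"`, consistent with $bab⩦ab·a\text{.}$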

## Notes and references

This is a translation, with notes, of the paper: E. Brieskorn and K. Saito, Artin-Gruppen und Coxeter-Gruppen, Inventiones math. 17 (1972), 245-271.

Translated by: C. Coleman, R. Corran, J. Crisp, D. Easdown, R. Howlett, D. Jackson and A. Ram at the University of Sydney, 1996.