Epistemic Status: Evergreen
A note on terminology
Non-Commutative Algebra might not be the right term for this. It made sense to me at the time.
I’m writing this post mostly for myself, since I know it will come up again sometime in the future, and I’ll save myself some time later by having it.
Recently at work, I was on a project that ended up involving transformations of 3D orientations, which we mostly represent with quaternions (and, in some places, transformation matrices). I was struggling to rearrange some of the equations we were working with, and it all came down to not knowing how to deal with non-commutative operations, i.e. operations where $ab \neq ba$.
After thinking about it a little bit, I realized this is really an abstract algebra question (it doesn't matter what the underlying group is), so I consulted my copy of A Book of Abstract Algebra.
Some basic group theory
In chapter 3 we’re given a basic definition of a group.
By a group we mean a set $G$ with an operation $\cdot$ which satisfies the axioms:
- (G1) The operator is associative, i.e. $a \cdot (b \cdot c) = (a \cdot b) \cdot c$
- (G2) There is an element $e$ in $G$ such that $a \cdot e = a$ and $e \cdot a = a$ for every element $a$ in $G$
- (G3) For every element $a$ in $G$, there is an element $a^{-1}$ in $G$ such that $a \cdot a^{-1} = e$ and $a^{-1} \cdot a = e$.
Usually, this operator is referred to as multiplication, but we have to remember that these sets and operators may not be numbers and numeric multiplication. They just have to be sets of things that follow these rules. They could be complex things like matrices and quaternions, or even more exotic things.

Critically, the definition of a group does not require the operator to be commutative. This covers things like matrix multiplication and quaternion multiplication, which are both non-commutative.
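As a quick sanity check, here's a minimal sketch in Python (the `mul` helper and the particular matrices are my own, just for illustration) showing that 2×2 matrix multiplication really is non-commutative:

```python
def mul(X, Y):
    """Multiply two 2x2 matrices represented as tuples of tuples."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))

print(mul(A, B))  # ((2, 1), (1, 1))
print(mul(B, A))  # ((1, 1), (1, 2))  -- AB != BA
```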
Where I was getting tripped up was understanding how you can move things from one side of an equation to the other when the operator is non-commutative. Chapter 4 provides some answers. It explains the cancellation laws, which use the inverse of an element, denoted $a^{-1}$.
First it provides this theorem:
Theorem 1 If $G$ is a group and $a$, $b$, and $c$ are elements of $G$, then
(i) $ab = ac$ implies $b = c$ and
(ii) $ba = ca$ implies $b = c$
Then the proof:
Suppose $ab = ac$. Then
$$a^{-1}(ab) = a^{-1}(ac)$$
$$(a^{-1}a)b = (a^{-1}a)c$$
$$eb = ec$$
$$b = c$$
Part (ii) is proved analogously.
In general, we cannot cancel $a$ in the equation $ab = ca$.
This gave me some of the answers, but not all of them. The important thing to note here is that left-multiplication is canceled by multiplying by the inverse on the left, and likewise right-multiplication is canceled by multiplying by the inverse on the right. What you cannot do is cancel left-multiplication by right-multiplying by the inverse, or vice versa.
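A small numeric sketch of this (the `mul` helper and matrices are mine, chosen to have integer inverses): multiplying $ab$ by $a^{-1}$ on the left recovers $b$, but multiplying by $a^{-1}$ on the right does not.

```python
def mul(X, Y):
    """Multiply two 2x2 matrices represented as tuples of tuples."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

A = ((1, 1), (0, 1))
A_inv = ((1, -1), (0, 1))  # inverse of A (det(A) = 1)
B = ((1, 0), (1, 1))
AB = mul(A, B)

# Canceling on the correct (left) side recovers B...
print(mul(A_inv, AB) == B)  # True
# ...but canceling on the wrong (right) side does not.
print(mul(AB, A_inv) == B)  # False
```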
Generalizing to moving terms around
I don’t necessarily want to cancel things out. I just want to be able to move terms around. We just have to remember that left-multiplication is not the same as right-multiplication and not mix them up.
Suppose I take the equation $ab = ca$ and I want to isolate $b$. I can do the following:
$$a^{-1}(ab) = a^{-1}(ca)$$
$$(a^{-1}a)b = a^{-1}ca$$
$$b = a^{-1}ca$$
We could also right-multiply both sides by $a^{-1}$ in the first step, which gives the similar result $aba^{-1} = c$.
We can't simplify the right-hand side $a^{-1}ca$ any further because multiplication in this group is non-commutative. We can't get the $a^{-1}$ and the $a$ to cancel because $c$ is stuck in the middle.
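To convince myself, a quick numeric sketch (matrix choices and the `mul` helper are mine): pick $a$ and $c$, define $b = a^{-1}ca$, and check that the original equation $ab = ca$ holds while $b$ does not simply collapse to $c$.

```python
def mul(X, Y):
    """Multiply two 2x2 matrices represented as tuples of tuples."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

a = ((1, 1), (0, 1))
a_inv = ((1, -1), (0, 1))  # inverse of a (det(a) = 1)
c = ((1, 0), (1, 1))

# b = a^{-1} c a
b = mul(mul(a_inv, c), a)

# the rearrangement is consistent with ab = ca
print(mul(a, b) == mul(c, a))  # True
# but b != c: the a^{-1} and a cannot cancel around c
print(b == c)                  # False
```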
More details on dealing with inverses
The above should be enough for most things, but there are a few more things from Chapter 4 that are helpful to remember.
Theorem 2 gives us a nice characterization of inverses (its proof is left as an exercise in the book).

Theorem 2 If $G$ is a group and $a$ and $b$ are elements of $G$, then
$$ab = e \quad \text{implies} \quad a = b^{-1} \text{ and } b = a^{-1}$$
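In code terms (a sketch with a helper of my own), Theorem 2 says a one-sided inverse is automatically two-sided: checking $ab = e$ on one side is enough.

```python
def mul(X, Y):
    """Multiply two 2x2 matrices represented as tuples of tuples."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

E = ((1, 0), (0, 1))   # identity element
a = ((1, 1), (0, 1))
b = ((1, -1), (0, 1))  # candidate inverse of a

# one-sided check: ab = e ...
print(mul(a, b) == E)  # True
# ... and by Theorem 2 that is enough: ba = e holds too
print(mul(b, a) == E)  # True
```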
Then Theorem 3 gives us some nice information for computing inverses.
Theorem 3 If $G$ is a group and $a$ and $b$ are elements of $G$, then
(i) $(ab)^{-1} = b^{-1}a^{-1}$ and
(ii) $(a^{-1})^{-1} = a$
The proof of (i) works by showing that $(ab)(b^{-1}a^{-1}) = e$:
$$(ab)(b^{-1}a^{-1}) = a(bb^{-1})a^{-1} = aea^{-1} = aa^{-1} = e$$
Since the product of $ab$ and $b^{-1}a^{-1}$ is equal to $e$, it follows by Theorem 2 that they are each other's inverses. Thus, $(ab)^{-1} = b^{-1}a^{-1}$.
Through associativity this generalizes to:
$$(a_1 a_2 \cdots a_n)^{-1} = a_n^{-1} \cdots a_2^{-1} a_1^{-1}$$
That is, the inverse of a product is the product of the inverses in reverse order. This can be useful for quickly rearranging an equation.
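The reverse-order law is easy to check numerically; here's a sketch (the matrices and the `mul`/`inv` helpers are mine, with `inv` only valid for determinant-1 matrices):

```python
def mul(X, Y):
    """Multiply two 2x2 matrices represented as tuples of tuples."""
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def inv(M):
    """Inverse of a 2x2 matrix with determinant 1."""
    (p, q), (r, s) = M
    return ((s, -q), (-r, p))

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))

# (AB)^{-1} equals B^{-1} A^{-1} ...
print(inv(mul(A, B)) == mul(inv(B), inv(A)))  # True
# ... but not A^{-1} B^{-1}: the order matters
print(inv(mul(A, B)) == mul(inv(A), inv(B)))  # False
```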
If you liked this post, you might also like to read my other post about abstract algebra.