This document assumes familiarity with the Multivectors chapter, in particular the discussions of conjugations and the geometric and outer products.

In this chapter we discuss multivectors explicitly from a programming perspective, and consider how
best to represent and manipulate them in C/C++.
This is a nontrivial subject for nonsmall *N* and the casual reader should skip this chapter for now,
concentrating instead on subsequent chapters describing the uses and benefits of multivectors,
returning here if and when they decide to exploit multivector methods in a programming application.

Compactly stored multivectors are a variably sized data type with extensive bit shifting involved in their processing,
something at which C is bad and C++ worse. For *N*≤5 one can represent multivectors
in non-compacted forms, but for higher *N* the storage space required to represent a
general non-sparse multivector can be prohibitive.
One can more compactly express some multivectors using factored forms, *e.g.* representing an *N*-D *k*-blade
with *k* *N*-D 1-vectors rather than ^{N}*C*_{k} extended basis *k*-blade coordinates, and a general
*k*-vector as a sum of such factored *k*-blades. We will here refer to the number of *k*-blades necessary
to represent a particular *k*-vector as the *degree* of that *k*-vector.
** Existing Multivector Software**

Matlab-based GABLE supports *N*≤3 and is available at
www.carol.wins.uva.nl/~leo/GABLE/index.html

Maple-based CLIFFORD supports *N*≤9 and is available from
http://math.tntech.edu/rafal/cliff5.

C++ BOOST library sources supporting *N*≤5 by default are available at
www.jaapsuter.com .

C++ CLU sources are available from www.perwass.de/cbup/clu.html

Gaigen generates fast C++ sources for a given *N*≤8 geometric algebra and
is available at www.science.uva.nl/ga/gaigen/.

C++ MV 1.3.0 sources supporting *N*≤63 are available from
this site as described below. However, the author has developed MV up to 1.6 with significant functionality extensions and bug fixes.
Anybody interested in using MV 1.6 (provided as a library, with or without sources) should contact the author.
** Representing Multivectors**

*"We share a philosophy about linear algebra: we think basis-free, we write basis-free,
but when the chips are down we close the office door and compute with matrices like fury."*
--- Irving Kaplansky

The principal candidate "orderings" of a 2^{N} element extended basis
{1,**e**_{i},e_{ij},...,e_{12...N}}
for ℝ_{N} are, in the case *N*=3, the *grade-ordered* basis,
{1,**e _{1}**,

We will use the notation a

The signing ("factor ordering") of basis

The simplest way to implement an *N*-D multivector is as a 2^{N} entry 1D array, but this can be grossly
inefficient for sparse multivectors when *N* is large.
A "full" *N*=28 dimensional multivector's worth of 4-byte-float coordinates requires a true gigabyte
(2^{30} bytes) of storage,
so we can safely assume that *N*<64 when considering possible implementations and we
might fervently hope for applications with *N*<16. We will consider
general homogenised Minkowski spacetime ℝ_{5,2} later in this work, so *N*≥7
is of definite interest. Large *N* sparse multivectors arise in multiparticle spacetime algebras
when each particle is correlated with few other particles.
** Small N**

For small *N* we can use

```
typedef struct multivectorstruct * multivector;

typedef struct { bladesflag present; mvcapacity capacity; } mvheader;

struct multivectorstruct { mvheader h; real coeff[OPEN_ENDED]; };
```

so that a multivector type is a pointer to a dynamically sized structure holding a "packed" array of real coordinates.

`OPEN_ENDED` is here assumed `#define`d to whichever of `( )`, `0`, `1`, or `void` best provides a
flexible array member. Type `real` is `float`, `double`,
or a fixed-point type of choice.
We will later be creating a union type for a `real`
or a pointer, so `real` should ideally be of a similar bit width
to the pointer type.
For a more general Clifford algebraic multivector,
we must replace it with a complex or other type for which multiplication and addition are defined.
The `bladesflag` type is a bitfield capable of holding 2^{L} bits, bit *j* flagging the presence of a stored coordinate `coeff[`*j*`]`.
Because C's standard braindead `malloc()` heap library does not provide access to the size of an allocated block,
we store an indicator of the current size of the dynamic array `coeff[]` in `capacity`, and must explicitly check and possibly
"resize" the data whenever the nonsparsity of the represented multivector increases.
This may involve a "reallocation" of memory and new pointer contents, so we must be aware that
`mv` operations may change the `multivector` handle as well as the pointed-to data.
For *N*≤5, `bladesflag` can be a byte, but it is perhaps more natural to adopt a 32-bit type.
The `mvcapacity` integer type can then be held in a byte if desired, but alignment
issues then arise.
[ Rather than storing a 2^{L}-bit bitfield, one can of course store a list of basis blade indices so that, for example p+2.5**e _{1}**+7.2e

For *N*>5, rather than increase the `bladesflag` width (to 32 32-bit words for *N*=10, for example)
we retain *L*=5 and adopt a hierarchical structure

```
typedef union { real r; multivector m; } mvcoeff;

struct multivectorstruct { mvheader h; mvcoeff coeff[OPEN_ENDED]; };
```

where `multivector` and `mvheader` are defined as previously.

For multivectors within e_{12345} we set `level=0` and interpret the coefficients as reals.
For multivectors with blades outside e_{12345} we set `level`>0 and interpret
the coeff entries as `multivector`s (*i.e.* pointers to a multivector structure) of a strictly lower level.

Thus, for example, we represent **e _{4}** + … with `level`=1, `present`=3,
and `coeff[0]` and `coeff[1]` containing pointers to
level 0 multivectors. Note that we have the lower level "coefficient" multivector on the left of higher level blades, in accordance with the left-to-right increasing subscript ordering for same level blades like e…

This system enables us to represent a multivector of any dimension.

If we disallow `mvcoeff` arrays of dimension zero
then `mvcapacity` need only hold values in the range [1,2^{L}] inclusive, which we
represent with *L* bits by interpreting 0 as one cell, 1 as two cells, *etc.*
`level` is unlikely to exceed six and remains zero or one for *N*≤10. It is thus possible
to combine `level` and `capacity` into a single byte for *N*≤40.
Restricting `mvlevel` to a byte limits *N*≤1280.

If we must allow *N*>5 (and we *will* here be interested in ℝ_{5,2}) then, restricting `level`<16, we attain
```
typedef unsigned int uint;

typedef struct { uint bladesflag : 16 ; uint level : 4 ; uint capacity : 4 ; uint spareflags : 8 ; } mvheader;

typedef union { real r; multivector m; } mvcoeff;

struct multivectorstruct { mvheader h; mvcoeff coeff[OPEN_ENDED]; };
```

where the 8-bit `spareflags` is a bitfield provided principally to ensure memory word alignment but consequently
available for various indicators (normalised, singular, freed, a reference count, and so forth), or sufficient to hold the
negative and zero signature bits for the four
basis 1-vectors associated with `level` if desired. Each bit taken from `spareflags`
and added to `level` doubles the number of levels.

This system is used in MV 1.1 (previously available from this site) and throughout the discussions and
example code below. But when the "leaf" `real` coordinate type (a 64-bit `double` perhaps) is significantly wider than the
`multivector` pointer type (a 32-bit pointer perhaps) it is storage inefficient.

MV 1.2 more firmly separates the `level=0` and higher (leaf and node) multivector types
with
```
typedef struct multivectorstruct0 * multivector0;
struct multivectorstruct0 { mvheader h; real r[OPEN_ENDED]; };

typedef struct multivectorstruct * multivector;
struct multivectorstruct { mvheader h; multivector m[OPEN_ENDED]; };
```

The `m` array of a `multivector` is allowed to hold either
`multivector` or `multivector0` pointers. Technically this should
more properly be an array of `union` elements of the two pointer types, universally complicating
the referencing syntax so as to avoid the casting statements that
would otherwise be necessary in particular source contexts, rather than merely appropriate.

Though not strictly necessary, it is arguably sound programming practice to insist that the final unused cells
in a non-leaf `struct multivectorstruct` contain `NULL` rather than "obsolete" pointer values.
This is attainable by explicit zeroings in the "compaction" and "pruning" code loops but is ultimately a matter of taste.

Compaction purists might, for *N*≤32, favour the minimal levelity *L*=2, *i.e.* a unary-or-binary tree representation
using packed structs and a one byte header
```
typedef struct { uint bladesflag : 2 ; uint level : 5 ; uint capacity : 1 ; } mvheader;
```

A levelity of *L*=3 allows `bladesflag` to fit in a byte but *L*=4 is often preferable.
A single-set-bit mask for a 16-bit bladesflag can be held and shifted in a 32-bit word without worrying about
losing the top bit and falsely succeeding a `<=` test with another bladesflag.

Though an even *L* also provides minor advantages in implementing an optimised dual primitive, this author still favours *L*=5
on occasion.

Assuming
**e _{1}**,

Supporting maximal dimension *N* suggests an (*N*+1)-bit `mvgradesflag` data type
whose *i*^{th} bit flags a non-zero grade *i* component. We need *N*+1 bits because both 0 (scalar part)
and *N* (pseudoscalar part) are meaningful. Restricting *N*≤7 allows an 8-bit
`mvgradesflag` data type and in general the `mvgradesflag` type is one bit wider than
the `fullbladeindex` type and so more likely to be limiting. MV 1.6 supports *N*≤63 to
restrict the `mvgradesflag` types to 64-bit.
** mv C++ wrapper class **

C++, a disastrously ill thought out language, disallows *flexible array members* in classes,
making a C++ dynamically compacted multivector class vastly harder to implement
without adding a further layer of runtime indirection, as for example
```
class mv { multivector m ; }; // added indirection at root
```
or
```
class mv { mvheader h; union { real * r ; mv * m; } coeff; }; // added indirection at every level
```

Fortunately the former approach does have much to commend it. If `a` and `b` are multivector implementations we would not wish `b = 4.2*a;` to necessarily construct a
full copy of `a` and then multiply every "leaf" real coefficient by `4.2`. Similarly, if `~` is overloaded to
provide multivector reverse, we would not wish `c = a * b * (~a);` implementing c=aba^{§}
to actually compute a reversed copy of `a` if avoidable.
Having defined a C `multivector` type as a pointer to an open-ended `struct` type, we can define a higher order
abstraction
```
class mv { real s ; multivector m ; mvoperation op; };
```

providing "handles" to our `multivector`s for which we can exploit C++'s albeit rudimentary operator overloading and
garbage collection facilities.

`s` is here a *scale factor*, a scalar multiplier considered to apply to the entire contained multivector,
so that multiplying an `mv` multivector representation by a scalar is just one multiply.
This could potentially be a wider real type than the `multivector` leaf real. We might choose to
hold `multivector` coordinates in `float`s but `s` in a `long double`, for example.

Even though the scale factor of an `mv` is theoretically transparent to the user,
in practice numeric precision issues mean we would ideally wish for the various magnitude measures
of the `multivector` to be of roughly unit order, so that the coordinates are not all small
(other than for 0) nor any really huge. This is particularly true if we use a limited precision
numeric type for the "leaf" coordinates to reduce memory load. Further, it is useful for
the scale factor to provide a rough and ready
measure of the magnitude of the `mv`, within a factor of 2^{±4} say,
to allow for rapid recognition of negligible contributions to a process.

`mvoperation` is a bitfield or similar flags type indicating the "buffered" presence of conjugations
and other single-operand multivector operations such as dual. Primitives that multiply or add `mv`s
are then at liberty to avoid conjugation or dual evaluations when such is possible, postponing
their actual computation until such time as they are either unavoidable or "cancelled out"
by subsequently requested manipulations.

Care must be taken in adding a reference count to the `h` header field of the underlying C
`multivector`-accessed `struct`, and in creating constructors and an assignment `operator=`
override for `mv` that make
"shallow" rather than "deep" copies. Having done so, one can construct a C++ based `mv`
class having no "sight of" the specifics (such as the levelity *L*) of the underlying C (or assembly) `multivector`
suite. Multiple `mv`s may then reference a single `multivector` and will only need to
create a private copy when about to modify their element `m` if `m->h.refcount > 1`.
** Versors and Other Factorisations**

A *k*-blade has an extended basis representation comprising ^{N}*C*_{k} coordinates,
while a *k*-versor requires ∑_{j=0}^{k} ^{N}*C*_{j} coordinates. But each
can also be represented by *k* *N*-D 1-vectors, *e.g.* with a real *k*×*N* matrix.

A *k*-vector of degree *j* is expressible as the weighted sum of *j* *k*-blades and
so may be represented
by *jk* 1-vectors, *e.g.* in a *j*×*k*×*N* real array, rather than
by ^{N}*C*_{k} extended basis coordinates.

These are particular examples of what we might call the *compacticity of factored forms*,
wherein a multivector is stored in an "unevaluated product" factored form
a=b∘c∘…∘g which typically requires far less storage space
than a basis coordinates representation.

For associative ∘ we can evaluate
a∘x as b∘(c∘(…∘(g∘x))…),
which may be a "sparser" computation than "expanding" a before applying it to x.

For *N*=3 the four-real quaternion representation of 2-versors is more efficient than the six reals
of the factored form, but for *N*≥5, ^{N}*C*_{2}=½*N*(*N*-1) > 2*N* so we gain even
for *k*=2. For *k* near ½*N* and large *N*, ^{N}*C*_{k} is vastly greater than *kN* so factored forms may be
essential for some applications.

If memory and processor load permit, one can expand a out when computing with it, regarding the factored form as
a compact "storage" form, but we can often avoid such expansions.
Operations such as inverse may be significantly facilitated by working on factored forms,
and reversions and automorphic conjugations can be applied without having to expand them.

What we are essentially doing with such representations is storing *multivector-valued expressions*
rather than extended basis coordinate representations, enabling us to continue to operate at a higher level of abstraction.

We can adapt and extend such blade and versor representations to allow storage of multivectors in "factorised" forms by restricting the
number of levels allowed to three less than that allowed for by the bitwidth of the `level` type, and exploiting the
freed `level` values to flag alternate formats in which `bladesflag` is interpreted as
an integer *M* less than the `capacity`, with `multivector` `a` considered to hold
the *M* term product
` a->coeff[0].m * a->coeff[1].m * ... * a->coeff[M-1].m `
where `*` is the geometric, outer (∧), or intersective (∩) product.

Our C `multivector` structure can thus support factored forms, and
we will maintain such forms rather than expand them when such is computationally beneficial.

However, MV 1.2 `multivector` functions expand factored forms out when adding them, rather
than seeking common factors, and do not themselves attempt factorisations.

MV 1.2 allows the user to construct such factored forms at the `mv` C++ class level
by setting a global boolean flag `g_mvfactored`. When this is set, the `*` and
`^` operators for the `mv` handles build factored `multivector` representations,
rather than immediately evaluating the products.

The `+` and `-` `mv` operators do not attempt to maintain such factorisations
in MV 1.2, expanding (multiplying) them out into full expanded basis coordinate form prior to adding.
** Programming Multivector Products**

Having covered conjugations and the various restricted products such as ⌋ and ∧ in the
Multivectors chapter,
we approach the
general problem of implementing a general multivector product for our C/C++ large *N* multivector representations, of the form a^{^1} ∘ b^{^2}
where ^{^1} and ^{^2} are arbitrary combinations of the ^{§}, ^{#}, and negation conjugations and ∘ is
the geometric product, a restriction of it, or a commutator product. Generalising the product primitive in this
way will greatly facilitate the hierarchical implementation and makes for a compacter and robuster codebase.

It is important to approach multivector products in as general a manner as possible, because
we *will* want to implement the various alternative products like Forced Euclidean
and Intersective.

We first consider ∘ as the geometric product or a restriction of it such as ∧ or ⌋.

Let us first suppose that the `multivector` representations `a` and `b` for
a and b have the same level, so that `a->coeff[i].m` and `b->coeff[j].m` are multivectors in the space spanned by
e… How we handle this depends on whether … the product of two
`multivector`s (or of two `real` scalars when …).

If `a`'s level is less than that of `b` then we can simply multiply a^{^1}
into `b`'s multivector coefficients, although again we may need to add some involutions if ^{^2}
is reversing.
If `a`'s level is greater than that of `b` then (assuming no reverse in ^{^1})
we commute b^{^2} across f_{i}^{^1}, incurring an involution if blade f_{i}
is odd. If ^{^1} contains ^{§} then the ^{^1} on a_{i} also incurs an ^{#} if
f_{i} is odd.

Thus we have a recursive stratagem that ultimately requires two primitives: the optimised multiplication of two level 0
`multivector`s and the product of two (base level) pure blades from a space of dimension *L*. The former can of course itself
exploit the latter.
** Commutator Products**

We can evaluate ∘ = __×__ or __~__
by means of the geometric product, but this is grotesquely computationally inefficient, with many nonzero blade
coordinates being explicitly calculated twice only to cancel each other out or double.
If the blade-blade product primitive supports the commutation products __×__ and __~__, as it can easily be made to do,
then so too will the base level multivector product primitive. Higher level multivector commutations are more
problematic and perhaps best implemented separately,
rather than adding conditionals within the inner summation loop
of the multivector product primitive.
For a__×__b ≡ ½(ab-ba), we have

(a_{i}^{^1} f_{i}^{^1}) __×__ (b_{j}^{^2} g_{j}^{^2})
= ½( a_{i}^{^1} f_{i}^{^1} b_{j}^{^2} g_{j}^{^2} - b_{j}^{^2} g_{j}^{^2} a_{i}^{^1} f_{i}^{^1} )
= ½( a_{i}^{^1} b_{j}^{^2}' f_{i}^{^1} g_{j}^{^2} - b_{j}^{^2} a_{i}^{^1}' g_{j}^{^2} f_{i}^{^1} )
= ½( a_{i}^{^1} b_{j}^{^2}' ∓ b_{j}^{^2} a_{i}^{^1}' ) f_{i}^{^1} g_{j}^{^2}

with the - occurring *when* f_{i} and g_{j} commute,
and where ^{^1}' is ^{^1} with an added ^{#} if g_{j} is odd and
^{^2}' is ^{^2} with an added ^{#} if f_{i} is odd.
** Addition**

When calculating the blade products f_{i}^{^1} ∘ g_{j}^{^2}
we may get the same resultant blade for multiple *i*,*j* combinations and so have to *add*
together the associated coefficient products. Our compacted `multivector` representation makes addition
a somewhat fiddly procedure akin to merging two separately sorted lists. One potential strategy
is a "bucket sort" type approach wherein the `multivector` being constructed by the
summation is accumulated in an uncompacted `multivector` "register" `reg` of maximal capacity 2^{L}, blade products being added directly into the cell `reg->coeff[k].m` of the register
rather than continuously shuffling data to always occupy the lowest possible cells.
Since in our recursive descent we will always be moving to a strictly lower level we can, for a given level, keep this register singular and global rather than requiring multiple instances on the stack. On exit from the summation loops, the register `multivector` can be
"condensed" into a compacted "result" `multivector` of the required `capacity`.
By the above means, it is relatively straightforward to implement all the multiple "flavours" of conjugated and/or restricted geometric products in a single C function that is passed two
`multivector`s and a product code incorporating two 4- or 6-bit conjugation specifiers and the restriction type,
although the commutator and anticommutator products are sufficiently different to warrant separate routines for
nonzero levels. We simplify the code listed here by removing provision for factored forms. Note that the C symbol
`^` here denotes exclusive-or (XOR) rather than the geometric outer product ∧.
First we have the same-level blade by blade product primitive. Note that `bladeindex` types
are *L*-bit bitflags specifying single-level blades whereas
`fullbladeindex` types are *N*-bit bitflags specifying blades in the full algebra.

A same-level blade-blade product primitive takes two *L*-bit bitflags and (in the case of an orthonormal basis)
returns either an *L*-bit blade indicator (typically the exclusive-or of the inputs)
together with a sign indication bit, or an indicator that the product is zero. The hardest part tends to be the sign indicator,
and it is natural to pretabulate this for common products
with 2^{L}×2^{L} entry 1-bit *LUT*s each requiring 2^{2L-3} bytes. For *L*=5 this is a 2^{7}-byte *LUT* for each pretabulated product
while for *L*=4 it is 2^{5}-byte.

We discuss here the code necessary to perform such a blade-blade product in the absence of a *LUT*, which is of course the code necessary
to fill such a *LUT*.
```
boolean OddBlade(bladeindex bi) { return(BitParity(bi)); }        // 1 iff bi has odd number of nonzero bits

boolean AntiCommutingBlades(bladeindex bi, bladeindex bj)         // return 1 iff blades anticommute
{ if(OddBlade(bj)) return(BitParity(bi & (~bj))); else return(BitParity(bi & bj));
}

bladeindex BladeBladeProduct(bladeindex bi, bladeindex bj, mvproduct * ppro, mvlevel level, bladesflag inhibit)
{ // Multiply (logically) two same-level blades, returning index of resulting blade, or MV_INDEX_NIL if product vanishes.
  // Sets or clears MV_LEFT_NEGATION bit of pro according to sign of blade product (leaves extant if product vanishes).
  // level is required to access correct signatures - unnecessary if Euclidean.
  // inhibit forces particular results to MV_INDEX_NIL.
  // Works for commutative and anticommutative products.
  bladeindex br = bi^bj;                                          // exclusive-or gives product blade index

  // Restrictive product filters
  if( (*ppro) & (MV_ONTO_CONTRACTIVE_PRODUCT|MV_BY_CONTRACTIVE_PRODUCT|MV_INNER_PRODUCT|MV_OUTTER_PRODUCT) )
  { if( ((*ppro) & MV_OUTTER_PRODUCT) && ((bi & bj)!=0) ) return(MV_INDEX_NIL);
    bladeindex bri = br & bi;
    bladeindex brj = br & bj;
    if(    (((*ppro) & MV_ONTO_CONTRACTIVE_PRODUCT) && bri )
        || (((*ppro) & MV_BY_CONTRACTIVE_PRODUCT)   && brj )
        || (((*ppro) & MV_INNER_PRODUCT)            && bri && brj )
      ) return(MV_INDEX_NIL);
  }

  // Commutation product filters
  if(    (((*ppro) & MV_COMMUTATOR_PRODUCT)     && (!AntiCommutingBlades(bi,bj)) )
      || (((*ppro) & MV_ANTICOMMUTATOR_PRODUCT) && ( AntiCommutingBlades(bi,bj)) )
    ) return(MV_INDEX_NIL);

  if( inhibit && ((1 << br) & inhibit) ) return(MV_INDEX_NIL);    // vanishes due to inhibition

  // index established, now have to determine sign and spot zero if null signature involved
  boolean sign = 0;
  bladeindex bit = 1 << (MV_DIM0-1);                              // start with highest axis bit
  boolean bjparity = BitParity(bj);                               // parity of nonzero bj bits below and including current bit
  do
  { if(bi & bit)
    { if(bj & bit)                                                // 1-blade present in both so signature contributes
      { if(!((*ppro) & MV_EUCLIDEAN_PRODUCT))                     // Check signatures enabled
        { if(mv_nullsigs[level] & bit) return(MV_INDEX_NIL);      // vanishes due to null signature
          if(mv_negsigs[level]  & bit) sign ^= 1;                 // accrue sign flip due to negative signature
        }
        bjparity ^= 1;
      }
      sign ^= bjparity;                                           // must commute bi axis across all lower axes present in bj
    }
    else if(bj & bit) bjparity ^= 1;
    bit >>= 1;
  } while(bit);

  // Now derive sign changes due to specified conjugations
  mvconjugation cnj = (*ppro) & MV_CONJUGATION_ALL;
  if(cnj) REPORT2(LOCAL_NEWS,6,("[LftPcnj %X to lft blade %O -> %X] ",cnj, bi, level, mv_conj[cnj][level] & (1 << bi)));
  if(cnj && (mv_conj[cnj][level] & (1 << bi))) sign ^= 1;         // bi conjugation
  cnj = ((*ppro) >> MV_CONJUGATION_SHIFT) & MV_CONJUGATION_ALL;
  if(cnj) REPORT2(LOCAL_NEWS,6,("[RgtPcnj %X to rgt blade %O -> %X] ",cnj, bj, level, mv_conj[cnj][level] & (1 << bj)));
  if(cnj && (mv_conj[cnj][level] & (1 << bj))) sign ^= 1;         // bj conjugation

#if MV_LEFT_NEGATION==1
  *ppro = ((*ppro) & ~1) | sign;
#else
  if(sign) *ppro |= MV_LEFT_NEGATION; else *ppro &= ~MV_LEFT_NEGATION;
#endif
  return(br);
}
```

The above can be simplified and optimised in various ways. The logical evaluation of the sign change accrued
by the commutations irrespective of signatures (and so of level) can be separated out and pretabulated in a level independent
2^{L}×2^{L} bit array
implemented as a `bladeindex` indexed array of `bladesflag`s.

The sign change accrued by negative signatures is available as the parity of (`bi & bj & mv_negsigs[level]`).

The signatures sign flip can be amalgamated with the commutation sign bit into
a level-dependent 2^{L}×2^{L} bit array,
reducing the basis blade-blade product of same-level blades indexed by `bi` and `bj` to forming the resultant blade index
as the exclusive-or `bi ^ bj` with sign given by entry (`bi`,`bj`)
of a 2D 1-bit *LUT*. The above code can of course be used to build such a *LUT*.

Next, the base level generic multivector product.
`rrMUL(x,y)` is assumed to be a macro or inline function
for an optimised multiplication of two leaf `real` types.
```
multivector MultivectorProduct0(multivector r, multivector a, multivector b, mvproduct product, bladesflag inhibit)
{ // Fast MultivectorProduct assuming a and b both have level 0 (works for commutator products too)
  ASSERT(a->h.level==0);
  ASSERT(b->h.level==0);
  if((a==MV_ZERO)||(b==MV_ZERO)) { if(r!=MV_ZERO) r->h.present=MV_FLAG_NONE; return(r); }    // rapid zero case

  ZeroMultivector(mv_reg[0][0],0);                                           // clear mv register
  bladeindex inda,cnta,cntr; inda=cnta=cntr=0; bladesflag bfa=a->h.present;  // prepare for outer loop
  while(bfa)
  { uint32 shf=BSF_32(bfa); bfa >>= shf; bfa>>=1; inda += (bladeindex)shf;   // fast low->high scan for nonzero bit
    bladeindex indb=0; bladeindex cntb=0; bladesflag bfb=b->h.present;       // prepare for inner loop
    while(bfb)
    { uint32 shfb=BSF_32(bfb); bfb >>= shfb; bfb>>=1; indb += (bladeindex)shfb;
      if(cntb > b->h.capacity) REPORT2(LOCAL_NEWS,1,("\n cntb=%X exceeds capacity %X prse=%X bfb=%X bfa=%X",cntb,b->h.capacity,b->h.present,bfb,bfa))
      mvproduct pro = product;
      bladeindex ri = BladeBladeProduct(inda, indb, &pro, 0, inhibit);       // Do blade product first
      if(ri!=MV_INDEX_NIL)                                                   // and check non vanishing.
      { bladesflag rf=(1 << ri);
        ASSERT2(ValidReal(a->coeff[cnta].r) && ValidReal(b->coeff[cntb].r)); // Debug catch bad float data
        Real c=rrMUL(a->coeff[cnta].r, b->coeff[cntb].r);                    // multiply real coordinates
        if(!EQUIVALENT_REALS(c,Real_0))
        { if(pro & MV_LEFT_NEGATION) c=-c;                                   // negate if BladeBladeProduct set NEGATION bit
          if( mv_reg[0][0]->h.present & rf) mv_reg[0][0]->coeff[ri].r += c;
          else                              mv_reg[0][0]->coeff[ri].r  = c;
          mv_reg[0][0]->h.present |= rf;
        }
      } // endif ri
      indb++; cntb++;
    } // endwhile bfb
    inda++; cnta++;
  } // endwhile bfa
  r=Condense(r,mv_reg[0][0]);
  return(r);
}
```

These allow the general generic recursive multivector product function:
```
multivector MultivectorProduct(multivector r, multivector a, multivector b, mvproduct product, bladesflag inhibit)
{ // return a*b where * is the product specified by product (does not work for commutator products)
  if((a==MV_ZERO)||(b==MV_ZERO)) { if(r!=MV_ZERO) r->h.present=MV_FLAG_NONE; return(r); }  // rapid zero case
  if(r==a) { r = MultivectorProduct(NULL,a,b,product,inhibit); KillMultivector(&a); return(r); }
  if(r==b) { r = MultivectorProduct(NULL,a,b,product,inhibit); KillMultivector(&b); return(r); }

  if(a->h.level >= b->h.level)
  { if(a->h.level > b->h.level)
    { // --------------------------------------------------
      // can subsume b into a's multivector coefficients
      // --------------------------------------------------
      bladeindex inda=0; bladeindex cnta=0; bladeindex cntr=0;
      bladesflag bfa=a->h.present;
      ZeroMultivector(r,a->h.level);
      r = EnsureCapacity2(r, a->h.level, BitsSet(a->h.present), MV_DIM0);
      while(bfa)
      { uint32 shf=BSF_32(bfa); bfa >>= shf; bfa>>=1; inda += (bladeindex)shf;  // fast scan and move to nonzero bit
        if((!(product & MV_ONTO_CONTRACTIVE_PRODUCT)) || (inda==0) )
        { mvproduct pro=product;
          if(OddBlade(inda)) { pro ^= MV_RIGHT_INVOLUTION; if(product & MV_LEFT_REVERSE) pro ^= MV_LEFT_INVOLUTION; }
          r->coeff[cntr].m = MultivectorProduct(r->coeff[cntr].m, a->coeff[cnta].m, b, pro, inhibit);
          if(r->coeff[cntr].m!=MV_ZERO) { r->h.present |= (1 << inda); cntr++; }
        }
        cnta++; inda++;
      } // endwhile
      return(r);
    } // endif a->lev > b->lev

    if(a->h.level==0) return(MultivectorProduct0(r,a,b,product,inhibit));       // Rapid both 0 level case

    // --------------------------------------------------------------------------------
    // a->h.level = b->h.level > 0
    // Bilinearly expand product into uncompressed multivector in mv_reg[a->h.level][0]
    // --------------------------------------------------------------------------------
    ZeroMultivector(mv_reg[a->h.level][0],a->h.level);
    mv_reg[a->h.level][0]=EnsureCapacity(mv_reg[a->h.level][0],a->h.level,MV_MAX_CAPACITY);
    bladeindex inda=0; bladeindex cnta=0; bladeindex cntr=0;
    bladesflag bfa=a->h.present;
    while(bfa)
    { uint32 shf=BSF_32(bfa); bfa >>= shf; bfa>>=1; inda += (bladeindex)shf;    // fast scan and move to nonzero a bit
      boolean odda = OddBlade(inda);
      bladeindex indb=0; bladeindex cntb=0; bladesflag bfb=b->h.present;
      while(bfb)
      { uint32 shf=BSF_32(bfb); bfb >>= shf; bfb>>=1; indb += (bladeindex)shf;
        mvproduct pro = product;
        bladeindex ri = BladeBladeProduct(inda, indb, &pro, a->h.level, inhibit); // may toggle MV_LEFT_NEGATION bit in pro
        if(ri!=MV_INDEX_NIL)
        { if(odda)
          { pro ^= MV_RIGHT_INVOLUTION;                            // need to commute b's coeff across a's blade
            if(product&MV_LEFT_REVERSE) pro ^= MV_LEFT_INVOLUTION; // and move a's coeff across a's blade if a reversed
          }
          if((product&MV_RIGHT_REVERSE) && OddBlade(indb)) pro ^= MV_RIGHT_INVOLUTION; // and b's coeff across b's blade if b reversed
          bladesflag rf=(1 << ri);
          if( mv_reg[a->h.level][0]->h.present & rf)
          { mv_reg[a->h.level-1][1] = MultivectorProduct( mv_reg[a->h.level-1][1], a->coeff[cnta].m, b->coeff[cntb].m, pro, inhibit);
            mv_reg[a->h.level][0]->coeff[ri].m = MultivectorAdd(mv_reg[a->h.level][0]->coeff[ri].m, mv_reg[a->h.level-1][1]);
          }
          else
          { mv_reg[a->h.level][0]->coeff[ri].m = MultivectorProduct( mv_reg[a->h.level][0]->coeff[ri].m, a->coeff[cnta].m, b->coeff[cntb].m, pro, inhibit);
            if(mv_reg[a->h.level][0]->coeff[ri].m != MV_ZERO) mv_reg[a->h.level][0]->h.present |= rf;
          }
        } // endif ri
        indb++; cntb++;
      } // endwhile bfb
      inda++; cnta++;
    } // endwhile bfa
    r=Condense(r,mv_reg[a->h.level][0]);
    return(r);
  } // endif

  { // --------------------------------------------------------------------------
    // b->h.level > a->h.level so subsume a into b's multivector coefficients
    // --------------------------------------------------------------------------
    bladeindex indb=0; bladeindex cntb=0; bladeindex cntr=0;
    bladesflag bfb=b->h.present;
    ZeroMultivector(r,b->h.level);
    r = EnsureCapacity(r, b->h.level, BitsSet(b->h.present));
    while(bfb)
    { uint32 shf=BSF_32(bfb); bfb >>= shf; bfb>>=1; indb += (bladeindex)shf;
      if((!(product & MV_BY_CONTRACTIVE_PRODUCT)) || (indb==0) )
      { mvproduct pro = product;
        if( (pro & MV_RIGHT_REVERSE) && OddBlade(indb) ) pro ^= MV_RIGHT_INVOLUTION;
        r->coeff[cntr].m = MultivectorProduct(r->coeff[cntr].m, a, b->coeff[cntb].m, pro, inhibit);
        if(r->coeff[cntr].m != MV_ZERO) { r->h.present |= (1 << indb); cntr++; }
      }
      cntb++; indb++;
    } // endwhile
    return(r);
  }
}
```

The administrative routines `ZeroMultivector` and `Condense` are omitted here, their functions being obvious but tedious. They can be found in the MV sources.
** C++ product wrapper**

The C++ abstractified `mv` type can be multiplied thus. The primitive is complicated slightly by abstracting out ("buffering") the dual as well as the conjugations.
```cpp
mv mv_product(const mv &a, const mv &b, mvproduct pro)
{ pro ^= ( a.conj() | ( b.conj() << MV_CONJUGATION_SHIFT ) );
  if(a.getDual() && (!(g_N & 1))) pro ^= MV_RIGHT_INVOLUTION; // need to commute i across b
  mv r = mv(MultivectorProduct(NULL, a.m, b.m, pro)); r.s = rrMUL(a.s, b.s);
  if(a.getDual() ^ b.getDual()) r.setDual(true);
  if(a.getDual() && b.getDual() && g_i_neg) r.s = -r.s;       // double dual may entail sign flip
  return(r);
}

mv operator*(const mv &a, const mv &b) { return(mv_product(a, b, MV_GEOMETRIC_PRODUCT)); }
mv operator^(const mv &a, const mv &b) { return(mv_product(a, b, MV_OUTTER_PRODUCT)); }
mv operator|(const mv &a, const mv &b) { return(mv_product(a, b, MV_ONTO_CONTRACTIVE_PRODUCT)); }
```

** Dual **

Suppose we are at the `l`^{th} level in the hierarchy with a multivector whose coefficients come from an `lL`-dimensional multivector space. We then have
(∑_{j=0}^{Jz} a_{j}b_{j})^{*} = ∑_{j=0}^{Jz} (a_{j}^{*1})(b_{j}^{*2})
where ^{*1} is the dual over the `lL`-dimensional space inhabited by a_{j}
and b_{j}^{*2} is the dual of basis blade b_{j} in its *L*-dimensional space
(or (*N*-`lL`)-dimensional space if we are at the top level).
For odd *L* we have to take b_{j}^{#}^{*2} if `l` is also odd.
** Programming Meet and Join**

Recall that if the join *j*=a_{k}∪b_{l} is known or guessed
(such as taking *j*=(a_{k}∧b_{l})^{~} when nonvanishing), then the meet
is directly computable as
a_{k}∩b_{l} = (a_{k}¿*j*^{-1})¿b_{l} .
This computation returning zero indicates that the guessed join was incorrect.

One infelicity of univalued-function based languages like C is that integer and modulo (remainder) divisions
are optimally evaluated simultaneously but provided separately. Meet and join are similar in that they are naturally computed together.
The problem of finding the
meet is one of finding a ½(*k*+*l*-`d`)-blade in (a_{k}Δb_{l})^{*} present in a_{k} , while that of finding the join is
one of either extending a_{k}Δb_{l} by the meet or constructing
an (*N*-½(*k*+*l*+`d`))-blade wholly outside both a_{k} and b_{l} and then dualing it.
Conventional approaches attempt to do *one* of these.

The cornerstone of the *meet construction algorithm* described below is that if 1-vector **c**
∈ s=(a_{k}Δb_{l})^{*} then
projection ¯_{ak}(**c**) lies in the meet a_{k}∩b_{l}
while rejection ^_{ak}(**c**) lies outside the join a_{k}∪b_{l}.

[ Proof :
**c**'=¯_{ak}(**c**) lies in a_{k} and so has a component in the meet and a component **c**" in a_{k}'. But
**c**" must arise as a projection of a 1-vector **s**" in s into a_{k} with **c**" = ¯_{ak}(**s**") = ¯_{ak'}(**s**") = 0
since s excludes the disjoint. Because ¯_{ak}(**c**) lies in the meet, it lies outside the disjoint
and so inside (a_{k}Δb_{l})^{*}.
Since **c** also lies inside (a_{k}Δb_{l})^{*} so too must ^_{ak}(**c**)=**c**-**c**',
which is therefore a 1-vector lying outside both a_{k} and the disjoint, and so also lying outside b_{l}
and hence outside the join. ]

Note that it is often computationally prudent to disregard scale within the iterative process and
normalise the join (and rescale the meet) on completion. However, care must be taken to avoid blade magnitudes
getting progressively smaller, leading to precision errors.

Meet Construction Algorithm

Given the delta product a_{k}Δb_{l}, one can construct the meet and join by algorithms such as the following.
However, calculating the delta product is often problematic in that we must impose `d`, the grade of
the highest nonvanishing component. What "nonvanishing" means can in practice be unclear, given possible near-zero "error" coordinates of higher grade blades.
We discuss this further below,
but for now we assume `d` to be deducible from our representation of a_{k}b_{l}.
The algorithm is:

1. Check for and separately handle special cases of zero (or near-zero) a_{k} or b_{l}. The join is independent of the scales of a_{k} and b_{l} so if either is unhelpfully small or large, renormalise them either exactly or approximately.
2. We initially reduce the computational requirements by swapping the arguments if necessary to ensure that *k*≤*l*. We do this simply because we will be working with a_{k} more than with b_{l} and lower grades make for sparser products. If we sign the join for positive maximal coordinate we won't need to later "compensate" for this swap and can "forget" having made it.
3. Compute a_{k}Δb_{l} and set `d` equal to its grade. If `d`=0 set j=a_{k} and Exit.
4. Set j=*i* and `E`_{j} = *N* - ½(*k*+*l*+`d`) , the excess of the grade of j over the grade of the desired join. If we know in advance that the join excludes a particular `q`-blade f_{q} then we can set j= ±f_{q}*i* and `E`_{j} = *N*-*q*-½(*k*+*l*+`d`) .
5. If `E`_{j}=0 then Exit.
6. Set m=1 and `E`_{m} = ½(*k*+*l*-`d`) , the deficit of the grade of m=1 under the grade of the desired meet.
7. If `E`_{m}=0 then set j=a_{k}∧b_{l}=a_{k}Δb_{l} and Exit.
8. Set s= (a_{k}Δb_{l})¿j , an (*N*-`q`-`d`)-blade spanning the space from which we will extract constructive 1-vectors. It is prudent to rescale s if necessary so that its maximal absolute coordinate is nonsmall.
9. FOR *i* = 1 TO *N* : If s is a 1-vector we set **c**=s, otherwise set **c**= ¯_{s}(**e**_{i}) , although if s has a reasonable magnitude then **c**=(**e**_{i}¿s)¿s will suffice and avoids inverting blade s. If **c** is small (**e**_{i} (near) orthogonal to s) then Goto 14.
10. Decompose **c** into components **c**'=¯_{ak}(**c**) and **d**=^_{ak}(**c**) = **c**-**c**' inside and outside a_{k}, and this time we do require correctly signed and scaled projection using a_{k}^{-1}.
11. If **c**'=¯_{ak}(**c**) is robustly nonzero we can accordingly "add" it to our constructed meet with m = m∧**c**' and deduct 1 from `E`_{m}. If this reduces `E`_{m} to zero we set j=(a_{k}Δb_{l})∧m and Exit.
12. If **d**=^_{ak}(**c**)=**c**-**c**' is robustly nonzero we can "remove" it from our constructed join with j = **d**¿j and deduct 1 from `E`_{j}. If this reduces `E`_{j} to zero then Exit. Since **c** was not small, at least one of the step 11 and 12 construction operations will have occurred.
13. If desired, remove **c** from s, so reducing its grade, via s=**c**¿s .
14. NEXT *i*. The algorithm will Exit prior to completing this loop.

On exiting from the above algorithm, j will contain the unnormalised and quasi-arbitrarily signed join, so we normalise and selectively negate it to taste and then set m to either (a_{k}¿*j*^{-1})¿b_{l} or the accumulated m suitably rescaled.

Refinements

One can simplify the algorithm by
adopting only one of the strategies, taking `E`_{j} steps to construct j only, or `E`_{m} steps to construct m only, recovering the other via the delta product.

Rather than looping through the basis 1-vectors in index order and using only robustly nonzero projections, one can iterate through just the basis factors of the (*N*-`q`-`d`)-blade s.

The meet construction algorithm provides both the meet and the join. It also provides
an *m*-frame for the meet (the **c**' in the algorithm), but no *d*-frame for the disjoint part of the join.
It computes the join by "paring down" the pseudoscalar *i* rather than constructively.
** Multivector Programming Issues**

** Whittling **

The manipulation of blades expressed "expanded out" with regard to a particular extended frame
rather than in a factored form frequently requires the judicious "pruning" or ignoring of small
nonzero coordinates. When forming a restrictive product like ¿ or ∧, we know the resultant
grades, but products such as Δ require deciding which nonzero coordinates in a particular expansion are "genuine" and which are
"failed zeroes": values which should theoretically be zero but actually aren't, as a result of limited
computational precision. Similar problems arise in some forms of matrix based analysis.

It becomes a practical necessity to have
some means of "cleaning up" or ignoring the "failed-zero detritus" when performing
multivector operations such as Δ or ^{-1} or ^{↓}, since their presence can profoundly distort the result.
Even in operations where the detritus is numerically benign, it still serves to slow computations and bloat
storage requirements. Typically we might scan a multivector for its highest absolute coordinate, or compute the sum of all the absolute coordinates, or some other measure,
and then discard any blades whose absolute coordinate is less than some small arbitrary fraction of that value, 2^{-16} say. We will refer to
the practice of clearing the "near zero" coordinates in such a manner as *whittling* the multivector.
Whittling alters the multivector computationally, but only in the sense of bringing it nearer to what we consider its "true" value to be.
Whittling is frame dependent and makes little sense to the mathematician, but programmers used to thinking of "transparently tidying"
data prior to computations will be comfortable with the concept.
** Multivector Inspection**

When forming delta products and similar computations it is useful to have a primitive that examines a
multivector as expressed with regard to a particular extended basis and returns the value range of the coordinates
for each grade and which blades have the maximal and minimal coordinates. Since the scalar and pseudoscalar grades have
only one coordinate each, there are effectively 2*N* reals defining the "numeric spread" of the coordinates.
** The Delta Product**

When implementing a delta product we will typically multiply the `mv` representors of a and b into an `mv` c
and calculate a whittling threshold for the `multivector` `c.m`
based on the maximal or summed coordinate moduli. We then establish the maximal grade having a coordinate greater than this
threshold and discard all blades other than those of this grade with coordinate greater than the threshold.
Having done this, there is a good chance the maximum surviving coordinate modulus, `K` say,
of `c.m` may be less than we would like, so
we might consider transparently "rescaling" c by multiplying `c.m` by `(1/K)` and `c.s` by `K`.
** Null basis vectors**

The assumption of an orthonormal basis to simplify the primitives bites back when one
works with *GHC* representations. Null *e*_{∞} is typically implemented as **e**_{+}+**e**_{-}.

The `BladeBladeProduct` routine can be adapted to return no, one, or two
blades together with associated signs (as opposed to reals for a general nonorthogonal basis)
and this allows null basis vectors to be accommodated.

If we have *N*=`lL`+2 so that the top level contains only *e*_{0} and *e*_{∞} then it may be
worth implementing the topmost level as noncompressed in that it always has four
cells (containing MV_ZERO if empty) and a `bladesflag` value (implied or actual) of 0xF.
If `l`=1 so that *N*=*L*+2, as might be the case for
ℝ_{3}^{%} or ℝ_{4,1}^{%}, then (factored representations aside)
we have only two levels: a compressed `multivector0` base level containing
up to 2^{L} reals and an uncompressed top level containing four `multivector0` pointers.

A key advantage of including *e*_{0} and *e*_{∞} in the coordinate basis is that products
such as *e*_{∞}(*e*_{∞}a) zero logically rather than via vanishing arithmetic computation of
(**e**_{+}+**e**_{-})(**e**_{+}+**e**_{-})a.
MV 1.6+ supports non-orthogonal null basis vectors.

Having a pointer-based hierarchical linking of "multivector subcomponents" risks high dimension multiblade
multivectors being "spread out" through memory by the heap manager, to the possible detriment of cache-based memory access optimisations,
and this issue may support raising the levelity `L` from 4 back to 5 or higher.
One might think that the hierarchy system imposes a "performance grain"
on the modelled geometry, with multiplication by **e**_{1} being typically faster than multiplication
by other basis blades. These matters should be taken carefully into consideration when implementing the higher dimensional embeddings described subsequently, since the choice of "indexing" a basis extension within the `multivector` type may significantly affect computation times.
The `mv` C++ class provided by MV is designed to be easy to use, but programmers should remain
computationally aware when using it.
Suppose, for example, that `f` is defined as an `mv` valued function of an `mv` valued
parameter. We could invoke its d-directed gradient as
` (f(a + e*d) - f(a - e*d))/(2*e) `
where `e` is a small `float` value, but this will have the effect
of creating an `mv` class containing a `multivector` struct containing small "coordinates" and a
large scaling factor. This effect will be compounded if we then take second or higher derivatives by subtracting gradients.
Rather than transparently rescaling f(a + e*d) - f(a - e*d) prior to dividing its scale `s` by `2*e`,
it is more efficient to invoke ` f(a+e*d)/(2*e) - f(a-e*d)/(2*e) `.
The `mv` addition requires multiplication by the individual scales of the operands anyway,
so this adds just one division to the computational load but yields an `mv` with `s=1`.
C++ lacks the basic functionality of precedence reconfiguration for overloaded operator symbols.
If we use `^` for `mv` arguments as the outer product ∧ and `+` as multivector addition, then the expression `a^b + c*d`
will evaluate as `a^(b+(c*d))` rather than the `(a∧b) + (c*d)` we desire, with `+` regrettably taking precedence over `^`.
It is accordingly prudent to use explicit brackets in any `mv` expressions involving operators
other than `+`, `-`, `*`, and `/`.
This author has coded and tested a C++ multivector class `mv` exploiting the techniques described above.
MV 1.3 supports *N*≤63 via the `fullbladeindex` type.
MV stores multivectors in "coordinate form" with regard to a particular
extended basis, but compacted so that "empty" (zero) coordinates require no space.
MV also supports "factored forms" so that blades, versors, and more general product forms can be stored
more compactly still as "unevaluated products", though no attempt is made in 1.2 to
maintain such forms under addition.
MSVC/Watcom C++ sources for MV 1.3.0 are available here: mv130.zip.
MV 1.3 is provided as a rough-and-ready MSVC console project rather than a library,
with simple textual display primitives for displaying multivector values and no GUI.
The intention is to provide an easily usable cut-and-paste sourcebase
for a C++ class representing a ℝ_{p,q}
multivector where *p*+*q*=*N* is potentially so large as to prohibit
non-compactified representations, but without compromising
performance for *p*+*q* ≤ 5. Though provision is made for null
signature basis blades in the product primitives, some operations
like inverse and projection assume **e**_{i}^{2} = ±1.

MV 1.3 provides support for multiply Regeneralised Homogenised Coordinates, though I have to date found little use for this.

MV 1.3 is here provided as is. It was developed as a theoretical exercise and proof of
concept rather than for a particular application. While it has had some testing and debugging, it has
not yet proved itself in any major application.
Concise and meaningful code has been chosen in preference to
obfuscating optimisation techniques, and there remains scope for improvements and optimisations.
** MV 1.3 Files**

The files provided in MV 1.3 are:

global.hpp

This contains definitions of various common types and macros, like `uint32` and `MAX`, used by MV but also likely to be needed by more general code not requiring access to multivectors.

mv.hpp

This header file should be `#include`d in any source file wishing to use the `mv` class.
The maximal dimension supported by the code is `#define`d in this file. Setting this to values below 64 or 32
automatically enables various optimisations and compaction techniques.

multivec.hpp

This header file should be `#include`d (instead of mv.hpp, which is included from within multivec.hpp) in any source file wishing to
access both the `mv` class and the more fundamental underlying `multivector` C structures
"wrapped" by the `mv` class.

multivec.cpp

This contains the source for the low level C routines (suitable for assembly optimisations) that manipulate
`multivector` structures. Such structures must be explicitly created and freed by the user when
used directly.

mv.cpp

This contains the source for the C++ wrapper routines presenting `multivector` functionality through
the convenient `mv` class interface. In the MSVC project, this also contains the MSVC console application
equivalent of `main()`, with some example initialisation and use of the MV suite.


Copyright (c) Ian C G Bell 2004, 2014

Web Source: www.iancgbell.clara.net/maths

Latest Edit: 28 Jun 2015.