Multivector Programming
"If I had eight hours to chop down a tree, I'd spend six sharpening my axe." --- Abraham Lincoln

    This document assumes familiarity with the Multivectors chapter, in particular the discussions of conjugations and the geometric and outer products.

    In this chapter we discuss multivectors explicitly from a programming perspective, and consider how best to represent and manipulate them in C/C++. This is a nontrivial subject for nonsmall N, and the casual reader should skip this section for now, concentrating instead on subsequent chapters describing the uses and benefits of multivectors, returning here if and when they decide to exploit multivector methods in a programming application.

    Compactly stored multivectors are a variably sized data type with extensive bit shifting involved in their processing, something which C is bad at and C++ worse. For N≤5 one can represent multivectors in non-compacted forms, but for higher N the storage space required to represent a general non-sparse multivector can be prohibitive. One can more compactly express some multivectors using factored forms, e.g. representing an N-D k-blade with k N-D 1-vectors rather than NCk = N!/(k!(N-k)!) extended basis k-blade coordinates, and a general k-vector as a sum of such factored k-blades. We will here refer to the number of k-blades necessary to represent a particular k-vector as the degree of that k-vector.

Existing Multivector Software
    Matlab-based GABLE supports N≤3 and is available at
    Maple-based CLIFFORD supports N≤9 and is available from
    C++ BOOST library sources supporting N≤5 by default are available at .
    C++ CLU sources are available from
    Gaigen generates fast C++ sources for a given N≤8 geometric algebra and is available at
    C++ MV 1.3.0 sources supporting N≤63 are available from this site as described below; however, the author has developed MV up to 1.6 with significant functionality extensions and bug fixes. Anybody interested in using MV 1.6 (provided as a library, with or without sources) should contact the author.

Representing Multivectors
"We share a philosophy about linear algebra: we think basis-free, we write basis-free, but when the chips are down we close the office door and compute with matrices like fury." --- Irving Kaplansky

    The principal candidate "orderings" of a 2^N element extended basis {1, ei, eij, ..., e12...N} for ℝ^N are, in the case N=3, the grade-ordered basis {1,e1,e2,e3,e23,e31,e12,e123} and the bitwise-ordered basis {1,e1,e2,e12,e3,e13,e23,e123}. Grade-ordering facilitates storage overlapping of known-sparse multivectors, but bitwise-ordering is in many ways preferable: it facilitates implementation of multivector manipulations via direct bitwise manipulations of indices, and it provides N-invariant storage (increasing N does not affect the representation of e12, for example).
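In the bitwise ordering these index manipulations are direct: the basis blade with 1-vector factors e_i corresponds to the integer with bits i-1 set, its grade is the population count of that integer, and the basis blade produced by a geometric product of two basis blades is indexed by the exclusive-or of their indices (the sign is handled separately). A minimal sketch (function names are illustrative, not from MV):

```c
#include <assert.h>

typedef unsigned int bladeindex;   /* bit k set means the blade contains e(k+1) */

/* grade = number of 1-vector factors = population count of the index */
int BladeGrade(bladeindex b)
{   int g = 0;
    while(b) { g += b & 1; b >>= 1; }
    return g;
}

/* Index of the basis blade resulting from a geometric product of two basis
   blades: common 1-vectors square away to +/-1 or 0, so the surviving factors
   are the exclusive-or of the two indices (sign computed separately). */
bladeindex BladeProductIndex(bladeindex bi, bladeindex bj)
{   return bi ^ bj; }
```

Thus e12 (index 0x3) times e1 (index 0x1) yields a blade with index 0x2, i.e. ±e2, independently of N.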
    We will use the notation a[.i.] for 0 ≤ i < 2^N to indicate the ith component of the 2^N dimensional 1-vector representation of a multivector a corresponding to the ith element in a given ordered basis.
    The signing ("factor ordering") of basis k-blades is also a matter of preference. One can order and sign a bivector basis "dually" as { e23, e31, e12 } (the duals of e1, e2, e3) or "index sequentially" as { e12, e13, e23 }. We adopt the latter here.

    The simplest way to implement an N-D multivector is as a 2^N entry 1D array, but this can be grossly inefficient for sparse multivectors when N is large. A "full" N=28 dimensional multivector's worth of 4-byte-float coordinates requires a true gigabyte (2^30 bytes) of storage, so we can safely assume that N<64 when considering possible implementations, and we might fervently hope for applications with N<16. We will consider general homogenised Minkowski spacetime ℝ^(5,2) later in this work, so N≥7 is of definite interest. Large N sparse multivectors arise in multiparticle spacetime algebras when each particle is correlated with few other particles.

Small N
    For N≥3 it is prudent to adopt a compact representation such as
 typedef struct multivectorstruct * multivector;
 typedef struct { bladesflag present; mvcapacity capacity; } mvheader;
 struct multivectorstruct { mvheader h; real coeff[OPEN_ENDED]; };

so that a multivector type is a pointer to a dynamically sized structure holding a "packed" array of real coordinates.
    OPEN_ENDED is here assumed #defined to whichever of ( ), 0, 1, or void best provides a flexible, dynamically-sized "open-ended" array member in the given C compiler.
    Type real is float, double, or a fixed-point type of choice. We will later be creating a union type for a real or a pointer, so real should ideally be of similar bit width to the pointer type. For a more general Clifford algebraic multivector, we must replace it with a complex or other type for which multiplication and addition are defined.
    The bladesflag type is a bitfield capable of holding 2^L bit flags, where L is the maximal dimension (the levelity) this "flat" representation can handle. Typically L = 3, 4, or 5. A nonzero bit in position i indicates a nonzero a[.i.] held in coeff[j] where i is the jth nonzero bit in present. Whenever a nonzero coordinate is added to or removed from the multivector, the coeff entries may require some "shunting" in memory to release or fill the appropriate gap. Thus operations in which sparsity is changed are likely to involve modest memory moves, and products will involve an element of sorting.
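The present-bit to coeff-slot mapping can be sketched as follows (a minimal illustration; CoeffSlot is a hypothetical helper, not an MV routine):

```c
#include <assert.h>

typedef unsigned int bladesflag;

/* Return the index j into the packed coeff[] array holding coordinate a[.i.],
   or -1 if that coordinate is absent (i.e. zero). j is the number of set bits
   of present strictly below bit i. */
int CoeffSlot(bladesflag present, int i)
{   if(!((present >> i) & 1)) return -1;                /* bit clear: coordinate is zero */
    bladesflag below = present & (((bladesflag)1 << i) - 1);
    int j = 0;
    while(below) { below &= below - 1; j++; }           /* count lower set bits */
    return j;
}
```

For the multivector p + 2.5e1 + 7.2e13 with present = 0x23, for example, the e13 coordinate (blade index 0x5) lives in coeff[2].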
    Because C's standard braindead malloc() heap library does not provide access to the size of an allocated block, we store an indicator of the current size of the dynamic array coeff[] in capacity, and must explicitly check and possibly "resize" the data whenever the nonsparsity of the represented multivector increases. This may involve a "reallocation" of memory and a new pointer value, so we must be aware that mv operations may change the multivector handle as well as the pointed-to data.
    For L≤3, bladesflag can be a byte, but it is perhaps more natural to adopt a 32-bit type so that we can handle L≤5. The mvcapacity integer type can then be held in a byte if desired, but alignment issues then arise.

    [ Rather than storing a 2^L-bit bitfield, one can of course store a list of basis blade indices so that, for example, p + 2.5e1 + 7.2e13 would be stored as { (p;0x0), (2.5;0x1), (7.2;0x5) } rather than { (p,2.5,7.2) ; 0x23 }. This has the advantage that the coordinates can be stored in any order, but is somewhat flabby and makes operations such as taking the scalar part or recognising a 1-vector far slower. If blades are simply added to the end of the list rather than scanning for an existing presence, the list can exceed 2^L entries. This author favours the bladesflag bitfield approach. ]

Large N

    For N>5, rather than increase the bladesflag width (to 32 32-bit words for N=10, for example), we retain L=5 and adopt a hierarchical structure
 typedef union { real r; multivector m; } mvcoeff;
 struct multivectorstruct { mvheader h; mvcoeff coeff[OPEN_ENDED]; };

where multivector and mvheader are defined as previously.
     For multivectors within e12345 we set level=0 and interpret the coefficients as reals. For multivectors with blades outside e12345 we set level>0 and interpret the coeff entries as multivectors (i.e. pointers to a multivector structure) of a strictly lower level.
    Thus, for example, we represent e4 + e6 + e123456 = (e4)(1) + (1 + e12345)(e6) as a multivector with level=1; present = 3; and coeff[0] and coeff[1] containing pointers to level 0 multivectors containing e4 and (1 + e12345) respectively.
    Note that we have the lower level "coefficient" multivector on the left of higher level blades, in accordance with the left-to-right increasing subscript ordering for same level blades like e23.
    This system enables us to represent a multivector of any dimension N as a "tree" having level 0 at the leaves and level ⌊(N-1)/L⌋ at the root, with sparse multivectors being stored reasonably compactly. Multivector operations are programmed recursively and are typically faster over the subspace spanned by e12345 than that spanned by, say, e34567.
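The level of a given full (N-bit) blade index follows by treating the index as a string of L-bit digits; a sketch under the L=5 convention above (BladeLevel is illustrative, not an MV routine):

```c
#include <assert.h>

#define MV_LEVELITY 5   /* L: number of basis 1-vectors handled per level */

/* The level of a blade is determined by its highest occupied L-bit digit:
   blades within e12345 (bits 0..4) are level 0, blades involving e6..e10
   need a level 1 node, and so on up to the root at level floor((N-1)/L). */
int BladeLevel(unsigned long long fullindex)
{   int level = 0;
    while(fullindex >> MV_LEVELITY) { fullindex >>= MV_LEVELITY; level++; }
    return level;
}
```

So e12345 (index 0x1F) is a level 0 blade, while e6 (index 0x20) and e123456 (index 0x3F) require level 1 nodes, matching the e4 + e6 + e123456 example above.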

    If we disallow mvcoeff arrays of dimension zero then mvcapacity need only hold values in the range [1,2^L] inclusive, which we represent with L bits by interpreting 0 as one cell, 1 as two cells, etc. level is unlikely to exceed six and remains zero or one for N≤10. It is thus possible to combine level and capacity into a single byte for N≤40. Restricting mvlevel to a byte limits N≤1280.
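Such a packing might be sketched as follows (PackHeader and friends are hypothetical names; capacity is stored biased by one so that five bits cover [1,32] cells, leaving three bits of level and hence N ≤ 40):

```c
#include <assert.h>

typedef unsigned char mvbyte;

/* Pack level (0..7) and cell count (1..32, stored as cells-1) into one byte. */
mvbyte PackHeader(int level, int cells)
{   return (mvbyte)((level << 5) | (cells - 1)); }

int UnpackLevel(mvbyte h) { return h >> 5; }
int UnpackCells(mvbyte h) { return (h & 0x1F) + 1; }
```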

    If we must allow N>5 (and we will here be interested in ℝ^(5,2)) it may make sense to reduce the levelity to L=4 so that level 0 multivectors exist within e1234. Interpreting capacity i as indicating i+1 cells in the dynamic array keeps [1,16] within a nybble; and assuming N≤64 so that level<16, we attain
 typedef unsigned int uint;
 typedef struct { uint bladesflag : 16; uint level : 4; uint capacity : 4; uint spareflags : 8; } mvheader;
 typedef   union { real r; multivector m;}   mvcoeff;
 struct multivectorstruct { mvheader h; mvcoeff coeff[OPEN_ENDED]; };

where the 8-bit spareflags is a bitfield provided principally to ensure memory word alignment but consequently available for various indicators (normalised, singular, freed, a reference count, and so forth), or sufficient to hold the negative and zero signature bits for the four basis 1-vectors associated with level if desired. Each bit taken from spareflags and added to level doubles the number of levels and so doubles the attainable N.

    This system is used in MV 1.1 (previously available from this site) and throughout the discussions and example code below. But when the "leaf" real coordinate type (a 64-bit double perhaps) is significantly wider than the multivector pointer type (a 32-bit pointer perhaps) it is storage inefficient.
    MV 1.2 more firmly separates the level=0 and higher (leaf and node) multivector types with
 typedef struct multivectorstruct0 * multivector0; struct multivectorstruct0 { mvheader h; real r[OPEN_ENDED]; };
 typedef struct multivectorstruct * multivector;   struct multivectorstruct { mvheader h; multivector m[OPEN_ENDED]; };

The m array of a multivector is allowed to hold either multivector or multivector0 pointers. Technically this should more properly be an array of union elements of the two pointer types, universally complicating the referencing syntax so as to avoid the proud and thrusting casting statements that would otherwise be necessary in particular source contexts, rather than merely appropriate.

    Though not strictly necessary, it is arguably sound programming practice to insist that the final unused cells in a non-leaf struct multivectorstruct contain NULL rather than "obsolete" pointer values. This is attainable by explicit zeroings in the "compaction" and "pruning" code loops but is ultimately a matter of taste.

    Compaction purists might, for N≤32, favour the minimal levelity L=2, i.e. a unary-or-binary tree representation using packed structs and a one byte header
 typedef struct { uint bladesflag : 2; uint level : 5; uint capacity : 1; } mvheader;

    A levelity of L=3 allows bladesflag to fit in a byte, but L=4 is often preferable. A single-set-bit mask for a 16-bit bladesflag can be held and shifted in a 32-bit word without worrying about losing the top bit and falsely succeeding a <= test with another bladesflag.
    Though an even L also provides minor advantages in implementing an optimised dual primitive, this author still favours L=5 on occasion.

    Assuming e1,e2,...,eN to be an orthonormal basis for ℝ^(p,q) allows for significant computational optimisations. Orthogonality ensures that the geometric product of any two (extended) basis blades is a single scaled basis blade, greatly simplifying coding, and normality ensures that the scaling factor accrued is either 1, -1, or 0, keeping blade manipulations logical rather than arithmetic. We can then represent the signatures of the N basis 1-vectors by means of two N-element bit-arrays. One array flags negative signatures and the other null signatures. If a null signature bit is set, the corresponding negative signature bit is ignored and available for another boolean property of null basis vectors (such as whether its quasi-inverse lies immediately above or below it in the basis ordering).

    Supporting maximal dimension N suggests an (N+1)-bit mvgradesflag data type whose ith bit flags a non-zero grade i component. We need N+1 bits because both 0 (scalar part) and N (pseudoscalar part) are meaningful. Restricting N≤7 allows an 8-bit mvgradesflag data type, and in general the mvgradesflag type is one bit wider than the fullbladeindex type and so more likely to be limiting. MV 1.6 restricts N≤63 to keep the mvgradesflag type within 64 bits.
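The grades flag follows directly from the bitwise blade indexing: the grade of a blade is the population count of its index, so each nonzero blade contributes one bit to the grades flag. A sketch (GradeFlagOfBlade is illustrative, not an MV routine):

```c
#include <assert.h>

typedef unsigned long long mvgradesflag;
typedef unsigned long long fullbladeindex;

/* Bit g of the result flags a (potentially) nonzero grade g component. */
mvgradesflag GradeFlagOfBlade(fullbladeindex b)
{   int g = 0;
    while(b) { g += (int)(b & 1); b >>= 1; }   /* grade = popcount of blade index */
    return (mvgradesflag)1 << g;
}
```

The grades flag of a whole multivector is then the or of GradeFlagOfBlade over its nonzero blades.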

mv C++ wrapper class
    C++, a disastrously ill thought out language, disallows flexible array members in classes, making a C++ dynamically compacted multivector class vastly harder to implement without adding a further layer of runtime indirection, as for example
 class mv { multivector m ; };    // added indirection at root

 class mv { mvheader h; union { real * r ; mv * m; } coeff; };   // added indirection at every level

    Fortunately the former approach does have much to commend it. If a and b are multivector implementations we would not wish b = 4.2*a; to necessarily construct a full copy of a and then multiply every "leaf" real coefficient by 4.2. Similarly, if ~ is overloaded to provide multivector reverse, we would not wish c = a * b * (~a); implementing c = aba§ to actually compute a reversed copy of a if avoidable. Having defined a C multivector type being a pointer to an open-ended struct type, we can define a higher order abstraction
 class mv { real s ; multivector m ; mvoperation op; };

providing "handles" to our multivectors for which we can exploit C++'s albeit rudimentary operator overloading and automatic destruction facilities.

    s is here a scale factor considered to apply to the entire contained multivector, so that multiplying an mv multivector representation by a scalar is just one multiply. This could potentially be a wider real type than the multivector leaf real. We might choose to hold multivector coordinates in floats but s in a long double, for example.
    Even though the scale factor of an mv is theoretically transparent to the user, in practice numeric precision issues mean we would ideally wish for the various magnitude measures of the multivector to be of roughly unit order, so that the coordinates are not all small (other than for 0) nor any really huge. This is particularly true if we use a limited precision numeric type for the "leaf" coordinates to reduce memory load. Further, it is useful for the scale factor to provide a rough and ready measure of the magnitude of the mv, within a factor of 2^±4 say, to allow for rapid recognition of negligible contributions to a process.

    mvoperation is a bitfield or similar flags type indicating the "buffered" presence of conjugations and other single-operand multivector operations such as dual. Primitives that multiply or add mvs are then at liberty to avoid conjugation or dual evaluations when such is possible, postponing their actual computation until such time as they are either unavoidable or "cancelled out" by subsequently requested manipulations.

    Care must be taken in adding a reference count to the h header field of the underlying C multivector struct and creating constructors and an assignment operator= override for mv that make "shallow" rather than "deep" copies. Having done so, one can construct a C++ based mv class having no "sight of" the specifics (such as the levelity L) of the underlying C (or assembly) multivector suite. Multiple mvs may then reference a single multivector and will only need to create a private copy when about to modify their element m if m->h.refcount > 1.
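The shallow-copy, copy-on-write discipline can be sketched in C as follows (a minimal illustration with hypothetical names, standing in for the mv constructors and operator= described above):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* A toy stand-in for the refcounted C multivector block. */
typedef struct { int refcount; double coeff[4]; } mvblock;

mvblock * NewBlock(void)
{   mvblock * b = calloc(1, sizeof *b);
    b->refcount = 1;
    return b;
}

/* Copy-construction / assignment: just share the block and bump the count. */
mvblock * ShallowCopy(mvblock * b)
{   b->refcount++;  return b; }

/* Call before modifying: clone only if the block is shared. */
mvblock * MakePrivate(mvblock * b)
{   if(b->refcount <= 1) return b;
    mvblock * c = malloc(sizeof *c);
    memcpy(c, b, sizeof *c);
    c->refcount = 1;
    b->refcount--;
    return c;
}
```

Copies are thus O(1) until a write occurs, at which point exactly one private clone is made.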

Versors and Other Factorisations
    A k-blade has an extended basis representation comprising NCk coordinates, while a k-versor requires ∑_{j=0}^{k} NCj coordinates. But each can also be represented by k N-D 1-vectors, e.g. with a real k×N matrix.
    A k-vector of degree j is expressible as the weighted sum of j k-blades and so may be represented by jk 1-vectors, e.g. in a j×k×N real array, rather than by NCk extended basis coordinates.
    These are particular examples of what we might call the compacticity of factored forms, wherein a multivector is stored in an "unevaluated product" factored form a = b∘c∘...∘g which typically requires far less storage space than a basis coordinates representation.
    For associative ∘ we can evaluate a∘x as b∘(c∘(...∘(g∘x))...), which may be a "sparser" computation than "expanding" a before applying it to x.
    For N=3 the four-real quaternion representation of 2-versors is more efficient than the six reals for the factored form, but for N≥5 we have NC2 = ½N(N-1) ≥ 2N, so we at least break even for k=2. For k near ½N and large N, NCk is vastly greater than kN, so factored forms may be essential for some applications.
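The storage counts are easily compared; a sketch (Binomial is an illustrative helper, exact for the modest arguments of interest here):

```c
#include <assert.h>

/* C(n,k) = n!/(k!(n-k)!) computed multiplicatively; each intermediate
   r * (n-k+i) / i is itself a binomial coefficient, so the division is exact. */
unsigned long long Binomial(int n, int k)
{   unsigned long long r = 1;
    for(int i = 1; i <= k; i++) r = r * (unsigned long long)(n - k + i) / i;
    return r;
}
```

For N=26, for instance, a 13-blade needs C(26,13) = 10400600 extended basis coordinates but only 13×26 = 338 reals in factored form.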
    If memory and processor load permit, one can expand a out when computing with it, regarding the factored form as a compact "storage" form, but we can often avoid such expansions. Operations such as inverse may be significantly facilitated by working on factored forms, and reversions and automorphic conjugations can be applied without having to expand them.
    What we are essentially doing with such representations is storing multivector-valued expressions rather than extended basis coordinate representations, enabling us to continue to operate at N for which basis coordinate representations would exceed available memory. We are entering the realm of symbolic computations.

    We can adapt and extend such blade and versor representations to allow storage of multivectors in "factorised" forms by restricting the number of levels allowed to three less than that allowed for by the bitwidth of the level type, and exploiting the freed level values to flag alternate formats in which bladesflag is interpreted as an integer M less than the capacity, with multivector a considered to hold the M term product a->coeff[0].m * a->coeff[1].m * ... * a->coeff[M-1].m where * is the geometric, outer (∧), or intersective (∩) product.
    Our C multivector structure can thus support factored forms, and we will maintain such forms rather than expand them when such is computationally beneficial.
    However, MV 1.2 multivector functions expand factored forms out when adding them, rather than seeking common factors, and do not themselves attempt factorisations.
    MV 1.2 allows the user to construct such factored forms at the mv C++ class level by setting a global boolean flag g_mvfactored. When this is set, the * and ^ operators for the mv handles build factored multivector representations, rather than immediately evaluating the products.
    The + and - mv operators do not attempt to maintain such factorisations in MV 1.2, expanding (multiplying) them out into full expanded basis coordinate form prior to adding.

Programming Multivector Products
    Having covered conjugations and the various restricted products such as ¿ and ∧ in the Multivectors chapter, we approach the general problem of implementing a general multivector product for our C/C++ large N multivector representations of the form a^1 ∘ b^2, where ^1 and ^2 are arbitrary combinations of the §, #, and negation conjugations and ∘ is the geometric product, a restriction of it, or a commutator product. Generalising the product primitive in this way will greatly facilitate the hierarchical implementation and makes for a more compact and robust codebase.

    It is important to approach multivector products in as general a manner as possible, because we will want to implement the various alternative products like Forced Euclidean and Intersective, and we will also wish to compute restrictive products like ∧ or ¿ or restrictions to particular grades as rapidly as possible, with minimal code duplication.

Geometric and Restricted Products
    We first consider ∘ as the geometric product or a restriction of it such as ∧ or ¿ or * (but not Δ, which we evaluate by computing and examining the full geometric product).
    Let us first suppose that multivector representations a and b for a and b have the same level l, so that a = ∑_{i=0}^{α} a_i f_i and b = ∑_{j=0}^{β} b_j g_j where α and β are "nonsparsity" or "basis-degree" integers in the range 0 to 2^L, a_i = a->coeff[i].m and b_j = b->coeff[j].m are multivectors in the space spanned by e1,...,e_{lL}, and f_i and g_j are basis blades in the space associated with level l, i.e. the space spanned by e_{lL+1}, e_{lL+2}, ..., e_{(l+1)L} whose pseudoscalar is the L-blade e_{lL+1}∧e_{lL+2}∧...∧e_{(l+1)L}.
    a^1 ∘ b^2   =   (∑_{i=0}^{α} a_i f_i)^1 ∘ (∑_{j=0}^{β} b_j g_j)^2   =   ∑_{i=0}^{α} ∑_{j=0}^{β} (a_i f_i)^1 ∘ (b_j g_j)^2
    How we handle this depends on whether ^1 and ^2 are reversing conjugations. Assuming for the moment that they are not we have
    a^1 ∘ b^2   =   ∑_{i=0}^{α} ∑_{j=0}^{β} (a_i^1 f_i^1) ∘ (b_j^2 g_j^2) .
    Since f_i is from a distinct space to b_j, we can commute the b_j^2 across f_i^1 provided we add an involution # to the ^2 acting on b_j whenever blade f_i is of odd grade. Letting ^2' denote this modified conjugation we have a^1 ∘ b^2   =   ∑_{i=0}^{α} ∑_{j=0}^{β} (a_i^1 ∘ b_j^2') (f_i^1 ∘ g_j^2) .
    f_i^1 ∘ g_j^2 is a product of two basis blades from the L-dimensional space associated with level l and can be immediately evaluated. If it vanishes, either because of a common null signature 1-vector or because ∘ is restrictive, then the associated a_i^1 ∘ b_j^2' product can be neglected. Otherwise f_i^1 ∘ g_j^2 will be a basis blade multiplied by ±1, and any negation so introduced can be subsumed into either ^1 or ^2' for the product a_i^1 ∘ b_j^2', which is a product of two level l-1 multivectors (or of two real scalars when l=0), so we have the basis for a recursive procedure.
    When ^1 or ^2 involve §, things are complicated only slightly by the inclusion of some further grade-triggered involutions, necessitated to compensate for "sliding" the "components" into their desired "separation" positions.

    If a's level is less than that of b then we can simply multiply a^1 into b's multivector coefficients, although again we may need to add some involutions if ^2 is reversing. If a's level is greater than that of b then (assuming no reverse in ^1) we commute b^2 across f_i^1, incurring an involution if blade f_i is odd. If ^1 contains § then the ^1 on a_i also incurs a # if f_i is odd.

    Thus we have a recursive stratagem that ultimately requires two primitives: the optimised multiplication of two level 0 multivectors, and the product of two (base level) pure blades from a space of dimension L. The former can of course itself exploit the latter.

Commutator Products
    We can evaluate ∘ = × or ~ by means of the geometric product, but this is grotesquely computationally inefficient, with many nonzero blade coordinates being explicitly calculated twice only to cancel each other out or double. If the blade-blade product primitive supports the commutation products × and ~, as it can easily be made to do, then so too will the base level multivector product primitive. Higher level multivector commutations are more problematic and perhaps best implemented separately, rather than adding conditionals within the inner summation loop of the multivector product primitive. For a×b ≡ ½(ab-ba), we have
      (a_i^1 f_i^1) × (b_j^2 g_j^2)   =   ½( a_i^1 f_i^1 b_j^2 g_j^2 - b_j^2 g_j^2 a_i^1 f_i^1 )
  =   ½( a_i^1 b_j^2' f_i^1 g_j^2 - b_j^2 a_i^1' g_j^2 f_i^1 )   =   ½( a_i^1 b_j^2' ∓ b_j^2 a_i^1' ) f_i^1 g_j^2 ,   with the - occurring when f_i and g_j commute, and where ^1' is ^1 with an added # if g_j is odd and ^2' is ^2 with an added # if f_i is odd.

    When calculating the blade products f_i^1 ∘ g_j^2 we may get the same resultant blade for multiple i,j combinations and so have to add together the associated a_i^1 ∘ b_j^2' multivector products. The hierarchical and compacted nature of the multivector representation makes addition a somewhat fiddly procedure akin to merging two separately sorted lists. One potential strategy is a "bucket sort" type approach wherein the level l multivector being constructed by the ∑_i ∑_j loop is built in an "expanded form" in a multivector "register" reg of maximal capacity 2^L, with the multivector coefficient of the kth basis blade being constructed in the kth cell reg->coeff[k].m of the register rather than continuously shuffling data to always occupy the lowest possible cells.
    Since in our recursive descent we will always be moving to a strictly lower level, we can, for a given level, keep this register singular and global rather than requiring multiple instances on the stack. On exit from the summation loops, the register multivector can be "condensed" into a compacted "result" multivector of the required capacity.

C multivector product routine
    By the above means, it is relatively straightforward to implement all the multiple "flavours" of conjugated and/or restricted geometric products in a single C function that is passed two multivectors and a product code incorporating two 4- or 6-bit conjugation specifiers and the restriction type, although the commutator and anticommutator products are sufficiently different to warrant separate routines for nonzero levels. We simplify the code listed here by removing provision for factored forms. Note that the C symbol ^ here denotes exclusive-or (XOR) rather than the outer product ∧. The alignment of comments has suffered the transition to HTML somewhat.

    First we have the same-level blade by blade product primitive. Note that bladeindex types are L-bit bitflags specifying single-level blades, whereas fullbladeindex types are N-bit bitflags specifying blades in the full algebra.
    A same-level blade-blade product primitive takes two L-bit bitflags and (in the case of an orthonormal basis) returns either an L-bit blade indicator (typically the exclusive or of the inputs) and a sign indication bit, or an indicator that the product is zero. The hardest part tends to be the sign indicator, and it is natural to pretabulate this for common products with 2^L × 2^L entry 1-bit LUTs, each requiring 2^(2L-3) bytes. For L=5 this is a 2^7-byte LUT for each pretabulated product, while for L=4 it is 2^5 bytes.
    We discuss here the code necessary to perform such a blade-blade product in the absence of a LUT, which is of course the code necessary to fill such a LUT.
 boolean OddBlade(bladeindex bi) { return(BitParity(bi)); }   // 1 iff bi has odd number of nonzero bits
 boolean AntiCommutingBlades(bladeindex bi,bladeindex bj)     // return 1 iff blades anticommute
 {   if(OddBlade(bj)) return(BitParity(bi & (~bj)));   else return(BitParity(bi & bj));   }
 bladeindex BladeBladeProduct(bladeindex bi,bladeindex bj,mvproduct * ppro,mvlevel level,bladesflag inhibit)
 {   // Multiply (logical) two same-level blades, returning index of resulting blade, or MV_INDEX_NIL if product vanishes.
    // Sets or clears MV_LEFT_NEGATION bit of *ppro according to sign of blade product (leaves extant if product vanishes).
    // level is required to access correct signatures - unnecessary if Euclidean.
    // inhibit forces particular results to MV_INDEX_NIL.
    // Works for commutative and anticommutative products.
    bladeindex br = bi^bj;    // exclusive or gives product blade index
    // Restrictive product filters
    if(((*ppro) & MV_OUTTER_PRODUCT) && ((bi & bj)!=0)) return(MV_INDEX_NIL);
    {   bladeindex bri = br & bi;
       bladeindex brj = br & bj;
       if(   (((*ppro) & MV_ONTO_CONTRACTIVE_PRODUCT) && bri )
          || (((*ppro) & MV_BY_CONTRACTIVE_PRODUCT)   && brj )
          || (((*ppro) & MV_INNER_PRODUCT)            && bri && brj )
         ) return(MV_INDEX_NIL);
    }
    // Commutation product filters
    if(   (((*ppro) & MV_COMMUTATOR_PRODUCT)     && (!AntiCommutingBlades(bi,bj)) )
       || (((*ppro) & MV_ANTICOMMUTATOR_PRODUCT) && ( AntiCommutingBlades(bi,bj)) )
      ) return(MV_INDEX_NIL);
    if( inhibit && ((1 << br) & inhibit) ) return(MV_INDEX_NIL);   // vanishes due to inhibition
    // index established, now have to determine sign and spot zero if null signature involved
    boolean sign=0;
    bladeindex bit = 1 << (MV_DIM0-1);                     // start with highest axis bit
    boolean bjparity = BitParity(bj);                      // parity of nonzero bj bits below and including current bit
    do
    {   if(bi & bit)
       {   if(bj & bit)                                    // 1-blade present in both so signature contributes
          {   if(!((*ppro) & MV_EUCLIDEAN_PRODUCT))        // Check signatures enabled
             {   if(mv_nullsigs[level] & bit) return(MV_INDEX_NIL); // vanishes due to null signature
                if(mv_negsigs[level] & bit) sign ^= 1;     // accrue sign flip due to negative signature
             }
             bjparity ^= 1;                                // remove current bit from the parity
          }
          sign ^= bjparity;                                // must commute bi axis across all lower axes present in bj
       }
       else if(bj & bit) bjparity ^= 1;
       bit >>= 1;                                          // move down to next axis
    }   while(bit);
    // Now derive sign changes due to specified conjugations
    mvconjugation cnj = (*ppro) & MV_CONJUGATION_ALL;
 if(cnj) REPORT2(LOCAL_NEWS,6,("[LftPcnj %X to lft blade %O lvl %X -> %X] ",cnj, bi, level, mv_conj[cnj][level] & (1 << bi)));
    if(cnj && (mv_conj[cnj][level] & (1 << bi))) sign ^= 1;   // bi conjugation
 if(cnj) REPORT2(LOCAL_NEWS,6,("[RgtPcnj %X to rgt blade %O lvl %X -> %X] ",cnj, bj, level, mv_conj[cnj][level] & (1 << bj)));
    if(cnj && (mv_conj[cnj][level] & (1 << bj))) sign ^= 1;   // bj conjugation
    if(sign) *ppro |= MV_LEFT_NEGATION;   else *ppro &= ~MV_LEFT_NEGATION;
    return(br);
 }

    The above can be simplified and optimised in various ways. The logical evaluation of the sign change accrued by the commutations irrespective of signatures (and so level) can be separated out and pretabulated in a level independent 2^L × 2^L bit array implemented as a bladeindex indexed array of bladesflags.
    The sign change accrued by negative signatures is available as the parity of (bi & bj & mv_negsigs[level]) .
    The signature sign flip can be amalgamated with the commutation sign bit into a level-dependent 2^L × 2^L bit array, reducing the basis blade product of same-level blades indexed by bi and bj to forming the resultant blade index as the exclusive or bi ^ bj, with sign given by entry (bi,bj) of a 2D 1-bit LUT. The above code can of course be used to build such a LUT.
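The level-independent commutation sign by itself can be evaluated as the parity of the number of axis transpositions needed to interleave the two blades into a single ascending sequence: each axis of bj must slide past every higher axis of bi. A sketch suitable for filling such a LUT (names illustrative, not from MV; it agrees with the do-while loop in BladeBladeProduct above when signatures are Euclidean):

```c
#include <assert.h>

static int Parity(unsigned x)              /* 1 iff x has an odd number of set bits */
{   int p = 0;
    while(x) { p ^= 1; x &= x - 1; }
    return p;
}

/* Sign accrued sorting the concatenated axes of bi then bj into ascending
   order, ignoring signatures: returns 0 for +, 1 for -. */
int CommutationSign(unsigned bi, unsigned bj)
{   int sign = 0;
    for(unsigned bit = 1u; bit; bit <<= 1)
        if(bj & bit)
            sign ^= Parity(bi & ~((bit << 1) - 1));   /* bi axes above this bj axis */
    return sign;
}
```

For example e1 e2 = e12 incurs no swap, while e2 e1 = -e12 and e12 e12 = -1 (Euclidean) each incur an odd number. Combined with the negative-signature parity of (bi & bj & mv_negsigs[level]), entry (bi,bj) of the level-dependent LUT follows.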

    Next, the base level generic multivector product. rrMUL(x,y) is assumed to be a macro or inline function for an optimised multiplication of two leaf real types.
 multivector MultivectorProduct0(multivector r, multivector a, multivector b, mvproduct product,bladesflag inhibit)
 {   // Fast MultivectorProduct assuming a and b both have level 0   (works for commutator products too)
    if((a==MV_ZERO)||(b==MV_ZERO)) {   if(r!=MV_ZERO) r->h.present=MV_FLAG_NONE; return(r);   }    // rapid zero case
    ZeroMultivector(mv_reg[0][0],0);                                                   // clear mv register
    bladeindex inda,cnta,cntr;   inda=cnta=cntr=0;   bladesflag bfa=a->h.present;      // prepare for outer loop
    while(bfa)
    {   uint32 shf=BSF_32(bfa);   bfa >>= shf;   bfa >>= 1;   inda += (bladeindex)shf; // fast low->high scan for nonzero bit
       bladeindex indb=0;   bladeindex cntb=0;   bladesflag bfb=b->h.present;          // prepare for inner loop
       while(bfb)
       {   uint32 shfb=BSF_32(bfb);   bfb >>= shfb;   bfb >>= 1;   indb += (bladeindex)shfb;
 if(cntb > b->h.capacity) REPORT2(LOCAL_NEWS,1,("\n cntb=%X exceeds capacity %X prse=%X bfb=%X bfa=%X",cntb,b->h.capacity,b->h.present,bfb,bfa));
          mvproduct pro = product;
          bladeindex ri = BladeBladeProduct(inda, indb, &pro, 0, inhibit);             // Do blade product first
          if(ri!=MV_INDEX_NIL)                                                         // and check non vanishing.
          {   bladesflag rf=(1 << ri);
 ASSERT2(ValidReal(a->coeff[cnta].r) && ValidReal(b->coeff[cntb].r));                  // Debug catch bad float data
             real c=rrMUL(a->coeff[cnta].r, b->coeff[cntb].r);                         // multiply real coordinates
             if(pro & MV_LEFT_NEGATION) c=-c;                                          // negate if BladeBladeProduct set NEGATION bit
             if(mv_reg[0][0]->h.present & rf)   mv_reg[0][0]->coeff[ri].r += c;
             else                               mv_reg[0][0]->coeff[ri].r  = c;
             mv_reg[0][0]->h.present |= rf;
          }   // endif ri
          indb++;   cntb++;
       } // endwhile bfb
       inda++;   cnta++;
    }   // endwhile bfa
    // finally the register mv_reg[0][0] is condensed into the compacted result r and returned (condensation code not listed here)
 }

    These allow the generic recursive multivector product function:
 multivector MultivectorProduct(multivector r, multivector a, multivector b, mvproduct product,bladesflag inhibit)
 {   // return a*b where * is product specified by product   (does not work for commutator products)
    if((a==MV_ZERO)||(b==MV_ZERO)) {   if(r!=MV_ZERO) r->h.present=MV_FLAG_NONE; return(r);   }    // rapid zero case
    if(r==a)   {   r = MultivectorProduct(NULL,a,b,product,inhibit);   KillMultivector(&a);   return(r);   }
    if(r==b)   {   r = MultivectorProduct(NULL,a,b,product,inhibit);   KillMultivector(&b);   return(r);   }
    if(a->h.level >= b->h.level)
    {   if(a->h.level > b->h.level)
       {   // --------------------------------------------------
          //   can subsume b into a's multivector coefficients
          // --------------------------------------------------
          bladeindex inda=0;   bladeindex cnta=0;   bladeindex cntr=0;
          bladesflag bfa=a->h.present;
          r = EnsureCapacity2(r, a->h.level, BitsSet(a->h.present), MV_DIM0);
          while(bfa)
          {   uint32 shf=BSF_32(bfa);   bfa >>= shf;   bfa>>=1;   inda += (bladeindex)shf;   // fast scan and move to nonzero bit
             if((!(product & MV_ONTO_CONTRACTIVE_PRODUCT)) || (inda==0) )
             {   mvproduct pro=product;
                if(OddBlade(inda)) {   pro ^= MV_RIGHT_INVOLUTION;   if(product & MV_LEFT_REVERSE) pro ^= MV_LEFT_INVOLUTION;   }
                r->coeff[cntr].m = MultivectorProduct(r->coeff[cntr].m, a->coeff[cnta].m, b, pro, inhibit);
                if(r->coeff[cntr].m!=MV_ZERO) {   r->h.present |= (1 << inda);   cntr++;   }
             }
             cnta++;   inda++;
          }   // endwhile
          return(r);
       } // endif a->lev > b->lev
       if(a->h.level==0) return(MultivectorProduct0(r,a,b,product,inhibit));       // Rapid both 0 level case
       // ------------------------------------------------------------------------------------
       //   a->h.level = b->h.level > 0
       //   Bilinearly expand product into uncompressed multivector in mv_reg[a->h.level][0]
       // ------------------------------------------------------------------------------------
       bladeindex inda=0;   bladeindex cnta=0;
       bladesflag bfa=a->h.present;
       while(bfa)
       {   uint32 shf=BSF_32(bfa);   bfa >>= shf;   bfa>>=1;   inda += (bladeindex)shf;   // fast scan and move to nonzero a bit
          boolean odda = OddBlade(inda);
          bladeindex indb=0;   bladeindex cntb=0;   bladesflag bfb=b->h.present;
          while(bfb)
          {   uint32 shfb=BSF_32(bfb);   bfb >>= shfb;   bfb>>=1;   indb += (bladeindex)shfb;
             mvproduct pro = product;
             bladeindex ri = BladeBladeProduct(inda, indb, &pro, a->h.level, inhibit); // may toggle MV_LEFT_NEGATION bit in pro
             if(ri!=MV_INDEX_NIL)
             {   if(odda)
                {   pro ^= MV_RIGHT_INVOLUTION;       // need to commute b's coeff across a's blade
                   if(product&MV_LEFT_REVERSE) pro ^= MV_LEFT_INVOLUTION; // and move a's coeff across a's blade if a reversed
                }
                if((product&MV_RIGHT_REVERSE) && OddBlade(indb))   pro ^= MV_RIGHT_INVOLUTION;   // and b's coeff across b's blade if b reversed
                bladesflag rf=(1 << ri);
                if(   mv_reg[a->h.level][0]->h.present & rf)
                {   mv_reg[a->h.level-1][1] = MultivectorProduct( mv_reg[a->h.level-1][1], a->coeff[cnta].m, b->coeff[cntb].m, pro, inhibit);
                   mv_reg[a->h.level][0]->coeff[ri].m = MultivectorAdd(mv_reg[a->h.level][0]->coeff[ri].m, mv_reg[a->h.level-1][1]);
                }
                else
                {   mv_reg[a->h.level][0]->coeff[ri].m = MultivectorProduct( mv_reg[a->h.level][0]->coeff[ri].m, a->coeff[cnta].m, b->coeff[cntb].m, pro, inhibit);
                   if(mv_reg[a->h.level][0]->coeff[ri].m != MV_ZERO) mv_reg[a->h.level][0]->h.present |= rf;
                }
             }   // endif ri
             indb++;   cntb++;
          } // endwhile bfb
          inda++;   cnta++;
       }   // endwhile bfa
       return(Condense(r, mv_reg[a->h.level][0]));   // compact accumulated register into r (signature assumed; Condense omitted, see MV sources)
    }   // endif a->h.level >= b->h.level
    else
    {   // --------------------------------------------------------------------------
       //   b->h.level > a->h.level so  subsume a into b's multivector coefficients
       // --------------------------------------------------------------------------
       bladeindex indb=0;   bladeindex cntb=0;   bladeindex cntr=0;
       bladesflag bfb=b->h.present;
       r = EnsureCapacity(r, b->h.level, BitsSet(b->h.present));
       while(bfb)
       {   uint32 shf=BSF_32(bfb);   bfb >>= shf;   bfb>>=1;   indb += (bladeindex)shf;
          if((!(product & MV_BY_CONTRACTIVE_PRODUCT)) || (indb==0) )
          {   mvproduct pro = product;
             if( (pro & MV_RIGHT_REVERSE) && OddBlade(indb) ) pro ^= MV_RIGHT_INVOLUTION;
             r->coeff[cntr].m = MultivectorProduct(r->coeff[cntr].m, a, b->coeff[cntb].m, pro, inhibit);
             if(r->coeff[cntr].m!= MV_ZERO) {   r->h.present |= (1 << indb);   cntr++;   }
          }
          cntb++;   indb++;
       }   // endwhile
       return(r);
    }   // endelse
 }

    The administrative routines ZeroMultivector and Condense are omitted here, their functions being obvious but tedious. They can be found in the MV sources.

C++ product wrapper
    The C++ abstractified mv type can be multiplied thus. The primitive is complicated slightly by abstracting out ("buffering") dual as well as conjugations.
 mv mv_product(const mv &a,const mv &b,mvproduct pro)
 {   pro ^= ( a.conj() | ( b.conj() << MV_CONJUGATION_SHIFT ) );
    if(a.getDual() && (!(g_N & 1))) pro ^= MV_RIGHT_INVOLUTION; // need to commute i across b
    mv r = mv(MultivectorProduct(NULL, a.m, b.m, pro ));   r.s = rrMUL(a.s,b.s);
    if(a.getDual() ^ b.getDual()) r.setDual(true);
    if(a.getDual() && b.getDual() && g_i_neg) r.s = -r.s;         // double dual may entail sign flip
    return(r);
 }
 mv operator*(const mv &a, const mv &b)   {   return(mv_product(a,b,MV_GEOMETRIC_PRODUCT       ));   }
 mv operator^(const mv &a, const mv &b)   {   return(mv_product(a,b,MV_OUTTER_PRODUCT          ));   }
 mv operator|(const mv &a, const mv &b)   {   return(mv_product(a,b,MV_ONTO_CONTRACTIVE_PRODUCT));   }

    Suppose we are at the lth level in the hierarchy with a multivector whose coefficients come from an lL-dimensional multivector space. We then have (Σj=0..J ajbj)* = Σj=0..J (aj*1)(bj*2) where *1 is the dual over the lL-dimensional space inhabited by the aj and *2 is the dual of basis blade bj in its L-dimensional space (or (N-lL)-dimensional space if we are at the top level). For odd L we have to take bj#*2 if l is also odd.

Programming Meet and Join

    Recall that if the join j=ak∪bl is known or guessed (such as taking j=(ak∧bl)~ when nonvanishing), then the meet is directly computable as ak∩bl   =   (ak j⁻¹).bl   =   ((ak j⁻¹)∧(bl j⁻¹)).j⁻¹ .
    This computation returning zero indicates that j was not in fact the join.

    One infelicity of single-valued-function languages like C is that integer quotient and remainder (modulo) are optimally evaluated simultaneously but provided separately. Meet and join are similar in that they are naturally computed together. The problem of finding the meet is one of finding a ½(k+l-d)-blade in (akΔbl)* present in ak, while that of finding the join is one of either extending akΔbl by the meet or constructing an (N-½(k+l+d))-blade wholly outside both ak and bl and then dualing it. Conventional approaches attempt one of these (or construct a ½(-k+l+d)-blade d with ak∧d = ak∪bl as described in Fontijne et al) but it is possible to do both concurrently and go with whichever succeeds first, as in the following composite algorithm for non-null ak and bl.

    The cornerstone of the meet construction algorithm described below is that if 1-vector c ∈ s=(akΔbl)* then the projection ¯ak(c) lies in the meet ak∩bl while the rejection ^ak(c) lies outside the join ak∪bl.
[ Proof : c'=¯ak(c) lies in ak and so has a component in the meet and a component c" in ak'. But c" must arise as a projection of a 1-vector s" in s into ak with c" = ¯ak(s") = ¯ak'(s") = 0 since s excludes the disjoint.   Because ¯ak(c) lies in the meet, it lies outside the disjoint and so inside (akΔbl)*. Since c also lies inside (akΔbl)*, so too must ^ak(c)=c-c', which is therefore a 1-vector lying outside both ak and the disjoint and so also lying outside bl and hence outside the join. ]
    Note that it is often computationally prudent to disregard scale within the iterative process and normalise the join (and rescale the meet) on completion. However, care must be taken to avoid blade magnitudes getting progressively smaller, leading to precision errors.
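    A toy check of this projection/rejection decomposition, in plain ℝ³ arrays rather than the mv type (the names are illustrative, and u, v are assumed orthogonal):

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };
inline double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
inline V3 sub(V3 a, V3 b) { return V3{a.x-b.x, a.y-b.y, a.z-b.z}; }
inline V3 scale(V3 a, double k) { return V3{a.x*k, a.y*k, a.z*k}; }

// Projection of c into the plane (2-blade) spanned by orthogonal u and v;
// the rejection is then c minus this projection.
inline V3 project_into_plane(V3 c, V3 u, V3 v)
{   V3 p = scale(u, dot(c,u)/dot(u,u));
    V3 q = scale(v, dot(c,v)/dot(v,v));
    return V3{p.x+q.x, p.y+q.y, p.z+q.z};
}
```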

Meet Construction Algorithm

    Given the delta product akΔbl, one can construct the meet and join by algorithms such as the following. However, calculating the delta product is often problematic in that we must impose d, the grade of the highest nonvanishing component. What "nonvanishing" means can in practice be unclear, given possible near-zero "error" coordinates of higher grade blades. We discuss this further below, but for now we assume d to be deducible from our representation of akbl. The algorithm is:

  1. Check for and separately handle special cases of zero (or near-zero) ak or bl. The join is independent of the scales of ak and bl so if either is unhelpfully small or large, renormalise it either exactly or approximately.
  2. We initially reduce the computational requirements by swapping the arguments if necessary to ensure that k≤l . We do this simply because we will be working with ak more than with bl and lower grades make for sparser products. If we sign the join for positive maximal coordinate we won't need to later "compensate" for this swap and can "forget" having made it.
  3. Compute akΔbl and set d equal to its grade. If d=0 set j=ak and Exit.
  4. Set j=i and Ej= N - ½(k+l+d) , the excess of the grade of j over the grade of the desired join. If we know in advance that the join excludes a particular q-blade fq then we can set j= ±fq i and Ej= N-q-½(k+l+d) .
  5. If Ej=0 then Exit.
  6. Set m=1 and Em= ½(k+l-d) , the deficit of the grade of m=1 under the grade of the desired meet.
  7. If Em=0 then set j=ak∧bl=akΔbl  and Exit.
  8. Set s= (akΔbl)¿j , an (N-q-d)-blade spanning the space from which we will extract constructive 1-vectors. It is prudent to rescale s if necessary so that its maximal absolute coordinate is nonsmall.
  9. FOR i = 1 TO N
        If s is a 1-vector we set c=s, otherwise set c = ¯s(ei) , although if s has a reasonable magnitude then c=(ei¿s)¿s will suffice and avoids inverting blade s. If c is small (ei (near) orthogonal to s) then Goto 14.
  10. Decompose c into components c'=¯ak(c) and d=^ak(c) = c-c' inside and outside ak, and this time we do require a correctly signed and scaled projection using ak⁻¹.
  11. If c'=¯ak(c) is robustly nonzero we can accordingly "add" it to our constructed meet with m = m∧c' and deduct 1 from Em. If this reduces Em to zero we set j=(akΔbl)∧m and Exit.
  12. If d=^ak(c)=c-c' is robustly nonzero we can "remove" it from our constructed join with j = d¿j and deduct 1 from Ej. If this reduces Ej to zero then Exit. Since c was not small, at least one of the steps 11 and 12 construction operations will have occurred.
  13. If desired, remove c from s, so reducing its grade, via s=c¿s .
  14. NEXT i . The algorithm will Exit prior to completing this loop .

    On exiting from the above algorithm, j will contain the unnormalised and quasi-arbitrarily signed join, so we normalise and selectively negate it to taste and then set m to either (ak¿j⁻¹)¿bl or (akΔbl)¿j⁻¹ according to choice, assuming the meet is also required.
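    As a sanity check on what the algorithm returns in the simplest case: for two distinct planes through the origin in ℝ³ we have k=l=2 and d=2, so the join is the pseudoscalar and the meet is the 1-blade along the planes' line of intersection, classically the cross product of the two normals. A toy check, unrelated to the MV sources:

```cpp
#include <cassert>

struct V3 { double x, y, z; };

// Line of intersection of two origin planes with normals n1 and n2:
// classically the cross product of the normals.
inline V3 cross(V3 a, V3 b)
{   return V3{ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };   }
```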

     One can simplify the algorithm by adopting only one of the strategies, taking Ej steps to construct j only, or Em steps to construct j via the (incorrectly scaled) meet.
    Rather than looping through the basis 1-vectors in index order and using only robustly nonzero projections, one can iterate through just the basis factors of the (N-d)-blade of maximal absolute coordinate in (akΔbl)* to ensure robustly nonzero projections into s.

    The meet construction algorithm provides both the meet and the join. It also provides an m-frame for the meet (the c' in the algorithm), but no d-frame for the disjoint part of the join. It computes the join by "paring down" the pseudoscalar i rather than constructively.

Multivector Programming Issues

    The manipulation of blades expressed "expanded out" with regard to a particular extended frame, rather than in a factored form, frequently requires the judicious "pruning" or ignoring of small nonzero coordinates. When forming a restrictive product like ¿ or ∧, we know the resultant grades, but products such as Δ require deciding which nonzero coordinates in a particular expansion are "genuine" and which are "failed zeroes": values which should theoretically be zero but actually aren't as a result of limited computational precision. Similar problems arise in some forms of matrix-based analysis.

    It becomes a practical necessity to have some means of "cleaning up" or ignoring the "failed-zero detritus" when performing multivector operations such as Δ or ⁻¹, since their presence can profoundly distort the result. Even in operations where the detritus is numerically benign, it still serves to slow computations and bloat storage requirements. Typically we might scan a multivector for its highest absolute coordinate, or compute the sum of all the absolute coordinates, or some other measure, and then discard any blades whose absolute coordinate is less than some small arbitrary fraction of that value, 2⁻¹⁶ say. We will refer to the practice of clearing the "near zero" coordinates in such a manner as whittling the multivector. We are computationally altering it, but only in the sense of bringing it nearer to what we consider its "true" value to be. Whittling is frame dependent and makes little sense to the mathematician, but programmers used to thinking of "transparently tidying" data prior to computations will be comfortable with the concept.
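    A minimal sketch of whittling over a flat coordinate array (the compacted MV representation is more involved; names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>

// Whittle: zero every coordinate whose magnitude falls below frac times the
// maximal absolute coordinate; returns the number of coordinates cleared.
inline int whittle(double *coord, std::size_t n, double frac)
{   double maxabs = 0.0;
    for (std::size_t i = 0; i < n; i++)
        if (std::fabs(coord[i]) > maxabs) maxabs = std::fabs(coord[i]);
    double thresh = frac * maxabs;
    int cleared = 0;
    for (std::size_t i = 0; i < n; i++)
        if (coord[i] != 0.0 && std::fabs(coord[i]) < thresh) { coord[i] = 0.0; cleared++; }
    return cleared;
}
```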

Multivector Inspection
    When forming delta products and similar computations it is useful to have a primitive that examines a multivector as expressed with regard to a particular extended basis and returns the value range of the coordinates for each grade and which blades have the maximal and minimal coordinates. Since the scalar and pseudoscalar grades have only one coordinate each, there are effectively 2N reals defining the "numeric spread" of the coordinates.
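    Such an inspection primitive might be sketched as follows over an expanded coordinate array, using the popcount of the blade index as its grade (an illustrative sketch, not the MV sources):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Per-grade inspection over an expanded coordinate array: blade index i has
// grade popcount(i).  Fills maxabs[0..N] with the largest absolute
// coordinate present in each grade.
inline void grade_spread(const double *coord, int N, double *maxabs)
{   for (int g = 0; g <= N; g++) maxabs[g] = 0.0;
    for (uint32_t i = 0; i < (1u << N); i++)
    {   int g = 0;
        for (uint32_t t = i; t; t >>= 1) g += (int)(t & 1u);
        if (std::fabs(coord[i]) > maxabs[g]) maxabs[g] = std::fabs(coord[i]);
    }
}
```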

The Delta Product
    When implementing a delta product we will typically multiply the mv representors of a and b into an mv c and calculate a whittling threshold for multivector c.m based on the maximal or summed coordinate moduli. We then establish the maximal grade having a coordinate greater than this threshold and discard all blades other than those of this grade with coordinate greater than the threshold. Having done this, there is a good chance the maximal surviving coordinate modulus, K say, of c.m may be less than we would like, so we might consider transparently "rescaling" c by dividing c.s and multiplying c.m by (1/K).
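    The grade decision itself might be sketched thus (again over a flat coordinate array, with the whittling threshold assumed already computed):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Keep-grade decision for a delta product: the highest grade with a
// coordinate exceeding the whittling threshold, or -1 if everything
// whittles to zero.
inline int delta_grade(const double *coord, int N, double thresh)
{   int best = -1;
    for (uint32_t i = 0; i < (1u << N); i++)
    {   if (std::fabs(coord[i]) <= thresh) continue;
        int g = 0;
        for (uint32_t t = i; t; t >>= 1) g += (int)(t & 1u);
        if (g > best) best = g;
    }
    return best;
}
```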

Null basis vectors

    The assumption of an orthonormal basis to simplify the primitives bites back when one works with GHC representations. Null e∞ is typically implemented as e+ + e− for basis e+ and e−, so that commonly used geometric entities like e∞a for a ∈ i are represented by (a#)e+ + (a#)e− with a being stored twice.
    The BladeBladeProduct routine can be adapted to return no, one, or two blades together with associated signs (as opposed to reals for a general non-orthonormal basis), and this allows null e0 and e∞ with e0e∞ = -1 + e0∧e∞ to be in the basis instead of e+ and e−. Coding is simplified if both e0 and e∞ reside at the same level, simplified further if they are the only two null basis vectors at that level, and still further if they are the only basis vectors at that top level with N=lL+2, when we can exploit
    (a1+b1e0+c1e∞+d1e∞0)(a2+b2e0+c2e∞+d2e∞0) = (a1a2+d1d2-b1c2#-c1b2#) + (a1b2+b1(a2-d2)#+d1b2)e0 + (a1c2+c1(a2+d2)#-d1c2)e∞ + (a1d2+d1a2+c1b2#-b1c2#)e∞0 where a1 etc. are all lower level multivectors in e∞0*.
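    The identity can be checked numerically for scalar a1,...,d2 (so the # involutions act trivially) by expanding e0 and e∞ over e+ and e− in Cl(1,1), taking e0=(e−−e+)/2, e∞=e++e−, e∞0=e+e−. A verification sketch, not MV code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Geometric product in Cl(1,1) over basis {1, e+, e-, e+e-};
// bit0 = e+ (square +1), bit1 = e- (square -1).
static int pc(uint32_t x) { int c = 0; while (x) { c += (int)(x & 1u); x >>= 1; } return c; }

static int sgn(uint32_t a, uint32_t b)
{   int s = 0;
    for (uint32_t t = a >> 1; t; t >>= 1) s += pc(t & b);   // commutation parity
    s += pc(a & b & 2u);                                    // e- squares to -1
    return (s & 1) ? -1 : +1;
}

static void gp(const double a[4], const double b[4], double r[4])
{   for (int i = 0; i < 4; i++) r[i] = 0.0;
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            r[i ^ j] += sgn((uint32_t)i, (uint32_t)j) * a[i] * b[j];
}

// Pack a + b e0 + c einf + d einf0 over {1, e+, e-, e+e-},
// with e0 = (e- - e+)/2 and einf = e+ + e-.
static void pack(double a, double b, double c, double d, double m[4])
{   m[0] = a;   m[1] = -0.5*b + c;   m[2] = 0.5*b + c;   m[3] = d;   }
```

    Multiplying pack(a1,b1,c1,d1) by pack(a2,b2,c2,d2) with gp and reading the coefficients back (b = m[2]-m[1], c = (m[1]+m[2])/2) reproduces the four bracketed expressions above.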

    If we have N=lL+2 so that the top level contains only e0 and e∞ then it may be worth implementing the topmost level as noncompressed, in that it always has four cells (containing MV_ZERO if empty) and a bladesflag value (implied or actual) of 0xF. If l=1 so that N=L+2, as might be the case for ℝ3% or ℝ4,1%, then (factored representations aside) we have only two levels: a compressed multivector0 base level containing up to 2^L reals and an uncompressed top level containing four multivector0 pointers.

    A key advantage of including e0 and e∞ in the coordinate basis is that products such as e∞(e∞a) vanish logically rather than via a vanishing arithmetic computation of (e++e−)((e++e−)a). Another is that we can project a multivector into e0*, e∞*, or i logically, by delinking or not copying particular multivector0 pointers.
    MV 1.6+ supports non-orthogonal null basis vectors.

Performance Issues

    Having a pointer-based hierarchical linking of "multivector subcomponents" risks high dimension multiblade multivectors being "spread out" through memory by the heap manager, to the possible detriment of cache-based memory access optimisations, and this issue may support raising the levelity L from 4 back to 5 or higher. One might think that the hierarchy system imposes a "performance grain" on the modelled geometry, with multiplication by e1 being typically faster than multiplication by eL+1, which in turn is faster than multiplication by e2L+1, even when cache issues do not arise as a result of the extra indirections involved. But observe that if L=N and basis extension e+=eN+1 "straddles" the hierarchy then the hierarchy will automatically "maintain" multivectors in the form a + be+, and left multiplication by an mv ae+ will be derived and represented as a((e+²)b# + (a#)e+), so all of the inner-loop "blade-blade" products will have one bladeindex zero and so be potentially streamlined, which is not the case if e+ is on the same level as e1,e2,...
    These matters should be taken carefully into consideration when implementing the higher dimensional embeddings described subsequently, since the choice of "indexing" a basis extension e0, say, before or after e1,e2,...,eN in the multivector type may significantly affect computation times.

Coding Issues
    The mv C++ class provided by MV is designed to be easy to use, but programmers should remain computationally aware when using it. Suppose, for example, that f is defined as an mv valued function of an mv valued parameter. We could invoke its d-directed gradient as (f(a + e*d) - f(a - e*d))/(2*e) where e is a small float value, but this will have the effect of creating an mv class containing a multivector struct containing small "coordinates" and a large scaling factor. This effect will be compounded if we then take second or higher derivatives by subtracting gradients.
    Rather than transparently rescaling f(a + e*d) - f(a - e*d) prior to dividing its scale s by 2*e, it is more efficient to invoke f(a+e*d)/(2*e) - f(a-e*d)/(2*e). The mv addition requires multiplication by the individual scales of the operands anyway, so this adds just one division to the computational load but yields an mv with s=1.
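    A toy model of the point, with a hypothetical Scaled type standing in for mv's external scale s (not the MV struct): division touches only the scale, while subtraction must fold both scales into the stored value.

```cpp
#include <cassert>
#include <cmath>

// A value represented as s*m, mimicking mv's external scale factor s.
struct Scaled { double s; double m; };

inline Scaled sdiv(Scaled a, double k) { a.s /= k; return a; }   // cheap: adjusts scale only
inline Scaled ssub(Scaled a, Scaled b)                           // folds both scales into m
{   Scaled r = {1.0, a.s*a.m - b.s*b.m};   return r;   }

inline double sq(double x) { return x*x; }

// (f(a+e) - f(a-e)) / (2e): subtract first, leaving tiny m and a huge scale.
inline Scaled bad_grad(double (*f)(double), double a, double e)
{   Scaled p = {1.0, f(a+e)};   Scaled q = {1.0, f(a-e)};
    return sdiv(ssub(p, q), 2.0*e);
}

// f(a+e)/(2e) - f(a-e)/(2e): divide first; the subtraction then yields s = 1.
inline Scaled good_grad(double (*f)(double), double a, double e)
{   Scaled p = {1.0, f(a+e)};   Scaled q = {1.0, f(a-e)};
    return ssub(sdiv(p, 2.0*e), sdiv(q, 2.0*e));
}
```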

Precedence Issues
    C++ lacks the basic functionality of precedence reconfiguration for overloaded operator symbols +, ^, etc., which means that if we overload, say, ^ for mv arguments as the outer product ∧ and + as multivector addition, then the expression a^b + c*d will evaluate as a^(b+(c*d)) rather than the (a∧b) + (c*d) we desire, with + regrettably taking precedence over ^. It is accordingly prudent to use explicit brackets in any mv expressions involving operators other than +,-,*, and /.
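    The parse order is easy to demonstrate with a tiny expression-recording type (illustrative only):

```cpp
#include <cassert>
#include <string>

// Tiny expression recorder: each overloaded operator returns the fully
// parenthesised form showing how C++ actually grouped the expression.
struct Expr { std::string s; };
inline Expr operator*(Expr a, Expr b) { return Expr{"(" + a.s + "*" + b.s + ")"}; }
inline Expr operator+(Expr a, Expr b) { return Expr{"(" + a.s + "+" + b.s + ")"}; }
inline Expr operator^(Expr a, Expr b) { return Expr{"(" + a.s + "^" + b.s + ")"}; }
```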

MV 1.3

    This author has coded and tested a C++ multivector class mv exploiting the techniques described above. MV 1.3 supports N<64 and is extendable to N≥64 provided an unsigned integer type capable of being logically shifted and containing N or more bits is constructed to hold the fullbladeindex type. MV stores multivectors in "coordinate form" with regard to a particular extended basis, but compacted so that "empty" (zero) coordinates require no space. MV also supports "factored forms" so that blades, versors, and more general product forms can be stored more compactly still as "unevaluated products", though no attempt is made in 1.3 to maintain such forms under addition.

    MSVC/Watcom C++ sources for MV 1.3.0 are available here. MV 1.3 is provided as a rough-and-ready MSVC console project rather than a library, with simple textual display primitives for displaying multivector values and no GUI. The intention is to provide an easily usable cut-and-paste sourcebase for a C++ class representing an ℝp,q multivector where p+q=N is potentially so large as to prohibit non-compactified representations, but without compromising performance for p+q ≤ 5. Though provision is made for null signature basis blades in the product primitives, some operations like inverse and projection assume ei²=±1 for efficiency and need rewriting for ℝp,q,r with r>0, as has been done for MV 1.5.
    MV 1.3 provides support for multiply Regeneralised Homogenised Coordinates, though I have to date found little use for this.

    MV 1.3 is here provided as is. It was developed as a theoretical exercise and proof of concept rather than for a particular application. While it has had some testing and debugging, it has not yet proved itself in any major application. Concise and meaningful code has been chosen in preference to obfuscating optimisation techniques, and there remains scope for improvements and optimisations.

MV 1.3 Files
    The files provided in MV 1.3 are:

    This contains definitions of various common types and macros, like uint32 and MAX, used by MV but also likely to be needed by more general code not requiring access to multivectors.

    This header file should be #included in any source file wishing to use the mv class. The maximal dimension supported by the code is #defined in this file. Setting this to values below 64 or 32 automatically enables various optimisations and compaction techniques.

    This header file should be #included (instead of mv.hpp which is included from within multivec.hpp) in any source file wishing to access both the mv class and the more fundamental underlying multivector C structures "wrapped" by the mv class.

    This contains the source for the low level C routines (suitable for assembly optimisations) that manipulate multivector structures. Such structures must be explicitly created and freed by the user when used directly.

    This contains the source for the C++ wrapper routines presenting multivector functionality through the convenient mv class interface. In the MSVC project, this also contains the MSVC console application equivalent of main() with some example initialisation and use of the MV suite.

Copyright (c) Ian C G Bell 2004, 2014
Latest Edit: 28 Jun 2015.