This document is still under revision. All suggestions, critique, or comment gratefully received.
This document assumes familiarity with Multivectors.
Notations defined in that document are retained here. Note that we here use
labels e1,e2,... to denote a typically fixed, "base", "universal",
"fiducial" frame
and hip to denote tangent vectors. In much of the literature,
ei represent tangent or otherwise "motile" vectors while
si
or gi represent a "base frame"
.
This document makes extensive use of subscripts and superscripts to indicate dependencies
usually "dropped" in conventional treatments and is, in consequence, theoretically ambiguous. Does vip , for example,
mean that vi is defined over or dependent on p , or that
v is a function of ip? In practice, meanings will be clear in context.
Tensors are traditionally a difficult concept but multivectors make them far easier to understand,
manipulate, and generalise. They are fundamental to many applications so we address them here.
Notations
Symbols such as d, d, ¶, Ñ, and ð are variously used in the literature
for "differentiating" operators. We will introduce the unorthodox notation Ðxa
for the "directed" derivative
with regard to a multivector parameter x in a particular multivector "direction" a, and use Ñ
to denote the un-directed ("splayed") derivatives traditionally denoted Ñ or ð
.
We will typically use d or d to denote a small scalar
and d to denote a 1-vector interpreted as a (possibly large) "displacement". We will sometimes use dx to denote
a small change in multi-vector parameter x when ambiguity with multiplication by a scalar d cannot arise.
Multivector Functions as Tensors
The traditional presentation of an N-dimensional tensor of integer rank r is a point-dependent
Nr-element
"array" or "matrix" defined with respect to a given N-dimensional coordinate frame, that transforms according to
particular rules in accordance with transforms of the underlying coordinate frame. Multivectors provide an
attractive alternative (and more general) formulation under which the conventional tensor product
follows directly from the geometric product. More formal definitions of the following explicitly specify
a "scalar source"
from which to "build" linear combinations, but here we implicitly assume "real" scalars (from Â or
a (finite-precision) approximation thereof).
Fields
A field is a function F: Âp,q,r
® Âp,q,r . In other words: a point-dependent multivector.
If the function is (unit) k-vector valued, we have a (unit) k-field.
A 0-field thus associates a scalar value with every point. A 1-field associates a 1-vector with every point.
From a programmer's perspective, fields are functions having at least one 1-vector parameter. This "primary" parameter
is usually interpreted as a point or position.
When the primary argument is interpreted as a (scaled) direction rather than a point we will refer
here to a directional k-field.
Tensors
We regard an N-dimensional tensor of
degree k as a point-dependent
multilinear N-dimensional multivector-valued function of k N-dimensional 1-vectors.
Fx : (Âp,q,r)k ® Âp,q,r
where p+q+r = N .
By multilinear we mean linear in each argument.
If k=0 we have a point-dependant function taking no arguments
and returning a (point-dependant) multivector,
effectively a field. A (t;0)-tensor is thus a t-field,
often referred to as an invariant tensor, though its "value" does in general vary with x.
If Fx (a1,a2,..,ak) = Fx (a1,a2,..,ak)<t> (ie. Fx is t-vector valued)
we say the tensor has type t and rank t+k
and refer to it here as a (t;k)-tensor .
From a programmer's perspective, tensors are multivector-valued functions of at least one 1-vector argument,
linear ("affine") in all but the primary argument.
When t=k we refer to a k-tensor rather than a (k;k)-tensor . A k-tensor is thus
a point-dependant k-vector-valued multilinear function of k 1-vectors.
In particular, a 1-tensor is a point-dependant directional 1-field.
The "scalar product" ¿ is a (0;2)-tensor, though we usually write a¿b in preference to ¿(a,b) .
The outer product Ù is a 2-tensor. The geometric product is a tensor of degree 2 but "mixed" type.
Forms
If a (t;k)-tensor is skewsymmetric in its arguments so that
Fx (a1,a2,..,ak)
= Lx(a1Ùa2...Ùak)
= Lx(ak)
can be viewed as a function of a single k-blade rather than of k 1-vectors , then it is called
a skewsymmetric (t;k)-tensor or a (t;k)-multiform . When t=k
we abbreviate to a k-multiform.
If t=0 (ie. Fx is scalar valued) then it is instead called a k-form. A 1-multiform is a 1-tensor.
It can be shown [see Hestenes & Sobczyk] that any k-form can be expressed as
Lx(ak)= uk¿ak where uk is a point-dependent k-vector .
If a k-multiform maps any given k-blade to another k-blade (rather than to a k-vector) then we say the multiform
is blade preserving. A 1-tensor is thus a blade-preserving 1-multiform since any 1-vector is a 1-blade.
It can be shown that provided k¹½N , any blade preserving k-multiform is merely the outermorphism
of a 1-tensor. For k=½N, the geometric dual preserves k-blades but is not an outermorphism.
Dyads
A k-dyad is a k-multiform of the form
D(ak) = uk(vk¿ak)
where uk, vk are point-dependent k-blades.
A k-multiform can be expressed in dyadic form
as a sum of k-dyads. A 1-dyad is known as a dyad. A 0-dyad is the "successive" multiplicative
combination of two scalar fields
Dx(a)=uxvx a
Multitensors
We can generalise a (t;k)-tensor to a (t;k)-multitensor, being a point-dependent
multivector-valued function of k multivectors
Fp(a1,a2,...,ak) , linear in all but the primary (point) argument,
which is t-vector-valued when acting on k 1-vectors.
We will henceforth use the term tensor to refer to a multilinear multivector-valued function
of k nonprimary 1-vector arguments and multitensor for a multilinear multivector-valued
function of k nonprimary multivector arguments.
We will typically restrict the grade of the nonlinear "primary" multivector argument p to 1 and consider
it as a 1-vector "point" p . If a multitensor is t-vector valued,
we can regard it as a sum of (t;k)-forms with k ranging from 0 to N.
Extended Fields
Suppose now that we have k multivector fields aip=ai(p).
We can then extend a given k-multitensor Fp(a1,a2,...,ak)
with these fields to form an extended field
which we will also denote Fp mapping UN×UNk ® UN and defined by Fp = Fp(a1p,a2p,..,akp).
Outermorphisms and Determinants
Let ¦ : Âp,q,r ® Âp,q,r be a linear
transformation (ie. a 1-field over Âp,q,r typically regarded
as acting on and returning "points" rather than "vectors").
We can extend
¦ to a multivector field ¦Ù over Âp,q,r by defining
¦Ù(a) º a for scalar a ;
¦Ù(a) º ¦(a) for 1-vector a ;
and
¦Ù(aÙb) º ¦(a)Ù¦Ù(b).
This extension is known as the outermorphism of ¦.
Clearly ¦Ù(a<k>) = ¦Ù(a)<k> and in particular
¦Ù(i) = |¦|i where scalar |¦| is the
determinant of ¦
(nonzero iff ¦ invertible).
We will henceforth consider all linear 1-fields (1-tensors) to be so extended and
will frequently drop the Ù suffix
. We can similarly
extend any k-tensor to be defined over k multivectors rather than k 1-vectors.
Since ¯b(cÙd)
= (¯b(c))Ù(¯b(d)) ,
the projection ¯b is an outermorphism
and we can write
¯bÙ = ¯b .
It is worth explicitly noting that outermorphisms preserve scalars.
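The determinant property ¦Ù(i) = |¦|i is easily checked numerically. A minimal sketch (assuming Python with numpy; the map F and the 1-vectors a, b, c are arbitrary illustrative choices), using the fact that in Â3 the trivector aÙbÙc has the single coordinate det[a b c]:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3))        # a linear map f on R^3 (arbitrary)
a, b, c = rng.standard_normal((3, 3))  # three 1-vectors

# In R^3 the trivector a^b^c has a single coordinate: det[a b c].
tri = lambda u, v, w: np.linalg.det(np.column_stack([u, v, w]))

# Outermorphism: f^(a^b^c) = f(a)^f(b)^f(c) = |f| (a^b^c)
lhs = tri(F @ a, F @ b, F @ c)
rhs = np.linalg.det(F) * tri(a, b, c)
assert np.isclose(lhs, rhs)
```

The multiplicativity of the determinant, det(FM) = det(F)det(M), is precisely this statement read in coordinates.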
Eigenblades
We now generalise the concept of eigenvectors and associated eigenvalues.
We say k-blade ak is a left k-eigenblade of a general ¦:
Âp,q,r®Âp,q,r with associated scalar eigenvalue
a if ¦Ñ(ak)=aak .
We say it is a right k-eigenblade if
¦D(ak)=aak .
If ak is both left and right eigenblade then the eigenvalue is common and we have a proper eigenblade
[ Proof : aLeftak2 = ak¿¦Ñ(ak) = ¦D(ak)¿ak = aRightak2
.]
A proper 1-eigenblade is a conventional eigenvector.
Scalars are 0-eigenblades of eigenvalue 1.
i is an N-eigenblade of ¦ with eigenvalue Det(¦) .
If ak and br-k are eigenblades with eigenvalues a,b then akÙbr-k is either degenerate (zero) or an eigenblade of eigenvalue ab. We say an eigenblade is irreducible if it is not itself the join of two eigenblades. For a transformation ¦ with ¦(i)=|¦|i , "factorising" N-eigenblade i into irreducible "sub" eigenblades corresponds to decomposing the space spanned by i into subspaces invariant under ¦.
If a and b are left and right eigenblades with eigenvalues a,b respectively then a¦D(a¿b)=b(a¿b) and b¦Ñ(aëb)=a(aëb) , which is to say that the contraction a¿b (or aëb), when nonvanishing, is a right (or left) eigenblade having eigenvalue a-1b (or ab-1).
For any 1-vector a and linear outtermorphism ¦=¦Ñ, the (k+1)-blade
aÙ¦(a)Ù¦2(a)Ù...Ù¦k(a)
must vanish for some k £ N because all (N+1)-blades are degenerate.
We then have ¦( aÙ¦(a)Ù...Ù¦k-1(a) )
= la aÙ¦(a)Ù...Ù¦k-1(a)
for some scalar eigenvalue la of k-eigenblade ak = aÙ¦(a)Ù...Ù¦k-1(a)
. We say a has ¦-eigenicity k .
But ¦ can also be expressed as a real N×N matrix which we know (from
the characteristic polynomial methods of traditional matrix theory) has N eigenvectors,
provided we allow complex vector coordinates and complex eigenvalues. Complex
eigenvectors occur in conjugate pairs, say
¦(a+ib) = r(iq)↑ (a+ib) and
¦(a-ib) = r(-iq)↑ (a-ib) for real scalars r and q with q¹0.
Taking real and imaginary parts we obtain ¦(a) = r( cos(q)a - sin(q)b) ;
¦(b) = r( cos(q)b + sin(q)a)
giving ¦(aÙb)
= r2( cos2(q) + sin2(q) )(aÙb)
= r2 (aÙb) . The geometric interpretation of i in a Euclidean context is (aÙb)~ .
Thus we can choose a basis in which each element ei has an ¦-eigenicity of either one (when ei is an eigenvector of ¦) or two
(when eiÙ¦(ei) is a 2-eigenblade of ¦).
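The rotation-scaling analysis above can be illustrated numerically. A sketch (numpy assumed; r, the angle, and the axis eigenvalue are arbitrary choices) of a map whose e1Ùe2 plane is a 2-eigenblade of eigenvalue r2, while e1 itself has ¦-eigenicity two:

```python
import numpy as np

r, th, mu = 1.7, 0.6, -2.0
R = np.array([[np.cos(th), -np.sin(th), 0],
              [np.sin(th),  np.cos(th), 0],
              [0,           0,          1.0]])
F = np.diag([r, r, mu]) @ R                    # rotation-scaling in the e1^e2 plane

wedge12 = lambda u, v: u[0]*v[1] - u[1]*v[0]   # e1^e2 coordinate of u^v

e1, e2 = np.eye(3)[:2]
# f(e1)^f(e2) = r^2 (e1^e2): the plane is a 2-eigenblade with eigenvalue r^2
assert np.isclose(wedge12(F @ e1, F @ e2), r**2 * wedge12(e1, e2))
# e1 alone is not an eigenvector (eigenicity two): e1^f(e1) is nonzero
assert abs(wedge12(e1, F @ e1)) > 1e-9
```

The complex-eigenvalue pair r e^(±iθ) of the matrix F thus corresponds to a single real 2-eigenblade of eigenvalue r².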
Coordinate-based Tensor representations
With regard to a given invertible frame {e1,..,eN}, we have an Nr-element 0-field
primary matrix representation of Fx .
Fx m..qi..l º
eq..m ¿ Fx (ei,ej,..,el)
[ with t suffix q..m and k suffix i..l ]
, ie. the component of em..q
in Fx (ei,ej,..,el) .
Alternate matrix representations are possible
Fx m..q i..l = eq..m ¿ Fx (ei,ej,..,el)
giving the component of em..q in Fx (ei,ej,..,el) ;
Fx m..qi..l = eq..m ¿ Fx (ei,ej,..,el)
giving the component of em..q in Fx (ei,ej,..,el) ; and so forth.
Hence the alternate coordinate expressions v = åi=1N viei
= åi=1N viei
for (1;0)-tensor v .
With regard to orthonormal frames in ÂN , ei=ei and all these representations are identical.
In particular, a 1-tensor (point-dependant 1-vector function of a 1-vector) has representations
Fx ij = ei¿F(ej) ;
Fx ij = ei¿F(ej) ;
Fx ij = ei¿F(ej) ;
Fx ij = ei¿F(ej) ;
Note that the "height" of a suffix is used in the literature in several related but distinct ways.
We have the generalised (multivector) differential of v at a given x
vÑx(a)
º Ðxa(v(x))
º Lime ® 0 ((v(x+ea)-v(x)) e-1 )
[ e a scalar ] which we will see is linear in a ; and the
generalised (multivector) centred differential of v at a given x
voÑx(a)
º oÐxa(v(x))
º ½ Lime ® 0 ((v(x+ea)-v(x-ea)) e-1 )
.
The centred differential has the advantage of sometimes being evaluable at x where v(x) is undefined
but tends to be less applicable at boundary points. When v(x) is defined
and the limit is well-defined, being the same for e ® 0 from above as from below,
the centred differential is equivalent to the differential.
Clearly Ðxa uxv = uav for any multivectors u,v independent of x
and, in particular, Ðapp = a which we can also write as 1Ñ = 1.
The differential of v(x) is thus the function which given a returns the a-directed derivative of v(x).
A directed derivative can be regarded as a result of a particular evaluation of a differential.
The full notation vÑxx(a)
reminds us that the differential is both "with respect to" x and evaluated "at"
a particular x. We will typically omit at least one of these suffixes so that
vÑxx(a)
º vÑx(a)
º vÑx(a)
º vÑ(a) .
We refer to Ða as the scalar a-directed derivative operator. By a "scalar" operator we here mean grade preserving
in that
Ða(v(x)<k>) = (Ða(v(x)))<k> .
We will frequently drop the brackets and write Ðavx for Ða(v(x)). We will use the notation =( )= to indicate the mere addition or removal of brackets in accordance with our bracket conventions.
Restricting a and x to be 1-vectors gives the 1-differential of v(x) at a given x
vÑx(a)
º Ðxa(v(x))
º Ða(v(x))
º Lime ® 0 ((v(x+ea)-v(x)) e-1 )
We can outermorphically extend the 1-differential to act on multivectors but it is important to
recognise that even with 1-vector x,
the outermorphic extension agrees with the differential vÑx(a) in general
only for 1-vector a.
Ða obeys the product rule
Ða(bxÙcx)
= (Ðabx)Ùcx
+ bxÙ(Ðacx)
.
Consequently
Ða(u(x)Ùv(x)) ¹
uÑ(a)ÙvÑ(a)
in general .
If v takes a 1-vector argument x (often interpreted as a "point")
then, given an
inverse frame, we can view v(x) as a
multivector-valued function of N scalars
v(x1,x2,..,xN).
We write Ðxk
or Ðek
for the partial derivative "scalar" operator
Ðxk (v(x)) º ¶v(x)/¶xk
º Limd ® 0 ((v(x+dek)-v(x)) d-1 )
º Ðek(v(x)) .
The 1-differential at a given x of v(x)
can then be expressed as
vÑx(a) º Ðav(x)
º åk=1N Ðxk ((a¿ek)v(x))
= Limd ® 0 ( (v(x+da)-v(x)) d-1 ) .
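The coordinate expression for the 1-differential can be checked against the limit definition by finite differences. A sketch (numpy assumed; the field v, the point x, and the directions a, b are arbitrary illustrative choices), also confirming linearity in the direction:

```python
import numpy as np

def v(x):                               # a sample (non-linear) field R^3 -> R^3
    return np.array([x[0]*x[1], np.sin(x[2]), x[0]**2])

def directed(v, x, a, eps=1e-6):        # limit definition of the a-directed derivative
    return (v(x + eps*a) - v(x)) / eps

x = np.array([0.3, -1.2, 0.5])
a = np.array([1.0, 2.0, -0.5])
b = np.array([-0.7, 0.3, 1.0])

# sum_k (a . e_k) d/dx_k v(x) reproduces the a-directed derivative
coord = sum(a[k] * directed(v, x, np.eye(3)[k]) for k in range(3))
assert np.allclose(coord, directed(v, x, a), atol=1e-4)

# linearity of the differential in its direction argument
assert np.allclose(directed(v, x, a + b),
                   directed(v, x, a) + directed(v, x, b), atol=1e-4)
```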
For 1-field v(x)=¦(x) the differential ¦Ñx(a) is a 1-tensor
with matrix representation ¦Ñxij =
¶yi/¶xj ïx where y=¦(x).
[ Proof : ei¿¦Ñx(ej)
= ei¿(Ðej¦(x))
= ei¿(¶¦(x)/¶xj)
= ¶yi/¶xj
.]
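The matrix representation of the differential of a 1-field as the conventional Jacobian can also be verified numerically. A sketch (numpy assumed; f and x are arbitrary choices):

```python
import numpy as np

def f(x):                                 # a 1-field on R^2
    return np.array([x[0]**2 - x[1], x[0]*x[1]])

def differential_matrix(f, x, eps=1e-6):  # F_ij = e_i . f_del_x(e_j)
    cols = [(f(x + eps*e) - f(x)) / eps for e in np.eye(2)]
    return np.column_stack(cols)

x = np.array([0.7, -0.4])
J = np.array([[2*x[0], -1.0],             # analytic dy_i/dx_j
              [x[1],   x[0]]])
assert np.allclose(differential_matrix(f, x), J, atol=1e-4)
```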
If ¦: VM ® UN
then ¦Ñ: VM ® UN can still be defined
as Ða¦(x) for a,xÎVM.
Of course, if N<M then ¦Ñ(iM)=0 since any M-blade in
UN must be degenerate.
Linearity of the Differential
That the differential is linear is surprising. One feels that one ought to be able to construct
pathological functions, "directed bumps" which can "fool" a particular coordinate basis
via a deceptive performance along the base axes.
But one cannot do so without violating continuity assumptions for ¦.
Consider as an example the 0-field v(x) = r2 sin(2q)
= 2x1x2
defined over Â2.
We write Ñv = 2(x2e1 + x1e2).
Ðe1v(x)=2x2 while Ðe2v(x)=2x1 and both of these are zero at x=0.
But Ðe1+e2v(x)=2(x1+x2) is also zero at x=0 so linearity of vÑ
survives there. Moving away from 0 to, say, point e1, Ðe2v(x) becomes non zero
but vÑ is linear there too.
Linearity survives at 0 by virtue of the r factor vanishing at 0, but only
by having such a zeroing factor can we eliminate the discontinuity arising from
q(x1,x2) being undefined at 0.
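The example above is easily checked numerically. A sketch (numpy assumed) confirming that the directed derivatives of v(x) = 2x1x2 agree with a¿Ñv, i.e. that vÑ is linear, both at 0 and at e1:

```python
import numpy as np

v = lambda x: 2*x[0]*x[1]                  # the 0-field v(x) = 2 x1 x2
grad = lambda x: 2*np.array([x[1], x[0]])  # del v = 2(x2 e1 + x1 e2)

def directed(x, a, eps=1e-7):              # limit definition
    return (v(x + eps*a) - v(x)) / eps

for x in (np.zeros(2), np.array([1.0, 0.0])):            # at 0 and at e1
    for a in (np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])):
        # the directed derivative agrees with a . del v : linearity holds
        assert np.isclose(directed(x, a), a @ grad(x), atol=1e-5)
```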
We might attempt to cobble together something from splines with "flat areas" but anything so cobbled
will require a discontinuity in a derivative of some order. If a function is flat to first order somewhere,
it must be flat to first order everywhere, or face a second order discontinuity at the "interface".
One might think that continuously differentiable "non-centred" functions are flat nowhere or everywhere,
which would have crucial ramifications in physics since it implies that no truly continuous fields can be entirely localised.
Though we could damp function values with distance, there would always be theoretically detectable oscillations
conceivably exploitable as an information channel. In non-Euclidean spaces, however, we can define infinitely differentiable
functions which become ever flatter as they approach the boundary of the null cone and are fully flat outside it.
If a "particle" is modelled as a continuous fluctuation defined over relativistic timespace, that fluctuation must
extend not only spatially, but temporally into the distant past and future of any observer.
For linear ¦, ¦Ñ=¦
[ Proof : Ða¦(x) = ¦(Ðax) = ¦(a) .] although note that ¦Ñ is implicitly defined over all
UN even if ¦ is defined only over a subspace.
For small ex , ¦(x+ex) » ¦(x)+¦Ñx(ex) .
[ Proof : ¦(x+ex) »
¦(x) + åk=1N exk¦Ñx(ek)
= ¦(x) + åk=1N (ex¿ek)¦Ñx(ek) =
¦(x)+¦Ñx(ex) .]
The 1-differential ¦Ñx0 can thus be thought of as the linear approximator to ¦(x)-¦(x0) for x close to x0. If ¦Ñx0(a) is constant " unit a then ¦ is radially symmetric at x0 (ie. can be expressed as a function of |x - x0|).
¦Ñ-1 = (¦-1)Ñ when ¦-1 exists, so we can denote both by ¦-Ñ.
[ Proof : ...
.]
The outermorphism extension of ¦Ñ is denoted by ¦Ñ^ or just ¦Ñ. Its determinant J¦ º |¦Ñ| º (¦Ñ^(i))* is the conventional Jacobian of ¦ at x.
Of particular interest is the self-directed 1-differential or streamline derivative
¦Ñx(¦(x)), which
describes how a 1-field changes when it "follows itself".
The composite 1-differential at x is
¦ÑÑx(b) º
¦ÑxÑ(b) = (Ña¿b)¦Ñx(a) .
It is of limited usefulness.
Differentiating Exponentials
We have
Ðd(x*a)↑ =
Lime ® 0 (e(x+ed)*a-ex*a)e-1
= ex*a Lime ® 0 (edd*a-1)e-1
= (d*a) (x*a)↑
and more generally if ÐdFx commutes with Fx then Ðd(Fx)↑ = (ÐdFx)(Fx)↑ .
If ÐdFx anticommutes with Fx then Ðd(Fx)↑
= (ÐdFx)(1+Fx2/3! + Fx4/5! + ...)
= (ÐdFx)Fx-1 sinh(Fx)
Ðde(x*a)b = (d*a)e(½p+x*a)b
provided b2=-1.
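The scalar-exponent case Ðd(x*a)↑ = (d*a)(x*a)↑, where the exponent trivially commutes with its derivative, can be checked by a finite difference. A sketch (numpy assumed; a, x, d are arbitrary choices):

```python
import numpy as np

a = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.2, -0.3])
d = np.array([0.1, 0.7, 0.4])

f = lambda x: np.exp(x @ a)          # e^(x*a) with x*a a scalar
eps = 1e-7
fd = (f(x + eps*d) - f(x)) / eps     # d-directed derivative, limit definition

# D_d e^(x*a) = (d*a) e^(x*a)
assert np.isclose(fd, (d @ a) * f(x), atol=1e-4)
```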
The Directed Chain Rule
If ¦(x)=g(h(x)) then we have the Chain Rule ¦Ñ(a)=gÑh(x)(hÑx(a)) .
When g and h are linear this reduces to
¦Ñ(a)=gÑh(x)(h(a)) =
(h(a)¿Ñx)g(x)
= Ðxh(a)g(x) .
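The chain rule can be verified with finite-difference directed derivatives. A sketch (numpy assumed; g, h, x, and a are arbitrary illustrative choices):

```python
import numpy as np

g = lambda y: np.array([np.sin(y[0]), y[0]*y[1]])
h = lambda x: np.array([x[0]**2, x[0] + 3*x[1]])
f = lambda x: g(h(x))                        # f = g o h

def directed(fn, x, a, eps=1e-6):            # a-directed derivative of fn at x
    return (fn(x + eps*a) - fn(x)) / eps

x = np.array([0.4, -0.9])
a = np.array([1.0, 0.5])

# chain rule: f_del_x(a) = g_del at h(x) applied to h_del_x(a)
assert np.allclose(directed(f, x, a),
                   directed(g, h(x), directed(h, x, a)),
                   atol=1e-3)
```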
Primary Differential
Let us switch from x to p and suppose we have an extended field
Fp = F(p,a1p,a2p,...,akp) = Fp(a1p,a2p,...,akp) .
ÐdFp =
Ðd(Fp(a1p,a2p,...,akp)) =
Lime ® 0e-1(
F(p+ed,a1p+ed,a2p+ed,...,akp+ed)-F(p,a1p,a2p,...,akp))
We might consider regarding Fp as a multivector field by holding the k linear parameters aip constant at their p values throughout a neighbourhood of p
and then take its d-directed derivative,
defining
Ðßd Fp(a1p,a2p,...,akp) º
Lime ® 0e-1(
F(p+ed,a1p,a2p,...,akp) - F(p,a1p,a2p,...,akp) )
but this raises difficulties if the aip are restricted in some manner
and unable to hold their "at p" values away from p.
A better definition
for the d-directed primary derivative operator
is
Ðßd Fp(a1p,a2p,...,akp) º
(ÐdFp)(a1p,a2p,...,akp)
º Ðd (Fp(a1p,a2p,...,akp)) -
Fp(Ðda1p,a2p,...,akp)) -
Fp(a1p,Ðda2p,...,akp)) -
... - Fp(a1p,a2p,...,Ðdakp)) .
Ðßd Fp(a1p,...) is then a well-defined point dependant multilinear function of k multivector arguments
known as the d-directed primary derivative of F.
The choice of the ß symbol is here intended to suggest the "lowering" of the "scope" of Ðd to apply only to
the "low-suffixed" primary p.
We have discussed a 1-vector-directed primary derivative. Generalising to a multivector "point" p
we have the obvious a-directed primary derivatives for general multivector a.
In particular, we have the traditional derivative of a multivector-valued function of a scalar
¦ : Â®UN
as Ð1¦(x) = ¶¦(x)/¶x = ¦'(x) .
Second Primary Differential
The first differential ¦Ñp(a) = ¦Ñ(a) can be extended via a given 1-field ap=a(p) into a field whose b-directed
primary derivative is given by
Ðßb ¦Ñp(ap) = (Ðb¦Ñp)(ap) = Ðb (¦Ñp(ap)) - ¦Ñp(Ðbap)
= ÐbÐap¦(p) - ¦Ñp(Ðbap)
= ÐbÐap¦(p) - ÐÐbap¦(p)
This provides a bilinear
second differential (1;2)-tensor ,
¦Ñ2p(a,b) º
Ðßb(¦Ñp(a))
= (Ðb¦Ñp(a)) - ¦Ñp(Ðba)
= ÐbÐa¦(p) - ¦Ñp(Ðba)
If the second differential ¦Ñ2 is symmetric we say ¦ satisfies the integrability condition
which we can consequently express as
ÐbÐa¦(p) - ¦Ñp(Ðba) = ÐaÐb¦(p) - ¦Ñp(Ðab)
Û (Ða×Ðb)¦(p) = ½¦Ñp(Ðab - Ðba)
º ½¦Ñp(aÄb) .
Provided Ðba = Ðab (which we can also denote aÄb=0) the commutability of
Ðßa and Ðßb is thus equivalent to the commutability of Ða and Ðb ;
and this is trivially true in the particular case
Ðba = Ðab = 0 corresponding to "constant" a and b.
¦Ñ2p(a,b) is the directed derivative at p in direction b
of the a-directed derivative ¦Ñp(a). It is maximised
when b is normal to the surface ¦Ñp(q) = ¦Ñp(a) .
Consider direction dq at point p + dp. ¦ is approximated near p
as
¦(p+dp+dq) » ¦(p) + ½(
¦Ñp(dp) + ¦Ñp+dp(dq) +
¦Ñp(dq) + ¦Ñp+dq(dp))
» ¦(p)
+ ¦Ñp(dq) + ¦Ñp(dp) +
½(¦Ñ2p(dq,dp) + ¦Ñ2p(dp,dq))
= ¦(p) + ¦Ñp(dq) + ¦Ñp(dp) + ¦Ñ2p(dq,dp) , assuming the integrability condition.
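The symmetry underlying the integrability condition can be observed numerically for a smooth field with constant directions a and b. A sketch (numpy assumed; f, x, a, b are arbitrary illustrative choices):

```python
import numpy as np

f = lambda x: np.array([x[0]**2 * x[1], np.cos(x[0]*x[1])])

def second_diff(f, x, a, b, eps=1e-5):   # D_b D_a f at x, for constant a and b
    d = lambda y: (f(y + eps*a) - f(y)) / eps
    return (d(x + eps*b) - d(x)) / eps

x = np.array([0.6, -0.2])
a = np.array([1.0, 2.0])
b = np.array([-0.5, 1.0])

# integrability: the second differential is symmetric in a and b
assert np.allclose(second_diff(f, x, a, b), second_diff(f, x, b, a), atol=1e-3)
```

For constant a and b this is just the familiar symmetry of mixed partial derivatives of a twice continuously differentiable function.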
Third Primary Differential
The second differential can itself be primary differentiated
Ðßb(¦Ñ2(a1 p,a2 p))
= Ðb(¦Ñ2p(a1 p,a2 p))
- ¦Ñ2p(Ðba1,a2 p)
- ¦Ñ2p(a1 p,Ðba2 p)
Secondary Differential
We here define the secondary directed differential by
ÐÞd Fp(a,b,...) º Ðd Fp(aÐ,b,...)
º Lime ® 0 e-1 (Fp(a+ed,b,...)-Fp(a,b,...)) .
aÐ here denotes the scope of the differentiation implicit in Ð applying only
to parameter a.
Thus "secondary derivative" refers to differentiation with respect to the second (first nonprimary) parameter,
whereas "second derivative" usually refers to the combination of two successive primary derivatives,
More generally, we have the (i+1)ary directed differential
ÐÞid Fp(a,b,...)
º Ðd Fp(a,b,.gÐ,..)
where g is the ith non-primary parameter.
If Fp has k nonprimary parameters we have Ða(Fp(a1,....)) = (Ðßa + åi=1k ÐÞia)Fp .
Let Fp = Fp(a1,...ak) be a tensor taking k non primary parameters . We can form
FpÑ = FpÑ(a1,..,ak,d) º
ÐßdFp(a1,..,ak) º (ÐdFp)(a1,..,ak)
º Ðd (Fp(a1,..,ak))
- Fp(Ðda1,..,ak) - ... - Fp(a1,..,Ðdak) .
Lie Product
Having defined a directed derivative operator Ðap we define
the skewsymmetric bilinear Lie product by
apÄbp º Ðpapbp - Ðpbpap
º Ðapbp - Ðbpap
This is often known as the Lie Bracket
and denoted [ap,bp] but we will favour the Ä product notation here.
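The Lie product of two 1-fields can be computed by finite differences and compared with the Jacobian form Jb a - Ja b. A sketch (numpy assumed; the fields ap, bp and the point p are arbitrary choices):

```python
import numpy as np

ap = lambda p: np.array([p[1], -p[0]])       # a rotation field
bp = lambda p: np.array([p[0]**2, 1.0])

def D(u, w, p, eps=1e-6):                    # derivative of field w in direction u(p)
    return (w(p + eps*u(p)) - w(p)) / eps

p = np.array([0.8, 0.3])
lie = D(ap, bp, p) - D(bp, ap, p)            # a x b = D_a b - D_b a

# analytic check: Jb(p) a(p) - Ja(p) b(p)
Ja = np.array([[0.0, 1.0], [-1.0, 0.0]])
Jb = np.array([[2*p[0], 0.0], [0.0, 0.0]])
assert np.allclose(lie, Jb @ ap(p) - Ja @ bp(p), atol=1e-4)
```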
Undirected Derivatives
" Here, I'd like to introduce you to a close personal friend of mine.
M-41A 10mm pulse-rifle, over and under with a 30mm pump-action grenade launcher."
Corporal Dwayne Hicks, "Aliens".
"Undirected derivatives" can be thought of as "splayed out" directed derivatives, or as
"embodying" derivatives in multiple directions.
1-derivative Ñ
We define the 1-vector del-operator (aka. nabla) or 1-derivative
(aka. vector derivative)
Ñ = Ñx º åk=1N ekÐxek
= åk=1N Ðekek
so that Ñv(x)
= åk=1N ekÐek(v(x))
= åk=1N Ðek(ekv(x))
= åk=1N (¶/¶xk)(ekv(x))
.
Note that this definition remains consistent for all frames {ei} including nonorthonormal ones.
If v(x)=v(x) is scalar valued, Ñv(x) = åk=1N ekÐxk v(x1,..,xN) is the conventional gradient with regard to Euclidean ÂN.
Applying Ñ as a geometric product gives Ñ v(x) = åk=1N Ðxk (ek v(x)) = åk=1N Ðxk (ek¿v(x) + ekÙv(x)) = Ñ¿v(x) + ÑÙv(x).
If v(x)=v(x) is 1-vector valued, the scalar ¿ term is
just
åk=1N Ðxk (vk) which for a Euclidean space
(vk=vk) is the traditional
divergence of v(x), while the bivector Ù term is known as the curl of
v(x). For N=3, this is dual to (minus) the conventional
curl
Ñ×v(x).
We have thus essentially unified and generalised the three conventional differential operators grad, div, and curl.
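The unification just described can be illustrated by extracting grad, div, and curl all from one finite-difference Jacobian. A sketch (numpy assumed; the field v and point x are arbitrary choices):

```python
import numpy as np

def v(x):                                     # a 1-field on Euclidean R^3
    return np.array([x[1]*x[2], x[0]**2, np.sin(x[1])])

def jacobian(v, x, eps=1e-6):                 # J[i,j] = d v_i / d x_j
    return np.column_stack([(v(x + eps*e) - v(x)) / eps for e in np.eye(3)])

x = np.array([0.2, 0.5, -0.7])
J = jacobian(v, x)

div = np.trace(J)                             # del . v : the scalar part
# del ^ v is the antisymmetric part (J - J.T)/2 read as a bivector;
# for N=3 it is dual to the conventional curl del x v:
curl = np.array([J[2,1]-J[1,2], J[0,2]-J[2,0], J[1,0]-J[0,1]])

assert np.isclose(div, 0.0, atol=1e-4)                      # this v is divergence-free
assert np.allclose(curl, [np.cos(0.5), 0.5, 1.1], atol=1e-4)
```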
It is possible to define Ñ independently of an inverse coordinate frame
as the limit of a surface integral, as discussed briefly under
Tangential Derivative
below.
We refer to Ñxv(x) as the 1-derivative of v with respect to x.
We have
Ða
= (a¿Ñ)
= (a*Ñ)
with the brackets here emphasising "precedence" rather than specifying the
"scope" of the Ñ which should be thought of as extending rightwards from the
expression.
Conventionally, a leftward, rightward, or double-headed horizontal arrow above the Ñ is used to indicate
the direction(s) of differential scope, but this technique is typographically unavailable here.
Ña(¦Ñx(a))
= Ña((a¿Ñx)¦(x))
= (Ña(a¿Ñx))¦(x) = Ñx¦(x)
so we have the operator identity
Ña(a¿Ñx)
= Ña Ðxa
= Ñx
which we can abbreviate as
ÑaÐa = Ñ .
It is customary to abbreviate Ñ(ap) by Ñap, treating Ñ as a "left-multiplier"
but we lose associativity in that
(Ña)b ¹ Ñ(ab) in general.
We can use Ñ as a right-multiplier eg. aÑ provided we understand the "scalar" Ðxk
to apply "leftwards" as well as "rightwards".
The expression abÑcd is usually interpreted (defined) as
ab(Ñc)d
but could (perhaps more properly) be considered to mean
ab(Ñ(cd)) + ((ab)Ñ)cd .
We will retain the traditional "rightward-only scope"
for Ñ here but when we include Ñ in a list of operators
fÑgh this should be thought of as abbreviating the composite operation
fÑgh(ap)
º
f( Ñ(gh(ap)) )
º f( Ñ( g(h(ap)) ) ) . The scope of Ñ thus extends rightwards to encompass all following symbols
unless contraindicated by brackets.
We will use ( ) to denote the extent of Ñ s whenever possible but this becomes complicated by brackets expressing product precedence.
When we wish the derivative aspect of a Ñ to "hop over"
intervening terms, or to move leftwards rather than rightwards, or simply wish to emphasise
the default applicability, we will add a Ñ suffix to the term to which the Ðei "apply".
The ek act geometrically on "intervening" terms irrespective of any Ñ's.
In general we will here assume the "differentiating scope" of Ñ to extend rightwards but not leftwards.
Thus (Ñp¿ap) and (ap¿Ñp) are distinct scalar operators since
(ap¿Ñp)Fp = (ap¿Ñp)FpÑ whereas
(Ñp¿ap)Fp =
(Ñp¿apÑ)Fp + (Ñp¿ap)FpÑ
= (Ñp¿apÑ)Fp + (ap¿Ñp)FpÑ
= (Ñp¿apÑ)Fp + (ap¿Ñp)Fp .
Only if (ÑapÑ)<0> = 0 (eg. if ap=a independent of p) are they equivalent.
We have the geometric product rule
Ñ(a¨b)
= Ñ(aѨb)
+ Ñ(a¨bÑ)
where ¨ denotes any bilinear multivector product (¿,Ù,., geometric, etc.)
and Ñ denotes the differentiating scope of the Ñ.
As long as we remember the geometric product rule, we can derive many equations involving Ñ simply by
reference to its 1-vector nature. a¿(a¿b)=0, for example, gives
ѿ(ѿb)=0.
However, care must be taken with Ñ. It can readily be verified that Ñx = N and that
Ñ(x2)=2x . Derivations such as
Ñ(x2) = ÑxÑx + ÑxxÑ
= 2(Ñx)x
= 2Nx are erroneous. We cannot commute the two factors of x while we are varying one of them,
because the variation may not commute with x.
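Several of the identities below (and the caution above) can be spot-checked by finite differences. A sketch (numpy assumed; N, x0 and a are arbitrary choices) confirming Ñ¿x = N, Ñ(x2) = 2x, and Ñ(x¿a) = a in Euclidean Â4:

```python
import numpy as np

N = 4
x0 = np.array([0.3, -1.0, 0.6, 2.0])
a  = np.array([1.0, 0.5, -2.0, 0.25])
eps = 1e-6
E = np.eye(N)

# finite-difference gradient of a scalar field f at x
grad = lambda f, x: np.array([(f(x + eps*e) - f(x)) / eps for e in E])

# del . x = N  (divergence of the identity field)
assert np.isclose(sum(grad(lambda x: x[k], x0)[k] for k in range(N)), N, atol=1e-4)
# del (x^2) = 2x  (and NOT 2Nx, as the erroneous derivation would give)
assert np.allclose(grad(lambda x: x @ x, x0), 2*x0, atol=1e-4)
# del (x . a) = a
assert np.allclose(grad(lambda x: x @ a, x0), a, atol=1e-4)
```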
Useful Ñ results
ѺÑx ; y↑ º ey ; * denotes scalar product throughout :
Ñ x = Ñ¿x = N | Ñ Ù x = 0 |
Ñx(xb) = (Ñxx)b = Nb | Provided Ñxb=0. This grade decomposes into |
Ñ(x¿bk) = (bk.Ñ)x = kbk
in particular: Ñ(x¿a) = (a¿Ñ)x = a | Ñ(xÙbk) = (bkÙÑ)x = (N-k)bk
in particular: Ñ(xÙa) = (N-1)a |
Ñ(bkx) = (-1)k(N-2k)bk | in particular: Ñ(ax) = Ñ(2(x¿a)-xa) = (2-N)a |
Ñx(x*a)↑ = (åi=1N ei(ei*a))(x*a)↑ = a<1> (x*a)↑ | and so: Ñx2ex*a = a<1>2 ex*a |
Ñx((x*a)b)↑ = a<1> ((½p + x*a)b)↑ | and so: Ñx2e(x*a)b = -a<1>2 e(p+x*a)b provided b2=-1 |
Ñ(lx2b)↑ = 2lNx(lx2b)↑b | |
Ñ(¦(x)b)↑ = (Ѧ(x))(¦(x)b)↑b for central ¦(x) | |
Ñ(g(x)F(x)) = (Ñ(g(x))F(x) + g(x)(ÑF(x)) | so
Ñ(f(x)~)
= Ñ(|f(x)2|-½) f(x)
+ |f(x)2|-½ Ñf(x)
= -½|f(x)2|-3/2Ñ(|f(x)2|) f(x) + |f(x)2|-½ Ñf(x) = |f(x)2|-½ ( -/+ ½|f(x)2|-1Ñ(f(x)2) f(x) + Ñf(x) ) according to sign of f(x)2. |
Ñ(F(x)G(x)) = (ÑFÑ(x))G(x) + (Ñ¿F(x))G(x)Ñ + (ÑÙF(x))G(x)Ñ | |
According as x2 is ± : | |
Ñ(|x|m) = ±m|x|m-2x = ±m|x|m-1x~ | Ñ(x|x|m) = (N+m)|x|m " mÎÂ Þ Ñx(x~) = (N-1)|x|-1 |
Ñ ¦(|x|) = ±¦'(|x|)x~ | Ñ(¦(|x|)x~) = ¦'(|x|) + (N-1)|x|-1¦(|x|) |
Ñ x2k = 2kx2k-1 | Ñ x2k+1 = Ñ¿x2k+1 = (2k+N)x2k |
Ñ((lx)↑) = l(lx)↑ + (N-1)x-1¿((lx)↑) | Only for N=1 do we have Ñ((lx)↑) = l(lx)↑ . |
Ñ (¦(|x|)x~)↑ = ¦'(|x|)(¦(|x|)x~)↑ + sin(¦(|x|))(N-1)|x|-1 for x2 < 0. | |
Ñx((x-a)|x-a|-N)
= Ñx¿((x-a)|x-a|-N)
= oN = |dSN-1|
at x=a and 0 elsewhere. | |
Ñx((x¿a)m) = m(x¿a)m-1a | |
Ñbk(x¿a2) = 2(-1)k(bkÙa2-bk.a2) | k³2 |
Ñ((x¿a2)¿bk)=a2×bk + 2a2¿bk ; | k³2 |
(bk.Ñ)(x¿a2) = bk×a2 + 2a2¿bk ; | k³2 |
Ñ((x¿a2)Ùbk) = a2×bk + 2a2Ùbk | |
(bkÙÑ)(x¿a2) = bk×a2 + 2a2Ùbk | |
Ñ(axb)↑ = (Ñax)(axb)↑ b | |
bx¿(Ñax) = (bx¿Ñ)axÐ | Holds only for scalar ax. We cannot, in general, retrieve (b¿Ñx)ax from b and Ñxax . |
Ñ¿(axcx) = ax(Ñ¿cx) + (Ñax)¿cx | ÑÙ(axcx) = ax(ÑÙcx) + (Ñax)Ùcx |
Ñ¿(axÙcx) =
(ax¿Ñ)cxÑ
+ (Ñ¿axÑ)cx
- axÙ(Ñ¿cxÑ)
- axÑÙ(Ñ¿cx)
In particular Ñ¿(aÙbx) = (a¿Ñ)bx - a(Ñ¿bx) [ Proof : a¿(bÙc) = (a¿b)c - bÙ(a¿c) with a=Ñ .] |
ÑÙ(ax¿cx)
= (ax¿Ñ)cxÑ
+ cx(Ñ¿axÑ)
- ax¿(ÑÙcxÑ)
- (cxÙÑ).axÑ
In particular Ñ(a¿bx) = (a¿Ñ)bx - a¿(ÑÙbx) [ Proof : a¿(bÙc) = (a¿b)c - bÙ(a¿c) with b=Ñ .] |
Ñ x~(x~+a)~ = 2-½(1±x~¿a)-½|x|-1 (a(N-3/2) + ½x~) | when x~2 = a2 = ±1 |
Ñ |x|½x~(x~+a)~ = 2-½(1-x~¿a)-½|x|-½ (a(N-1)+x~) | when x~2 = a2 = ±1 |
Monogenic Functions
We say v(x) is monogenic (aka. analytic) if Ñxv(x) = 0 " x .
We say v(x) is meromorphic if it is monogenic at all x except some well-defined poles
x1,x2,...,xk
at which we have Ñxv(x) ïxi
= - oM Ri
where multivector Ri is the residue at pole xi and oM
is the boundary content of a unit radius (M-1)-sphere.
Monogenic functions are fundamental in theoretical physics, particularly
(in nonrelativistic central potential theory)
spherical monogenics
Yx
of the form
Y(x)
= xl y(x~)
= rl y(q,f)
for N=3 .
The monogenicity condition ÑxYx=0 requires (xÙÑx) y(q,f) = l y(q,f)
interpreted as Y having constant scalar integer "angular-momentum operator" eigenvalue l, known as the angular quantum number .
[ The brackets around (xÙÑx) denote the precedence of the Ù ; the differentiating scope of the Ñx acts rightwards over the y. ]
For l<0 we have a single pole at 0.
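For N=2 and Euclidean signature, monogenicity reduces (under the usual identification of the even subalgebra with the complex numbers) to the Cauchy-Riemann equations. A sketch (numpy assumed; the particular u, w correspond to the complex monogenic function (x+iy)2):

```python
import numpy as np

# F = u + e12 w with del F = 0 corresponds, for this identification,
# to the Cauchy-Riemann equations for u + iw; here u + iw = (x+iy)^2:
u = lambda x, y: x*x - y*y
w = lambda x, y: 2*x*y

eps = 1e-6
x, y = 0.7, -0.3
ux = (u(x+eps, y) - u(x, y)) / eps
uy = (u(x, y+eps) - u(x, y)) / eps
wx = (w(x+eps, y) - w(x, y)) / eps
wy = (w(x, y+eps) - w(x, y)) / eps

assert np.isclose(ux, wy, atol=1e-4)     # du/dx =  dw/dy
assert np.isclose(uy, -wx, atol=1e-4)    # du/dy = -dw/dx
```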
Laplacian Ñ2
Since Ñxv(x)
= ÑavÑx(a)
= Ña((a¿Ñx)v(x)))
we have Ñx2v(x)
= Ñx(ÑavÑx(a))
= ÑbÑavÑ2x(a,b)
= (Ñb¿Ña
+ ÑbÙÑa)vÑ2x(a,b) .
If v obeys the integrability condition, the symmetry of second differential vÑ2x(a,b) causes the Ù term to vanish
[
(ÑbÙÑa)vÑ2x(a,b)
= -(ÑaÙÑb)vÑ2x(a,b)
= -(ÑaÙÑb)vÑ2x(b,a)
]
and so
Ñx2 = Ñx¿Ñx
is a grade-preserving ("scalar")
operator known as the
second derivative or Laplacian or D'Alembertian operator.
Consequently
Ñx2 = Ñx¿Ñx
and
ÑxÙÑx = 0 , at least with regard to its action on integrable tensors.
This is most clearly seen when expressed in coordinate terms with regard to an orthonormal basis
as
ÑxÙÑx = (åi=1N eiÐei)Ù(åj=1N ejÐej)
= åi<j eij(ÐeiÐej-ÐejÐei)
= 2 åi<j eij(Ðei×Ðej)
Thus, assuming ÐeiÐejv(x) = ÐejÐeiv(x), we have
Ñ2v(x) =
åj=1N ej Ðej (åk=1N ek Ðek (v(x)))
= åj=1N åk=1N ejekÐekÐejv(x)
= åk=1N (ek)2 Ðek2(v(x))
= åk=1N ek Ðek2 v(x)
so that, when applied to a scalar function, Ñ2 is the conventional Laplacian, but with the basis signatures
weighting the summation when acting in non-Euclidean spaces.
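The Laplacian results can be spot-checked with central second differences. A sketch (numpy assumed; the evaluation point is an arbitrary choice) confirming Ñ2(|x|2-N) = 0 away from the origin for N=3, the familiar harmonicity of the 1/r potential:

```python
import numpy as np

N = 3
f = lambda x: np.linalg.norm(x)**(2 - N)        # |x|^(2-N); for N=3 this is 1/r

def laplacian(f, x, eps=1e-4):                  # central second differences
    return sum((f(x + eps*e) - 2*f(x) + f(x - eps*e)) / eps**2
               for e in np.eye(N))

x = np.array([0.6, -0.8, 1.1])
assert abs(laplacian(f, x)) < 1e-4              # del^2 |x|^(2-N) = 0 away from 0
```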
With regard to functions not obeying the integrability condition, the Laplacian Ñx2 includes a bivector operator ÑxÙÑx known as the torsion which does not vanish, and so Ñx acts somewhat "less like" a 1-vector geometrically.
Suppose Ñx2 vx = ax vx for central multivector ax with
Ñx ax = 0 .
(aÑx)↑ vx
= (1 + 2!-1a2ax + 4!-1a4ax2 + ...)vx
+ (a + 3!-1a3ax + ...)Ñxvx
= (cosh(aax½) + ax-½ sinh(aax½)Ñx)vx
provided a and "square root" ax½ are both central.
Writing x=|x| we have Ñx2 ¦(x) = ±(¦"(x)+(N-1)x-1¦'(x))
according as x2 is ± so Ñx2 ¦(x) = 0 provided
¦"(x) = (1-N)x-1¦'(x) eg. for ¦(x)= x2-N ;
and Ñx2 ¦(x) = l ¦(x) provided ¦"(x) + (N-1)x-1¦'(x) = ±l ¦(x) .
Useful Ñ2 results
According as x2 is ± : | |
Ñ2(|x|m)
= ±m(N+m-2)|x|m-2
Þ Ñ2(|x|2-N)=0 |
Ñ2(x|x|m) = ±(N+m)m|x|m-2x
Þ Ñ2x = Ñ2 |x|1-Nx~ = 0 |
Ñ2(x~) = ±(1-N)|x|-3x | |
Ñ2 ¦(|x|) = ±(¦"(|x|) + (N-1)|x|-1¦'(|x|)) | Ñ(¦(|x|)x~) = ¦'(|x|) + (N-1)|x|-1¦(|x|)
Ñ2(¦(|x|)x~) = ±(¦"(|x|) - (N-1)|x|-2¦(|x|) + (N-1)|x|-1¦'(|x|))x~ |
Ñ2(¦(|x|)x~)↑
= (-¦"(|x|)x~
+ ¦'(|x|)2
- ¦'(|x|)(N-1)x~|x|-1)(¦(|x|)x~)↑
+ sin(¦(|x|))(N-1)|x|-2x~ for x2<0 |
Generalising v(x) to a multivector argument v(x) we can construct a multivector del-operator we will call
the multiderivative or "allblade gradient"
Ñ v(x) º
Ñx(v(x))
º
åk=02N-1 e[.k.] Ðxe[.k.] v(x)
where e[.k.] is the kth pureblade element of a given ordered extended basis
for UN and {e[.k.]} is an extended inverse frame for that basis. Note that we include a scalar-directed
derivative due to e[.0.] = 1 in this summation .
If multivector x is prohibited from "containing" specific basis blades, then those blades are omitted from the summation.
In particular Ñx<k> v(x) =
åi=1N kCe[.<k>i.] Ðxe[.<k>i.] v(x)
where e[.<k>i.] is the ith of the NCk k-blade elements
of a given ordered extended basis.
Ñ<k>x º (Ñx)<k> = Ñx<k> is known as the k-derivative (so Ñ<1>x = Ñx , the 1-derivative ).
More generally, for blade b we define the b-projected multiderivative by
Ñ[b]x =
åk=02N-1 (¯be[.k.]) Ðx¯be[.k.]
Clearly
Ñ[i]x =
Ñx ;
Ñ[1]x = Ñx<0> .
For invariant a,
Ñ<k>xax =
Ñ<k>xax<k> =
åi=0N Ñ<k>xa<i>x<k> =
åi=0N H ik a<i>
In particular,
Ñ<k>xax =
Ñxax<k> =
Ñ<k>xax<k> =
NCka ( and hence
Ñxax = 2Na ) provided Ñxa=0 .
The generalised multivector-directed derivative operator is expressible as
Ða = (a*Ñx) ,
providing an alternate coordinate-free definition of Ñx .
Generalising
Ñx
= Ña(a¿Ñx)
we have
Ñx =
Ña(a*Ñx)
.
Ñx(x*a) = åj=12N e[.j.] (e[.j.] * a) = åj=12N e[.j.] a[.j.] = a .
Ðxb xÙa = bÙa by the limit definition of directed derivatives
.
Thus Ñx xÙa =
åj=12N
e[.j.]
(e[.j.] Ù a)
by the coordinate based defintion of Ñ .
Let us now assume that a is a (N-k)-blade and that an orthonormal frame is chosen with
ei¿a = 0 " 1£i£k and a=|a|e(k+1)(k+2)..N.
In this frame, only basis blades within e12...k have nonvanishing
outer product with a, each contributing a to the summation.
We thus have the frame-invariant identity
Ñx xÙa = åi=0k kCi a for (N-k)-blade a and, more usefully,
Ñ<r>x xÙak = (N-k)Cr ak for x-independent k-blade ak.
Similarly, Ñx x¿a =
åj=12N
e[.j.]
(e[.j.] ¿ a)
which for k-blade a may be evaluated in the same frame-dependent manner.
The more general gradifying substitution gives that
(Ñß*ÑÞ) has the effect of replacing
the first nonprimary multivector argument by ÑÜ .
The following results are useful:
Hence Ñ(ap¿bp) = Ñ(apÑ¿bp) + Ñ(ap¿bpÑ) = Ðbpap + Ðapbp - bp¿(ÑÙap) - ap¿(ÑÙbp) . [ HS 2-1.43 ]
Dropping all p suffixes for brevity, we have
(bÙc)¿(ÑÙaÑ) =
Ðc(b¿a) - Ðb(c¿a) + (bÄc)¿a
where
apÄbp º Ðapbp - Ðbpap .
[ HS 2-1.46 ]
[ Proof : (bÙc)¿(ÑÙaÑ) =
b¿(c¿(ÑÙaÑ))
= b¿(
(c¿Ñ)aÑ - (c¿aÑ)Ñ
)
= (c¿Ñ)(b¿aÑ) - (b¿Ñ)(c¿aÑ)
= (c¿Ñ)(b¿a) - (b¿Ñ)(c¿a)
- ((c¿Ñ)(bÑ¿a) - (b¿Ñ)(cÑ¿a))
= Ðc(b¿a) - Ðb(c¿a)
- ((c¿Ñ)bÑ - (b¿Ñ)cÑ)¿a
= Ðc(b¿a) - Ðb(c¿a) + (bÄc)¿a
.]
Setting a=Ñ gives
(bÙc)¿(ÑÙÑ) =
Ðc(b¿Ñ) - Ðb(c¿Ñ) + (bÄc)¿Ñ
= 2Ðc×Ðb + (bÄc)¿Ñ
[ HS 4-1.15 ]
Partial Undirected Derivative ¶
If we have a function F(a,b,..,h) of several parameters
we have "partial" differentiation
¶F/¶bij.. = Limd ® 0
d-1(F(a, b+deij.. ...,h)
- F(a, b, ...,h) )
in which other parameters like a and h are held constant even if they may in actuality depend on b.
¶x º åi=1N ei (¶/¶xi)
¶x
º åijk..=1N eijk.. (¶/¶xijk...)
are Ñx and Ñx with ¶/¶xijk... replacing
d/dxijk...
Secondary Undirected Derivative ÑÞ
ÑÞ Fp(a1,a2,..,ak) º Ña1 Fp(a1,a2,..,ak)
º åi=1N eiÐa1ei Fp(a1,a2,..,ak)
= åi=1N ei Fp(Ða1eia1,a2,..,ak)
= åi=1N ei Fp(ei,a2,..,ak)
Ðbd(FpÑ(a1,..,ak,d))
= (ÐbFp)(a1,..,ak) follows easily from the limit definition of Ðdb
so we have
ÑÞkFpÑ(a1,..,ak,d)
= ÑdFpÑ(a1,..,ak,d)
= (ÑFp)(a1,..,,ak)
which we can write as
ÑÞ→Ðß = Ñß
abbreviating ÑÞ→Ðß Fp(a1,..,ak) º
= ÑÞ→((ÐFp)(a1,..ak)) =
ÑßFp(a1,..,ak)
º (ÑFp)(a1,..,ak)
where the → indicates derivative scope applying only to the rightmost ("last") parameter,
the parameter "introduced" by the Ð.
We will call this the differential derivative rule.
In particular, with regard to a simple field
Fp=F(p) with no nonprimary parameters we have
ÑÞÐßFp = ÑpFp .
(ÑpFp)Ñ(a) º Ðpa(ÑpFp) = ÑbFpÑ2(a,b)
Now (Ñß¿ÑÞ)Fp(a,b,..)
= åi=1N ei2 Ðaei((ÐpeiFp)(a,b,..))
= åi=1N ei2 (ÐpeiFp)(ei,b,..)
= åi=1N (ÐpeiFp)(ei,b,..)
= FpÑ(Ñ,b,..)
so the operator (Ñß¿ÑÞ)
= (ÑÞ¿Ñß) has the effect of replacing the first nonprimary
1-vector parameter with ÑÜp .
We will punningly call this the gradifying substitution rule. By regarding a
(t;k)-multiform as a skewsymmetric (t;k)-tensor Fp(a1,a2,...,ak)=Fp(a1Ùa2Ù...ak)
we obtain
(Ña1¿Ñp)Fp(a1Ùa2Ù...ak) = Fp(ÑÜpÙa2Ù...ak)
Simplicial Derivative Ñ(r)
Any multitensor grade decomposes to a sum of multiforms so the derivatives of multiforms are of particular interest.
In this section we neglect any non-linear "primary" parameter and consider functions of k-blades.
Suppose first that L(a1,a2)=L(a1Ùa2) is bilinear and skewsymmetric in a1, and a2.
(Ña2*Ña1) L(a1,a2)
=
((åj=1N ejÐa2ej)*
(åi=1N eiÐa1ei))
L(a1Ùa2)
=
åj=1N åi=1N (ej*ei)
Ða2ej Ða1ei L(a1Ùa2)
=
åi=1N (ei2)
Ða2ei L(eiÙa2)
=
åi=1N (ei2)L(eiÙei) = 0 .
Thus with regard to action on linear functions of a1 Ù a2 ,
Ña2Ña1
= Ña2ÙÑa1
= -Ña1ÙÑa2 .
Suppose now that L(x) is a linear multivector-valued function of a multivector x.
Solely by its action on r-blades, L induces a function of a k-blade ak and a (r-k)-blade
br-k
via L(akÙbr-k) .
The directed chain rule gives
Ðakck L(akÙbr-k)
= L(Ðakck(akÙbr-k))
= L(ckÙbr-k) .
Similarly
Ðbr-kdr-k L(akÙbr-k)
= L(Ðbr-kdr-k(akÙbr-k))
= (-1)k(r-k)L(Ðbr-kdr-k(br-kÙak))
= (-1)k(r-k)L(dr-kÙak)
= L(akÙdr-k) .
We have the linear derivative factorisation theorem
of a linear multivector function L(x).
Ñbr-kÑakL(akÙbr-k)
= rCk Ñx<r> L(x) .
[ Proof :
Ñbr-kÑakL(akÙbr-k)
= åj=1NCr-k e[.<r-k>j.]
Ðbr-ke[.<r-k>j.]
åi=1NCk e[.<k>i.]
Ðake[.<k>i.] L(akÙbr-k)
= åj=1NCr-k
åi=1NCk
e[.<r-k>j.]
e[.<k>i.]
Ðbr-ke[.<r-k>j.]
L(e[.<k>i.]Ùbr-k)
= åj=1NCr-k åi=1NCk
e[.<r-k>j.]e[.<k>i.]
L(e[.<k>i.]Ùe[.<r-k>j.])
For each e[.<k>i.] blade only (r-k)-blades
composed from the remaining N-k basis 1-vectors
will give a non-vanishing
e[.<k>i.]Ùe[.<r-k>j.] .
Consider a particular basis r-blade
e[.<r>q.]
and choose a particular k of the basis 1-vector factors of this r-blade. The sign change
acquired in moving these k factors leftwards to factorise
e[.<r>q.] as
e[.<k>i.]e[.<r-k>j.]
for some i,j is precisely the same sign change as that acquired reordering
e[.<r>q.]§
as
e[.<r-k>j.]§e[.<k>i.]§ .
Thus
e[.<r-k>j.]§e[.<k>i.]§
L(e[.<k>i.]Ùe[.<r-k>j.])
= e[.<r>q.]§ L(e[.<r>q.]) .
Now, e[.<t>u.]§
= e[.<t>u.] may not hold for nonEuclidean UN but
if all are non-null, the scalar multiplier incurred replacing
e[.<r-k>j.]§e[.<k>i.]§
with
e[.<r-k>j.]e[.<k>i.]
is identical to that incurred by replacing
e[.<r>q.]§ with e[.<r>q.] , so we can safely write
e[.<r-k>j.]e[.<k>i.]
L(e[.<k>i.]Ùe[.<r-k>j.])
= e[.<r>q.] L(e[.<r>q.]) .
Thus
e[.<r>q.] L(e[.<r>q.]) arises in the summation
rCk times and the result follows
.]
Taking k=1 the linear derivative factorisation theorem provides
Ñbr-1Ña1L(a1Ùbr-1)
= r Ñx<r> L(x)
and hence
(Ñar...Ña2Ña1)L(a1Ù...ar) =
r! Ñ<r>L(a1Ù...ar) .
When acting on a skewsymmetric L, the multivector operator
Ñar...Ña2Ña1
is equivalent to the r-vector operator
ÑarÙ...Ña2ÙÑa1
and so
(ÑarÙ...Ña2ÙÑa1)L(a1Ù...ar) =
r!Ñ<r>L(a1Ù...ar) .
Accordingly we define the simplicial r-derivative
by
Ñ(r) = (r!)-1ÑarÙ...Ña2ÙÑa1
, equivalent to the r-derivative when acting on an r-multiform.
Conveyed Derivative Ñ→
We reassume a primary "point" parameter, which we will denote by p rather than x,
and suppose we have an extended field
Fp = F(p,a1p,a2p,...,akp) . We here refer to
Ñp→i Fp(a1p,..,akp)
º FpÑ(a1p,..,Ñpaip,..,akp)
= åj=1N ÐejßFp(a1p,..,ejaip,..,akp)
= åj=1N (ÐejFp)(a1p,..,ejaip,..,akp)
as the conveyed 1-derivative
associated with the ith linear parameter. The ß here indicates that
the differentiating scope of the "interior" Ñp is to be considered as "leaping leftwards" to apply only to the
instance of p as primary parameter.
In the case of 1-multitensor F(p,a1p) = Fp(ap) we can unambiguously define
the conveyed 1-derivative operator by
Ñ→p Fp(ap) º
Fp(ÑÜpap) º
FpÑ(Ñpap)
= åi=1N ÐeißFp(eiap)
= åi=1N (ÐeiFp)(eiap) .
We can move the "scalar" Ðßei "leftwards" out of the Fp() only because
Fp(a) is linear in a.
As before, the differentiating scope of the Ñp geometrically acting on the ap applies not to ap
but to the p in the primary argument. ap (ie. the value of a(p) at p) geometrically affects the
result of the differentiation but there is no differentiation of a(p) itself and values of a
other than at p have no effect on the result, which is why we can informally think of "holding them constant".
The Ñ→ notation is intended as suggestive of the geometrical "nature" of the Ñp
hopping "rightwards" into the non-primary parameters while the differentiating scope remains
exclusively with the primary parameter.
The conveyed derivative grade decomposes as
Fp(ÑÜpap) º
Fp(ÑÜp¿ap + ÑÜpÙap)
= Fp(ÑÜp¿ap) + Fp(ÑÜpÙap)
= FpÑ(Ñp¿ap )
+ FpÑ(ÑpÙap) .
If a 1-tensor is extended outtermorphically to take a multivector argument with Fp(a)=a
it is natural to write
Ñ→p Fp(1) = Fp(ÑÜp)
= FpÑ(Ñp)
= åi=1N (Ðei Fp)(ei)
= åi=1N ÐeiFp(ei)
.
We say a 1-tensor ¦p(a) is zero divergent if
Ñp¿¦p(a) = 0 " a
which is equivalent to ¦Dp(ÑÜp)¿a = 0 " a
Û ¦Dp(ÑÜp)=0 .
If ¦p is symmetric, we thus have ¦p(ÑÜp)=0 Û Ñp¿¦p(a)=0 " a .
Suppose we have a (t;k)-multiform (t-vector-valued point-dependant linear function
of a k-vector) Fp(ak) for k³1. The conveyed derivative induces a (t;k+1)-multiform
FÑ→p¿(ak+1) º
(Ñ→p¿Fp)(ak+1) º
Fp(ÑÜp¿ak+1 )
= FpÑ(Ñp¿ak+1) ;
and a (t;k-1)-multiform
FÑ→pÙ(ak-1) º
(Ñ→pÙFp)(ak-1) º
Fp(ÑÜpÙak-1 )
= FpÑ(ÑpÙak-1) .
Fp(ak+1¿ÑÜp) = (-1)k Fp(ÑÜp¿ak+1) is known as
the exterior 1-differential and is frequently denoted dFp .
Fp(ak-1ÙÑÜp) = (-1)k-1 Fp(ÑÜpÙak-1) is known
as the interior 1-differential. In most of the literature, the 1- is unstated.
The Expanded k-blade contraction rule then yields, for example,
Ñ→p Fp(a1Ùa2Ùa3)
= Fp(ÑÜp¿(a1Ùa2Ùa3))
= Fp((Ñp¿a1)(a2Ùa3)
- (Ñp¿a2)(a1Ùa3)
+ (Ñp¿a3)(a1Ùa2))
= (Ñp¿a1)FpÑ(a2Ùa3)
- (Ñp¿a2)FpÑ(a1Ùa3)
+ (Ñp¿a3)FpÑ(a1Ùa2)
= Ðßa1Fp(a2Ùa3)
+ Ðßa2Fp(a3Ùa1)
+ Ðßa3Fp(a1Ùa2)
= S1←2←3 Ðßa1Fp(a2Ùa3)
Operating on and with Ñ
It is important not to confuse the differently-scoped derivative operators introduced above.
It can be shown that (ÑÜp¿)2 = 0 with regard to its action on functions satisfying the integrability condition.
We can generalise to the
conveyed multiderivative
Ñp→ Fp(ap)
º åk=02N-1 Ðpe[.k.]ß (Fp(e[.k.]ap))
= åk=02N-1 (Ðpe[.k.] Fp)(e[.k.]ap)
Adjoints
The adjoint of a (nonlinear) 1-field ¦ is a 1-tensor
defined by
¦D(u) º Ñx(u¿¦(x)) .
If ¦:VM ® UN then ¦D : UN ®
VM.
Let a Î UN, b Î VM.
For linear ¦, the (outtermorphism extended) adjoint has the property that
(a¦(b))<0> = (b¦D(a))<0>
and in particular we have the 1-vector ¿ crossover rule
a¿¦(b) = ¦D(a)¿b
which provides an alternate definition for the adjoint of a 1-tensor.
Setting a=b=i
gives |¦| = |¦D| .
The ¿ crossover rule has operator formulation
a¿((b¿Ñx)¦(x)) = b¿(Ñx(a¿¦(x)))
for general ¦.
We also have
a¿¦D(b) = ¦D(¦(a)¿b)
and
b¿¦(a) = ¦(¦D(b)¿a)
[ note ¿ rather than . ] .
If ¦(x)=(a+bb2)x where b2 is zero or a 2-vector containing x (so that ¦(x) remains 1-vector valued) then ¦D(x)= x(a+bb2)=(a-bb2)x .
More generally, if ¦(x) = axxbx preserves grade then
¦D(x) = bxxax .
[ Proof : Cyclic scalar rule .]
Hence for any multivector conjugation or operation ^ with a^^=a we have
a^D = a^^ .
If ¦ is expressed with regard to an orthonormal basis {ei} as ¦ ij then ¦D is expressed as ¦D ij = eiej¦ ji where ei is the signature of ei. The adjoint can thus be thought of as a "signature reflected transpose". [ Proof : ¦D ij = ei¿(¦D(ej)) = eiejei¿¦D(ej) = eiej¦(ei)¿ej = eiej¦ ji .]
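This "signature reflected transpose" is easy to check numerically. A minimal sketch, assuming the coordinates ¦ ij are taken as expansion coefficients ¦(ej) = åi ¦ ij ei in an orthonormal basis with signatures (+,+,-); the names dot, F, FD are illustrative only:

```python
import numpy as np

eps = np.array([1.0, 1.0, -1.0])      # signatures of e1, e2, e3

def dot(a, b):
    """1-vector inner product a.b = sum_i eps_i a_i b_i in this basis."""
    return float(np.sum(eps * a * b))

rng = np.random.default_rng(1)
F = rng.normal(size=(3, 3))           # expansion coefficients of a 1-tensor f
FD = np.outer(eps, eps) * F.T         # adjoint: fD_ij = eps_i eps_j f_ji

a, b = rng.normal(size=3), rng.normal(size=3)
lhs = dot(a, F @ b)                   # a . f(b)
rhs = dot(FD @ a, b)                  # fD(a) . b  -- the crossover rule
```

The two sides agree for arbitrary a and b, confirming that the signature-reflected transpose satisfies the ¿ crossover rule.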
If ¦ is invertible,
¦D(a¿b) = ¦-1(a)¿¦D(b) .
In particular [b=i]
¦-1(a) = (i2)¦D(a*)* / |¦| .
The crossover rule gives ¦-1(ei)¿¦D(ej) = ¦(¦-1(ei))¿ej = ei¿ej
so ¦D provides an inverse frame for {¦-1(ei)}.
If ¦ is an invertible linear Lorentz transformation, ¦D = ¦-1 .
[ Proof : a¿b = ¦(a)¿¦(b)
= a¿¦D(¦(b)) " a,b .]
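As an illustrative sketch, take a hyperbolic boost in a 2D Minkowski space (the coefficient matrix and rapidity below are example assumptions): its signature-reflected transpose is its matrix inverse.

```python
import numpy as np

phi = 0.3                               # an arbitrary rapidity
c, s = np.cosh(phi), np.sinh(phi)
L = np.array([[c, s],
              [s, c]])                  # coefficient matrix of the boost f
eps = np.array([1.0, -1.0])             # signatures (+,-)
LD = np.outer(eps, eps) * L.T           # signature reflected transpose f^D

check = LD @ L                          # f^D f : the identity matrix
```

Since the boost preserves the Minkowski inner product, LD @ L comes out as the identity, ie. ¦D = ¦-1.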
Since ¦Ñ is linear for general ¦, the above results relating ¦D to ¦ for linear ¦ also hold relating
¦D to ¦Ñ for general ¦. In particular we have
¦-D º (¦D)-1 = (¦-1)D
[ Proved under "Chain Rule" below. ]
¦-Ñ(a) = i2¦D(a*)* / |¦| .
¦DD = ¦Ñ
[ Proof :
a¿¦Ñ(b) = b¿¦D(a) = a¿¦DD(b) since ¦D is linear .]
¦ÑD = ¦D
[ Proof :
¦ÑDx(v) = Ña(v¿¦Ñx(a))
= Ña(v¿((a¿Ñx)¦(x)))
= Ñx(v¿¦(x)) = ¦Dx(v)
.]
We can generalise to a multivector adjoint
¦D(u) = Ñx(u*¦(x)) satisfying the
*-crossover rule
a*¦(b) = b*¦D(a)
for linear ¦(x).
The adjoint of ¦(a)= fa is ¦D(a)=af so we say a multivector f is self adjoint
if it is central (commutes with everything).
An alternative (equivalent) definition of the adjoint based on the simplicial derivative
defined below is
¦D(b<k>)
º
(k!)-1 (ÑakÙ...ÙÑa1) (b*(¦(a1)Ù...Ù¦(ak)))
= (k!)-1 (ÑakÙ...ÙÑa1)(b<k> ¿ (¦(a1)Ù...Ù¦(ak)))
for k>0.
The Undirected Chain Rule
Given scalar field f and (nonlinear) 1-field ¦,
Ñ(f(¦(x))) = ¦Dx( (Ñf)(¦(x)) )
.
[ Proof : a¿(Ñ(f(¦(x))) =
Limd®0(( f(¦(x+da)) - f(¦(x)) )/d)
= Limd®0(( f(¦(x)+d¦Ñx(a)) - f(¦(x)) )/d)
= ¦Ñx(a)¿(Ѧ(x)f(¦(x)))
= ¦Ñx(a)¿((Ñf)(¦(x)))
= a¿(¦ÑxD((Ñf)(¦(x))))
= a¿(¦Dx((Ñf)(¦(x))))
" a
.]
This provides the multivector calculus formulation of the Chain Rule which we can write as
Ñx = ¦Dx(Ѧ(x))
;
Ѧ(x) = ¦-Dx(Ñx)
.
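In Euclidean coordinates the adjoint of the differential is just the transposed Jacobian, so this Chain Rule reads grad(f∘g)(x) = Jg(x)T (grad f)(g(x)). A finite-difference sketch (the fields f and g below are arbitrary examples):

```python
import numpy as np

def g(x):                        # an example nonlinear 1-field R^2 -> R^2
    return np.array([x[0]*x[1], x[0] + np.sin(x[1])])

def f(y):                        # an example scalar field
    return y[0]**2 + 3*y[1]

def num_grad(fn, x, h=1e-6):
    """Central-difference gradient of a scalar field."""
    return np.array([(fn(x + h*e) - fn(x - h*e)) / (2*h)
                     for e in np.eye(len(x))])

def jacobian(fn, x, h=1e-6):
    """J[i,j] = d fn_i / d x_j by central differences."""
    return np.stack([(fn(x + h*e) - fn(x - h*e)) / (2*h)
                     for e in np.eye(len(x))], axis=1)

x = np.array([0.7, -0.4])
lhs = num_grad(lambda t: f(g(t)), x)        # gradient of the composite
rhs = jacobian(g, x).T @ num_grad(f, g(x))  # adjoint of g applied to (grad f)(g(x))
```

The two evaluations agree to the accuracy of the finite differences.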
The Chain Rule gives
(¦g)D = gD¦D
[ Proof :
(¦g)D(v) = Ñx(v¿¦(g(x)))
= gDx(Ñg(x)(v¿¦(g(x))))
= gD(¦D(v)) .]
Thus ¦-D º (¦D)-1 = (¦-1)D .
[ Proof : Set g=¦-1 in
(¦g)D = gD¦D
.]
The directed chain rule can be expressed as Ðxa F(g(x)) = FÑ(gÑ(a)) = FÑ((Ñx*a)g(x)) .
Of particular interest is the case where g(x)=g(x) is scalar valued with derivative
gÑ(a) = (a*Ñx)g(x) = a*(Ñxg(x))
= (aÑxg(x))<0> ;
and where F mapping scalars to multivectors
has a derivative FÑ(x)
= FÑ<0>(x)
º F'(x) = Ñ<0> F(x) = ¶F(x)/¶x mapping scalar
x to multivectors of like grade(s) to F.
We then have
ÐaF(g(x)) = F'(g(x))gÑ(a)
= (a*(Ñxg(x))) F'(g(x)) which we will call the
directed scalar-funneled chain rule.
Whence the
undirected scalar-funneled chain rule
Ñx F(g(x)) = (Ñxg(x)) F'(g(x)) .
The Kinematic Rules
A particularly useful result is
(Ñ(bab§))<0>
= 2((Ñb) a<+§> b§)<0>
for constant (point-independent) a.
[ Proof :
(Ñ(bab§))<0>
= (Ñ(bÑab§)+Ñ(ba(b§Ñ)))<0>
= (Ñ(b)ab§)<0>+(ba(b§Ñ))<0>
= (Ñ(b)ab§)<0>+(ba(Ñb)§)<0>
= (Ñ(b)ab§)<0>+(((Ñb)a§b§)§)<0>
= (Ñ(b)ab§)<0>+((Ñb)a§b§)<0>
= (Ñ(b)(a+a§)b§)<0>
.]
Consequently, we have the kinematic rule
Ñ¿(b§(a))
º Ñ¿(bab§) = 2((Ñb)ab§)<0>
for constant 1-vector a . Note that we must retain the <0> on the right hand side even
if the left hand side is scalar-valued.
Also
(Ñ(bb§))<0>
= 2((Ñb)b§)<0> = 0
when bb§ is constant (eg. for a rotor field b).
If b§ preserves grade then
Ñ(bab§) =
2(Ñb)ab§
[ Proof :
ÑÙbab§ = ÑÙ(bÑab) - babѧÙÑ
= ( (Ñb)ab§ - ba(Ñb)§ )<2>
= ( (Ñb)ab§ + (ba(Ñb)§)§ )<2>
= 2((Ñb)ab§)<2> since a§=a
.]
Hence, when b§ preserves grade,
Ñ(bab§) = 2((Ñb)ab§)<0> + 2((Ñb)ab§)<2> = 2(Ñb)ab§ .
Given how many common geometric transformations are representable in the form
b§(a) º bab§ ,
this is a profoundly important result.
Similarly
the Clifford kinematic rule
(Ñ(bab§#))<0>
= 2((Ñb) a<-§#> b§#)<0>
for constant a
yields
ÑÙbab§# = ÑÙ(bÑab) - babѧ#ÙÑ
= ( (Ñb)ab§ + ba(Ñb)§# )<2>
= ( (Ñb)ab§ - (ba(Ñb)§#)§# )<2>
= 2(Ñb)ab§# since a§# = -a
.]
Ñ¿(bab§#) = 2((Ñb)ab§#)<0>
for constant 1-vector a, with
Ñ(bab§#) = 2(Ñb)ab§# if b§# preserves
grade.
[ Proof :
(Ñ(bab§#))<0>
= (Ñ(bÑab§#)+Ñ(ba(b§#Ñ)))<0>
= (Ñ(b)ab§#)<0>+(ba(b§#Ñ))<0>
= (Ñ(b)ab§#)<0>+(ba(Ñb#)§)<0>
= (Ñ(b)ab§#)<0>+(((Ñ#b)a§#b§#)§#)<0>
= (Ñ(b)ab§#)<0>-((Ñb)a§#b§#)<0>
= (Ñ(b)(a-a§#)b§#)<0>
.]
If even rotor field Rp has RpRp§=1 then
wp(d) º 2(ÐdRp)Rp§ has grade <2;6;10;...>
and
we have the rotor equation of motion
ÐdRp = ½wp(d)Rp .
For N<6 , wp(d) is a pure bivector and so w is a (2;1)-tensor.
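The rotor equation of motion can be illustrated numerically in the even subalgebra of Cl(2), which is just C: a rotor is R = exp(iq/2) with R R§ = 1 and § acting as complex conjugation. This sketch (the angle history theta is an arbitrary assumption, and a single scalar parameter t stands in for the point p) checks that w = 2 (dR/dt) R§ is a pure bivector, ie. purely imaginary, and that dR/dt = ½ w R:

```python
import cmath

def theta(t):                 # an arbitrary smooth angle history (assumption)
    return 0.5*t + 0.2*t**2

def R(t):                     # a rotor: unit-modulus even multivector of Cl(2)
    return cmath.exp(1j * theta(t) / 2)

t, h = 1.3, 1e-6
Rdot = (R(t + h) - R(t - h)) / (2*h)    # numeric dR/dt
w = 2 * Rdot * R(t).conjugate()         # w = 2 (dR/dt) R§

# w.real ~ 0 (no scalar part) and w.imag ~ theta'(t) = 0.5 + 0.4 t
```

Here the bivector "angular velocity" recovers the instantaneous turning rate q'(t) of the rotor.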
Taylor's Formula
The multivector version of Taylor's Formula :
¦(x+d) = åk=0¥ ( (d¿Ñx)k / k ! ) ¦(x)
º ed¿Ñx¦(x)
= eÐd ¦(x)
º Ðd↑ ¦(x)
.
gives the approximation
¦(x+d) = ¦(x) + ¦Ñx(d) + ½¦Ñ2x(d,d)
+ O(|d|3)
A general ¦x can be characterised at x (to second order) by
its value at x, a 1-tensor ¦Ñx, and a symmetric 2-tensor ¦Ñ2x.
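Along a single direction d, (d¿Ñx)k reduces to ordinary kth derivatives, so Taylor's formula can be checked with a one-dimensional field whose derivatives are known exactly; a sketch using f = sin, whose derivatives cycle through sin, cos, -sin, -cos:

```python
import math

def taylor_shift(x, d, terms=30):
    """Truncated exp(d d/dx) f at x for f = sin."""
    derivs = [math.sin, math.cos,
              lambda u: -math.sin(u), lambda u: -math.cos(u)]
    return sum(derivs[k % 4](x) * d**k / math.factorial(k)
               for k in range(terms))

x, d = 0.8, 1.7
approx = taylor_shift(x, d)   # the exponentiated-derivative series
exact = math.sin(x + d)       # direct translation of the argument
```

With 30 terms the truncation error of the series is far below machine precision, so the series reproduces the translated value.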
Contraction and Trace
Suppose ¦(a,b,..,d) = åi j k .. m ajbk..dm f ijk..m ei
[ where f ijk..m º ei¿¦(ej,ek,..,em) ] is a (1;k)-tensor.
Ña¿¦(a,b,..,d)
= åi=1N ei¿¦(ei,b,..,d)
is a (0;k-1)-tensor.
[ Proof :
Ña¿¦(a,b,..,d) º
(ål elÐal)¿(åi j k .. m ajbk..dm f ijk..m ei )
= ål i j k .. m (Ðalaj) bk..dm f ijk..m el¿ei
= åi j k .. m bk..dm f ijk..m ej¿ei
= åk..m bk..dm (åi f iik..m)
.]
Divergence,
usually known in this context as contraction, with regard to a particular nonprimary parameter (or suffix)
thus provides a frame invariant way of decrementing both the degree and
type of a tensor and so reducing its rank by two.
The resultant tensor has representation åi f iik..m .
When t=k=1 (ie. a 1-vector valued function of a 1-vector) contraction produces scalar
åi fii known as the trace of ¦ which corresponds to the traditional
matrix trace (sum of leading diagonal elements).
Another way to decrement the type of a tensor is to take an inner product with a 1-vector
u¿¦(a,b,..,d) . This preserves the degree, so the rank is also decremented.
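Contraction and trace are one-line array operations in coordinates; a sketch assuming a Euclidean orthonormal basis (so no signature factors) with illustrative names f, g:

```python
import numpy as np

rng = np.random.default_rng(2)

# A (1;2)-tensor stored as f[i,j,k] = e^i . f(e_j, e_k).
f = rng.normal(size=(3, 3, 3))

# Contraction over the value index and the first argument:
# sum_i f^i_{ik}, a (0;1)-tensor -- rank reduced by two.
contracted = np.einsum('iik->k', f)

# For a (1;1)-tensor, contraction is the ordinary matrix trace.
g = rng.normal(size=(3, 3))
trace = np.einsum('ii->', g)
```

The einsum repeated index plays exactly the role of the åi f iik..m summation in the text.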
We introduce the notation ÑÞ to indicate contraction with regard to the first (leftmost)
non-primary parameter of a tensor ¦p(a,b,...) in order to allow the abbreviation
ÑÞ ¦p º Ña ¦p(a,b,...) .
The non-primary curl ÑaÙ¦(a,b,..,d) is known in this context as protraction. It provides a (t+1;k-1)-tensor, so preserving the rank.
The contraction and protraction of an r-multiform (ie. an r-vector valued linear function of a
r-blade) are of particular interest. We say an r-multiform
F(a1,a2,...ar)=F(a1Ù...ar) is contractionless if
Ña1¿F(a1Ù...ar) = 0 and this then implies that
Ñak+1¿(Ñ(k)ÙF(a1Ù...ar))=0 for any 1£k<r
and that
Ñ(k)F(a1Ù...ar)) = Ñ(k)ÙF(a1Ù...ar)).
Similarly F is protractionless if
Ña1ÙF(a1Ù...ar) = 0 and this then implies that
Ñak+1Ù(Ñ(k)¿F(a1Ù...ar))=0 for any 1£k<r
and that
Ñ(k)F(a1Ù...ar)) = Ñ(k)¿F(a1Ù...ar)).
Hence all contractions of a protractionless multiform are protractionless; and all
protractions of a contractionless multiform are contractionless.
[ Proof : Induction on k. See Hestenes & Sobczyk (3-9). .]
Thus when acting on a protractionless multiform
ÑÞk
= ÑÞ.ÑÞ.....ÑÞ
and when acting on a contractionless multiform
ÑÞk = ÑÞÙÑÞÙ....ÑÞ .
Covariance
We will
temporarilly "promote" position x from its suffix position to a bracketed argument.
Let F(x,..) = Fx be a (t;k)-tensor.
Let ¦ be a (nonlinear) invertible 1-field which we interpret as
returning points (ie. a transformation of the pointspace) in a "relabelling" or "coordinate transform"
rather than a "warping" context. Set y º ¦(x).
¦ induces the
substitutive transform F¦(x)ºF(¦-1(x))
, ie. F¦ ºF¦-1 , or, equivalently,
F¦(y) ºF(x) .
[ In much of the literature, inexplicit "prime" notations such as F' or F* replace
F¦ ]
If at a given point x we have a nondegenerate but otherwise general (ie. neither necessarily orthogonal nor normal)
basis N-frame {ei} then ¦ induces at y=¦(x) an "¦-transformed" N-frame { fi=¦-Ñx(ei) }
having inverse frame { fi = ¦Dx(ei) }
.
We have
fi¿ej = ¶yi/¶xj ïx and also
ei¿fj = ¶xi/¶yj ïy .
[ Proof :
fi¿ej = ¦Dx(ei)¿ej = ei¿¦Ñx(ej) = ¶yi/¶xj ïx .
Second result follows similarly. .]
With regard to the {fi} N-frame, if t,k > 0 then F¦ has coordinates
Fy ¦ m..qi..l º f q..m ¿ Fy¦(fi,fj,..,fl) = f q..m ¿ Fx(fi,fj,..,fl)
= ¦D(eq..m) ¿ Fx(¦-Ñ(ei),..,¦-Ñ(el)) = ¦D(eq..m) ¿ (¦-Ñx)k(Fx(ei,..,el))
= eq..m ¿ (¦-Ñx)k-1(Fx(ei,..,el)) = (¦D)k-1(eq..m) ¿ Fx(ei,..,el) .
1-tensors
Linear ¦ is symmetric iff ¦(a) = ¦D(a)
(an equivalent condition is a¿¦(b) = ¦(a)¿b " a,b
[ and hence ¦ ij=¦ ji
]
).
In an N-D Euclidean space, symmetric tensors are diagonalisable, that is an
eigenframe {di}
exists with ¦(di)
=lidi
where li is the scalar
eigenvalue associated with 1-vector 1-eigenblade
di.
This remains true for Minkowski spaces only if N£3.
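In Euclidean coordinates this is the standard symmetric eigendecomposition; a sketch (F is a randomly generated symmetric coefficient matrix):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))
F = 0.5 * (A + A.T)            # a symmetric 1-tensor (coefficient matrix)

# eigh returns eigenvalues lam_i and an orthonormal eigenframe {d_i}
# (the columns of D) with f(d_i) = lam_i d_i.
lam, D = np.linalg.eigh(F)

ok_eigen = np.allclose(F @ D, D @ np.diag(lam))   # f(d_i) = lam_i d_i
ok_frame = np.allclose(D.T @ D, np.eye(3))        # eigenframe is orthonormal
```

Both checks hold for any symmetric F in Euclidean signature, which is exactly the diagonalisability claim.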
Projection is symmetric, ie. ¯bD = ¯b, since
c¿(¯bk(d))
= (-1)k+1(¯bk(d))ëc
= (-1)k+1((d¿b)¿b-1)ëc
= (-1)k+1((d¿b-1)¿b)ëc
= (-1)k+1d¿¯bk(c)
= ¯bk(c)¿d .
A 1-tensor ¦ is skewsymmetric iff ¦(a) = -¦D(a)
(an equivalent condition is a¿¦(b) = -¦(a)¿b " a,b
[ and hence ¦ ij=-¦ ji ]
).
a¿(ÑÙ¦(x))
= a¿(Ѧ(x) - Ñ¿¦(x))
= a ¿(Ѧ(x)) - a¿(Ñ¿¦(x))
=?= (a¿Ñ)¦(x) - a¿(Ñ¿¦(x))
= ¦Ñ(a) - ¦D(a)
= ¦(a) - ¦D(a) .
Thus a symmetric 1-tensor has zero curl.
A skewsymmetric 1-tensor has ¦Ñ(a) = ½a¿(ÑÙ¦(x))
= ½(a¿w(x))
where w(x) = (ÑÙ¦(x)) is a
bivector-valued functional fully characterising ¦.
The divergence of a skewsymmetric ¦ is zero.
A linear 1-tensor ¦ can be expressed as ¦[+](x) + ¦[-](x)
where
¦[+](x) º ½(¦Ñ(x)+¦D(x)) = ½Ñx(x¿¦(x))
is symmetric
and
¦[-](x) º ½(¦Ñ(x)-¦D(x)) = ½x¿(ÑxÙ¦(x))
is skewsymmetric.
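In Euclidean coordinates, where the adjoint is the matrix transpose, this decomposition is the familiar symmetric/antisymmetric matrix split (the ½ factors are folded into the parts here); a sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
F = rng.normal(size=(3, 3))      # coefficient matrix of a linear 1-tensor

Fsym = 0.5 * (F + F.T)           # self-adjoint (symmetric) part
Fskew = 0.5 * (F - F.T)          # anti-self-adjoint (skewsymmetric) part

a, b = rng.normal(size=3), rng.normal(size=3)
# Fsym satisfies a.f(b) = f(a).b ; Fskew satisfies a.f(b) = -f(a).b
```

The two parts sum back to F, and each satisfies its defining crossover property for arbitrary a and b.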
2-tensors
Given a general multivector-valued function of two multivectors ¦(a,b) we can define
the symmetric symmetroll of ¦ by
¦[+](a,b) º
¦(a,b) + ¦(b,a) , and
the skewsymmetric skewsymmetroll of ¦ by
¦[-](a,b) º
¦(a,b) - ¦(b,a) .
We can express
¦[-](a,b) as a function of bivectors via
¦[-](a,b)=
¦[-](aÙb).
Clearly ¦(a,b) = ½¦[+](a,b) + ½¦[-](a,b)
so any function of two multivectors can be expressed as a sum of symmetric and skewsymmetric parts.
Further, (a¿Ñ)b - (b¿Ñ)a
= Ñ.(aÙb) + a(Ñ¿b) - b(Ñ¿a)
is bilinear and skewsymmetric in a,b.
Characterising General Functions
Connections
A general ¦(a) generates an even multivector field
wa º a-1¦(a)
called the right-connection of ¦ . If ¦ is linear, its right-connection
can be represented by
N even multivectors wi º ei¦(ei) .
¦(a) = awa
= w0aa - w2a.a
= w0aa - w2a×a
Scalar w0a=a-1¿¦(a) can be regarded as the "expansion" component of ¦ ;
bivector w2a=a-1Ù¦(a) as the rotation component .
¦(a) can also be represented by an even multivector field wa º ¦(a)a-1 called the left-connection of ¦. This is just the right-connection with its bivector component negated.
Connections are most useful when a-1¿¦(a) = 0
" a , for then they are pure bivectors.
We then have
¦(a) = a¿wa = - wa.a = - wa×a
.
In particular, this is the case when representing the differential ¦Ñ of either
a directional field (ie. a function mapping points to unit 1-vectors)
or a directional transform (ie.. a length-preserving function mapping 1-vectors to 1-vectors)
when we have the alternate affine approximation
¦(a+da) » ¦(a) + da¿wdaa
.
[ Proof : ¦(a)2 = 1 Þ (Ña¿b)(¦(a)2) = 0
Þ ((Ña¿b)¦(a))¿¦(a) = 0 .]
For N=3, N pure bivectors are specified by 9 scalar parameters (cf.
the conventional 3x3 matrix representation of an affine transform).
For N=4, 24 scalar parameters are required as compared to the 16 elements
in a 4x4 array.
Lorentz Transforms
For a Lorentz transform ¦ , the transformed frame { ¦(ei) } remains orthonormal
and we can represent ¦ with a (nondirectional) unit rotor
R with
¦(a)
= R-1(a)
º RaR-1 = RaR§ .
Exponentiated Form A connection can be represented in exponentiated or spinor form as ra efa ba everywhere except at a=0 , with unit 2-blade ba=(¦(a)Ùa-1)~ ; scalar fa= cos-1(a~ ¿ ¦(a)~) ; scalar ra=|¦(a)|/|a| .
A general ¦(a,b) can be bilinearly approximated as
åi j=1N aibj¦(ei,ej)
using N2 1-vectors ¦(ei,ej).
A skewsymmetric bilinear
¦[-](a,b) = ¦[-](aÙb)
can be represented by ½N(N-1) 1-vectors ¦[-](eij)
or via
waÙb = ¦[-](aÙb)(aÙb)-1
if aÙb ¹ 0 ; 0 else
with ½N(N-1) 3-vectors wij º ¦[-](eij)eij-1 .
Directed Multivector Derivatives
Let
Ðd f(x) = Lime ® 0 e-1d-1(
f(x+ed) -
f(x) )
so that
Ðd = d-1 Ðd for invertible d
and Ñ = åi=1N Ðei .
We say f(x) is regular (aka. analytic , holomorphic,
or meromorphic)
at a particular x0 if
Ðd f(x) exists for x=x0 independent of "direction" d. More generally, we think
of the limit Limx ® x0 (x-x0)-1 (f(x)-f(x0))
existing with the same limit value regardless of how x approaches x0.
We can then meaningfully denote this direction-independent limit by f'(x).
Within C @ Â2 + @ Â0,1 , for example, for
f(x)= f(x+iy) = u(x,y)+ iv(x,y)
to be regular at x0 we traditionally require the partial derivatives exist there and satisfy
the Cauchy-Riemann equations
¶u/¶x = ¶v/¶y ;
¶u/¶y = -¶v/¶x .
We can combine the Cauchy-Riemann equations into the complex identity
Ð1f(z) = Ðif(z)
(equivalent to Ñ^f(x) where i^=-i )
as a necessary (but not sufficient) condition for regularity.
Only some functions are regular. f(z)=z is regular everywhere with f'(z)=1, for example, as is f(z)=
(az)↑ with f'(z)=a(az)↑ .
All rational polynomials in z are regular but f(z)=|z|2 is regular only at z=0 where f'(z)=0.
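The direction-independence of the quotient (x-x0)-1(f(x)-f(x0)) is easy to probe numerically in C; a sketch comparing approach along 1 with approach along i for f(z) = z2 (regular) against f(z) = |z|2 (regular only at 0):

```python
def quotient(f, z0, d, h=1e-6):
    """Difference quotient (h d)^-1 (f(z0 + h d) - f(z0)) along direction d."""
    return (f(z0 + h*d) - f(z0)) / (h*d)

z0 = 0.6 + 0.9j

q1 = quotient(lambda z: z*z, z0, 1)       # approach along 1
qi = quotient(lambda z: z*z, z0, 1j)      # approach along i
# q1 and qi agree (~ 2 z0): z^2 is regular at z0

r1 = quotient(lambda z: abs(z)**2, z0, 1)
ri = quotient(lambda z: abs(z)**2, z0, 1j)
# r1 and ri disagree: |z|^2 is not regular away from 0
```

For z2 both directions recover the derivative 2z0, while for |z|2 the two quotients differ grossly, exhibiting the failure of regularity.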
Multivector Fractals
Let
y(n+1) = y(n)2 + y(0) for n ³ 1.
We assume that y(0) and y(1) are given. If both are even
ÂN+ multivectors
then the "orbit" remains in
ÂN+. For N=2 we have the Mandelbrot set, for N=3 we have a Julia set.
In general we have a sequence
y(n+1) = F(y(n))
and we define a set S by
S(b) = { y(0) : ¦(y(n)) ³ b for some
n £ nmax }
where ¦(y) = |y|2 or a similar scalar valued function.
Letting Tij (n) =
¶y(n)[.i.]/¶y(0)[.j.]
= Ðe[.j.](e[.i.]¿y(n))
we have
Tij (n+1) =
¶y(n+1) i/¶y(0) j
= åk (¶F(y(n)) i/¶y(n) k)
(¶y(n) k/¶y(0) j)
= åk ((ÑFi)(y(n)))k T(n) kj
so we can compute T(n+1) iteratively from
T(n) and y(n).
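In the complex (N=2) case the derivative iteration is one line: differentiating y(n+1) = y(n)2 + y(0) with regard to y(0) gives T(n+1) = 2 y(n) T(n) + 1. A sketch checking it against a finite difference (the seed y0 and iteration count are arbitrary choices):

```python
def orbit_and_derivative(y0, nmax):
    """Iterate y -> y^2 + y0 together with T = dy/dy0."""
    y, T = y0, 1.0                # T(0) = d y(0)/d y(0) = 1
    for _ in range(nmax):
        T = 2*y*T + 1             # differentiate y^2 + y0 w.r.t. y0
        y = y*y + y0
    return y, T

y0, n, h = 0.1 + 0.2j, 8, 1e-7
y, T = orbit_and_derivative(y0, n)
yp, _ = orbit_and_derivative(y0 + h, n)
fd = (yp - y) / h                 # finite-difference d y(n)/d y(0)
```

The iterated T agrees with the finite-difference derivative of the orbit, which is what makes gradient (surface-normal) computations for such fractal sets cheap.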
Suppose m£nmax is the lowest integer such that
¦(y(m)) ³ b.
The normal to the surface S(¦(y(m))) at y(0)
is given by
Ñ(¦(Fm))(y(0))
= T(m)((Ѧ)(y(m)))
[ Proof :
d ¦(y(m))/d y(0) i
=
åk (d ¦(y(m))/d y(m) k)
(d y(m) k/d y(0) i)
= åk ((Ѧ)(y(m)))k T(m) ik
= (T(m)(Ѧ)(y(m)))i .]
[Under Construction]
Next : Multivectors as Manifolds