This document is still under revision. All suggestions, critique, or comment gratefully received.
This document assumes familiarity with Multivectors.
Notations defined in that document are retained here. Note that we here use
labels e_{1},e_{2},... to denote a typically fixed, "base", "universal",
"fiducial" frame
and h_{ip } to denote tangent vectors. In much of the literature,
e_{i} represent tangent or otherwise "motile" vectors while
s_{i}
or g_{i} represent a "base frame"
.
This document makes extensive use of subscripts and superscripts to indicate dependencies
usually "dropped" in conventional treatments and is, in consequence, theoretically ambiguous. Does v_{ip} , for example,
mean that v_{i} is defined over or dependent on p , or that
v is a function of i_{p}? In practice, meanings will be clear in context.
Tensors are traditionally a difficult concept but multivectors make them far easier to understand,
manipulate, and generalise. They are fundamental to many applications so we address them here.
Notations
Symbols such as d, δ, ¶, Ñ, and ð are used variously in the literature
for "differentiating" operators. We will introduce the unorthodox notation Ð^{x}_{a}
for the "directed" derivative
with regard to a multivector parameter x in a particular multivector "direction" a, and use Ñ
to denote the un-directed ("splayed") derivatives traditionally denoted Ñ or ð
.
We will typically use d or δ to denote a small scalar
and d to denote a 1-vector interpreted as a (possibly large) "displacement". We will sometimes use dx to denote
a small change in multivector parameter x when ambiguity with multiplication by a scalar d cannot arise.
Multivector Functions as Tensors
The traditional presentation of an N-dimensional tensor of integer rank r is a point-dependent
N^{ r}
element "array" or "matrix" defined with respect to a given N-dimensional coordinate frame, that transforms according to
particular rules in accordance with transforms of the underlying coordinate frame. Multivectors provide an
attractive alternative (and more general) formulation under which the conventional tensor product
follows directly from the geometric product. More formal definitions of the following explicitly specify
a "scalar source"
from which to "build" linear combinations, but here we implicitly assume "real" scalars (from Â or
a (finite-precision) approximation thereof).
Fields
A field is a function F: Â^{p,q,r}
® Â_{p,q,r} . In other words: a point-dependant multivector.
If the function is (unit) k-vector valued, we have a (unit) k-field.
A 0-field thus associates a scalar value with every point. A 1-field associates a 1-vector with every point.
From a programmer's perspective, fields are functions having at least one 1-vector parameter. This "primary" parameter
is usually interpreted as a point or position.
When the primary argument is interpreted as a (scaled) direction rather than a point we will refer
here to a directional k-field.
Tensors
We regard an N-dimensional tensor of
degree k as a point-dependent
multilinear N-dimensional multivector-valued function of k N-dimensional 1-vectors.
F_{x } : (Â^{p,q,r})^{k} ® Â_{p,q,r}
where p+q+r = N .
By multilinear we mean linear in each argument.
If k=0 we have a point-dependent function taking no arguments
and returning a (point-dependent) multivector,
effectively a field. A (t;0)-tensor is thus a t-field,
often referred to as an invariant tensor, though its "value" does in general vary with x.
If F_{x }(a_{1},a_{2},..,a_{k}) = F_{x }(a_{1},a_{2},..,a_{k})_{<t>} (ie. F_{x } is t-vector valued)
we say the tensor has type t and rank t+k
and refer to it here as a (t;k)-tensor .
From a programmer's perspective, tensors are multivector-valued functions of at least one 1-vector argument,
linear ("affine") in all but the primary argument.
When t=k we refer to a k-tensor rather than a (k;k)-tensor . A k-tensor is thus
a point-dependent k-vector-valued multilinear function of k 1-vectors.
In particular, a 1-tensor is a point-dependent directional 1-field.
The "scalar product" ¿ is a (0;2)-tensor, though we usually write a¿b in preference to ¿(a,b) .
The outer product Ù is a 2-tensor. The geometric product is a tensor of degree 2 but "mixed" type.
Forms
If a (t;k)-tensor is skewsymmetric in its arguments so that
F_{x }(a_{1},a_{2},..,a_{k})
= L_{x}(a_{1}Ùa_{2}...Ùa_{k})
= L_{x}(a_{k})
can be viewed as a function of a single k-blade rather than of k 1-vectors , then it is called
a skewsymmetric (t;k)-tensor or a (t;k)-multiform . When t=k
we abbreviate to a k-multiform.
If t=0 (ie. F_{x } is scalar valued) then it is instead called a k-form. A 1-multiform is a 1-tensor.
It can be shown [see Hestenes & Sobczyk] that any k-form can be expressed as
L_{x}(a_{k})= u_{k}¿a_{k} where u_{k} is a point-dependent k-vector .
If a k-multiform maps any given k-blade to another k-blade (rather than to a k-vector) then we say the multiform
is blade preserving. A 1-tensor is thus a blade-preserving 1-multiform since any 1-vector is a 1-blade.
It can be shown that provided k¹½N , any blade preserving k-multiform is merely the outermorphism
of a 1-tensor. For k=½N, the geometric dual preserves k-blades but is not an outermorphism.
Dyads
A k-dyad is a k-multiform of the form
D(a_{k}) = u_{k}(v_{k}¿a_{k})
where u_{k}, v_{k} are point-dependent k-blades.
A k-multiform can be expressed in dyadic form
as a sum of k-dyads. A 1-dyad is known as a dyad. A 0-dyad is the "successive" multiplicative
combination of two scalar fields
D_{x}(a)=u_{x}v_{x} a
Multitensors
We can generalise a (t;k)-tensor to a (t;k)-multitensor, being a point-dependent
multivector-valued function of k multivectors
F_{p}(a_{1},a_{2},...,a_{k}) which is t-vector-valued when acting on k 1-vectors
and linear in all but the primary (point) argument .
We will henceforth use the term tensor to refer to a multilinear multivector-valued function
of k nonprimary 1-vector arguments and multitensor for a multilinear multivector-valued
function of k nonprimary multivector arguments.
We will typically restrict the grade of the nonlinear "primary" multivector argument p to 1 and consider
it as a 1-vector "point" p . If a multitensor is t-vector valued,
we can regard it as a sum of (t;k)-forms with k ranging from 0 to N.
Extended Fields
Suppose now that we have k multivector fields a_{i}_{p}=a_{i}(p).
We can then extend a given k-multitensor F_{p}(a_{1},a_{2},...,a_{k})
with these fields to form an extended field
which we will also denote F_{p} , mapping U^{N} ® U_{N} and defined by F_{p} = F_{p}(a_{1}_{p},a_{2}_{p},..,a_{k}_{p}).
Outermorphisms and Determinants
Let ¦ : Â^{p,q,r} ® Â^{p,q,r} be a linear
transformation (ie. a 1-field over Â^{p,q,r} typically regarded
as acting on and returning "points" rather than "vectors").
We can extend
¦ to a multivector field ¦^{Ù} over Â_{p,q,r} by defining
¦^{Ù}(a) º a for scalar a ;
¦^{Ù}(a) º ¦(a) for 1-vector a ;
and
¦^{Ù}(aÙb) º ¦(a)Ù¦^{Ù}(b).
This extension is known as the outermorphism of ¦.
Clearly ¦^{Ù}(a_{<k>}) = ¦^{Ù}(a)_{<k>} and in particular
¦^{Ù}(i) = |¦|i where scalar |¦| is the
determinant of ¦
(nonzero iff ¦ invertible).
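This determinant identity can be checked numerically. Below is a minimal sketch (assuming Python with numpy; the helper names wedge2 and wedge3 are ours, not the document's) representing blades in Â^{3} by their basis components:

```python
import numpy as np

def wedge2(a, b):
    # components of the 2-blade a∧b in R^3 on the (e23, e31, e12) basis
    return np.array([a[1]*b[2] - a[2]*b[1],
                     a[2]*b[0] - a[0]*b[2],
                     a[0]*b[1] - a[1]*b[0]])

def wedge3(a, b, c):
    # coefficient of the pseudoscalar e123 in a∧b∧c
    return np.linalg.det(np.column_stack([a, b, c]))

rng = np.random.default_rng(0)
F = rng.standard_normal((3, 3))        # a linear map f as a matrix
a, b, c = rng.standard_normal((3, 3))

# outermorphism on the pseudoscalar: f^∧(i) = |f| i
lhs = wedge3(F @ a, F @ b, F @ c)
rhs = np.linalg.det(F) * wedge3(a, b, c)
assert np.isclose(lhs, rhs)

# f^∧ on 2-blades is linear; componentwise it is the cofactor matrix det(F)·F^{-T}
cof = np.linalg.det(F) * np.linalg.inv(F).T
assert np.allclose(wedge2(F @ a, F @ b), cof @ wedge2(a, b))
```

The cofactor-matrix observation is the coordinate face of the outermorphism acting grade-by-grade.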
We will henceforth consider all linear 1-fields (1-tensors) to be so extended and
will frequently drop the ^{Ù} suffix
. We can similarly
extend any k-tensor to be defined over k multivectors rather than k 1-vectors.
Since ¯_{b}(cÙd)
= (¯_{b}(c))Ù(¯_{b}(d)) ,
the projection ¯_{b} is an outermorphism
and we can write
¯_{b}^{Ù} = ¯_{b} .
It is worth explicitly noting that outtermorphisms preserve scalars.
Eigenblades
We now generalise the concept of eigenvectors and associated eigenvalues.
We say k-blade a_{k} is a left k-eigenblade of a general ¦:
Â_{p,q,r}®Â_{p,q,r} with associated scalar eigenvalue
a if ¦^{Ñ}(a_{k})=aa_{k} .
We say it is a right k-eigenblade if
¦^{D}(a_{k})=aa_{k} .
If a_{k} is both left and right eigenblade then the eigenvalue is common and we have a proper eigenblade
[ Proof : a_{Left}a_{k}^{2} = a_{k}¿¦^{Ñ}(a_{k}) = ¦^{D}(a_{k})¿a_{k} = a_{Right}a_{k}^{2}
.]
A proper 1-eigenblade is a conventional eigenvector.
Scalars are 0-eigenblades of eigenvalue 1.
i is an N-eigenblade of ¦ with eigenvalue Det(¦) .
If a_{k} and b_{r-k} are eigenblades with eigenvalues a,b then a_{k}Ùb_{r-k} is either degenerate (zero) or an eigenblade of eigenvalue ab. We say an eigenblade is irreducible if it is not itself the join of two eigenblades. For a transformation ¦ with ¦(i)=|¦|i , "factorising" N-eigenblade i into irreducible "sub" eigenblades corresponds to decomposing the space spanned by i into subspaces invariant under ¦.
If a and b are left and right eigenblades with eigenvalues a,b respectively then a¦^{D}(a¿b)=b(a¿b) and b¦^{Ñ}(aëb)=a(aëb) , which is to say that the contraction a¿b (or aëb), when nonvanishing, is a right (or left) eigenblade having eigenvalue a^{-1}b (or ab^{-1}).
For any 1-vector a and linear outtermorphism ¦=¦^{Ñ}, the (k+1)-blade
aÙ¦(a)Ù¦^{2}(a)Ù...Ù¦^{k}(a)
must vanish for some k £ N because all (N+1)-blades are degenerate.
We then have ¦( aÙ¦(a)Ù...Ù¦^{k-1}(a) )
= l_{a} aÙ¦(a)Ù...Ù¦^{k-1}(a)
for some scalar eigenvalue l_{a} of k-eigenblade a_{k} = aÙ¦(a)Ù...Ù¦^{k-1}(a)
. We say a has ¦-eigenicity k .
But ¦ can also be expressed as a real N×N matrix which we know (from
the characteristic polynomial methods of traditional matrix theory) has N eigenvectors,
provided we allow complex vector coordinates and complex eigenvalues. Complex
eigenvectors occur in conjugate pairs, say
¦(a+ib) = r(iq)^{↑} (a+ib) and
¦(a-ib) = r(-iq)^{↑} (a-ib) for real scalars r and q with q¹0.
Taking real and imaginary parts we obtain ¦(a) = r( cos(q)a - sin(q)b) ;
¦(b) = r( cos(q)b + sin(q)a)
giving ¦(aÙb)
= r^{2}( cos^{2}(q) + sin^{2}(q))(aÙb)
= r^{2}(aÙb) . The geometric interpretation of i in a Euclidean context is (aÙb)^{~} .
Thus we can choose a basis in which each elements e_{i} has an ¦-eigenicity of either one (when e_{i} is an eigenvector of ¦) or two
(when e_{i}Ù¦(e_{i}) is a 2-eigenblade of ¦).
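A minimal numerical sketch of this (assuming Python with numpy): a rotation-dilation of Â^{2} has no real eigenvector, but the pseudoscalar e_{12} is a 2-eigenblade with eigenvalue r^{2}:

```python
import numpy as np

def wedge(a, b):
    # coefficient of e_12 in the 2-blade a∧b on R^2
    return a[0]*b[1] - a[1]*b[0]

r, theta = 1.5, 0.8
F = r * np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

a, b = np.array([1.0, 0.0]), np.array([0.3, 2.0])
# e_12 is a 2-eigenblade of the rotation-dilation F with eigenvalue r^2
assert np.isclose(wedge(F @ a, F @ b), r**2 * wedge(a, b))
# but F has no real eigenvector: a∧F(a) ≠ 0, so every 1-vector has F-eigenicity 2
assert abs(wedge(a, F @ a)) > 1e-12
```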
Coordinate-based Tensor representations
With regard to a given invertible frame {e_{1},..,e_{N}}, we have an N^{ r} 0-field
primary matrix representation of F_{x } .
F_{x }^{ m..q}_{i..l} º
e^{q..m} ¿ F_{x }(e_{i},e_{j},..,e_{l})
[ with t suffix q..m and k suffix i..l ]
, ie. the component of e_{m..q}
in F_{x }(e_{i},e_{j},..,e_{l}) .
Alternate matrix representations are possible
F_{x }_{ m..q i..l} = e_{q..m} ¿ F_{x }(e_{i},e_{j},..,e_{l})
giving the component of e^{m..q} in F_{x }(e_{i},e_{j},..,e_{l}) ;
F_{x }_{ m..q}^{i..l} = e_{q..m} ¿ F_{x }(e^{i},e^{j},..,e^{l})
giving the component of e^{m..q} in F_{x }(e^{i},e^{j},..,e^{l}) ; and so forth.
Hence the alternate coordinate expressions v = å_{i=1}^{N} v^{i}e_{i}
= å_{i=1}^{N} v_{i}e^{i}
for (1;0)-tensor v .
With regard to orthonormal frames in Â_{N} , e^{i}=e_{i} and all these representations are identical.
In particular, a 1-tensor (point-dependent 1-vector function of a 1-vector) has representations
F_{x }^{i}_{j} = e^{i}¿F(e_{j}) ;
F_{x }_{ij} = e_{i}¿F(e_{j}) ;
F_{x }^{ij} = e^{i}¿F(e^{j}) ;
F_{x }_{i}^{j} = e_{i}¿F(e^{j}) ;
Note that the "height" of a suffix is often used in three related but alternate ways:
We have the generalised (multivector) differential of v at a given x
v^{Ñ}_{x}(a)
º Ð^{x}_{a}(v(x))
º Lim_{e ® 0} ((v(x+ea)-v(x)) e^{-1} )
[ e a scalar ] which we will see is linear in a ; and the
generalised (multivector) centred differential of v at a given x
v^{oÑ}_{x}(a)
º ^{o}Ð^{x}_{a}(v(x))
º ½ Lim_{e ® 0} ((v(x+ea)-v(x-ea)) e^{-1} )
.
The centred differential has the advantage of sometimes being evaluable at x where v(x) is undefined
but tends to be less applicable at boundary points. When v(x) is defined
and the limit is well-defined, being the same for e ® 0 from above as from below,
the centred differential is equivalent to the differential.
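These limit definitions translate directly into finite-difference estimates. A sketch (Python with numpy assumed; the function names are ours) comparing the forward and centred forms on the 0-field v(x) = x¿x, for which Ð_{a}v = 2x¿a:

```python
import numpy as np

def directed_deriv(v, x, a, eps=1e-6):
    # forward-difference estimate of the a-directed derivative of v at x
    return (v(x + eps*a) - v(x)) / eps

def centred_deriv(v, x, a, eps=1e-6):
    # centred estimate: needs points on both sides of x but is more accurate
    return (v(x + eps*a) - v(x - eps*a)) / (2*eps)

v = lambda x: np.dot(x, x)           # the 0-field v(x) = x·x, so Ð_a v = 2 x·a
x = np.array([1.0, 2.0]); a = np.array([0.5, -1.0])
exact = 2*np.dot(x, a)
assert abs(directed_deriv(v, x, a) - exact) < 1e-4
assert abs(centred_deriv(v, x, a) - exact) < 1e-6
```

The tighter tolerance on the centred form reflects its O(e^{2}) truncation error against O(e) for the forward form.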
Clearly Ð^{x}_{a} uxv = uav for any multivectors u,v independent of x
and, in particular, Ð^{p}_{a}p = a which we can also write as 1^{Ñ} = 1.
The differential of v(x) is thus the function which given a returns the a-directed derivative of v(x).
A directed derivative can be regarded as a result of a particular evaluation of a differential.
The full notation v^{Ñx}_{x}(a)
reminds us that the differential is both "with respect to" x and evaluated "at"
a particular x. We will typically omit at least one of these suffixes so that
v^{Ñx}_{x}(a)
º v^{Ñx}(a)
º v^{Ñ}_{x}(a)
º v^{Ñ}(a) .
We refer to Ð_{a} as the scalar a-directed derivative operator. By a "scalar" operator we here mean grade preserving
in that
Ð_{a}(v(x)_{<k>}) = (Ð_{a}(v(x)))_{<k>} .
We will frequently drop the brackets and write Ð_{a}v_{x} for Ð_{a}(v(x)). We will use the notation =_{( )}= to indicate the mere addition or removal of brackets in accordance with our bracket conventions.
Restricting a and x to be 1-vectors gives the 1-differential of v(x) at a given x
v^{Ñx}(a)
º Ð^{x}_{a}(v(x))
º Ð_{a}(v(x))
º Lim_{e ® 0} ((v(x+ea)-v(x)) e^{-1} )
We can outermorphically extend the 1-differential to act on multivectors but it is important to
recognise that even with 1-vector x,
the extension agrees with the differential, v^{Ñ}^{Ù}(a) = v^{Ñ}(a) , in general
only for 1-vector a.
Ð_{a} obeys the product rule
Ð_{a}(b_{x}Ùc_{x})
= (Ð_{a}b_{x})Ùc_{x}
+ b_{x}Ù(Ð_{a}c_{x})
.
Consequently
Ð_{a}(u(x)Ùv(x)) ¹
u^{Ñ}(a)Ùv^{Ñ}(a)
in general .
If v takes a 1-vector argument x (often interpreted as a "point")
then, given an
inverse frame, we can view v(x) as a
multivector-valued function of N scalars
v(x^{1},x^{2},..,x^{N}).
We write Ð_{xk }
or Ð_{ek}
for the partial derivative "scalar" operator
Ð_{xk }(v(x)) º ¶v(x)/¶x^{k}
º Lim_{d ® 0} ((v(x+de_{k})-v(x)) d^{-1} )
º Ð_{ek}(v(x)) .
The 1-differential at a given x of v(x)
can then be expressed as
v^{Ñ}_{x}(a) º Ð_{a}v(x)
º å_{k=1}^{N} Ð_{xk }((a¿e^{k})v(x))
= Lim_{d ® 0} ( (v(x+da)-v(x)) d^{-1} ) .
For 1-field v(x)=¦(x) the differential ¦^{Ñ}_{x}(a) is a 1-tensor
with matrix representation ¦^{Ñ}_{x}^{i}_{j} =
¶y^{i}/¶x^{j} ï_{x} where y=¦(x).
[ Proof : e^{i}¿¦^{Ñ}_{x}(e_{j})
= e^{i}¿(Ð_{ej}¦(x))
= e^{i}¿(¶¦(x)/¶x^{j})
= ¶y^{i}/¶x^{j}
.]
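The proof suggests an immediate numerical check: estimate the differential columnwise by centred differences and compare against the analytic partials ¶y^{i}/¶x^{j}. A sketch, assuming Python with numpy (the function name differential_matrix is ours):

```python
import numpy as np

def differential_matrix(f, x, eps=1e-6):
    # matrix [∂y^i/∂x^j] of the 1-differential f^∇_x w.r.t. the standard frame
    n = len(x)
    cols = []
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        cols.append((f(x + eps*e) - f(x - eps*e)) / (2*eps))
    return np.column_stack(cols)

f = lambda x: np.array([x[0]*x[1], x[0]**2 - x[1]])
x = np.array([1.5, -0.5])
J = differential_matrix(f, x)
exact = np.array([[x[1], x[0]], [2*x[0], -1.0]])
assert np.allclose(J, exact, atol=1e-6)
```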
If ¦: V^{M} ® U^{N}
then ¦^{Ñ}: V_{M} ® U_{N} can still be defined
as Ð_{a}¦(x) for a,xÎV_{M}.
Of course, if N<M then ¦^{Ñ}(i_{M})=0 since any M-blade in
U_{N} must be degenerate.
Linearity of the Differential
That the differential is linear is surprising. One feels that one ought to be able to construct
pathological functions, "directed bumps" which can "fool" a particular coordinate basis
via a deceptive performance along the base axes.
But one cannot do so without violating continuity assumptions for ¦.
Consider as an example the 0-field v(x) = r^{2} sin2q
= 2x_{1}x_{2}
defined over Â^{2}.
We have Ñv = 2(x_{2}e_{1} + x_{1}e_{2}).
Ð_{e1}v(x)=2x_{2} while Ð_{e2}v(x)=2x_{1} and both of these are zero at x=0.
But Ð_{e1+e2}v(x)=2(x_{1}+x_{2}) is also zero at x=0 so linearity of v^{Ñ}
survives there. Moving away from 0 to, say, point e_{1}, Ð_{e2}v(x) becomes non zero
but v^{Ñ} is linear there too.
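Linearity of v^{Ñ} for this field can be confirmed numerically at 0, at e_{1}, and at a generic point. A small sketch assuming Python with numpy:

```python
import numpy as np

v = lambda x: 2*x[0]*x[1]            # the 0-field v(x) = 2 x_1 x_2

def dirv(x, a, eps=1e-6):
    # centred estimate of the a-directed derivative of v at x
    return (v(x + eps*a) - v(x - eps*a)) / (2*eps)

e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for x in [np.zeros(2), e1, np.array([0.3, -1.2])]:
    # linearity of the differential: the (e1+e2)-directed derivative is the sum
    assert abs(dirv(x, e1+e2) - (dirv(x, e1) + dirv(x, e2))) < 1e-6
```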
Linearity survives at 0 by virtue of the r^{2} factor vanishing at 0, but only
by having such a zeroing factor can we eliminate the discontinuity arising from
q(x_{1},x_{2}) being undefined at 0.
We might attempt to cobble together something from splines with "flat areas" but anything so cobbled
will require a discontinuity in a derivative of some order. If a function is flat to first order somewhere,
it must be flat to first order everywhere, or face a second order discontinuity at the "interface".
One might think that continuously differentiable "non-centred" functions are flat nowhere or everywhere,
which would have crucial ramifications in physics since it implies that no truly continuous fields can be entirely localised.
Though we could damp function values with distance, there would always be theoretically detectable oscillations
conceivably exploitable as an information channel. In non-Euclidean spaces, however, we can define infinitely differentiable
functions which become ever flatter as they approach the boundary of the null cone and are fully flat outside it.
If a "particle" is modelled as a continuous fluctuation defined over relativistic timespace, that fluctuation must
extend not only spatially, but temporally into the distant past and future of any observer.
For linear ¦, ¦^{Ñ}=¦
[ Proof : Ð_{a}¦(x) = ¦(Ð_{a}x)=¦(a) .] although note that ¦^{Ñ} is implicitly defined over all
U_{N} even if ¦ is defined only over a subspace.
For small ex , ¦(x+ex) » ¦(x)+¦^{Ñ}_{x}(ex) .
[ Proof : ¦(x+ex) »
¦(x) + å_{k=1}^{N} ex^{k}¦^{Ñ}_{x}(e_{k})
= ¦(x) + å_{k=1}^{N} (ex¿e^{k})¦^{Ñ}_{x}(e_{k}) =
¦(x)+¦^{Ñ}_{x}(ex) .]
The 1-differential ¦^{Ñ}_{x0} can thus be thought of as the linear approximator to ¦(x)-¦(x_{0}) for x close to x_{0}. If ¦^{Ñ}_{x0}(a) is constant " unit a then ¦ is radially symmetric at x_{0} (ie. can be expressed as a function of |x - x_{0}|).
¦^{Ñ}^{-1} = (¦^{-1})^{Ñ} when ¦^{-1} exists, so we can denote both by ¦^{-Ñ}.
[ Proof : ...
.]
The outermorphism extension of ¦^{Ñ} is denoted by ¦^{Ñ}^{Ù} or just ¦^{Ñ}. Its determinant J_{¦} º |¦^{Ñ}| º (¦^{Ñ}^{Ù}(i))^{*} is the conventional Jacobian of ¦ at x.
Of particular interest is the self-directed 1-differential or streamline derivative
¦^{Ñ}_{x}(¦(x)), which
describes how a 1-field changes when it "follows itself".
The composite 1-differential at x is
¦^{ÑÑ}_{x}(b) º
¦^{Ñ}_{x}^{Ñ}(b) = (Ñ_{a}¿b)¦^{Ñ}_{x}(a) .
It is of limited usefulness.
Differentiating Exponentials
We have
Ð_{d}(x_{*}a)^{↑} =
Lim_{e ® 0} (e^{(x+ed)*a}-e^{x*a})e^{-1}
= e^{x*a} Lim_{e ® 0} (e^{ed*a}-1)e^{-1}
= (d_{*}a) (x_{*}a)^{↑}
and more generally if Ð_{d}F_{x} commutes with F_{x} then Ð_{d}(F_{x})^{↑} = (Ð_{d}F_{x})(F_{x})^{↑} .
If Ð_{d}F_{x} anticommutes with F_{x} then Ð_{d}(F_{x})^{↑}
= (Ð_{d}F_{x})(1+F_{x}^{2}/3! + F_{x}^{4}/5! + ...)
= (Ð_{d}F_{x})F_{x}^{-1} sinh(F_{x})
Ð_{d}e^{(x*a)b} = (d_{*}a)e^{(½p + x_{*}a)b}
provided b^{2}=-1.
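For the commuting (here scalar-exponent) case, the first identity is easy to verify numerically. A sketch assuming Python with numpy, with _{*} realised as the dot product on Â^{3}:

```python
import numpy as np

# check Ð_d (x∗a)^↑ = (d∗a)(x∗a)^↑ for a scalar exponent x∗a
a = np.array([0.2, -0.5, 1.0])
x = np.array([1.0, 0.3, -0.7])
d = np.array([0.4, 0.9, -0.1])

eps = 1e-6
num = (np.exp(np.dot(x + eps*d, a)) - np.exp(np.dot(x - eps*d, a))) / (2*eps)
exact = np.dot(d, a) * np.exp(np.dot(x, a))
assert abs(num - exact) < 1e-6
```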
The Directed Chain Rule
If ¦(x)=g(h(x)) then we have the Chain Rule ¦^{Ñ}(a)=g^{Ñ}_{h(x)}(h^{Ñ}_{x}(a)) .
When g and h are linear this reduces to
¦^{Ñ}(a)=g^{Ñ}_{h(x)}(h(a)) =
(h(a)¿Ñ_{x})g(x)
= Ð^{x}_{h(a)}g(x) .
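The chain rule can be checked by nesting two finite-difference differentials. A sketch assuming Python with numpy (g, h, and diff are illustrative names, not the document's):

```python
import numpy as np

h = lambda x: np.array([np.sin(x[0]), x[0]*x[1]])
g = lambda y: np.array([y[0]*y[1], y[0] + y[1]**2])
f = lambda x: g(h(x))                 # the composite f(x) = g(h(x))

def diff(fn, x, a, eps=1e-6):
    # centred estimate of the a-directed differential fn^∇_x(a)
    return (fn(x + eps*a) - fn(x - eps*a)) / (2*eps)

x = np.array([0.7, -0.4]); a = np.array([1.0, 2.0])
# f^∇_x(a) = g^∇_{h(x)}( h^∇_x(a) )
chain = diff(g, h(x), diff(h, x, a))
assert np.allclose(diff(f, x, a), chain, atol=1e-5)
```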
Primary Differential
Let us switch from x to p and suppose we have an extended field
F_{p} = F(p,a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) = F_{p}(a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) .
Ð_{d}F_{p} =
Ð_{d}(F_{p}(a_{1}_{p},a_{2}_{p},...,a_{k}_{p})) =
Lim_{e ® 0}e^{-1}(
F(p+ed,a_{1}_{p+ed},a_{2}_{p+ed},...,a_{k}_{p+ed})-F(p,a_{1}_{p},a_{2}_{p},...,a_{k}_{p}))
We might consider regarding F_{p} as a multivector field by holding the k linear parameters a_{i}_{p} constant at their p values throughout a neighbourhood of p
and then take its d-directed derivative,
defining
Ð_{ßd} F_{p}(a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) º
Lim_{e ® 0}e^{-1}(
F(p+ed,a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) - F(p,a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) )
but this raises difficulties if the a_{i}_{p} are restricted in some manner
and unable to hold their "at p" values away from p.
A better definition
for the d-directed primary derivative operator
is
Ð_{ßd} F_{p}(a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) º
(Ð_{d}F_{p})(a_{1}_{p},a_{2}_{p},...,a_{k}_{p})
º Ð_{d} (F_{p}(a_{1}_{p},a_{2}_{p},...,a_{k}_{p})) -
F_{p}(Ð_{d}a_{1}_{p},a_{2}_{p},...,a_{k}_{p})) -
F_{p}(a_{1}_{p},Ð_{d}a_{2}_{p},...,a_{k}_{p})) -
... - F_{p}(a_{1}_{p},a_{2}_{p},...,Ð_{d}a_{k}_{p})) .
Ð_{ßd} F_{p}(a_{1}_{p},...) is then a well-defined point dependant multilinear function of k multivector arguments
known as the d-directed primary derivative of F.
The choice of the _{ß} symbol is here intended to suggest the "lowering" of the "scope" of Ð_{d} to apply only to
the "low-suffixed" primary p.
We have discussed a 1-vector-directed primary derivative. Generalising to a multivector "point" p
we have the obvious a-directed primary derivatives for general multivector a.
In particular, we have the traditional derivative of a multivector-valued function of a scalar
¦ : Â®U_{N}
as Ð_{1}¦(x) = ¶¦(x)/¶x = ¦'(x) .
Second Primary Differential
The first differential ¦^{Ñ}_{p}(a) = ¦^{Ñ}(a) can be extended via a given 1-field a_{p}=a(p)ºa into a field whose b-directed
primary derivative is given by
Ð_{ßb} ¦^{Ñ}_{p}(a_{p}) = (Ð_{b}¦^{Ñ}_{p})(a_{p}) = Ð_{b} (¦^{Ñ}_{p}(a_{p})) - ¦^{Ñ}_{p}(Ð_{b}a_{p})
= Ð_{b}Ð_{ap}¦(p) - ¦^{Ñ}_{p}(Ð_{b}a_{p})
= Ð_{b}Ð_{ap}¦(p) - Ð_{Ðbap}¦(p)
This provides a bilinear
second differential (1;2)-tensor ,
¦^{Ñ2}_{p}(a,b) º
Ð_{ßb}(¦^{Ñ}_{p}(a))
= (Ð_{b}¦^{Ñ}_{p}(a)) - ¦^{Ñ}_{p}(Ð_{b}a)
= Ð_{b}Ð_{a}¦(p) - ¦^{Ñ}_{p}(Ð_{b}a)
If the second differential ¦^{Ñ2} is symmetric we say ¦ satisfies the integrability condition
which we can consequently express as
Ð_{b}Ð_{a}¦(p) - ¦^{Ñ}_{p}(Ð_{b}a) = Ð_{a}Ð_{b}¦(p) - ¦^{Ñ}_{p}(Ð_{a}b)
Û (Ð_{a}×Ð_{b})¦(p) = ½¦^{Ñ}_{p}(Ð_{a}b - Ð_{b}a)
º ½¦^{Ñ}_{p}(a_{Ä}b) .
Provided Ð_{b}a = Ð_{a}b (which we can also denote a_{Ä}b=0) the commutability of
Ð_{ßa} and Ð_{ßb} is thus equivalent to the commutability of Ð_{a} and Ð_{b} ;
and this is trivially true in the particular case
Ð_{b}a = Ð_{a}b = 0 corresponding to "constant" a and b.
¦^{Ñ2}_{p}(a,b) is the directed derivative at p in direction b
of the a-directed derivative ¦^{Ñ}_{p}(a). It is maximised
when b is normal to the surface ¦^{Ñ}_{p}(q) = ¦^{Ñ}_{p}(a) .
Consider direction dq at point p + dp. ¦ is approximated near p
as
¦(p+dp+dq) » ¦(p) + ½(
¦^{Ñ}_{p}(dp) + ¦^{Ñ}_{p+dp}(dq) +
¦^{Ñ}_{p}(dq) + ¦^{Ñ}_{p+dq}(dp))
» ¦(p)
+ ¦^{Ñ}_{p}(dq) + ¦^{Ñ}_{p}(dp) +
½(¦^{Ñ2}_{p}(dq,dp) + ¦^{Ñ2}_{p}(dp,dq))
= ¦(p) + ¦^{Ñ}_{p}(dq) + ¦^{Ñ}_{p}(dp) + ¦^{Ñ2}_{p}(dq,dp) .
Third Primary Differential
The second differential can itself be primary differentiated
Ð_{ßb}(¦^{Ñ2}(a_{1 p},a_{2 p}))
= Ð_{b}(¦^{Ñ2}_{p}(a_{1 p},a_{2 p}))
- ¦^{Ñ2}_{p}(Ð_{b}a_{1},a_{2 p})
- ¦^{Ñ2}_{p}(a_{1 p},Ð_{b}a_{2 p})
Secondary Differential
We here define the secondary directed differential by
Ð_{Þd} F_{p}(a,b,...) º Ð_{d} F_{p}(a_{Ð},b,...)
º Lim_{e ® 0} e^{-1} (F_{p}(a+ed,b,...)-F_{p}(a,b,...)) .
a_{Ð} here denotes the scope of the differentiation implicit in Ð_{d} applying only
to parameter a.
Thus "secondary derivative" refers to differentiation with respect to the second (first nonprimary) parameter,
whereas "second derivative" usually refers to the combination of two successive primary derivatives.
More generally, we have the (i+1)^{ary} directed differential
Ð_{Þid} F_{p}(a,b,...)
º Ð_{d} F_{p}(a,b,.g_{Ð},..)
where g is the i^{th} non-primary parameter.
If F_{p} has k nonprimary parameters we have Ð_{a}(F_{p}(a_{1},...)) = (Ð_{ßa} + å_{i=1}^{k} Ð_{Þia})F_{p}(a_{1},...) .
Let F_{p} = F_{p}(a_{1},...a_{k}) be a tensor taking k non primary parameters . We can form
F_{p}^{Ñ} = F_{p}^{Ñ}(a_{1},..,a_{k},d) º
Ð_{ßd}F_{p}(a_{1},..,a_{k}) º (Ð_{d}F_{p})(a_{1},..,a_{k})
º Ð_{d} (F_{p}(a_{1},..,a_{k}))
- F_{p}(Ð_{d}a_{1},..,a_{k}) - ... - F_{p}(a_{1},..,Ð_{d}a_{k}) .
Lie Product
Having defined a directed derivative operator Ð_{ap} we define
the skewsymmetric bilinear Lie product by
a_{p}_{Ä}b_{p} º Ð^{p}_{ap}b_{p} - Ð^{p}_{bp}a_{p}
º Ð_{ap}b_{p} - Ð_{bp}a_{p}
This is often known as the Lie Bracket
and denoted [a_{p},b_{p}] but we will favour the _{Ä} product notation here.
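A finite-difference sketch of the Lie product of two 1-fields on Â^{2}, assuming Python with numpy (lie_bracket is our name), confirming both the value and the skewsymmetry:

```python
import numpy as np

def lie_bracket(a, b, p, eps=1e-5):
    # a_Ä b = Ð_a b − Ð_b a at p, each term by centred differences
    da_b = (b(p + eps*a(p)) - b(p - eps*a(p))) / (2*eps)
    db_a = (a(p + eps*b(p)) - a(p - eps*b(p))) / (2*eps)
    return da_b - db_a

# two 1-fields on R^2
a = lambda p: np.array([p[1], 0.0])
b = lambda p: np.array([0.0, p[0]])
p = np.array([1.0, 2.0])
lab = lie_bracket(a, b, p)
# analytically Ð_a b = (0, p_1) and Ð_b a = (p_0, 0), so a_Ä b = (−p_0, p_1)
assert np.allclose(lab, np.array([-p[0], p[1]]), atol=1e-6)
assert np.allclose(lie_bracket(b, a, p), -lab, atol=1e-6)   # skewsymmetry
```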
Undirected Derivatives
" Here, I'd like to introduce you to a close personal friend of mine.
M-41A 10mm pulse-rifle, over and under with a 30mm pump-action grenade launcher."
Corporal Dwayne Hicks, "Aliens".
"Undirected derivatives" can be thought of as "splayed out" directed derivatives, or as
"embodying" derivatives in multiple directions.
1-derivative Ñ
We define the 1-vector del-operator (aka. nabla) or 1-derivative
(aka. vector derivative)
Ñ = Ñ_{x} º å_{k=1}^{N} e^{k}Ð^{x}_{ek}
= å_{k=1}^{N} Ð_{ek}e^{k}
so that Ñv(x)
= å_{k=1}^{N} e^{k}Ð_{ek}(v(x))
= å_{k=1}^{N} Ð_{ek}(e^{k}v(x))
= å_{k=1}^{N} (¶/¶x^{k})(e^{k}v(x))
.
Note that this definition remains consistent for all frames {e_{i}} including nonorthonormal ones.
If v(x)=v(x) is scalar valued, Ñv(x) = å_{k=1}^{N} e^{k}Ð_{xk} v(x^{1},..,x^{N}) is the conventional gradient with regard to Euclidean Â^{N}.
Applying Ñ as a geometric product gives Ñ v(x) = å_{k=1}^{N} Ð_{xk } (e^{k} v(x)) = å_{k=1}^{N} Ð_{xk } (e^{k}¿v(x) + e^{k}Ùv(x)) = Ñ¿v(x) + ÑÙv(x).
If v(x)=v(x) is 1-vector valued, the scalar ¿ term is
just
å_{k=1}^{N} Ð_{xk } (v_{k}) which for a Euclidean space
(v^{k}=v_{k}) is the traditional
divergence of v(x), while the bivector Ù term is known as the curl of
v(x). For N=3, this is dual to (minus) the conventional
curl
Ñ×v(x).
We have thus essentially unified and generalised the three conventional differential operators grad, div, and curl.
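Numerically, both pieces fall out of the matrix of partials J_{ij} = Ð_{ej}v_{i} : the ¿ term is its trace and the Ù term its antisymmetric part. A sketch assuming Python with numpy (the sign convention for the bivector components is ours):

```python
import numpy as np

def div_and_curl(v, x, eps=1e-6):
    # J[i,j] = Ð_{ej} v_i estimated by centred differences
    n = 3
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        J[:, j] = (v(x + eps*e) - v(x - eps*e)) / (2*eps)
    # divergence ∇·v is the trace; the curl bivector ∇∧v has
    # components B[i,j] = Ð_{ei} v_j − Ð_{ej} v_i
    return np.trace(J), J.T - J

v = lambda x: np.array([x[1]*x[2], x[0]**2, -x[2]])
x = np.array([1.0, 2.0, 3.0])
div, B = div_and_curl(v, x)
assert abs(div - (-1.0)) < 1e-5                        # 0 + 0 − 1
assert np.allclose(B, -B.T)                            # bivector part is antisymmetric
assert np.isclose(B[0, 1], 2*x[0] - x[2], atol=1e-5)   # Ð_{e1}v_2 − Ð_{e2}v_1
```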
It is possible to define Ñ independently of an inverse coordinate frame
as the limit of a surface integral, as discussed briefly under
Tangential Derivative
below.
We refer to Ñ_{x}v(x) as the 1-derivative of v with respect to x.
We have
Ð_{a}
= (a¿Ñ)
= (a_{*}Ñ)
with the brackets here emphasising "precedence" rather than specifying the
"scope" of the Ñ which should be thought of as extending rightwards from the
expression.
Conventionally, a leftward, rightward, or double-headed horizontal arrow above the Ñ is used to indicate
the direction(s) of differential scope, but this technique is typographically unavailable here.
Ñ_{a}(¦^{Ñ}_{x}(a))
= Ñ_{a}((a¿Ñ_{x})¦(x))
= (Ñ_{a}(a¿Ñ_{x}))¦(x) = Ñ_{x}¦(x)
so we have the operator identity
Ñ_{a}(a¿Ñ_{x})
= Ñ_{a} Ð^{x}_{a}
= Ñ_{x}
which we can abbreviate as
Ñ_{a}Ð_{a} = Ñ .
It is customary to abbreviate Ñ(a_{p}) by Ña_{p}, treating Ñ as a "left-multiplier"
but we lose associativity in that
(Ña)b ¹ Ñ(ab) in general.
We can use Ñ as a right-multiplier eg. aÑ provided we understand the "scalar" Ð_{xk }
to apply "leftwards" as well as "rightwards".
The expression abÑcd is usually interpreted (defined) as
ab(Ñc)d
but could (perhaps more properly) be considered to mean
ab(Ñ(cd)) + ((ab)Ñ)cd .
We will retain the traditional "rightward-only scope"
for Ñ here but when we include Ñ in a list of operators
fÑgh this should be thought of as abbreviating the composite operation
fÑgh(a_{p})
º
f( Ñ(gh(a_{p})) )
º f( Ñ( g(h(a_{p})) ) ) . The scope of Ñ thus extends rightwards to encompass all following symbols
unless contraindicated by brackets.
We will use ( ) to denote the extent of Ñ's whenever possible but this becomes complicated by brackets expressing product precedence.
When we wish the derivative aspect of a Ñ to "hop over"
intervening terms, or to move leftwards rather than rightwards, or just wish to emphasise
the default applicability, we will add a _{Ñ} suffix to the term to which the Ð_{ei} "apply".
The e^{k} act geometrically on "intervening" terms irrespective of any _{Ñ}'s.
In general we will here assume the "differentiating scope" of Ñ to extend rightwards but not leftwards.
Thus (Ñ_{p}¿a_{p}) and (a_{p}¿Ñ_{p}) are distinct scalar operators since
(a_{p}¿Ñ_{p})F_{p} = (a_{p}¿Ñ_{p})F_{p}_{Ñ} whereas
(Ñ_{p}¿a_{p})F_{p} =
(Ñ_{p}¿a_{p}_{Ñ})F_{p} + (Ñ_{p}¿a_{p})F_{p}_{Ñ}
= (Ñ_{p}¿a_{p}_{Ñ})F_{p} + (a_{p}¿Ñ_{p})F_{p}_{Ñ}
= (Ñ_{p}¿a_{p}_{Ñ})F_{p} + (a_{p}¿Ñ_{p})F_{p} .
Only if (Ña_{p}_{Ñ})_{<0>} = 0 (eg. if a_{p}=a independent of p) are they equivalent.
We have the geometric product rule
Ñ(a¨b)
= Ñ(a_{Ñ}¨b)
+ Ñ(a¨b_{Ñ})
where ¨ denotes any bilinear multivector product (¿,Ù,., geometric,etc. )
and _{Ñ} denotes the differentiating scope of the Ñ.
As long as we remember the geometric product rule, we can derive many equations involving Ñ simply by
reference to its 1-vector nature. a¿(a¿b)=0, for example, gives
Ñ¿(Ñ¿b)=0.
However, care must be taken with Ñ. It can readily be verified that Ñx = N and that
Ñ(x^{2})=2x . Derivations such as
Ñ(x^{2}) = Ñx_{Ñ}x + Ñxx_{Ñ}
= 2(Ñx)x
= 2Nx are erroneous. We cannot commute x with x_{Ñ} while we are varying one of them,
because the variation may not commute with x.
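The correct result Ñ(x^{2}) = 2x, rather than the erroneous 2Nx, is easily confirmed by finite differences. A sketch assuming Python with numpy:

```python
import numpy as np

def grad(f, x, eps=1e-6):
    # the 1-derivative ∇f of a 0-field f w.r.t. an orthonormal frame on R^N
    n = len(x); g = np.zeros(n)
    for k in range(n):
        e = np.zeros(n); e[k] = 1.0
        g[k] = (f(x + eps*e) - f(x - eps*e)) / (2*eps)
    return g

x = np.array([0.6, -1.1, 2.0])
# ∇(x²) = 2x ; the erroneous commutation would introduce an extra factor N
assert np.allclose(grad(lambda y: np.dot(y, y), x), 2*x, atol=1e-5)
```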
Useful Ñ results
ÑºÑ_{x} ; y^{↑} º e^{y} ; _{*} denotes scalar product throughout :
Ñ x = Ñ¿x = N | Ñ Ù x = 0_{2} = 0 |
Ñ_{x}(xb) = (Ñ_{x}x)b = Nb | Provided Ñ_{x}b=0. This grade decomposes into |
Ñ(x¿b_{k}) = (b_{k}.Ñ)x = kb_{k}
in particular: Ñ(x¿a) = (a¿Ñ)x = a | Ñ(xÙb_{k}) = (b_{k}ÙÑ)x = (N-k)b_{k}
in particular: Ñ(xÙa) = (N-1)a |
Ñ(b_{k}x) = (-1)^{k}(N-2k)b_{k} | in particular: Ñ(ax) = Ñ(2(x¿a)-xa) = (2-N)a |
Ñ_{x}(x_{*}a)^{↑} = (å_{i=1}^{N} e^{i}(e_{i}_{*}a))(x_{*}a)^{↑} = a_{<1>} (x_{*}a)^{↑} | and so: Ñ_{x}^{2}e^{x*a} = a_{<1>}^{2} e^{x*a} |
Ñ_{x}((x_{*}a)b)^{↑} = a_{<1>} ((½p + x_{*}a)b)^{↑} | and so: Ñ_{x}^{2}e^{(x*a)b} = -a_{<1>}^{2} e^{(p+x*a)b} provided b^{2}=-1 |
Ñ(lx^{2}b)^{↑} = 2lNx(lx^{2}b)^{↑}b | |
Ñ(¦(x)b)^{↑} = (Ñ¦(x))(¦(x)b)^{↑}b for central ¦(x) | |
Ñ(g(x)F(x)) = (Ñ(g(x))F(x) + g(x)(ÑF(x)) | so
Ñ(f(x)^{~})
= Ñ(|f(x)^{2}|^{-½}) f(x)
+ |f(x)^{2}|^{-½} Ñf(x)
= -½|f(x)^{2}|^{-3/2}Ñ(|f(x)^{2}|) f(x) + |f(x)^{2}|^{-½} Ñf(x) = |f(x)^{2}|^{-½} ( -/+ ½|f(x)^{2}|^{-1}Ñ(f(x)^{2}) f(x) + Ñf(x) ) according to sign of f(x)^{2}. |
Ñ(F(x)G(x)) = (ÑF_{Ñ}(x))G(x) + (Ñ¿F(x))G(x)_{Ñ} + (ÑÙF(x))G(x)_{Ñ} | |
According as x^{2} is ± : | |
Ñ(|x|^{m}) = ±m|x|^{m-2}x = ±m|x|^{m-1}x^{~} | Ñ(x|x|^{m}) = (N+m)|x|^{m} " mÎÂ Þ Ñ_{x}(x^{~}) = (N-1)|x|^{-1} |
Ñ ¦(|x|) = ±¦'(|x|)x^{~} | Ñ(¦(|x|)x^{~}) = ¦'(|x|) + (N-1)|x|^{-1}¦(|x|) |
Ñ x^{2k} = 2kx^{2k-1} | Ñ x^{2k+1} = Ñ¿x^{2k+1} = (2k+N)x^{2k} |
Ñ((lx)^{↑}) = l(lx)^{↑} + (N-1)x^{-1}¿((lx)^{↑}) | Only for N=1 do we have Ñ((lx)^{↑}) = l(lx)^{↑} . |
Ñ (¦(|x|)x^{~})^{↑} = ¦'(|x|)(¦(|x|)x^{~})^{↑} + sin(¦(|x|))(N-1)|x|^{-1} for x^{2} < 0. | |
Ñ_{x}((x-a)|x-a|^{-N})
= Ñ_{x}¿((x-a)|x-a|^{-N})
= o_{N} = |dS_{N-1}|
at x=a and 0 elsewhere. | |
Ñ_{x}((x¿a)^{m}) = m(x¿a)^{m-1}a | |
Ñb_{k}(x¿a_{2}) = 2(-1)^{k}(b_{k}Ùa_{2}-b_{k}.a_{2}) | k³2 |
Ñ((x¿a_{2})¿b_{k})=a_{2}×b_{k} + 2a_{2}¿b_{k} ; | k³2 |
(b_{k}.Ñ)(x¿a_{2}) = b_{k}×a_{2} + 2a_{2}¿b_{k} ; | k³2 |
Ñ((x¿a_{2})Ùb_{k}) = a_{2}×b_{k} + 2a_{2}Ùb_{k} | |
(b_{k}ÙÑ)(x¿a_{2}) = b_{k}×a_{2} + 2a_{2}Ùb_{k} | |
Ñ(a_{x}b)^{↑} = (Ña_{x})(a_{x}b)^{↑} b | |
b_{x}¿(Ña_{x}) = (b_{x}¿Ñ)a_{xÐ} | Holds only for scalar a_{x}. We cannot, in general, retrieve (b¿Ñ_{x})a_{x} from b and Ñ_{x}a_{x} . |
Ñ¿(a_{x}c_{x}) = a_{x}(Ñ¿c_{x}) + (Ña_{x})¿c_{x} | ÑÙ(a_{x}c_{x}) = a_{x}(ÑÙc_{x}) + (Ña_{x})Ùc_{x} |
Ñ¿(a_{x}Ùc_{x}) =
(a_{x}¿Ñ)c_{x}_{Ñ}
+ (Ñ¿a_{x}_{Ñ})c_{x}
- a_{x}Ù(Ñ¿c_{x}_{Ñ})
- a_{x}_{Ñ}Ù(Ñ¿c_{x})
In particular Ñ¿(aÙb_{x}) = (a¿Ñ)b_{x} - a(Ñ¿b_{x}) [ Proof : a¿(bÙc) = (a¿b)c - bÙ(a¿c) with a=Ñ .] |
ÑÙ(a_{x}¿c_{x})
= (a_{x}¿Ñ)c_{x}_{Ñ}
+ c_{x}(Ñ¿a_{x}_{Ñ})
- a_{x}¿(ÑÙc_{x}_{Ñ})
- (c_{x}ÙÑ).a_{x}_{Ñ}
In particular Ñ(a¿b_{x}) = (a¿Ñ)b_{x} - a¿(ÑÙb_{x}) [ Proof : a¿(bÙc) = (a¿b)c - bÙ(a¿c) with b=Ñ .] |
Ñ x^{~}(x^{~}+a)^{~} = 2^{-½}(1±x^{~}¿a)^{-½}|x|^{-1} (a(N-3/2) + ½x^{~}) | when x^{~}^{2} = a^{2} = ±1 |
Ñ |x|^{½}x^{~}(x^{~}+a)^{~} = 2^{-½}(1-x^{~}¿a)^{-½}|x|^{-½} (a(N-1)+x^{~}) | when x^{~}^{2} = a^{2} = ±1 |
Monogenic Functions
We say v(x) is monogenic (aka. analytic) if Ñ_{x}v(x) = 0 " x .
We say v(x) is meromorphic if it is monogenic at all x except some well-defined poles
x_{1},x_{2},...,x_{k}
at which we have Ñ_{x}v(x) ï_{xi}
= - o_{M} R_{i}
where multivector R_{i} is the residue at pole x_{i} and o_{M}
is the boundary content of a unit radius (M-1)-sphere.
Monogenic functions are fundamental in theoretical physics, particularly
(in nonrelativistic central potential theory)
spherical monogenics
Y_{x}
of the form
Y(x)
= x^{l} y(x^{~})
= r^{l} y(q,f)
for N=3 .
The monogenicity condition Ñ_{x}Y_{x}=0 requires (xÙÑ_{x}) y(q,f) = l y(q,f)
interpreted as Y having constant scalar integer "angular-momentum operator" eigenvalue l, known as the angular quantum number .
[ The brackets around (xÙÑ_{x}) denote the precedence of the Ù ; the differentiating scope of the Ñ_{x} acting rightwards over the y.
].
For l<0 we have a single pole at 0.
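For N=2 the monogenic condition Ñv = 0 amounts to the vanishing of both Ñ¿v and ÑÙv (essentially the Cauchy-Riemann equations). A sketch assuming Python with numpy, using the example 1-field v(x) = x_{1}e_{1} - x_{2}e_{2}:

```python
import numpy as np

# v(x) = x_1 e_1 − x_2 e_2 , a monogenic 1-field on R^2
v = lambda x: np.array([x[0], -x[1]])

def jac(v, x, eps=1e-6):
    # J[i,j] = Ð_{ej} v_i by centred differences
    cols = []
    for j in range(2):
        e = np.zeros(2); e[j] = 1.0
        cols.append((v(x + eps*e) - v(x - eps*e)) / (2*eps))
    return np.column_stack(cols)

for x in [np.array([1.0, 2.0]), np.array([-0.3, 0.7])]:
    J = jac(v, x)
    assert abs(np.trace(J)) < 1e-6          # ∇·v = 0
    assert abs(J[0, 1] - J[1, 0]) < 1e-6    # ∇∧v = 0
```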
Laplacian Ñ^{2}
Since Ñ_{x}v(x)
= Ñ_{a}v^{Ñ}_{x}(a)
= Ñ_{a}((a¿Ñ_{x})v(x))
we have Ñ_{x}^{2}v(x)
= Ñ_{x}(Ñ_{a}v^{Ñ}_{x}(a))
= Ñ_{b}Ñ_{a}v^{Ñ2}_{x}(a,b)
= (Ñ_{b}¿Ñ_{a}
+ Ñ_{b}ÙÑ_{a})v^{Ñ2}_{x}(a,b) .
If v obeys the integrability condition, the symmetry of second differential v^{Ñ2}_{x}(a,b) causes the Ù term to vanish
[
(Ñ_{b}ÙÑ_{a})v^{Ñ2}_{x}(a,b)
= -(Ñ_{a}ÙÑ_{b})v^{Ñ2}_{x}(a,b)
= -(Ñ_{a}ÙÑ_{b})v^{Ñ2}_{x}(b,a)
]
and so
Ñ_{x}^{2} = Ñ_{x}¿Ñ_{x}
is a grade-preserving ("scalar")
operator known as the
second derivative or Laplacian or D'Alembertian operator.
Consequently
Ñ_{x}^{2} = Ñ_{x}¿Ñ_{x}
and
Ñ_{x}ÙÑ_{x} = 0 , at least with regard to its action on integrable tensors.
This is most clearly seen when expressed in coordinate terms with regard to an orthonormal basis
as
Ñ_{x}ÙÑ_{x} = (å_{i=1}^{N} e^{i}Ð_{ei})Ù(å_{j=1}^{N} e^{j}Ð_{ej})
= å_{i<j} e^{ij}(Ð_{ei}Ð_{ej}-Ð_{ej}Ð_{ei})
= 2 å_{i<j} e^{ij}(Ð_{ei}×Ð_{ej})
Thus, assuming Ð_{ei}Ð_{ej}v(x) = Ð_{ej}Ð_{ei}v(x), we have
Ñ^{2}v(x) =
å_{j=1}^{N} e^{j} Ð_{ej} (å_{k=1}^{N} e^{k} Ð_{ek} v(x))
= å_{j=1}^{N} å_{k=1}^{N} e^{j}e^{k}Ð_{ek}Ð_{ej}v(x)
= å_{k=1}^{N} (e^{k})^{2} Ð_{ek}^{2}v(x)
= å_{k=1}^{N} e_{k} Ð_{ek}^{2} v(x)
so that, when applied to a scalar function, Ñ^{2} is the conventional Laplacian but with the basis signatures
effecting the summation when acting in non-Euclidean spaces.
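The coordinate form above can be sanity-checked numerically. The sketch below (assuming a Euclidean N=3 signature, so every e_{k}^{2}=+1) applies a central-difference Laplacian to the harmonic function |x|^{2-N}, which should vanish away from the origin, and to |x|^{2}, whose Laplacian is 2N.

```python
# Numerical check (Euclidean N=3 assumed) of the coordinate Laplacian
# sum_k d^2/dx_k^2 : it annihilates |x|^(2-N) and sends |x|^2 to 2N.
import numpy as np

def laplacian(f, x, h=1e-4):
    """Central-difference Laplacian sum_k d^2 f / dx_k^2 at point x."""
    total = 0.0
    for k in range(len(x)):
        e = np.zeros_like(x); e[k] = h
        total += (f(x + e) - 2.0 * f(x) + f(x - e)) / h**2
    return total

N = 3
f_harmonic = lambda x: np.linalg.norm(x)**(2 - N)   # |x|^(2-N), harmonic for x != 0
f_square   = lambda x: np.dot(x, x)                  # |x|^2

x0 = np.array([0.7, -0.3, 1.1])
print(laplacian(f_harmonic, x0))   # ~ 0
print(laplacian(f_square, x0))     # ~ 6 = 2N
```

In a non-Euclidean signature each second-derivative term would additionally be weighted by the signature e_{k} of the corresponding basis vector.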
With regard to functions not obeying the integrability condition, the Laplacian Ñ_{x}^{2} includes a bivector operator Ñ_{x}ÙÑ_{x} known as the torsion which does not vanish, and so Ñ_{x} acts somewhat "less like" a 1-vector geometrically.
Suppose Ñ_{x}^{2} v_{x} = a_{x} v_{x} for central multivector a_{x} with
Ñ_{x} a_{x} = 0 .
(aÑ_{x})^{↑} v_{x}
= (1 + 2!^{-1}a^{2}a_{x} + 4!^{-1}a^{4}a_{x}^{2} + ...)v_{x}
+ (a + 3!^{-1}a^{3}a_{x} + ...)Ñ_{x}v_{x}
= (cosh(aa_{x}^{½}) + a_{x}^{-½}sinh(aa_{x}^{½})Ñ_{x})v_{x}
provided a and "square root" a_{x}^{½} are both central.
Writing x=|x| we have Ñ_{x}^{2} ¦(x) = ±(¦"(x)+(N-1)x^{-1}¦'(x))
according as x^{2} is ± so Ñ_{x}^{2} ¦(x) = 0 provided
¦"(x) = (1-N)x^{-1}¦'(x) eg. for ¦(x)= x^{2-N} ;
and Ñ_{x}^{2} ¦(x) = l ¦(x) provided ¦"(x) + (N-1)x^{-1}¦'(x) = ±l ¦(x) .
Useful Ñ^{2} results
According as x^{2} is ± : | |
Ñ^{2}(|x|^{m})
= ±m(N+m-2)|x|^{m-2}
Þ Ñ^{2}(|x|^{2-N})=0 |
Ñ^{2}(x|x|^{m}) = ±(N+m)m|x|^{m-2}x
Þ Ñ^{2}x = Ñ^{2} |x|^{1-N}x^{~} = 0 |
Ñ^{2}(x^{~}) = ±(1-N)|x|^{-3}x | |
Ñ^{2} ¦(|x|) = ±(¦"(|x|) + (N-1)|x|^{-1}¦'(|x|)) | Ñ(¦(|x|)x^{~}) = ¦'(|x|) + (N-1)|x|^{-1}¦(|x|)
Ñ^{2}(¦(|x|)x^{~}) = ±(¦"(|x|) - (N-1)|x|^{-2}¦(|x|) + (N-1)|x|^{-1}¦'(|x|))x^{~} |
Ñ^{2}(¦(|x|)x^{~})^{↑}
= (-¦"(|x|)x^{~}
+ ¦'(|x|)^{2}
- ¦'(|x|)(N-1)x^{~}|x|^{-1})(¦(|x|)x^{~})^{↑}
+ sin(¦(|x|))(N-1)|x|^{-2}x^{~} for x^{2}<0 |
Generalising v(x) to a multivector argument v(x) we can construct a multivector del-operator we will call
the multiderivative or "allblade gradient"
Ñ v(x) º
Ñ_{x}(v(x))
º
å_{k=0}^{2^{N}-1} e^{[.k.]} Ð^{x}_{e[.k.]} v(x)
where e_{[.k.]} is the k^{th} pureblade element of a given ordered extended basis
for U_{N} and {e^{[.k.]}} is an extended inverse frame for that basis. Note that we include a scalar-directed
derivative due to e_{[.0.]} = 1 in this summation .
If multivector x is prohibited from "containing" specific basis blades, then those blades are omitted from the summation.
In particular Ñ_{x<k>} v(x) =
å_{i=1}^{^{N}C_{k}} e^{[.<k>i.]} Ð^{x}_{e[.<k>i.]} v(x)
where e_{[.<k>i.]} is the i^{th} of the ^{N}C_{k} k-blade elements
of a given ordered extended basis.
Ñ_{<k>x} º (Ñ_{x})_{<k>} = Ñ_{x<k>} is known as the k-derivative (so Ñ_{<1>x} = Ñ_{x} , the 1-derivative ).
More generally, for blade b we define the b-projected multiderivative by
Ñ_{[b]x} =
å_{k=0}^{2^{N}-1} (¯_{b}e^{[.k.]}) Ð^{x}_{¯be[.k.]}
Clearly
Ñ_{[i]x} =
Ñ_{x} ;
Ñ_{[1]x} = Ñ_{x}_{<0>} .
For invariant a,
Ñ_{<k>x}ax =
Ñ_{<k>x}ax_{<k>} =
å_{i=0}^{N} Ñ_{<k>x}a_{<i>}x_{<k>} =
å_{i=0}^{N} H^{ i}_{k} a_{<i>}
In particular,
Ñ_{<k>x}ax =
Ñ_{x}ax_{<k>} =
Ñ_{<k>x}ax_{<k>} =
^{N}C_{k}a ( and hence
Ñ_{x}ax = 2^{N}a ) provided Ñ_{x}a=0 .
The generalised multivector-directed derivative operator is expressible as
Ð_{a} = (a_{*}Ñ_{x}) ,
providing an alternate coordinate-free definition of Ñ_{x} .
Generalising
Ñ_{x}
= Ñ_{a}(a¿Ñ_{x})
we have
Ñ_{x} =
Ñ_{a}(a_{*}Ñ_{x})
.
Ñ_{x}(x_{*}a) = å_{j=1}^{2^{N}} e^{[.j.]} (e_{[.j.]} _{*} a) = å_{j=1}^{2^{N}} e^{[.j.]} a_{[.j.]} = a .
Ð^{x}_{b} xÙa = bÙa by the limit definition of directed derivatives
.
Thus Ñ_{x} xÙa =
å_{j=1}^{2^{N}}
e^{[.j.]}
(e_{[.j.]} Ù a)
by the coordinate based definition of Ñ .
Let us now assume that a is a (N-k)-blade and that an orthonormal frame is chosen with
e_{i}¿a = 0 " 1£i£k and a=|a|e_{(k+1)(k+2)..N}.
In this frame, only basis blades within e_{12...k} have nonvanishing
outer product with a,
and each such summation term reduces to a . We thus have the frame-invariant identity
Ñ_{x} xÙa = å_{i=0}^{k} ^{k}C_{i} a for (N-k)-blade a and, more usefully,
Ñ_{<r>x} xÙa_{k} = ^{k}C_{r} a_{k} for x-independent k-blade a_{k}.
Similarly, Ñ_{x} x¿a =
å_{j=1}^{2^{N}}
e^{[.j.]}
(e_{[.j.]} ¿ a)
which for k-blade a can be evaluated by an analogous frame-specific argument.
The more general gradifying substitution gives that
(Ñ_{ß}_{*}Ñ_{Þ}) has the effect of replacing
the first nonprimary multivector argument by Ñ_{Ü} .
The following results are useful:
Hence Ñ(a_{p}¿b_{p}) = Ñ(a_{p}_{Ñ}¿b_{p}) + Ñ(a_{p}¿b_{p}_{Ñ}) = Ð_{bp}a_{p} + Ð_{ap}b_{p} - b_{p}¿(ÑÙa_{p}) - a_{p}¿(ÑÙb_{p}) . _{[ HS 2-1.43 ]}
Dropping all p suffixes for brevity, we have
(bÙc)¿(ÑÙa_{Ñ}) =
Ð_{c}(b¿a) - Ð_{b}(c¿a) + (b_{Ä}c)¿a
where
a_{p}_{Ä}b_{p} º Ð_{ap}b_{p} - Ð_{bp}a_{p} .
_{[ HS 2-1.46 ]}
[ Proof : (bÙc)¿(ÑÙa_{Ñ}) =
b¿(c¿(ÑÙa_{Ñ}))
= b¿(
(c¿Ñ)a_{Ñ} - (c¿a_{Ñ})Ñ
)
= (c¿Ñ)(b¿a_{Ñ}) - (b¿Ñ)(c¿a_{Ñ})
= (c¿Ñ)(b¿a) - (b¿Ñ)(c¿a)
- ((c¿Ñ)(b_{Ñ}¿a) - (b¿Ñ)(c_{Ñ}¿a))
= Ð_{c}(b¿a) - Ð_{b}(c¿a)
- ((c¿Ñ)b_{Ñ} - (b¿Ñ)c_{Ñ})¿a)
= Ð_{c}(b¿a) - Ð_{b}(c¿a) + (b_{Ä}c)¿a
.]
Setting a=Ñ gives
(bÙc)¿(ÑÙÑ) =
Ð_{c}(b¿Ñ) - Ð_{b}(c¿Ñ) + (b_{Ä}c)¿Ñ
= 2(Ð_{c}×Ð_{b}) + (b_{Ä}c)¿Ñ
_{[ HS 4-1.15 ]}
Partial Undirected Derivative ¶
If we have a function F(a,b,..,h) of several parameters
we have "partial" differentiation
¶F/¶b^{ij..} = Lim_{d ® 0}
d^{-1}(F(a, b+de_{ij..} ...,h)
- F(a, b, ...,h) )
in which other parameters like a and h are held constant even if they may in actuality depend on b.
¶_{x} º å_{i=1}^{N} e^{i} (¶/¶x^{i})
¶_{x}
º å_{ijk..=1}^{N} e^{ijk..} (¶/¶x^{ijk...})
are Ñ_{x} and Ñ_{x} with ¶/¶x^{ijk...} replacing
d/dx^{ijk...}
Secondary Undirected Derivative Ñ_{Þ}
Ñ_{Þ} F_{p}(a_{1},a_{2},..,a_{k}) º Ñ_{a1} F_{p}(a_{1},a_{2},..,a_{k})
º å_{i=1}^{N} e^{i}Ð^{a1}_{ei} F_{p}(a_{1},a_{2},..,a_{k})
= å_{i=1}^{N} e^{i} F_{p}(Ð^{a1}_{ei}a_{1},a_{2},..,a_{k})
= å_{i=1}^{N} e^{i} F_{p}(a_{i},a_{2},..,a_{k})
Ð^{b}_{d}(F_{p}^{Ñ}(a_{1},..,a_{k},d))
= (Ð_{b}F_{p})(a_{1},..,a_{k}) follows easily from the limit definition of Ð^{d}_{b}
so we have
Ñ_{Þk}F_{p}^{Ñ}(a_{1},..,a_{k},d)
= Ñ_{d}F_{p}^{Ñ}(a_{1},..,a_{k},d)
= (ÑF_{p})(a_{1},..,,a_{k})
which we can write as
Ñ_{Þ→}Ð_{ß} = Ñ_{ß}
abbreviating Ñ_{Þ→}Ð_{ß} F_{p}(a_{1},..,a_{k}) º
= Ñ_{Þ→}((ÐF_{p})(a_{1},..a_{k})) =
Ñ_{ß}F_{p}(a_{1},..,a_{k})
º (ÑF_{p})(a_{1},..,a_{k})
where the → indicates derivative scope applying only to the rightmost ("last") parameter,
the parameter "introduced" by the Ð.
We will call this the differential derivative rule.
In particular, with regard to a simple field
F_{p}=F(p) with no nonprimary parameters we have
Ñ_{Þ}Ð_{ß}F_{p} = Ñ_{p}F_{p} .
(Ñ_{p}F_{p})^{Ñ}(a) º Ð^{p}_{a}(Ñ_{p}F_{p}) = Ñ_{b}F_{p}^{Ñ2}(a,b)
Now (Ñ_{ß}¿Ñ_{Þ})F_{p}(a,b,..)
= å_{i=1}^{N} e^{i}^{2} Ð^{a}_{ei}((Ð^{p}_{ei}F_{p})(a,b,..))
= å_{i=1}^{N} e^{i}^{2} (Ð^{p}_{ei}F_{p})(e_{i},b,..)
= å_{i=1}^{N} (Ð^{p}_{ei}F_{p})(e^{i},b,..)
= F_{p}_{Ñ}(Ñ,b,..)
so the operator (Ñ_{ß}¿Ñ_{Þ})
= (Ñ_{Þ}¿Ñ_{ß}) has the effect of replacing the first nonprimary
1-vector parameter with Ñ_{Üp} .
We will whimsically call this the gradifying substitution rule. By regarding a
(t;k)-multiform as a skewsymmetric (t;k)-tensor F_{p}(a_{1},a_{2},...,a_{k})=F_{p}(a_{1}Ùa_{2}Ù...a_{k})
we obtain
(Ñ_{a1}¿Ñ_{p})F_{p}(a_{1}Ùa_{2}Ù...a_{k}) = F_{p}(Ñ_{Üp}Ùa_{2}Ù...a_{k})
Simplicial Derivative Ñ_{(r)}
Any multitensor grade decomposes to a sum of multiforms so the derivatives of multiforms are of particular interest.
In this section we neglect any non-linear "primary" parameter and consider functions of k-blades.
Suppose first that L(a_{1},a_{2})=L(a_{1}Ùa_{2}) is bilinear and skewsymmetric in a_{1} and a_{2}.
(Ñ_{a2}_{*}Ñ_{a1}) L(a_{1},a_{2})
=
((å_{j=1}^{N} e^{j}Ð^{a2}_{ej})_{*}
(å_{i=1}^{N} e^{i}Ð^{a1}_{ei}))
L(a_{1}Ùa_{2})
=
å_{j=1}^{N} å_{i=1}^{N} (e^{j}_{*}e^{i})
Ð^{a2}_{ej} Ð^{a1}_{ei} L(a_{1}Ùa_{2})
=
å_{i=1}^{N} (e^{i})^{2}
Ð^{a2}_{ei} L(e_{i}Ùa_{2})
=
å_{i=1}^{N} (e^{i})^{2} L(e_{i}Ùe_{i}) = 0 .
Thus with regard to action on linear functions of a_{1} Ù a_{2} ,
Ñ_{a2}Ñ_{a1}
= Ñ_{a2}ÙÑ_{a1}
= -Ñ_{a1}ÙÑ_{a2} .
Suppose now that L(x) is a linear multivector-valued function of a multivector x.
Solely by its action on r-blades, L induces a function of a k-blade a_{k} and an (r-k)-blade
b_{r-k}
via L(a_{k}Ùb_{r-k}) .
The directed chain rule gives
Ð^{ak}_{ck} L(a_{k}Ùb_{r-k})
= L^{Ñx}_{akÙbr-k}(Ð^{ak}_{ck}(a_{k}Ùb_{r-k}))
= L(c_{k}Ùb_{r-k}) .
Similarly
Ð^{br-k}_{dr-k} L(a_{k}Ùb_{r-k})
= L(Ð^{br-k}_{dr-k}(a_{k}Ùb_{r-k}))
= (-1)^{k(r-k)}L(Ð^{br-k}_{dr-k}(b_{r-k}Ùa_{k}))
= (-1)^{k(r-k)}L(d_{r-k}Ùa_{k})
= L(a_{k}Ùd_{r-k}) .
We have the linear derivative factorisation theorem
of a linear multivector function L(x).
Ñ_{br-k}Ñ_{ak}L(a_{k}Ùb_{r-k})
= ^{r}C_{k} Ñ_{x}_{<r>} L(x) .
[ Proof :
Ñ_{br-k}Ñ_{ak}L(a_{k}Ùb_{r-k})
= å_{j=1}^{^{N}C_{r-k}} e^{[.<r-k>j.]}
Ð^{br-k}_{e[.<r-k>j.]}
å_{i=1}^{^{N}C_{k}} e^{[.<k>i.]}
Ð^{ak}_{e[.<k>i.]} L(a_{k}Ùb_{r-k})
= å_{j=1}^{^{N}C_{r-k}}
å_{i=1}^{^{N}C_{k}}
e^{[.<r-k>j.]}
e^{[.<k>i.]}
Ð^{br-k}_{e[.<r-k>j.]}
L(e_{[.<k>i.]}Ùb_{r-k})
= å_{j=1}^{^{N}C_{r-k}} å_{i=1}^{^{N}C_{k}}
e^{[.<r-k>j.]}e^{[.<k>i.]}
L(e_{[.<k>i.]}Ùe_{[.<r-k>j.]})
For each e_{[.<k>i.]} blade only (r-k)-blades
composed from the remaining N-k basis 1-vectors
will give a non-vanishing
e_{[.<k>i.]}Ùe_{[.<r-k>j.]} .
Consider a particular basis r-blade
e_{[.<r>q.]}
and choose a particular k of the basis 1-vector factors of this r-blade. The sign change
acquired in moving these k factors leftwards to factorise
e^{[.<r>q.]} as
e^{[.<k>i.]}e^{[.<r-k>j.]}
for some i,j is precisely the same sign change as that acquired reordering
e_{[.<r>q.]}^{§}
as
e_{[.<r-k>j.]}^{§}e_{[.<k>i.]}^{§} .
Thus
e_{[.<r-k>j.]}^{§}e_{[.<k>i.]}^{§}
L(e_{[.<k>i.]}Ùe_{[.<r-k>j.]})
= e_{[.<r>q]}^{§} L(e_{[.<r>q.]}) .
Now, e_{[.<t>u.]}^{§}
= e^{[.<t>u.]} may not hold for non-Euclidean U_{N} but
if all are non-null, the scalar multiplier incurred replacing
e_{[.<r-k>j.]}^{§}e_{[.<k>i.]}^{§}
with
e^{[.<r-k>j.]}e^{[.<k>i.]}
is identical to that incurred by replacing
e_{[.<r>q.]}^{§} with e^{[.<r>q.]} so we can safely write
e^{[.<r-k>j.]}e^{[.<k>i.]}
L(e_{[.<k>i.]}Ùe_{[.<r-k>j.]})
= e^{[.<r>q]} L(e_{[.<r>q.]}) .
Thus
e^{[.<r>q]} L(e_{[.<r>q.]}) arises in the summation
^{r}C_{k} times and the result follows
.]
Taking k=1 the linear derivative factorisation theorem provides
Ñ_{br-1}Ñ_{a1}L(a_{1}Ùb_{r-1})
= r Ñ_{x}_{<r>} L(x)
and hence
(Ñ_{ar}...Ñ_{a2}Ñ_{a1})L(a_{1}Ù...a_{r}) =
r! Ñ_{<r>}L(a_{1}Ù...a_{r}) .
When acting on a skewsymmetric L, the multivector operator
Ñ_{ar}...Ñ_{a2}Ñ_{a1}
is equivalent to the r-vector operator
Ñ_{ar}Ù...Ñ_{a2}ÙÑ_{a1}
and so
(Ñ_{ar}Ù...Ñ_{a2}ÙÑ_{a1})L(a_{1}Ù...a_{r}) =
r!Ñ_{<r>}L(a_{1}Ù...a_{r}) .
Accordingly we define the simplicial r-derivative
by
Ñ_{(r)} = (r!)^{-1}Ñ_{ar}Ù...Ñ_{a2}ÙÑ_{a1}
, equivalent to the r-derivative when acting on an r-multiform.
Conveyed Derivative Ñ_{→}
We reassume a primary "point" parameter, which we will denote by p rather than x,
and suppose we have an extended field
F_{p} = F(p,a_{1}_{p},a_{2}_{p},...,a_{k}_{p}) . We here refer to
Ñ_{p}_{→i} F_{p}(a_{1}_{p},..,a_{k}_{p})
º F_{p}_{Ñ}(a_{1}_{p},..,Ñ_{p}a_{i}_{p},..,a_{k}_{p})
= å_{j=1}^{N} Ð_{ejß}F_{p}(a_{1}_{p},..,e^{j}a_{i}_{p},..,a_{k})
= å_{j=1}^{N} (Ð_{ej}F_{p})(a_{1}_{p},..,e^{j}a_{i}_{p},..,a_{k})
as the conveyed 1-derivative
associated with the i^{th} linear parameter. The _{Ñ} here indicates that
the differentiating scope of the "interior" Ñ_{p} is to be considered as "leaping leftwards" to apply only to the
instance of p as primary parameter.
In the case of 1-multitensor F(p,a_{1}_{p}) = F_{p}(a_{p}) we can unambiguously define
the conveyed 1-derivative operator by
Ñ_{→p} F_{p}(a_{p}) º
F_{p}(Ñ_{Üp}a_{p}) º
F_{p}_{Ñ}(Ñ_{p}a_{p})
= å_{i=1}^{N} Ð_{eiß}F_{p}(e^{i}a_{p})
= å_{i=1}^{N} (Ð_{ei}F_{p})(e^{i}a_{p}) .
We can move the "scalar" Ð_{eiß} "leftwards" out of the F_{p}() only because
F_{p}(a) is linear in a.
As before, the differentiating scope of the Ñ_{p} geometrically acting on the a_{p} applies not to a_{p}
but to the p in the primary argument. a_{p} (ie. the value of a(p) at p) geometrically affects the
result of the differentiation but there is no differentiation of a(p) itself and values of a
other than at p have no effect on the result, which is why we can informally think of "holding them constant".
The Ñ_{→} notation is intended as suggestive of the geometrical "nature" of the Ñ_{p}
hopping "rightwards" into the non-primary parameters while the differentiating scope remains
exclusively with the primary parameter.
The conveyed derivative grade decomposes as
F_{p}(Ñ_{Üp}a_{p}) º
F_{p}(Ñ_{Üp}¿a_{p} + Ñ_{Üp}Ùa_{p})
= F_{p}(Ñ_{Üp}¿a_{p}) + F_{p}(Ñ_{Üp}Ùa_{p})
= F_{p}_{Ñ}(Ñ_{p}¿a_{p} )
+ F_{p}_{Ñ}(Ñ_{p}Ùa_{p}) .
If a 1-tensor is extended outermorphically to take a multivector argument with F_{p}(a)=a for scalar a
it is natural to write
Ñ_{→p} F_{p}(1) = F_{p}(Ñ_{Üp})
= F_{p}_{Ñ}(Ñ_{p})
= å_{i=1}^{N} (Ð_{ei} F_{p})(e^{i})
= å_{i=1}^{N} Ð_{ei}F_{p}(e^{i})
.
We say a 1-tensor ¦_{p}(a) is zero divergent if
Ñ_{p}¿¦_{p}(a) = 0 " a
which is equivalent to ¦^{D}_{p}(Ñ_{Üp})¿a = 0 " a
Û ¦^{D}_{p}(Ñ_{Üp})=0 .
If ¦_{p} is symmetric, we thus have ¦_{p}(Ñ_{Üp})=0 Û Ñ_{p}¿¦_{p}(a)=0 " a .
Suppose we have a (t;k)-multiform (t-vector-valued point-dependant linear function
of a k-vector) F_{p}(a_{k}) for k³1. The conveyed derivative induces a (t;k+1)-multiform
F^{Ñ→p¿}(a_{k+1}) º
(Ñ_{→p}¿F_{p})(a_{k+1}) º
F_{p}(Ñ_{Üp}¿a_{k+1} )
= F_{p}_{Ñ}(Ñ_{p}¿a_{k+1}) ;
and a (t;k-1)-multiform
F^{Ñ→pÙ}(a_{k-1}) º
(Ñ_{→p}ÙF_{p})(a_{k-1}) º
F_{p}(Ñ_{Üp}Ùa_{k-1} )
= F_{p}_{Ñ}(Ñ_{p}Ùa_{k-1}) .
F_{p}(a_{k+1}.Ñ_{Üp}) = (-1)^{k} F_{p}(Ñ_{Üp}¿a_{k+1}) is known as
the exterior 1-differential and is frequently denoted dF_{p} .
F_{p}(a_{k-1}ÙÑ_{Üp}) = (-1)^{k-1} F_{p}(Ñ_{Üp}Ùa_{k-1}) is known
as the interior 1-differential. In most of the literature, the 1- is unstated.
The Expanded k-blade contraction rule then yields, for example,
Ñ_{→p} F_{p}(a_{1}Ùa_{2}Ùa_{3})
= F_{p}(Ñ_{Üp}¿(a_{1}Ùa_{2}Ùa_{3}))
= F_{p}((Ñ_{p}¿a_{1})(a_{2}Ùa_{3})
- (Ñ_{p}¿a_{2})(a_{1}Ùa_{3})
+ (Ñ_{p}¿a_{3})(a_{1}Ùa_{2}))
= (Ñ_{p}¿a_{1})F_{p}_{Ñ}(a_{2}Ùa_{3})
- (Ñ_{p}¿a_{2})F_{p}_{Ñ}(a_{1}Ùa_{3})
+ (Ñ_{p}¿a_{3})F_{p}_{Ñ}(a_{1}Ùa_{2})
= Ð_{ßa1}F_{p}(a_{2}Ùa_{3})
+ Ð_{ßa2}F_{p}(a_{3}Ùa_{1})
+ Ð_{ßa3}F_{p}(a_{1}Ùa_{2})
= S_{1←2←3} Ð_{ßa1}F_{p}(a_{2}Ùa_{3})
Operating on and with Ñ
It is important not to confuse the conveyed derivative F_{p}(Ñ_{Üp}a_{p}), whose differentiating scope is restricted to the primary parameter, with the full derivative Ñ_{p}F_{p}(a_{p}) of the compound function.
It can be shown that (Ñ_{Üp}¿)^{2} = 0 with regard to its action on functions satisfying the integrability condition.
We can generalise to the
conveyed multiderivative
Ñ_{p→} F_{p}(a_{p})
º å_{k=0}^{2^{N}-1} Ð^{p}_{e[.k.]ß} (F_{p}(e^{[.k.]}a_{p}))
= å_{k=0}^{2^{N}-1} (Ð^{p}_{e[.k.]} F_{p})(e^{[.k.]}a_{p})
Adjoints
The adjoint of a (nonlinear) 1-field ¦ is a 1-tensor
defined by
¦^{D}(u) º Ñ_{x}(u¿¦(x)) .
If ¦:V_{M} ® U_{N} then ¦^{D} : U_{N} ®
V_{M}.
Let a Î U_{N}, b Î V_{M}.
For linear ¦, the (outtermorphism extended) adjoint has the property that
(a¦(b))_{<0>} = (b¦^{D}(a))_{<0>}
and in particular we have the 1-vector ¿ crossover rule
a¿¦(b) = ¦^{D}(a)¿b
which provides an alternate definition for the adjoint of a 1-tensor.
Setting a=b=i
gives |¦| = |¦^{D}| .
The ¿ crossover rule has operator formulation
a¿((b¿Ñ_{x})¦(x)) = b¿(Ñ_{x}(a¿¦(x)))
for general ¦.
We also have
a¿¦^{D}(b) = ¦^{D}(¦(a)¿b)
and
b¿¦(a) = ¦(¦^{D}(b)¿a)
[ note ¿ rather than . ] .
If ¦(x)=(a+bb_{2})x where b_{2} is zero or a 2-vector containing x (so that ¦(x) remains 1-vector valued) then ¦^{D}(x)= x(a+bb_{2})=(a-bb_{2})x .
More generally, if ¦(x) = a_{x}xb_{x} preserves grade then
¦^{D}(x) = b_{x}xa_{x} .
[ Proof : Cyclic scalar rule .]
Hence for any multivector conjugation or operation ^{^} with a^{^}^{^}=a we have
a_{^}^{D} = a^{^}_{^} .
If ¦ is expressed with regard to an orthonormal basis {e_{i}} as ¦^{ i}_{j} then ¦^{D} is expressed as ¦^{D}^{ i}_{j} = e_{i}e_{j}¦^{ j}_{i} where e_{i} is the signature of e_{i}. The adjoint can thus be thought of as a "signature reflected transpose". [ Proof : ¦^{D}^{ i}_{j} = e^{i}¿(¦^{D}(e_{j})) = e_{i}e_{j}e_{i}¿¦^{D}(e^{j}) = e_{i}e_{j}¦(e_{i})¿e^{j} = e_{i}e_{j}¦^{ j}_{i} .]
If ¦ is invertible,
¦^{D}(a¿b) = ¦^{-1}(a)¿¦^{D}(b) .
In particular [b=i]
¦^{-1}(a) = (i^{2})¦^{D}(a^{*})^{*} / |¦| .
The crossover rule gives ¦^{-1}(e_{i})¿¦^{D}(e^{j}) = ¦(¦^{-1}(e_{i}))¿e^{j} = e_{i}¿e^{j}
so ¦^{D} provides an inverse frame for {¦^{-1}(e_{i})}.
If ¦ is an invertible linear Lorentz transformation, ¦^{D} = ¦^{-1} .
[ Proof : a¿b = ¦(a)¿¦(b)
= a¿¦^{D}(¦(b)) " a,b .]
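The "signature reflected transpose" picture and the Lorentz-transform result above can be illustrated numerically. The sketch below (an assumption-laden matrix model: a diagonal metric G of signatures, adjoint taken as G^{-1}F^{T}G, and a 1+1 boost standing in for a general Lorentz transform) checks the ¿ crossover rule and that the boost's adjoint is its inverse.

```python
# Matrix sketch of the adjoint with a 1+1 Minkowski metric (assumed model):
# adjoint = "signature reflected transpose" G^-1 F^T G, and for a Lorentz
# boost the adjoint equals the inverse.
import numpy as np

G = np.diag([1.0, -1.0])                      # signatures e_i^2
dot = lambda u, v: u @ G @ v                  # u ¿ v in this metric

F = np.array([[1.3, 0.4],
              [-0.2, 0.9]])                   # an arbitrary linear 1-tensor
F_adj = np.linalg.inv(G) @ F.T @ G            # signature-reflected transpose

a = np.array([0.5, 2.0])
b = np.array([-1.0, 0.7])
print(np.isclose(dot(a, F @ b), dot(F_adj @ a, b)))   # True: crossover rule

phi = 0.3                                     # boost rapidity
L = np.array([[np.cosh(phi), np.sinh(phi)],
              [np.sinh(phi), np.cosh(phi)]])  # Lorentz boost
L_adj = np.linalg.inv(G) @ L.T @ G
print(np.allclose(L_adj, np.linalg.inv(L)))   # True: adjoint = inverse
```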
Since ¦^{Ñ} is linear for general ¦, the above results relating ¦^{D} to ¦ for linear ¦ also hold relating
¦^{D} to ¦^{Ñ} for general ¦. In particular we have
¦^{-D} º (¦^{D})^{-1} = (¦^{-1})^{D}
[ Proved under "Chain Rule" below. ]
¦^{-Ñ}(a) = i^{2}¦^{D}(a^{*})^{*} / |¦| .
¦^{D}^{D} = ¦^{Ñ}
[ Proof :
a¿¦^{Ñ}(b) = b¿¦^{D}(a) = a¿¦^{D}^{D}(b) since ¦^{D} is linear .]
¦^{Ñ}^{D} = ¦^{D}
[ Proof :
¦^{Ñ}^{D}_{x}(v) = Ñ_{a}(v¿¦^{Ñ}_{x}(a))
= Ñ_{a}(v¿(a¿Ñ_{x}¦(x)))
= (v¿Ñ_{x}¦(x)) = ¦^{D}_{x}(v)
.]
We can generalise to a multivector adjoint
¦^{D}(u) = Ñ_{x}(u_{*}¦(x)) satisfying the
_{*}-crossover rule
a_{*}¦(b) = b_{*}¦^{D}(a)
for linear ¦(x).
The adjoint of ¦(a)= fa is ¦^{D}(a)=af so we say a multivector f is self-adjoint
if it is central (commutes with everything).
An alternative (equivalent) definition of the adjoint based on the simplicial derivative
defined below is
¦^{D}(b_{<k>})
º
(k!)^{-1} (Ñ_{ak}Ù...ÙÑ_{a1}) (b_{*}(¦(a_{1})Ù...Ù¦(a_{k})))
= (k!)^{-1} (Ñ_{ak}Ù...ÙÑ_{a1})(b_{<k>} ¿ (¦(a_{1})Ù...Ù¦(a_{k})))
for k>0.
The Undirected Chain Rule
Given scalar field f and (nonlinear) 1-field ¦,
Ñ(f(¦(x))) = ¦^{D}_{x}( (Ñf)(¦(x)) )
.
[ Proof : a¿(Ñ(f(¦(x)))) =
Lim_{d®0}(( f(¦(x+da)) - f(¦(x)) )/d)
= Lim_{d®0}(( f(¦(x)+d¦^{Ñ}_{x}(a)) - f(¦(x)) )/d)
= ¦^{Ñ}_{x}(a)¿(Ñ_{¦(x)}f(¦(x)))
= ¦^{Ñ}_{x}(a)¿((Ñf)(¦(x)))
= a¿(¦^{Ñ}_{x}^{D}((Ñf)(¦(x))))
= a¿(¦^{D}_{x}((Ñf)(¦(x))))
" a
.]
This provides the multivector calculus formulation of the Chain Rule which we can write as
Ñ_{x} = ¦^{D}_{x}(Ñ_{¦(x)})
;
Ñ_{¦(x)} = ¦^{-D}_{x}(Ñ_{x})
.
The Chain Rule gives
(¦g)^{D} = g^{D}¦^{D}
[ Proof :
(¦g)^{D}(v)=Ñ_{x}(v¿¦(g(x)))
= (g^{D}(Ñ_{g(x)}))(v¿¦(g(x)))
= g^{D}(Ñ_{g(x)})(v¿¦(g(x)))
= g^{D}(¦^{D}(v)) .]
Thus ¦^{-D} º (¦^{D})^{-1} = (¦^{-1})^{D} .
[ Proof : Set g=¦^{-1} in
(¦g)^{D} = g^{D}¦^{D}
.]
The directed chain rule can be expressed as Ð^{x}_{a} F(g(x)) = F^{Ñ}(g^{Ñ}(a)) = F^{Ñ}((Ñ_{x}_{*}a)g(x)) .
Of particular interest is the case where g(x)=g(x) is scalar valued with derivative
g^{Ñ}(a) = (a_{*}Ñ_{x})g(x) = a_{*}(Ñ_{x}g(x))
= (aÑ_{x}g(x))_{<0>} ;
and where F mapping scalars to multivectors
has a derivative F^{Ñ}(x)
= F^{Ñ<0>}(x)
º F'(x) = Ñ_{<0>} F(x) = ¶F(x)/¶x mapping scalar
x to multivectors of like grade(s) to F.
We then have
Ð_{a}F(g(x)) = F'(g(x))g^{Ñ}(a)
= (a_{*}(Ñ_{x}g(x))) F'(g(x)) which we will call the
directed scalar-funneled chain rule.
Whence the
undirected scalar-funneled chain rule
Ñ_{x} F(g(x)) = (Ñ_{x}g(x)) F'(g(x)) .
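The scalar-funneled chain rule can be checked numerically in the simplest case where F is also scalar-valued (so Ñ_{x} reduces to the ordinary gradient) — a sketch, with g(x)=|x|^{2} and F=sin chosen purely for illustration:

```python
# Numerical check of grad_x F(g(x)) = (grad_x g(x)) F'(g(x)) for a
# scalar "funnel" g and scalar-valued F (an illustrative special case).
import numpy as np

def grad(f, x, h=1e-6):
    """Central-difference gradient of scalar f at x."""
    g = np.zeros_like(x)
    for k in range(len(x)):
        e = np.zeros_like(x); e[k] = h
        g[k] = (f(x + e) - f(x - e)) / (2 * h)
    return g

g  = lambda x: np.dot(x, x)              # g(x) = |x|^2, grad g = 2x
F  = np.sin                              # F : scalars -> scalars
Fp = np.cos                              # F'

x0 = np.array([0.3, -0.8, 0.5])
lhs = grad(lambda x: F(g(x)), x0)        # grad of the composite
rhs = grad(g, x0) * Fp(g(x0))            # funneled chain rule
print(np.allclose(lhs, rhs, atol=1e-6))  # True
```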
The Kinematic Rules
A particularly useful result is
(Ñ(bab^{§}))_{<0>}
= 2((Ñb) a_{<+§>} b^{§})_{<0>}
for constant (point-independent) a.
[ Proof :
(Ñ(bab^{§}))_{<0>}
= (Ñ(b_{Ñ}ab^{§})+Ñ(ba(b^{§}_{Ñ})))_{<0>}
= (Ñ(b)ab^{§})_{<0>}+(ba(b^{§}Ñ))_{<0>}
= (Ñ(b)ab^{§})_{<0>}+(ba(Ñb)^{§})_{<0>}
= (Ñ(b)ab^{§})_{<0>}+(((Ñb)a^{§}b^{§})^{§})_{<0>}
= (Ñ(b)ab^{§})_{<0>}+((Ñb)a^{§}b^{§})_{<0>}
= (Ñ(b)(a+a^{§})b^{§})_{<0>}
.]
Consequently, we have the kinematic rule
Ñ¿(b_{§}(a))
º Ñ¿(bab^{§}) = 2((Ñb)ab^{§})_{<0>}
for constant 1-vector a . Note that we must retain the _{<0>} on the right hand side even
if the left hand side is scalar-valued.
Also
(Ñ(bb^{§}))_{<0>}
= 2((Ñb)b^{§})_{<0>} = 0
whenever bb^{§} is constant (eg. for a rotor field b).
If b_{§} preserves grade then
Ñ(bab^{§}) =
2(Ñb_{Ñ})ab^{§}
and also
ÑÙ(bab^{§}) = ÑÙ(b_{Ñ}ab^{§}) - b(aÙÑ)b_{Ñ}^{§}
[ Proof :
ÑÙbab^{§} = ÑÙ(b_{Ñ}ab^{§}) - bab_{Ñ}^{§}ÙÑ
= ( (Ñb)ab^{§} - ba(Ñb)^{§} )_{<2>}
= ( (Ñb)ab^{§} + (ba(Ñb)^{§})^{§} )_{<2>}
= 2(Ñb)ab^{§} since a^{§}=a
ÑÙbab^{§} = ÑÙ(b_{Ñ}ab^{§}) - bab_{Ñ}^{§}ÙÑ
= ( (Ñb)ab^{§} - ba(Ñb)^{§} )_{<2>}
= ÑÙ(b_{Ñ}ab^{§}) - b(aÙÑ)b_{Ñ}^{§}
.]
Hence
Ñbab^{§} =
((Ñb)ab^{§})_{<0>} + (Ñb)ab^{§} - b(aÙÑ)b_{Ñ}^{§}
Given how many common geometric transformations are representable in the form
b_{§}(a) º bab^{§} ,
this is a profoundly important result.
Similarly
the Clifford kinematic rule
(Ñ(bab^{§}^{#}))_{<0>}
= 2((Ñb) a_{<-§#>} b^{§}^{#})_{<0>}
for constant a
yields
Ñ¿(bab^{§}^{#}) = 2((Ñb)ab^{§}^{#})_{<0>}
for constant 1-vector a, with
Ñ(bab^{§}^{#}) = 2(Ñb)ab^{§}^{#} if b_{§}^{#} preserves
grade.
[ Proof :
(Ñ(bab^{§}^{#}))_{<0>}
= (Ñ(b_{Ñ}ab^{§}^{#})+Ñ(ba(b^{§}^{#}_{Ñ})))_{<0>}
= (Ñ(b)ab^{§}^{#})_{<0>}+(ba(b^{§}^{#}Ñ))_{<0>}
= (Ñ(b)ab^{§}^{#})_{<0>}+(ba(Ñb^{#})^{§})_{<0>}
= (Ñ(b)ab^{§}^{#})_{<0>}+(((Ñ^{#}b)a^{§}^{#}b^{§}^{#})^{§}^{#})_{<0>}
= (Ñ(b)ab^{§}^{#})_{<0>}-((Ñb)a^{§}^{#}b^{§}^{#})_{<0>}
= (Ñ(b)(a-a^{§}^{#})b^{§}^{#})_{<0>}
ÑÙbab^{§}^{#} = ÑÙ(b_{Ñ}ab^{§}^{#}) - bab_{Ñ}^{§}^{#}ÙÑ = ( (Ñb)ab^{§}^{#} + ba(Ñb)^{§}^{#} )_{<2>} = ( (Ñb)ab^{§}^{#} - (ba(Ñb)^{§}^{#})^{§}^{#} )_{<2>} = 2(Ñb)ab^{§}^{#} since a^{§}^{#}=-a .]
If even rotor field R_{p} has R_{p}R_{p}^{§}=1 then
w_{p}(d) º 2(Ð_{d}R_{p})R_{p}^{§} has grade <2;6;10;...>
and
we have the rotor equation of motion
Ð_{d}R_{p} = ½w_{p}(d)R_{p} .
For N<6 , w_{p}(d) is a pure bivector and so w is a (2;1)-tensor.
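For N=3 the rotor equation of motion can be checked numerically in the familiar unit-quaternion model of Â_{3+} (an assumed identification: quaternion = scalar + "bivector" part, quaternion conjugate standing in for §). With R(t) = (tB/2)^{↑} for a fixed B, the quantity w = 2R'R^{§} should recover B and have zero scalar part:

```python
# Quaternion sketch of w = 2 (dR) R^dagger for R(t) = exp(t B / 2):
# w should equal B and be "pure bivector" (zero scalar part).
import numpy as np

def qmul(p, q):
    """Hamilton product; quaternion stored as (scalar, x, y, z)."""
    s1, v1 = p[0], p[1:]
    s2, v2 = q[0], q[1:]
    return np.concatenate(([s1*s2 - v1 @ v2],
                           s1*v2 + s2*v1 + np.cross(v1, v2)))

def qconj(q):
    return np.concatenate(([q[0]], -q[1:]))

def rotor(t, B):
    """exp(t B / 2) for pure quaternion B."""
    half = 0.5 * t * B[1:]
    theta = np.linalg.norm(half)
    return np.concatenate(([np.cos(theta)], np.sin(theta) * half / theta))

B = np.array([0.0, 0.4, -0.3, 0.9])      # a fixed pure quaternion ("bivector")
t, h = 0.7, 1e-6
dR = (rotor(t + h, B) - rotor(t - h, B)) / (2 * h)   # central difference
w = 2 * qmul(dR, qconj(rotor(t, B)))
print(w)          # ~ B, with ~0 scalar part
```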
Taylor's Formula
The multivector version of Taylor's Formula :
¦(x+d) = å_{k=0}^{¥} ( (d¿Ñ_{x})^{k} / k ! ) ¦(x)
º e^{d¿Ñx}¦(x)
= e^{Ðd} ¦(x)
º Ð_{d}^{↑} ¦(x)
.
gives the approximation
¦(x+d) = ¦(x) + ¦^{Ñ}_{x}(d) + ½¦^{Ñ2}_{x}(d,d)
+ O(|d|^{3})
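The second-order truncation can be checked numerically for a scalar field, where the gradient and Hessian play the roles of ¦^{Ñ}_{x} and ¦^{Ñ2}_{x} (a sketch; the test function is an arbitrary smooth choice):

```python
# Second-order Taylor check for a scalar field:
# f(x+d) ~ f(x) + grad f . d + 1/2 d^T H d, with error O(|d|^3).
import numpy as np

f = lambda x: np.exp(x[0]) * np.sin(x[1])

def grad(x, h=1e-5):
    return np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(2)])

def hessian(x, h=1e-4):
    H = np.empty((2, 2))
    for i, ei in enumerate(np.eye(2)):
        for j, ej in enumerate(np.eye(2)):
            H[i, j] = (f(x + h*ei + h*ej) - f(x + h*ei - h*ej)
                       - f(x - h*ei + h*ej) + f(x - h*ei - h*ej)) / (4*h*h)
    return H

x0 = np.array([0.2, 0.5])
d  = np.array([0.01, -0.02])
approx = f(x0) + grad(x0) @ d + 0.5 * d @ hessian(x0) @ d
print(abs(f(x0 + d) - approx))   # small, of order |d|^3
```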
A general ¦_{x} can be characterised at x (to second order) by
its value at x, a 1-tensor ¦^{Ñ}_{x}, and a symmetric 2-tensor ¦^{Ñ2}_{x}.
Contraction and Trace
Suppose ¦(a,b,..,d) = å_{i j k .. m}a^{j}b^{k}..d^{m}f^{ i}_{jk..m}e_{i}
[ where f^{ i}_{jk..m} º e^{i}¿¦(e_{j},e_{k},..,e_{m}) ] is a (1;k)-tensor.
Ñ_{a}¿¦(a,b,..,d)
= å_{i=1}^{N} e^{i}¿¦(e_{i},b,..,d)
is a (0;k-1)-tensor.
[ Proof :
Ñ_{a}¿¦(a,b,..,d) º
(å_{l}e^{l}(¶/¶a^{l}))¿(å_{i j k .. m}a^{j}b^{k}..d^{m}f^{ i}_{jk..m}e_{i} )
= å_{i j k .. m}b^{k}..d^{m}f^{ i}_{jk..m}e^{j}¿e_{i}
= å_{k..m}b^{k}..d^{m}(å_{i}f^{ i}_{ik..m})
.]
Divergence,
usually known in this context as contraction, with regard to a particular nonprimary parameter (or suffix)
thus provides a frame-invariant way of decrementing both the degree and
type of a tensor and so reducing its rank by two.
The resultant tensor has representation å_{i} f^{ i}_{ik..m} .
When t=k=1 (ie. a 1-vector valued function of a 1-vector) contraction produces scalar
å_{i }f^{i}_{i} known as the trace of ¦ which corresponds to the traditional
matrix trace (sum of leading diagonal elements).
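In coordinates (an orthonormal Euclidean frame assumed), contraction is a plain index sum, which the sketch below illustrates for a (1;2)-tensor and for the t=k=1 trace case:

```python
# Contraction in coordinates: for a (1;2)-tensor with components
# f[i, j, k] = e^i . f(e_j, e_k), contracting the first parameter gives the
# (0;1)-tensor sum_i f[i, i, k]; for a (1;1)-tensor it is the matrix trace.
import numpy as np

N = 3
rng = np.random.default_rng(0)
f = rng.standard_normal((N, N, N))       # f[i, j, k]

contracted = np.einsum('iik->k', f)      # sum_i f^i_{ik}
print(contracted.shape)                  # (3,) : degree and type both dropped

M = rng.standard_normal((N, N))          # a (1;1)-tensor as a matrix
print(np.isclose(np.einsum('ii->', M), np.trace(M)))   # True
```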
Another way to decrement the type of a tensor is to take an inner product with a 1-vector
u¿¦(a,b,..,d) . This preserves the degree, so the rank is also decremented.
We introduce the notation Ñ_{Þ} to indicate contraction with regard to the first (leftmost)
non-primary parameter of a tensor ¦_{p}(a,b,...) in order to allow the abbreviation
Ñ_{Þ} ¦_{p} º Ñ_{a} ¦_{p}(a,b,...) .
The non-primary curl Ñ_{a}Ù¦(a,b,..,d) is known in this context as protraction. It provides a (t+1;k-1)-tensor, so preserving the rank.
The contraction and protraction of an r-multiform (ie. an r-vector valued linear function of a
r-blade) are of particular interest. We say an r-multiform
F(a_{1},a_{2},...a_{r})=F(a_{1}Ù...a_{r}) is contractionless if
Ñ_{a1}¿F(a_{1}Ù...a_{r}) = 0 and this then implies that
Ñ_{ak+1}¿(Ñ_{(k)}ÙF(a_{1}Ù...a_{r}))=0 for any 1£k<r
and that
Ñ_{(k)}F(a_{1}Ù...a_{r}) = Ñ_{(k)}ÙF(a_{1}Ù...a_{r}).
Similarly F is protractionless if
Ñ_{a1}ÙF(a_{1}Ù...a_{r}) = 0 and this then implies that
Ñ_{ak+1}Ù(Ñ_{(k)}¿F(a_{1}Ù...a_{r}))=0 for any 1£k<r
and that
Ñ_{(k)}F(a_{1}Ù...a_{r}) = Ñ_{(k)}¿F(a_{1}Ù...a_{r}).
Hence all contractions of a protractionless multiform are protractionless; and all
protractions of a contractionless multiform are contractionless.
[ Proof : Induction on k. See Hestenes & Sobczyk (3-9). .]
Thus when acting on a protractionless multiform
Ñ_{Þ}^{k}
= Ñ_{Þ}¿Ñ_{Þ}¿...¿Ñ_{Þ}
and when acting on a contractionless multiform
Ñ_{Þ}^{k} = Ñ_{Þ}ÙÑ_{Þ}Ù...ÙÑ_{Þ} .
Covariance
We will
temporarily "promote" position x from its suffix position to a bracketed argument.
Let F(x,..) = F_{x} be a (t;k)-tensor.
Let ¦ be a (nonlinear) invertible 1-field which we interpret as
returning points (ie. a transformation of the pointspace) in a "relabelling" or "coordinate transform"
rather than a "warping" context. Set y º ¦(x).
¦ induces the
substitutive transform F^{¦}(x)ºF(¦^{-1}(x))
, ie. F^{¦} ºF¦^{-1} , or, equivalently,
F^{¦}(y) ºF(x) .
[ In much of the literature, inexplicit "prime" notations such as F' or F^{*} replace
F^{¦} ]
If at a given point x we have a nondegenerate but otherwise general (ie. neither necessarily orthogonal nor normal)
basis N-frame {e_{i}} then ¦ induces at y=¦(x) an "¦-transformed" N-frame { f_{i}=¦^{-Ñ}_{x}(e_{i}) }
having inverse frame { f^{i} = ¦^{D}_{x}(e^{i}) }
.
We have
f^{i}¿e_{j} = ¶y^{i}/¶x^{j} ï_{x} and also
e^{i}¿f_{j} = ¶x^{i}/¶y^{j} ï_{y} .
[ Proof :
f^{i}¿e_{j} = ¦^{D}_{x}(e^{i})¿e_{j} = e^{i}¿¦^{Ñ}_{x}(e_{j}) = ¶y^{i}/¶x^{j} ï_{x} .
Second result follows similarly. .]
With regard to the {f_{i}} N-frame, if t,k > 0 then F^{¦} has coordinates
F_{y }^{¦}^{ m..q}_{i..l} º f^{ q..m} ¿ F_{y}^{¦}(f_{i},f_{j},..,f_{l}) = f^{ q..m} ¿ F_{x}(f_{i},f_{j},..,f_{l})
= ¦^{D}(e^{q..m}) ¿ F_{x}(¦^{-Ñ}(e_{i}),..,¦^{-Ñ}(e_{l})) = ¦^{D}(e^{q..m}) ¿ (¦^{-Ñ}_{x})^{k}(F_{x}(e_{i},..,e_{l}))
= e^{q..m} ¿ (¦^{-Ñ}_{x})^{k-1}(F_{x}(e_{i},..,e_{l})) = (¦^{D})^{k-1}(e^{q..m}) ¿ F_{x}(e_{i},..,e_{l}) .
1-tensors
Linear ¦ is symmetric iff ¦(a) = ¦^{D}(a)
(an equivalent condition is a¿¦(b) = ¦(a)¿b " a,b
[ and hence ¦^{ i}_{j}=¦^{ j}_{i}
]
).
In an N-D Euclidean space, symmetric tensors are diagonalisable, that is, an
eigenframe {d_{i}}
exists with ¦(d_{i})
=l_{i}d_{i}
where l_{i} is the scalar
eigenvalue associated with 1-vector 1-eigenblade
d_{i}.
This remains true for Minkowski spaces only if N£3.
Projection is symmetric, ie. ¯_{b}^{D} = ¯_{b}, since
c¿(¯_{bk}(d))
= (-1)^{k+1}(¯_{bk}(d))ëc
= (-1)^{k+1}((d¿b)¿b^{-1})ëc
= (-1)^{k+1}((d¿b^{-1})¿b)ëc
= (-1)^{k+1}d¿¯_{bk}(c)
= ¯_{bk}(c)¿d .
A 1-tensor ¦ is skewsymmetric iff ¦(a) = -¦^{D}(a)
(an equivalent condition is a¿¦(b) = -¦(a)¿b " a,b
[ and hence ¦^{ i}_{j}=-¦^{ j}_{i} ]
).
a¿(ÑÙ¦(x))
= a¿(Ñ¦(x) - Ñ¿¦(x))
= a ¿(Ñ¦(x)) - a¿(Ñ¿¦(x))
=?= (a¿Ñ)¦(x) - a¿(Ñ¿¦(x))
= ¦^{Ñ}(a) - ¦^{D}(a)
= ¦(a) - ¦^{D}(a) .
Thus a symmetric 1-tensor has zero curl.
A skewsymmetric 1-tensor has ¦^{Ñ}(a) = ½a¿(ÑÙ¦(x))
= ½(a¿w(x))
where w(x) = ÑÙ¦(x) is a
bivector-valued functional fully characterising ¦.
The divergence of a skewsymmetric ¦ is zero.
A linear 1-tensor ¦ can be expressed as ¦^{[+]}(x) + ¦^{[-]}(x)
where
¦^{[+]}(x) º ½(¦^{Ñ}(x)+¦^{D}(x)) = ½Ñ_{x}(x¿¦(x))
is symmetric
and
¦^{[-]}(x) º ½(¦^{Ñ}(x)-¦^{D}(x)) = ½x¿(Ñ_{x}Ù¦(x))
is antisymmetric.
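In matrix form (orthonormal Euclidean frame assumed, so the adjoint is the plain transpose) this split is the familiar symmetric/skew decomposition — a sketch:

```python
# Symmetric/skew split of a linear 1-tensor as a matrix (Euclidean frame
# assumed, adjoint = transpose): F = F_plus + F_minus.
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 3))
F_plus  = 0.5 * (F + F.T)                 # symmetric part (zero curl)
F_minus = 0.5 * (F - F.T)                 # skew part (zero divergence)

print(np.allclose(F, F_plus + F_minus))   # True
print(np.allclose(F_plus, F_plus.T))      # True: symmetric
print(np.allclose(F_minus, -F_minus.T))   # True: skewsymmetric
print(np.isclose(np.trace(F_minus), 0.0)) # True: skew part has zero trace
```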
2-tensors
Given a general multivector-valued function of two multivectors ¦(a,b) we can define
the symmetric symmetroll of ¦ by
¦^{[+]}(a,b) º
¦(a,b) + ¦(b,a) , and
the skewsymmetric skewsymmetroll of ¦ by
¦^{[-]}(a,b) º
¦(a,b) - ¦(b,a) .
We can express
¦^{[-]}(a,b) as a function of bivectors via
¦^{[-]}(a,b)=
¦^{[-]}(aÙb).
Clearly ¦(a,b) = ½¦^{[+]}(a,b) + ½¦^{[-]}(a,b)
so any function of two multivectors can be expressed as a sum of symmetric and skewsymmetric parts.
Further, (a¿Ñ)b - (b¿Ñ)a
= Ñ.(aÙb) + a(Ñ¿b) - b(Ñ¿a)
is bilinear and skewsymmetric in a,b.
Characterising General Functions
Connections
A general ¦(a) generates an even multivector field
w_{a} º a^{-1}¦(a)
called the right-connection of ¦ . If ¦ is linear, its right-connection
can be represented by
N even multivectors w_{i} º e^{i}¦(e_{i}) .
¦(a) = aw_{a}
= w_{0a}a - w_{2}_{a}.a
= w_{0a}a - w_{2}_{a}×a
Scalar w_{0a}=a^{-1}¿¦(a) can be regarded as the "expansion" component of ¦ ;
bivector w_{2}_{a}=a^{-1}Ù¦(a) as the rotation component .
¦(a) can also be represented by an even multivector field w_{a} º ¦(a)a^{-1} called the left-connection of ¦. This is just the right-connection with its bivector component negated.
Connections are most useful when a^{-1}¿¦(a) = 0
" a , for then they are pure bivectors.
We then have
¦(a) = a¿w_{a} = - w_{a}.a = - w_{a}×a
.
In particular, this is the case when representing the differential ¦^{Ñ} of either
a directional field (ie. a function mapping points to unit 1-vectors)
or a directional transform (ie.. a length-preserving function mapping 1-vectors to 1-vectors)
when we have the alternate affine approximation
¦(a+da) » ¦(a) + da¿w_{daa}
.
[ Proof : ¦(a)^{2} = ±1 Þ (Ñ_{a}¿b)(¦(a)^{2}) = 0
Þ ((Ñ_{a}¿b)¦(a))¿¦(a) = 0 .]
For N=3, N pure bivectors are specified by 9 scalar parameters (cf.
the conventional 3x3 matrix representation of an affine transform).
For N=4, 24 scalar parameters are required as compared to the 16 elements
in a 4x4 array.
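For N=3 the pure-bivector case has a familiar concrete model: ¦(a) = a¿w_{2} is, under the usual duality of bivectors with axial vectors, the cross product with a fixed vector w. The sketch below (an assumed Euclidean matrix/cross-product model) shows the matrix of such a map is skew with zero "expansion" component, and that w is recoverable from its entries:

```python
# N=3 Euclidean sketch: a zero-expansion connection acts as f(a) = w x a,
# whose matrix is trace-free and skew, and which determines w.
import numpy as np

w = np.array([0.2, -0.5, 1.0])           # dual vector of the bivector w_2
f = lambda a: np.cross(w, a)             # f(a) = w x a

F = np.column_stack([f(e) for e in np.eye(3)])   # matrix of f
print(np.trace(F))                       # 0.0 : no "expansion" component
print(np.allclose(F, -F.T))              # True: pure rotation component

w_rec = np.array([F[2, 1], F[0, 2], F[1, 0]])    # read w back off the matrix
print(np.allclose(w_rec, w))             # True
```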
Lorentz Transforms
For a Lorentz transform ¦ , the transformed frame { ¦(e_{i}) } remains orthonormal
and we can represent ¦ with a (nondirectional) unit rotor
R with
¦(a)
= R_{-1}(a)
º RaR^{-1} = RaR^{§} .
Exponentiated Form A connection can be represented in exponentiated or spinor form as r_{a} e^{fa ba} everywhere except at a=0 , with unit 2-blade b_{a}=(¦(a)Ùa^{-1})^{~} ; scalar f_{a}= cos^{-1}(a^{~} ¿ ¦(a)^{~}) ; scalar r_{a}=|¦(a)|/|a| .
A general ¦(a,b) can be bilinearly approximated as
å_{i j=1}^{N} a^{i}b^{j}¦(e_{i},e_{j})
using N^{2} 1-vectors ¦(e_{i},e_{j}).
A skewsymmetric bilinear
¦^{[-]}(a,b) = ¦^{[-]}(aÙb)
can be represented by ½N(N-1) 1-vectors ¦^{[-]}(e_{ij})
or via
w_{aÙb} = ¦^{[-]}(aÙb)(aÙb)^{-1}
if aÙb ¹ 0 ; 0 else
with ½N(N-1) 3-vectors w_{ij} º ¦^{[-]}(e_{ij})e_{ij}^{-1} .
Directed Multivector Derivatives
Let
Ð_{d} f(x) = Lim_{e ® 0} e^{-1}d^{-1}(
f(x+ed) -
f(x) )
so that
Ð_{d} = d^{-1} Ð_{d} for invertible d
and Ñ = å_{i=1}^{N} Ð_{ei} .
We say f(x) is regular (aka. analytic or holomorphic)
at a particular x_{0} if
Ð_{d} f(x) exists for x=x_{0} independent of "direction" d. More generally, we think
of the limit Lim_{x ® x0} (x-x_{0})^{-1} (f(x)-f(x_{0}))
existing with the same limit value regardless of how x approaches x_{0}.
We can then meaningfully denote this direction-independant limit by f'(x).
Within C @ Â_{2 +} @ Â_{0,1} , for example, for
f(x)= f(x+iy) = u(x,y)+ iv(x,y)
to be regular at x_{0} we traditionally require the partial derivatives exist there and satisfy
the Cauchy-Riemann equations
¶u/¶x = ¶v/¶y ;
¶u/¶y = -¶v/¶x .
We can combine the Cauchy-Riemann equations into the complex identity
Ð_{1}f(z) = Ð_{i}f(z)
(equivalent to Ñ^{^}f(z) = 0 where i^{^}=-i )
as a necessary (but not sufficient) condition for regularity.
Only some functions are regular. f(z)=z is regular everywhere with f'(z)=1, for example, as is f(z)=
(az)^{↑} with f'(z)=a(az)^{↑} .
All rational polynomials in z are regular but f(z)=|z|^{2} is regular only at z=0 where f'(z)=0.
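The direction-independence defining regularity is easy to probe numerically: evaluate the difference quotient along several directions d and compare. A sketch (the sample point and directions are arbitrary choices):

```python
# Direction-independence of the complex difference quotient:
# f(z) = z^2 is regular (all directions give ~2z), while f(z) = |z|^2
# gives a direction-dependent limit away from z = 0.
def directed_quotient(f, z, d, eps=1e-7):
    """(f(z + eps d) - f(z)) / (eps d): the quotient along direction d."""
    return (f(z + eps * d) - f(z)) / (eps * d)

z = 0.4 + 0.9j
dirs = [1, 1j, (1 + 1j) / abs(1 + 1j)]

print([directed_quotient(lambda w: w * w, z, d) for d in dirs])
# all ~ 2z = 0.8 + 1.8j : regular, f'(z) = 2z

print([directed_quotient(lambda w: abs(w) ** 2, z, d) for d in dirs])
# values differ with direction: |z|^2 is not regular at z != 0
```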
Multivector Fractals
Let
y_{(n+1)} = y_{(n)}^{2} + y_{(0)} for n ³ 1.
We assume that y_{(0)} and y_{(1)} are given. If both are even
Â_{N+} multivectors
then the "orbit" remains in
Â_{N+}. For N=2 we have the Mandelbrot set, for N=3 we have a Julia set.
In general we have a sequence
y_{(n+1)} = F(y_{(n)})
and we define a set S by
S(b) = { y_{(0)} : ¦(y_{(n)}) = b for some
n £ n_{max} }
where ¦(y) = |y|^{2} or a similar scalar valued function.
Letting T^{i}_{j (n)} =
¶y_{(n)[.i.]}/¶y_{(0)[.j.]}
= Ð_{e[.j.]}(e^{[.i.]}¿y_{(n)})
we have
T^{i}_{j (n+1)} =
¶y_{(n+1)[.i.]}/¶y_{(0)[.j.]}
= å_{k} (¶F(y_{(n)})_{[.i.]}/¶y_{(n)[.k.]})
(¶y_{(n)[.k.]}/¶y_{(0)[.j.]})
= å_{k} (¶F(y_{(n)})_{[.i.]}/¶y_{(n)[.k.]}) T^{k}_{j (n)}
= å_{k} ((ÑF_{[.i.]})(y_{(n)}))_{[.k.]} T^{k}_{j (n)}
so we can compute T_{(n+1)} iteratively from
T_{(n)} and y_{(n)}.
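In the complex (N=2, even subalgebra) case the iteration and its derivative collapse to scalars, giving the familiar Mandelbrot derivative recursion — a sketch, with the sample parameters chosen purely for illustration:

```python
# Complex instance of the fractal derivative iteration: for
# y_{n+1} = y_n^2 + y_0 with y_0 = c, the derivative T_n = d y_n / d c
# obeys T_{n+1} = 2 y_n T_n + 1, computed alongside the orbit
# (the basis of Mandelbrot distance estimation).
def orbit_with_derivative(c, n_max=50, bailout=1e6):
    y, T = c, 1.0 + 0j               # y_1 = c, T_1 = dy_1/dc = 1
    for _ in range(n_max):
        T = 2.0 * y * T + 1.0        # derivative recursion
        y = y * y + c                # orbit recursion
        if abs(y) > bailout:
            break
    return y, T

y, T = orbit_with_derivative(-0.1 + 0.1j)   # inside the main cardioid
print(abs(y) < 2)                            # True: orbit stays bounded

y, T = orbit_with_derivative(1.0 + 1.0j)     # well outside the set
print(abs(y) > 2)                            # True: orbit escapes
```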
Suppose m£n_{max} is the lowest integer such that
¦(y_{(m)}) ³ b.
The normal to the surface S(¦(y_{(m)})) at y_{(0)}
is given by
Ñ(¦(F^{m}))(y_{(0)})
= T_{(m)}((Ñ¦)(y_{(m)}))
[ Proof :
d ¦(y_{(m)})/d y_{(0)[.i.]}
=
å_{k}(d ¦(y_{(m)})/d y_{(m)[.k.]})
(d y_{(m)[.k.]}/d y_{(0)[.i.]})
= å_{k}((Ñ¦)(y_{(m)}))_{[.k.]}T^{k}_{i (m)}
= (T_{(m)}(Ñ¦)(y_{(m)}))_{[.i.]} .]
[Under Construction]
Next : Multivectors as Manifolds