Logarithmic Arithmetic
Logarithms can be used to quickly multiply two numbers or to raise a
number to a (possibly nonintegral) power.
Logarithms are defined with respect to a positive base a.
The logarithm base a of a positive number x is the power to which
a must be raised to give x.
Writing log_{a}(x) for this number we have
x = a^{log_{a}(x)}.
The inverse of log_{a} is written antilog_{a}
and we have antilog_{a}(y) = a^{y}.
Logarithms are useful because
xy = a^{log_{a}(x)}a^{log_{a}(y)} = a^{log_{a}(x)+log_{a}(y)}
and so
log_{a}(xy) = log_{a}(x) + log_{a}(y).
Further x^{y} = (a^{log_{a}(x)})^{y} = a^{ylog_{a}(x)} and so
log_{a}(x^{y}) = ylog_{a}(x).
Combining these two results we also have
log_{a}(x/y) = log_{a}(x) - log_{a}(y).
Clearly log_{a}(a)=1 and
log_{a}(1)=0
and so antilog_{a}(1)=a
and antilog_{a}(0)=1.
Common values for a are two, ten, and the natural base
e ≈ 2.718.
We can change between these using:
log_{b}(x) = log_{a}(x)/log_{a}(b).
We write ln(x) for log_{e}(x).
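As a quick numerical check of these identities, a minimal Python sketch (the base a and the values x, y are arbitrary examples):

```python
import math

a, x, y = 2.0, 12.0, 3.0
log_a = lambda v: math.log(v, a)         # log base a

product_rule = log_a(x * y)              # should equal log_a(x) + log_a(y)
power_rule = log_a(x ** y)               # should equal y * log_a(x)
quotient_rule = log_a(x / y)             # should equal log_a(x) - log_a(y)
change_of_base = log_a(x) / log_a(10.0)  # should equal log_10(x)
```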
How errors in Log and Antilog tables affect final results
We are primarily interested in how the errors in our tabulated log
and antilog functions affect the accuracy of operations performed using
them. We will assume for the moment that the maximum absolute error of
a tabulated L~(x) approximating L(x)=Klog_{a}(x) is d and that
the maximum fractional error of a tabulated A~(x) approximating A(x)=a^{x/K}
is e.
Noting that the first two derivatives of L(x)=Klog_{a}(x) are
L'(x) = K/(ln(a)x) ; L"(x) = -K/(ln(a)x^{2})
and that the first two derivatives of A(x)=a^{x/K} are
A'(x) = ln(a)a^{x/K}/K ;
A"(x) = ln(a)^{2}a^{x/K}/K^{2},
we can write (with L~ and A~ denoting the tabulated approximations to L and A):
A~(L~(x)+L~(y))  ≤  A(L~(x)+L~(y))(1+e)
                 ≤  A(L(x)+L(y)+2d)(1+e)
                 =  a^{2d/K}(1+e)A(L(x)+L(y))
                 =  a^{2d/K}(1+e)xy.
Similarly A~(L~(x)+L~(y)) ≥ a^{-2d/K}(1-e)xy.
Now a^{g} ≈ 1 + gln(a)
for small g so, provided d is small compared to K, the fractional
error in the computed product as an approximation to xy is approximately
2ln(a)d/K + e.
For a=2 this is just under 1.39d/K + e.
If the A LUT has an index range from 0 to 2L(N), and it is the
direct inverse of the log table so that A[L[x]]=x, then the product of
two positive integers in the range 1 to N is approximated by
A[L[x]+L[y]]. However, a table of 2L(N) cells, each wide enough to hold
N^{2}, may require more memory than is available.
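The direct-inverse scheme can be sketched in a few lines of Python for small N. The scale factor K used here (256) and rounding-to-nearest are illustrative assumptions, not prescribed by the text:

```python
import math

N = 256                          # operands lie in 1..N
K = 256                          # assumed scale factor for this sketch
LN = round(K * math.log2(N))     # L(N): top of the log table's output range

# L[x] ~ K*log2(x); A is its direct inverse (A[L[x]] ~ x), indexed 0..2L(N)
L = [0] + [round(K * math.log2(x)) for x in range(1, N + 1)]
A = [round(2 ** (j / K)) for j in range(2 * LN + 1)]

def mul(x, y):
    """Approximate x*y for 1 <= x, y <= N with one add and three lookups."""
    return A[L[x] + L[y]]
```

Each A cell must hold values up to N^{2} and the table has 2L(N)+1 cells, which is exactly the memory cost warned about above.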
Examples
To illustrate the above we will consider multiplying two positive
nonzero 16-bit numbers using logs to give a 32-bit result.
We will assume that a total of 32 Kbytes is available for log and antilog tables.
Our first method will attempt to optimise speed, our second accuracy.
We will use binary logarithms (a=2) and assume that our options
for widths of LUT entries are 16 or 32 bits.
We will write N_{L},W_{L},N_{A},W_{A} for the length and width (in bytes) of
the log and antilog tables respectively.
Case 1 - Speed
We take zero base functions and insist that all contraction and
expansion functions be simple binary shifts (ie. multiplication by
integral powers of 2).
This effectively forces us to make N_{L} and N_{A}
powers of two.
Let us tentatively assign 16 Kbytes = 2^{14} bytes to the log table and
consider W_{L}=2, N_{L}=2^{13}.
We will use
X_{L}^{*} = 2^{-3}Z_{2^{16}}
as a representative portion of Z_{2^{16}} and a
contraction mapping l_{1}(x)=[2^{-3}x] obtained
by combining l_{11}(x)=2^{-3}x with
l_{12}(x)=[x].
We tabulate L[i]=[Klog_{2}(i)] for i ∈ Z_{2^{13}}\{0}, L[0]=0.
(This is better than setting X_{L}^{*}=X_{L}
and l_{12}(x)=[2^{-3}x] since then we must tabulate
[Klog_{2}(2^{3}i)], which is just the same table with 3K added to
every entry.)
We choose K so that L[N_{L}] is as close as possible to 2^{15} while
still below it (so that the sum of two table entries is always less than
2^{16}). Since log_{2}(2^{13})=13 we set K=2^{15}/13.
Our expansion function is l_{2}(y,x)=y+3K but we will never actually
compute this.
Since only 16 bits of data are coming out of the log table there
is no point in having N_{A}>2^{16} and we will consider
W_{A}=2, N_{A}=2^{13}.
We need X_{A} = 2*(dynamic range of L) = 6K+Z_{2^{16}-1}. We will take
X_{A}^{*}=Z_{2^{16}-1} as a representative portion of this using a_{11}(x)=x-6K.
Combining this with a_{12}(x)=[2^{-3}x] yields a contraction function of
a_{1}(x)=[2^{-3}(x-6K)].
To tabulate A(x)=2^{x/K} over
X_{A}^{*} = Z_{2^{16}-1}
using a_{12}(x)=[2^{-3}x] we would
store A[i] = 2^{8i/K}. However, this has a dynamic range of 1 to 2^{(2^{16})/K} =
2^{26} so, to avoid exceeding our 2-byte width, we tabulate
A[i] = [2^{-10}2^{8i/K}].
Our expansion function is the composition of a_{21}(y) = 2^{10}y and
a_{22}(y,x) = 2^{6}y (compensating for a_{11}(x) = x-6K), ie. a_{2}(y,x) = 2^{16}y.
We thus have
xy ≈ 2^{16}A[2^{-3}(L[2^{-3}x]+L[2^{-3}y])]
(where T[x] is taken to mean T[[x]]; rounding the index rather than
truncating would give greater accuracy but is not a simple binary
shift). However the errors are high:
Of all the pairs x,y whose product gives 2^{m}, x=2^{16}, y=2^{m-16} gives the
greatest fractional error.
This will be due to (writing p=m-16):
- An absolute sparseness error in L of
Klog_{2}(1+1/2^{p-3})+Klog_{2}(1+1/2^{16-3}) ≈ Klog_{2}(1+2^{3-p}+2^{-13});
- An absolute thinness error in L of 2^{-1}+2^{-1}=1;
- A fractional sparseness error in A of approx. 2^{3}ln(2)/K;
- A fractional thinness error in A of 2^{15}/xy = 2^{15-m} = 2^{-1-p}.
These will contribute fractional errors of
- 2^{Klog_{2}(1+2^{3-p}+2^{-13})/K}-1 = 2^{3-p}+2^{-13};
- 2^{1/K}-1 = 1.13*2^{-12} ≈ 2^{-12};
- 2^{3}ln(2)/K = 1.13*2^{-9} ≈ 2^{-9};
- and 2^{-1-p}
to xy respectively.
We can prepare the following table of contributions to the net
fractional error in xy of the four error sources:
xy      | p  | Log Sparse | Log Thin | A'log Sparse | A'log Thin | Total Frac | Total Abs
<2^{16} | <0 | Big        | 2^{-12}  | 2^{-9}       | Big        | Big        | Big
2^{16}  | 0  | 2^{3}      | 2^{-12}  | 2^{-9}       | 2^{-1}     | 2^{3}      | 2^{19}
2^{18}  | 2  | 2^{1}      | 2^{-12}  | 2^{-9}       | 2^{-3}     | 2^{1}      | 2^{19}
2^{20}  | 4  | 2^{-1}     | 2^{-12}  | 2^{-9}       | 2^{-5}     | 2^{-1}     | 2^{19}
2^{22}  | 6  | 2^{-3}     | 2^{-12}  | 2^{-9}       | 2^{-7}     | 2^{-3}     | 2^{19}
2^{24}  | 8  | 2^{-5}     | 2^{-12}  | 2^{-9}       | 2^{-9}     | 2^{-5}     | 2^{19}
2^{26}  | 10 | 2^{-7}     | 2^{-12}  | 2^{-9}       | 2^{-11}    | 2^{-7}     | 2^{19}
2^{28}  | 12 | 2^{-9}     | 2^{-12}  | 2^{-9}       | 2^{-13}    | 2^{-8}     | 2^{20}
2^{30}  | 14 | 2^{-11}    | 2^{-12}  | 2^{-9}       | 2^{-15}    | 2^{-9}     | 2^{21}
2^{32}  | 16 | 2^{-12}    | 2^{-12}  | 2^{-9}       | 2^{-17}    | 2^{-9}     | 2^{23}
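The whole Case 1 scheme is small enough to simulate. A Python sketch, taking [.] as truncation throughout (rounding would be slightly more accurate, as noted above):

```python
import math

K = 2 ** 15 / 13             # 13K just below 2^15, as chosen above
NL = NA = 2 ** 13            # 8192 two-byte entries: 16 Kbytes per table

# L[i] = [K*log2(i)] with L[0] = 0 (a dummy, hit only by tiny operands)
L = [0] + [int(K * math.log2(i)) for i in range(1, NL)]
# A[i] = [2^-10 * 2^(8i/K)], scaled down to fit 16-bit entries
A = [int(2 ** (8 * i / K - 10)) for i in range(NA)]

def mul(x, y):
    """xy ~ 2^16 * A[2^-3 (L[2^-3 x] + L[2^-3 y])]: shifts, adds, lookups."""
    return A[(L[x >> 3] + L[y >> 3]) >> 3] << 16

x, y = 50000, 40000
err = abs(mul(x, y) - x * y) / (x * y)   # fractional error for xy ~ 2^31
```

For operands this large the fractional error sits well below the 2^{-9} level the table predicts; small operands fare far worse, exactly as the "Big" entries indicate.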

Reducing N_{A} to 2^{12} and doubling W_{A} (ie. 32-bit entries)
eliminates the thinness errors in A at the expense of doubling the sparseness ones,
but since the sparseness errors in L dominate A's thinness errors this
does not improve things.
If we allocate another 16 Kbytes to the LUTs we can either:
(i) Double W_{L}, (ii) Double N_{L},
(iii) Double W_{A}, or (iv) Double N_{A}.
(ii) is the most sensible since, other than for large xy, the
sparseness of L is the major source of error and this would be
halved, giving:
xy      | p  | Log Sparse | Log Thin | A'log Sparse | A'log Thin | Total Frac | Total Abs
2^{16}  | 0  | 2^{2}      | 2^{-12}  | 2^{-9}       | 2^{-1}     | 2^{2}      | 2^{18}
2^{18}  | 2  | 2^{0}      | 2^{-12}  | 2^{-9}       | 2^{-3}     | 2^{0}      | 2^{18}
2^{20}  | 4  | 2^{-2}     | 2^{-12}  | 2^{-9}       | 2^{-5}     | 2^{-2}     | 2^{18}
2^{22}  | 6  | 2^{-4}     | 2^{-12}  | 2^{-9}       | 2^{-7}     | 2^{-4}     | 2^{18}
2^{24}  | 8  | 2^{-6}     | 2^{-12}  | 2^{-9}       | 2^{-9}     | 2^{-6}     | 2^{18}
2^{26}  | 10 | 2^{-8}     | 2^{-12}  | 2^{-9}       | 2^{-11}    | 3*2^{-9}   | 3*2^{17}
2^{28}  | 12 | 2^{-10}    | 2^{-12}  | 2^{-9}       | 2^{-13}    | 3*2^{-10}  | 3*2^{18}
2^{30}  | 14 | 2^{-12}    | 2^{-12}  | 2^{-9}       | 2^{-15}    | 2^{-9}     | 2^{21}
2^{32}  | 16 | 2^{-13}    | 2^{-12}  | 2^{-9}       | 2^{-17}    | 2^{-9}     | 2^{23}

Case 2 - Accuracy
We now sacrifice a little evaluation speed and do not restrict our
expansion and contraction functions to binary shifts. We will assume 32
Kbytes and again consider
N_{A}=N_{L}=2^{13}, W_{A}=W_{L}=2.
We will set K=2^{15}/13 as
before.
We know that the greatest fractional error in xy derives from
sparseness errors in L, so we will tackle these first. The problem lies
in our choice of l_{11}(x)=2^{-3}x. When this is followed by l_{12}(x)=[x] the
bottom three bits of x are lost, which for low x is disastrous. A better
contraction mapping is based on l_{11}(x)=2^{-m(x)}x where m(x) is the minimal
nonnegative integer such that
l_{11}(x) ∈ X_{L}^{*}=[1,2^{13}).
This is the contraction
function discussed in the theoretical section on Log Tables with q=0 and
t=13. Application of l_{12}(z)=[z] where z=l_{11}(x) will then only cause data
loss if m(x)>0. But if this is so, z will be at least N_{L}/2, so our
absolute sparseness error in L is at most Klog_{2}(1+2/N_{L}) ≈ 2K/(N_{L}ln(2)),
contributing a fractional error in xy of 4/N_{L}, ie. 2^{-11} in this case.
Rather than incorporating an addition of Km(x) into l_{2} we multiply
the final result by 2^{Km(x)/K}=2^{m(x)}, thus:
xy ≈ 2^{m(x)+m(y)+16}A[2^{-3}(L[2^{-m(x)}x]+L[2^{-m(y)}y])]
where L[i]=[Klog_{2}(i)] for i ∈ Z_{N_{L}}\{0}, and
A[i]=[2^{-10}2^{8i/K}].
The disadvantage of this contraction function is that the
operation "shift x right until x is first less than y and remember the
number of shifts performed" is usually slow to implement, with time
proportional to the number of shifts required. This problem aside, we
could reduce N_{L} to 2^{10} (a 4 Kbyte LUT) before the L errors would again
become significant compared to the A ones.
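The variable-shift contraction can be sketched as follows (norm is a hypothetical helper name; t=13 matches X_{L}^{*}=[1,2^{13}) above):

```python
def norm(x, t=13):
    """Shift x right until it fits in t bits, counting the shifts.
    Returns (xc, m) with xc = [2^-m x]. Data is lost only when m > 0,
    in which case xc >= 2^(t-1), keeping the sparseness error small."""
    m = 0
    while x >> t:
        x >>= 1
        m += 1
    return x, m
```

The result feeds L[xc], and the final product is scaled back up by 2^{m(x)+m(y)}; the loop's running time grows with the number of shifts performed, which is precisely the slowness noted above.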
Having reduced the Log Sparseness errors to 2^{-11} our next priority is
to tackle the Antilog Thinness ones. Doubling W_{A} at the expense of N_{A}
eliminates them entirely but doubles the fractional Antilog Sparseness
errors to 2^{-8} for all products (including those less than 2^{16}).
At this stage we have:
xy  | Log Sparse | Log Thin | A'log Sparse | A'log Thin | Total Frac | Total Abs
All | 2^{-11}    | 2^{-12}  | 2^{-8}       | 0          | 2^{-8}     | 2^{-8}xy

To reduce the sparseness errors in A requires linear
interpolation. This can be done relatively painlessly as in the
Sparseness Errors section of Antilog Tables above with d=0, b=2^{4}, p=4.
We set y=[2^{-4}x], z=2^{-4}x-y, so that x=2^{4}(y+z), and express z as R_{z}/2^{4},
ie. R_{z} = the bottom 4 bits of x.
We set G=K(log_{2}(2^{16/K}-1)-4) ≈ -11.83K ≈ -&746E
and define
g(x)  =  A(2^{4}y) + A(2^{4}y + Klog_{2}(R_{z}) + G)
      ≈  A(2^{4}y) + A(2^{4}y + L[R_{z}] + G)
      ≈  2^{16}A[y] + 2^{16}A[y+[2^{-4}(L[R_{z}]+G)]]
      =  2^{16}(A[2^{-4}x] + A[[2^{-4}x]+[2^{-4}(L[x AND 15]-&746E)]]).

Our product function is then
xy ≈ 2^{m(x)+m(y)}g(L[2^{-m(x)}x]+L[2^{-m(y)}y]).
As described above, this doubles our thinness errors (zero in this
case) and reduces the sparseness ones to
9e^{2}/8 = 9*2^{-16}/8 ≈ 2^{-16}. So
our final errors are:
xy  | Log Sparse | Log Thin  | A'log Sparse | A'log Thin | Total Frac | Total Abs
All | 2^{-11}    | 3*2^{-13} | 2^{-16}      | 0          | 2^{-10}    | 2^{-10}xy

a fractional error of less than 0.1%. This may be reduced further by
introducing linear interpolation for L or increasing N_{A}.
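That g really is linear interpolation between adjacent antilog samples can be checked with exact (unquantised) arithmetic. A Python sketch; the R_{z}=0 branch stands in for the tables' L[0]=0 convention:

```python
import math

K = 2 ** 15 / 13
A = lambda x: 2.0 ** (x / K)                 # exact antilog
G = K * (math.log2(2 ** (16 / K) - 1) - 4)   # the constant G ~ -11.83K

def g(x):
    """Antilog of x interpolated from samples at multiples of 2^4; the
    correction term is itself an antilog of K*log2(Rz) + G: no multiply."""
    y, rz = divmod(x, 16)          # y = [2^-4 x], rz = bottom 4 bits of x
    base = A(16 * y)
    if rz == 0:
        return base
    return base + A(16 * y + K * math.log2(rz) + G)
```

For every integer x, g(x) lands exactly on the straight line through A(16y) and A(16(y+1)), so only the interpolation's own (squared) sparseness error remains.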
Case 3 - Compromise
The following represents only one possible balance between speed and
accuracy. The preceding rather lengthy discussion should enable the
reader to find for herself one more suited to any given application. A
sensible approach is often to have several different multiplication
routines using the same tables in different ways.
We will take K=2^{k} where k=12 (the largest integer giving
2^{k}log_{2}(2^{k}-1)<2^{16}).
We will use an N_{L}-entry 16-bit LUT L[i]=[2^{k}log_{2}(i)] for 1 ≤ i < N_{L},
and an N_{A}=2^{a}-entry b-bit LUT A[i]=[2^{b-2^{16-k}+(2^{16-a})i/K}] for 0 ≤ i < N_{A}.
We will use the contraction function l_{1}(x)=[2^{-m(x)}x]
where m(x) is a shift chosen so that l_{1}(x)<N_{L}.
Our composite contraction function for A is
a_{1}(x)=[2^{a-16}(x MOD 2^{16})]
and our expansion function is
a_{2}(y,x) = 2^{32-b+(x DIV 2^{16})2^{16-k}}y.
Since L[N_{L}-1] is greater than 2^{15} it is possible that adding L[x] to
L[y] will generate a result greater than 2^{16}.
Writing z for (L[2^{-m(x)}x]+L[2^{-m(y)}y]) MOD 2^{16} and
C for (L[2^{-m(x)}x]+L[2^{-m(y)}y]) DIV 2^{16} (=0 or 1, the "carry")
we have: xy ≈ 2^{m(x)+m(y)+32-b+(2^{16-k})C}A[2^{a-16}z].
Our errors in xy are:
- 4/N_{L} fractional error due to sparseness of L;
- 2^{1/K}-1 ≈ 2^{-k}ln(2) fractional error due to thinness of L;
- 2^{2^{16-a-k}}-1 ≈ 2^{16-a-k}ln(2) fractional error due to sparseness of A;
- 2^{31-b} absolute error due to thinness of A.
If N_{L}=2^{13}, k=11, a=13, b=32 these are 0.05%, 0.03%, 0.27%, and 0.5 (absolute)
respectively.
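Plugging the quoted parameters into these four expressions confirms the figures (a check sketch):

```python
NL, k, a, b = 2 ** 13, 11, 13, 32
K = 2 ** k

log_sparse = 4 / NL                          # fractional, ~0.05%
log_thin = 2 ** (1 / K) - 1                  # ~ 2^-k ln 2, ~0.03%
alog_sparse = 2 ** (2 ** (16 - a) / K) - 1   # ~ 2^(16-a-k) ln 2, ~0.27%
alog_thin = 2 ** (31 - b)                    # absolute, 0.5
```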
Author
Copyright (c) Ian C G Bell 1998
Web Source: www.iancgbell.clara.net/maths or
www.bigfoot.com/~iancgbell/maths
18 Nov 2006.