Date: Sun, 12 Feb 95 18:52:00 +0000
Here is a transcription of algorithm #3 from the Collected Algorithms
from ACM. I took a few liberties to convert it into ASCII code, but I
think they are mostly self-evident.
Jose R. Valverde
European Bioinformatics Institute
txomsy@ebi.ac.uk
---------------------------------------------------------------------------
3. Solution of Polynomial equation by Bairstow-
Hitchcock method
A. A. Grau
Oak Ridge National Laboratory, Oak Ridge, Tenn.
procedure
BAIRSTOW (n, a[], eps0, eps1, eps2, eps3, K) =:
(m, x[], y[], nat[], ex[]);
comment The Bairstow-Hitchcock iteration is used to find
successively pairs of roots of a polynomial
equation of degree n with coefficients a[i]
(i = 0, 1, ... n), where a[n] is the constant term. On
exit from the procedure, m is the number of pairs
of roots found, x[i] and y[i] (i=1, ..., m) are
a pair of real roots if nat[i] = 1, the real and imagi-
nary parts of a complex pair if nat[i] = -1, and
ex[i] indicates which of the following conditions
was met to exit from the iteration loop in finding
this pair:
1. Remainders, r1, r0, become absolutely less
than eps1.
2. Corrections, incrp, incrq, become absolutely
less than eps2.
3. The ratios, incrp/p, incrq/q, become ab-
solutely less than eps3.
4. The number of iterations becomes K.
In the last case, the pair of roots found is not
reliable and no further effort to find additional
roots is made. The quantity eps0 is used as a
lower bound for the denominator in the expres-
sions from which incrp and incrq are found.;
begin
integer (i, j, k, n1, n2, m1) ;
array (b, c[0 : n + 1]) ;
BAIRSTOW
for i:= 0 (1) n ; b[i] := a[i]
b[n+1] := 0 ; n2 := entire((n + 1) / 2)
n1 := 2 * n2
for m1 := 1 (1) n2 ; begin p := 0 ; q := 0
for k := 1 (1) K ; begin
step: for i := 1 (1) n1 ; c[i] := b[i]
for j := n1 - 2, n1 - 4 ; begin
for i := 0 (1) j ; begin
c[i+1] := c[i+1] - p * c[i]
c[i+2] := c[i+2] - q * c[i] end end
r0 := c[n1] ; r1 := c[n1 - 1]
s0 := c[n1 - 2] ; s1 := c[n1 - 3]
v0 := -q * s1 ; v1 := s0 - s1 * p
det0 := v1 * s0 - v0 * s1
if (abs(det0) < eps0) ; begin
p := p + 1 ; q := q + 1 ; go to step end
det1 := s0 * r1 - s1 * r0
det2 := r0 * v1 - v0 * s1
incrp := det1 / det0 ; incrq := det2 / det0
p := p + incrp ; q := q + incrq
if (abs(r0) < eps1) ; begin
if (abs(r1) < eps1) ; begin
ex[m1] := 1 ; go to next end end
if (abs(incrp) < eps2) ; begin
if (abs(incrq) < eps2) ; begin
ex[m1] := 2 ; go to next end end
if (abs(incrp / p) < eps3) ; begin
if (abs(incrq / q) < eps3) ; begin
ex[m1] := 3 ; go to next end end end
ex[m1] := 4
next: S := p/2 ; T := (S * S) - q
if (T >= 0) ; begin T := sqrt(T)
nat[m1] := 1 ; x[m1] := S + T
y[m1] := S - T end
if (T < 0) ; begin nat[m1] := -1 ; x[m1] := S
y[m1] := sqrt(-T) end
if (ex[m1] = 4) ; go to out
for j := 0 (1) (n1 - 2) ; begin
b[j+1] := b[j+1] - p * b[j]
b[j+2] := b[j+2] - q * b[j] ; end
n1 := n1 - 2 ; if (n1 < 1)
out: begin m := m1 ; return end
if (n1 < 3) ; begin
m1 := m1 + 1 ; ex[m1] := 1
p := b[1] / b[0] ; q := b[2] / b[0]
go to next end
end end
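For readers who want to run the method, the iteration above can be sketched in modern terms. The Python sketch below uses the standard synthetic-division formulation of Bairstow's method (remainder r*x + s from one division, Newton corrections from a second) rather than the even-degree bookkeeping of the ALGOL text, and it folds in the two corrections noted in the certifications that follow. The starting guess p = q = 1, the tolerances, and all the names are illustrative choices, not part of the published algorithm.

```python
import cmath

def quad_roots(p, q):
    # Roots of x^2 + p*x + q, in complex arithmetic throughout.
    d = cmath.sqrt(p * p - 4.0 * q)
    return (-p + d) / 2.0, (-p - d) / 2.0

def bairstow(a, tol=1e-12, kmax=500):
    """Roots of a[0]*x^n + ... + a[n] by repeated quadratic factoring."""
    a = [float(v) for v in a]
    roots = []
    while len(a) > 3:
        n = len(a) - 1
        p, q = 1.0, 1.0                      # illustrative starting guess
        for _ in range(kmax):
            # Synthetic division of a by x^2 + p*x + q: remainder r*x + s.
            b = [0.0, 0.0]
            for coef in a:
                b.append(coef - p * b[-1] - q * b[-2])
            b = b[2:]
            r, s = b[n - 1], b[n] + p * b[n - 1]
            # Divide the quotient once more to get the Newton corrections.
            c = [0.0, 0.0]
            for coef in b[:n - 1]:
                c.append(coef - p * c[-1] - q * c[-2])
            c = c[2:]
            r1, s1 = c[n - 3], c[n - 2] + p * c[n - 3]
            det = s1 * s1 - p * r1 * s1 + q * r1 * r1
            if abs(det) < 1e-30:             # analogue of the eps0 guard
                p, q = p + 1.0, q + 1.0
                continue
            dp = (r * s1 - s * r1) / det
            dq = ((s1 - p * r1) * s + q * r1 * r) / det
            p, q = p + dp, q + dq
            if abs(dp) + abs(dq) < tol:
                break
        roots.extend(quad_roots(p, q))
        # Deflate by the accepted quadratic factor.
        b = [0.0, 0.0]
        for coef in a:
            b.append(coef - p * b[-1] - q * b[-2])
        a = b[2:len(a)]
    if len(a) == 3:                          # final quadratic
        roots.extend(quad_roots(a[1] / a[0], a[2] / a[0]))
    elif len(a) == 2:                        # final linear factor
        roots.append(complex(-a[1] / a[0]))
    return roots
```

For a = [1, 1, 0, -2], i.e. x^3 + x^2 - 2 = (x - 1)(x^2 + 2x + 2), this yields the roots 1 and -1 +- i.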
-------------------------------------------------------------------
CERTIFICATION
3. Solution of polynomial equation by Bairstow-
Hitchcock Method, A. A. Grau, _Communications_
_ACM_, February, 1960.
Henry C. Thacher, Jr.,* Argonne National Labora-
tory, Argonne, Illinois.
Bairstow was coded for the Royal Precision LGP-30 computer
using an interpretive floating point system (24.2) with 28 bits of
significance. The translation from ALGOL was made by hand.
The following minor corrections were found necessary.
a. det2 := r0 * v1 - v0 * s1 should be det2 := r0 * v1
- v0 * r1
b. S := p / 2 should be S := -p / 2.
After these were made, the program ran smoothly for the fol-
lowing equations:
x^4 - 3x^3 + 20x^2 + 44x + 43 = 0        x = -.97063897 +- 1.0058076i
                                         x = -2.4706390 +- 4.6405330i
x^6 - 2x^5 + 2x^4 + x^3 + 6x^2 - 6x + 8 = 0    x = 0.5000000 +- 0.86602539i
                                               x = 1.0000000 +- 1.0000000i
                                               x = 1.5000000 +- 1.3228756i
x^5 + x^4 - 8x^3 - 16x^2 + 7x + 15 = 0    x = .000000005,** - 0.999999999
                                          x = 3.00000000, 0.999999999
                                          x = -2.000000000 +- 1.00000000i
With the equation x^5 + 7x^4 + 5x^3 + 6x^2 + 3x + 2 = 0 conver-
gence was slow, and full accuracy was not obtained. However, the
equation with reciprocal roots, 2x^5 + 3x^4 + 6x^3 + 5x^2 + 7x + 1 = 0,
converged rapidly.
------------
* Work supported by the U. S. Atomic Energy Commission.
** Spurious zero real roots are introduced for equations of odd
order.
---------------------------------------
CERTIFICATION OF ALGORITHM 3
SOLUTION OF POLYNOMIAL EQUATIONS BY
BAIRSTOW HITCHCOCK METHOD (A. A. Grau,
_Comm._ACM_, February, 1960)
James S. Vandergraft
Stanford University, Stanford, California
Bairstow was coded for the Burroughs 220 computer using the
Burroughs ALGOL. Conversion from ALGOL 60 was made by hand
on a statement-for-statement basis. The integer declaration had
to be extended to include n, k, NAT, EX, and the corrections
noted in the certification by Henry C. Thacher, Jr., _Communica-
tions_ACM_, June, 1960, were incorporated.
By selecting the input parameters carefully, all branches of
the routine were tested and the program ran smoothly. The fol-
lowing polynomial equations were solved:
x^6 - 14x^4 + 49x^2 - 36 = 0,             x = +- 1.0000000
                                          x = +- 1.9999998
                                          x = +- 3.0000001
x^8 - 30x^6 + 273x^4 - 820x^2 + 576 = 0,  x = +- 1.0000000
                                          x = +- 2.0000000
                                          x = +- 2.9999999
                                          x = +- 4.0000001
Several minor errors were found in the certification by Mr.
Thacher. The constant term in the first polynomial should be 54
instead of 43, the second pair of roots for that polynomial should
be + 2.470639 +-4.6405330 i, and the second pair of roots for the
second polynomial should be -1.0 +- i.
------------------------------------------------------------------------
REMARKS ON ALGORITHMS 2 and 3 (_Comm._
_ACM_, February 1960), ALGORITHM 15 (_Comm._
_ACM_, August 1960) AND ALGORITHMS 25 AND 26
(_Comm._ACM_, November 1960).
J. H. Wilkinson
National Physical Laboratory, Teddington.
Algorithms 2, 15, 25 and 26 were all concerned with the cal-
culation of zeros of arbitrary functions by successive linear or
quadratic interpolation. The main limiting factor on the accuracy
attainable with such procedures is the condition of the _method_
of evaluating the function in the neighborhood of the zeros.
It is this condition which should determine the tolerance which
is allowed for the relative error. With a well-conditioned method
of evaluation quite a strict convergence criterion will be met,
even when the function has multiple roots.
For example, a real quadratic root solver (of a type similar to
Algorithm 25) has been used on ACE to find the zeros of triple-
diagonal matrices T having t[i,i] = a[i], t[i+1,i] = b[i+1], t[i,i+1] =
c[i+1]. As an extreme case I took a[1] = a[2] = ... = a[5] = 0, a[6] =
a[7] = ... = a[10] = 1, a[11] = 2, b[i] = 1, c[i] = 0 so that the func-
tion which was being evaluated was x^5 (x-1)^5 (x-2). In spite
of the multiplicity of the roots, the answers obtained using float-
ing point arithmetic with a 46-bit mantissa had errors no greater
than 2^-44. Results of similar accuracy have been obtained for the
same problem using linear interpolation in place of the quadratic.
This is because the method of evaluation which was used, the two-
term recurrence relation for the leading principal minors, _is_a_
_very_well-conditioned_method_of_evaluation_. Knowing this, I was
able to set a tolerance of 2^-42 with confidence. If the _same_function_
had been evaluated from its explicit polynomial expansion, then
a tolerance of about 2^-7 would have been necessary and the mul-
tiple roots would have been obtained with very low accuracy.
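The two-term recurrence Wilkinson refers to can be sketched as follows; the function names and the bisection driver are illustrative, not part of the remark. The recurrence computes det(T - xI) from the leading principal minors, so each evaluation costs O(n) and the expanded coefficients never appear.

```python
def charpoly_tridiag(diag, sub, sup, x):
    # det(T - x*I) for a tridiagonal matrix T via the recurrence on
    # leading principal minors:
    #   p[k] = (diag[k] - x) * p[k-1] - sub[k-1]*sup[k-1] * p[k-2]
    p_prev, p_cur = 0.0, 1.0
    for k in range(len(diag)):
        off = sub[k - 1] * sup[k - 1] if k > 0 else 0.0
        p_prev, p_cur = p_cur, (diag[k] - x) * p_cur - off * p_prev
    return p_cur

# Wilkinson's extreme case: the zeros of x^5 (x-1)^5 (x-2) (up to sign).
diag = [0.0] * 5 + [1.0] * 5 + [2.0]
sub = [1.0] * 10
sup = [0.0] * 10

def f(x):
    return charpoly_tridiag(diag, sub, sup, x)

# Bisection for the simple zero at x = 2, using only these
# well-conditioned evaluations.
lo, hi = 1.5, 3.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
```

Here f(0.0) and f(1.0) come out exactly zero in spite of the fivefold multiplicities, and the bisection locates the simple zero at x = 2 to full working accuracy.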
To find the zero roots it is necessary to have an absolute toler-
ance for |x[r+1] - x[r]| as well as the relative tolerance condition.
It is undesirable that the preliminary detection of a zero root
should be necessary. The great power of root finders of this type
is that, since we are not saddled with the problem of calculating
the derivative, we have great freedom of choice in evaluating the
function. There is no need to design the rootfinder so that it finds
the zeros of x = f(x), since the true function x - f(x) is arbitrarily
separated into two parts. The formal
advantage of using this formulation is very slight. Thus, in Certi-
fication 2 (June 1960), the calculation of the zeros of x = tan x
was attempted. If the function (-x + tan x) were used with a
general zero finder then, provided the method of evaluation was,
for example
x = nPI + y,

tan x - x = -nPI + (y^3/3 - y^5/30 - ...) / cos y ,
the multiple zeros at x = 0 could be found as accurately as any
of the others. With a slight modification of common sine and co-
sine routines, this could be evaluated as
-nPI + ((sin y - y) - y (cos y - 1)) / (1 + (cos y - 1))
and the evaluation is then well-conditioned in the neighborhood
of x = 0. As regards the number of iterations needed, the re-
striction to 10 (Certification 2) is rather unreasonably small.
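The reformulated evaluation above can be imitated with series for sin y - y and cos y - 1, standing in for the slightly modified sine and cosine routines mentioned; the helper names here are illustrative. Because the small differences are computed directly, no cancellation occurs near y = 0 and the multiple zero there is exact.

```python
import math

def sin_minus_arg(y):
    # Series for sin(y) - y: successive terms differ by -y^2/((2k)(2k+1)).
    s, term = 0.0, y
    for k in range(1, 12):
        term *= -y * y / ((2 * k) * (2 * k + 1))
        s += term
    return s

def cos_minus_one(y):
    # Series for cos(y) - 1: successive terms differ by -y^2/((2k-1)(2k)).
    s, term = 0.0, 1.0
    for k in range(1, 12):
        term *= -y * y / ((2 * k - 1) * (2 * k))
        s += term
    return s

def tan_x_minus_x(n, y):
    """tan(x) - x at x = n*pi + y, |y| small, evaluated as
    -n*pi + ((sin y - y) - y(cos y - 1)) / (1 + (cos y - 1))."""
    num = sin_minus_arg(y) - y * cos_minus_one(y)
    return -n * math.pi + num / (1.0 + cos_minus_one(y))
```

In particular tan_x_minus_x(0, 0.0) is exactly zero, while for moderate y the result agrees with the naive math.tan(x) - x.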
For example, the direct evaluation of x^60 - 1 is well conditioned,
but starting with the values x = 2 and x = 1.5 a considerable
number of iterations are needed, as they are for Newton's method
starting with x = 2. If the time for evaluating the derivative is
about the same as that for evaluating the function (often it is much
longer), then linear interpolation is usually faster, and quadratic
interpolation much faster, than Newton.
In all of the algorithms, including that for Bairstow, it is use-
ful to have some criterion which limits the permissible change
from one value of the independent variable to the next [1]. This
condition is met to some extent in Algorithm 25 by the condition
S4, that abs(fprt) < abs(x2 * 10), but there the limitation is
placed on the permissible increase in the value of the function
from one step to the next. Algorithms 3 and 25 have tolerances on
the size of the function and on the size of the remainders r1 and
r0 respectively. They are very difficult tolerances to assign since
these quantities may take very small values without our wishing
to accept the value of x as a root. In Algorithm 3 (Comm. ACM
June 1960) it is useful to return to the original polynomial and to
iterate with each of the computed factors. This eliminates the loss
of accuracy which may occur if the factors are not found in in-
creasing order. This presumably was the case in Certification 3
when the roots of x^5 + 7x^4 + 5x^3 + 6x^2 + 3x + 2 = 0 were
attempted. On ACE, however, all roots of this polynomial were
found very accurately and convergence was very fast using single
precision, but the roots emerged in increasing order. The reference
to _slow_ convergence is puzzling. On ACE, convergence was fast
for all the initial approximations to p and q which were tried.
When the initial approximations used were such that the real
root x = -6.35099 36103 and the spurious zero were found first,
the remaining two quadratic factors were of lower accuracy,
though this was, of course, rectified by iteration in the original
polynomial. When either of the other two factors was found first,
then all factors were fully accurate even without iteration in the
original polynomial [1].
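The suggested return to the original polynomial can be sketched as a polishing routine: take a quadratic factor x^2 + p*x + q obtained from a deflated polynomial and repeat the Bairstow/Newton correction against the original coefficients. This is an illustrative helper in the standard synthetic-division form, not code from the algorithm itself.

```python
def refine_factor(a, p, q, tol=1e-13, sweeps=100):
    """Polish a quadratic factor x^2 + p*x + q of a[0]*x^n + ... + a[n]
    by Newton steps computed from the ORIGINAL coefficients a."""
    n = len(a) - 1
    for _ in range(sweeps):
        b = [0.0, 0.0]                     # synthetic division; b[i+2] = b_i
        for coef in a:
            b.append(coef - p * b[-1] - q * b[-2])
        r, s = b[n + 1], b[n + 2] + p * b[n + 1]   # remainder r*x + s
        c = [0.0, 0.0]                     # divide the quotient once more
        for coef in b[2:n + 1]:
            c.append(coef - p * c[-1] - q * c[-2])
        r1, s1 = c[n - 1], c[n] + p * c[n - 1]
        det = s1 * s1 - p * r1 * s1 + q * r1 * r1
        if abs(det) < 1e-300:
            break
        dp = (r * s1 - s * r1) / det
        dq = ((s1 - p * r1) * s + q * r1 * r) / det
        p, q = p + dp, q + dq
        if abs(dp) + abs(dq) < tol:
            break
    return p, q
```

For a = [1, -1, 1, 4, 10], i.e. (x^2 + 2x + 2)(x^2 - 3x + 5), a slightly perturbed factor (p, q) = (-2.9, 5.1) is pulled back to (-3, 5).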
REFERENCE
[1] J. H. Wilkinson. The evaluation of the zeros of ill-conditioned
polynomials Parts I and II. _Num. Math._ 1 (1959), 150-180.
----------------------------------------------------------------------
CERTIFICATION OF ALGORITHM 3
SOLUTION OF POLYNOMIAL EQUATION BY
BAIRSTOW-HITCHCOCK (A. A. Grau, _Comm._ACM_
Feb. 1960)
John Herndon
Stanford Research Institute, Menlo Park, California
Bairstow was transliterated into BALGOL and tested on the
Burroughs 220. The corrections supplied by Thacher, _Comm._
_ACM_, June 1960, were incorporated. Results were correct for
equations for which the method is suitable. x^4 - 16 = 0 is one
of those which gave nonsensical results. Seven-digit results were
obtained for 12 test equations, one of which was x^6 - 2x^5 + 2x^4 +
x^3 + 6x^2 - 6x + 8 = 0.
----------------------------------------------------------------------