II.4 Finiteness, Historical Examples, Euclid’s Algorithm: Iteration, The Method of Archimedes to Calculate Pi: Approximation and Finiteness, The Newton-Raphson Method: Recurrence Formulas, Does an Algorithm Always Exist, Hilbert’s Tenth Problem: The Need for Formalization, Recursive Functions: Church’s Thesis, Pages 106 – 111

Originally “algorithm” referred to the arithmetic “calculation based on the decimal notation for the integers”. By the 18th century d’Alembert was using the term more generally to refer “to methods in algebra and other calculation procedures”, and it “came to mean any process of systematic calculation that could be carried out by means of very precise rules”. Incorporating finiteness into the definition gives: “An algorithm is a set of finitely many rules for manipulating a finite amount of data in order to produce a result in a finite number of steps”. This is still not a mathematically precise definition.

Many algorithms rely on iteration, the repetition of simple procedures. Longhand multiplication is a familiar example of such iteration: the same single-digit operations are repeated for each pair of digits, as the sketch below illustrates.
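
As a concrete illustration (not taken from the book), here is a minimal Python sketch of longhand multiplication; the function name long_multiply and the digit-list representation are my own choices, but the loops mirror the repeated single-digit steps of the paper-and-pencil procedure.

    def long_multiply(x, y):
        """Multiply two non-negative integers digit by digit, the way
        longhand multiplication iterates the same simple steps."""
        x_digits = [int(d) for d in str(x)][::-1]   # least significant digit first
        y_digits = [int(d) for d in str(y)][::-1]
        result = [0] * (len(x_digits) + len(y_digits))
        for i, a in enumerate(x_digits):            # one row per digit of x
            carry = 0
            for j, b in enumerate(y_digits):        # one single-digit product per column
                total = result[i + j] + a * b + carry
                result[i + j] = total % 10
                carry = total // 10
            result[i + len(y_digits)] += carry
        return int("".join(str(d) for d in reversed(result)))

    assert long_multiply(1234, 5678) == 1234 * 5678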

Euclid’s algorithm, from the 3rd century B.C., determines the greatest common divisor of two integers by iteration. The basic algorithm uses repeated subtraction. A modification uses division with remainder, which makes the running time polynomial rather than exponential in the number of digits. Generalizing the division step allows the algorithm to be used on the ring of Gaussian integers, numbers of the form a + bi where a and b are integers. The algorithm may also be applied to non-integers, which leads to the concept of continued fractions. Euclid’s algorithm is an example of a discrete algorithm.
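
Both versions are easy to state in code. The Python sketch below is mine (the names gcd_subtraction and gcd_remainder are not from the book): the first function uses repeated subtraction, the second the division-with-remainder modification, and for positive integers both return the same answer.

    def gcd_subtraction(a, b):
        """Euclid's algorithm by repeated subtraction; the number of steps
        can grow exponentially in the number of digits (e.g. gcd(1, n))."""
        while a != b:
            if a > b:
                a -= b
            else:
                b -= a
        return a

    def gcd_remainder(a, b):
        """The modified algorithm using division with remainder; the number
        of steps is polynomial in the number of digits."""
        while b != 0:
            a, b = b, a % b
        return a

    assert gcd_subtraction(252, 198) == gcd_remainder(252, 198) == 18

On inputs such as gcd(1, 1000000) the subtraction version takes about a million steps while the remainder version needs only a couple of divisions, which is the exponential-versus-polynomial contrast mentioned above.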

Archimedes in the third century B.C. used inscribed and circumscribed regular polygons to bracket the area of a circle and hence to estimate π. This is a numerical algorithm: it approximates the answer to any desired accuracy in a finite number of steps rather than calculating it exactly in a finite number of steps.
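
One way to express Archimedes’ procedure in modern terms is through the standard recurrences for the semiperimeters of the circumscribed and inscribed polygons (a harmonic mean followed by a geometric mean). The Python sketch below, with my own function name archimedes_pi, starts from hexagons and doubles the number of sides at each step; it is a reconstruction, not necessarily the exact formulation used in the book.

    import math

    def archimedes_pi(doublings=10):
        """Bracket pi between the semiperimeters of circumscribed and
        inscribed regular polygons, starting from hexagons and doubling
        the number of sides at each step."""
        outer = 2 * math.sqrt(3.0)   # circumscribed hexagon: upper bound on pi
        inner = 3.0                  # inscribed hexagon: lower bound on pi
        for _ in range(doublings):
            outer = 2 * outer * inner / (outer + inner)   # harmonic mean
            inner = math.sqrt(outer * inner)              # geometric mean
        return inner, outer

    print(archimedes_pi(4))   # 96-sided polygons: about (3.14103, 3.14271)

Four doublings correspond to the 96-sided polygons Archimedes actually used, and the resulting bounds are consistent with his famous estimate 3 10/71 < π < 3 1/7.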

Around 1670 Newton created a method to determine roots of polynomial equations. The method uses an initial approximation and tangents, i.e. derivatives, of the polynomial. With a modification supplied by Raphson in 1690, it is called the Newton-Raphson method and is said to converge quadratically. The set of starting values whose sequence of iterates converges to a given root forms the domain of attraction of that root. In 1918 Julia studied these domains of attraction for cubic polynomials; their boundaries are now known to be fractal sets. Using this method to approximate square roots was already known to Heron of Alexandria in the first century. The Newton-Raphson method starts with an initial value a_0 and a function f(x) and then iterates using the formula a_{n+1} = a_n − f(a_n)/f′(a_n). This is called a recurrence formula or recurrence relation.
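
The recurrence is short enough to implement directly. In the Python sketch below (the names newton_raphson, tolerance and max_iterations are my own), applying the formula to f(x) = x² − 2 reproduces Heron’s rule for approximating the square root of 2.

    def newton_raphson(f, f_prime, a0, tolerance=1e-12, max_iterations=50):
        """Iterate a_{n+1} = a_n - f(a_n)/f'(a_n) starting from a0."""
        a = a0
        for _ in range(max_iterations):
            step = f(a) / f_prime(a)
            a -= step
            if abs(step) < tolerance:
                return a
        return a   # may not have converged: a0 may lie outside the domain of attraction

    # f(x) = x^2 - 2 gives a_{n+1} = (a_n + 2/a_n)/2, Heron's rule for sqrt(2).
    print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, a0=1.0))   # ~1.414213562373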

In 1900, at the Second International Congress of Mathematicians, Hilbert proposed a list of 23 influential problems. Hilbert’s 10th problem is: “given a Diophantine equation a process is sought by which it can be determined, in a finite number of operations, whether the equation is solvable in integral numbers”. To solve this problem one is first required to define precisely what is meant by an algorithm.

Many people worked on the problem of determining precisely what is meant by an algorithm in a mathematical context: Leibniz in the 17th century; Charles Babbage, Boole, Frege and Peano in the 19th century; and Gödel, Church and Stephen Kleene in the 1930s. The end result was the notion of recursive functions. A primitive recursive function is one that has an inductive definition: starting from an initial stock of basic functions, new functions are built up by substitution and primitive recursion, and any function obtainable in this way is primitive recursive.
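
As a rough illustration of how these definitions work, here is a Python sketch of primitive recursion; it follows the usual presentation with a zero function, a successor function and a recursion scheme, which may differ in detail from the book’s choice of initial functions, and all the names (primitive_recursion, add, multiply) are mine.

    def zero(_x):
        return 0

    def successor(n):
        return n + 1

    def primitive_recursion(base, step):
        """Build f(x, y) from f(x, 0) = base(x) and
        f(x, y + 1) = step(x, y, f(x, y))."""
        def f(x, y):
            value = base(x)
            for i in range(y):
                value = step(x, i, value)
            return value
        return f

    # Addition by primitive recursion: x + 0 = x, x + (y + 1) = successor(x + y).
    add = primitive_recursion(lambda x: x, lambda x, i, acc: successor(acc))

    # Multiplication by primitive recursion: x * 0 = 0, x * (y + 1) = (x * y) + x.
    multiply = primitive_recursion(zero, lambda x, i, acc: add(acc, x))

    assert add(3, 4) == 7 and multiply(3, 4) == 12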
