II.4 Recursive Functions, Effective Calculability, Turing Machines, Properties of Algorithms, Iteration versus Recursion, The Tower of Hanoi, Extended Euclid Algorithm, Complexity, Euclid Algorithm Complexity, Modern Aspects of Algorithms, Algorithms and Chance, Pseudorandom Numbers, The Influence of Algorithms on Contemporary Mathematics, Constructivist Schools and Effective Results, II.5 The Development of Rigor in Mathematical Analysis, Eighteenth Century Approaches and Techniques and Euler: Pages 112 – 120

There are doubly inductive functions that are not primitive recursive but are nevertheless effectively computable; the Ackermann function is an example. A similar doubly inductive function is defined by: (i) A(1, y) = 2 + y for every y, (ii) A(x, 1) = 2 for every x, and (iii) A(x+1, y+1) = A(x, A(x+1, y)) for all x ≥ 1 and y ≥ 1. For every primitive recursive function f there is some x for which A(x, y) grows faster than f(y). A new, broader definition of recursive function (as opposed to primitive recursive function) was given by each of Gödel, Church, and Kleene; Kleene's definition added a third method of construction, called minimization. Under this new definition the Ackermann function and the function A above, as well as every function that a computer program can compute, turn out to be recursive. This gives the formal definition of computability: the computable functions are exactly the recursive ones.
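To make the definition concrete, here is a minimal Python sketch of the function A given by rules (i)–(iii); the function name and the small test values are illustrative choices, since the values explode far too quickly to evaluate for larger arguments.

```python
def A(x, y):
    """The doubly inductive function defined by rules (i)-(iii) above
    (a variant of Ackermann's): computable but not primitive recursive."""
    if x == 1:
        return 2 + y              # rule (i): A(1, y) = 2 + y
    if y == 1:
        return 2                  # rule (ii): A(x, 1) = 2
    # rule (iii): A(x, y) = A(x - 1, A(x, y - 1)) for x, y >= 2
    return A(x - 1, A(x, y - 1))

# Small values only -- A(4, y) already grows as a tower of exponentials.
print([A(2, y) for y in range(1, 6)])  # 2y:   [2, 4, 6, 8, 10]
print([A(3, y) for y in range(1, 6)])  # 2^y:  [2, 4, 8, 16, 32]
```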

Church’s thesis is that recursive functions are exactly those that are effectively calculable.

In 1936 Turing described the Turing machine. The Turing-machine-computable functions and the recursive functions turn out to be the same. Using this concept, Turing gave a negative answer to Hilbert's Entscheidungsproblem (decision problem): there is no mechanical process that can decide, for every mathematical statement, whether it can be proved.

The difference between iteration and recursion is as follows. In iteration one computes the first term and then determines each subsequent term from the ones already obtained, a description that itself sounds recursive, which is a source of the confusion between the two terms. In recursion the procedure calls itself with smaller values of the variables. For computing the factorial n!, iteration seems the simpler approach.
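As an illustration of the contrast, here is a short Python sketch of n! computed both ways (the function names are illustrative):

```python
def factorial_iterative(n):
    """Iteration: start from the first term and build up successively."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n):
    """Recursion: the procedure calls itself with a smaller value of n."""
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

assert factorial_iterative(10) == factorial_recursive(10) == 3628800
```

The iterative version keeps only a running product, while the recursive version keeps one pending multiplication for every level of the call stack.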

The solution to the Tower of Hanoi problem is best seen by recursion. The problem goes back to Édouard Lucas in 1884. One has a tower of holed disks on a peg, increasing in size from top to bottom. The task is to move the entire pile to another peg, again arranged in increasing size from top to bottom. There are three pegs in total, the disks may only be moved one at a time from one peg to another, and no disk may ever be placed on a smaller disk.

If one has n disks then the Tower of Hanoi problem requires 2^n − 1 moves to solve. One must also keep more and more in memory to solve the problem this way as n becomes larger. There is another solution to the problem that requires less memory.
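A minimal recursive Python sketch of the standard solution, with the pegs simply labeled by strings; it also confirms the 2^n − 1 move count for a small n.

```python
def hanoi(n, source, target, spare):
    """Move n disks from source to target using spare as the third peg:
    move n-1 disks out of the way, move the largest disk, then move the
    n-1 smaller disks back on top of it.  Returns the number of moves."""
    if n == 0:
        return 0
    moves = hanoi(n - 1, source, spare, target)
    print(f"move disk {n} from {source} to {target}")
    moves += 1
    moves += hanoi(n - 1, spare, target, source)
    return moves

assert hanoi(4, "A", "C", "B") == 2**4 - 1   # 15 moves for 4 disks
```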

There is an extension of Euclid's algorithm related to Bézout's lemma. Euclid's greatest common divisor (gcd) algorithm is a recursive algorithm that works by repeatedly replacing a pair of integers by the smaller integer and the remainder left when the larger is divided by the smaller. Bézout's lemma states that for any pair of positive integers (a, b) there exist integers u and v such that ua + vb = d = gcd(a, b). The extended algorithm finds such u and v along with the gcd.
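A minimal Python sketch of the extended Euclidean algorithm, returning the Bézout coefficients u and v along with d = gcd(a, b); the sample inputs are arbitrary.

```python
def extended_gcd(a, b):
    """Return (d, u, v) with d = gcd(a, b) and u*a + v*b = d (Bézout's lemma)."""
    if b == 0:
        return a, 1, 0
    d, u, v = extended_gcd(b, a % b)
    # From u*b + v*(a mod b) = d and a mod b = a - (a // b)*b,
    # it follows that v*a + (u - (a // b)*v)*b = d.
    return d, v, u - (a // b) * v

d, u, v = extended_gcd(240, 46)
print(d, u, v)                 # the gcd is 2
assert u * 240 + v * 46 == d
```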

Running an algorithm on a computer requires both time and space; space here refers to the maximum amount of memory the algorithm uses. Complexity theory studies the amount of computational resources, in time and space, required to carry out various tasks.

The complexity of Euclid's algorithm and of the extended Euclidean algorithm has been studied. In 1841 Pierre-Joseph-Étienne Finck showed that the number of divisions needed by Euclid's algorithm is bounded by 2 log_2 a + 1. This bound was later improved to roughly log_φ a + 1, where φ is the golden ratio. The space complexity of the extended Euclidean algorithm is also small if it is properly implemented.
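As a small check (my own example, not from the text), the following Python snippet counts the division steps on a pair of consecutive Fibonacci numbers, the slowest-case inputs, and compares the count with the two bounds just mentioned.

```python
import math

def division_count(a, b):
    """Count the division steps Euclid's algorithm performs on the pair (a, b)."""
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

phi = (1 + math.sqrt(5)) / 2
a, b = 987, 610                     # consecutive Fibonacci numbers: the slowest case
print(division_count(a, b))         # 14 division steps
print(2 * math.log2(a) + 1)         # Finck's bound, about 20.9
print(math.log(a, phi) + 1)         # golden-ratio bound, about 15.3
```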

It has come to be realized that randomness can be a very useful tool in algorithms. One example is the selection of a large random sample from an even larger population in order to estimate certain population parameters. Randomness is also used in the development of the notion of a quantum algorithm.
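A toy Python illustration of the sampling idea (the population and the sample size are arbitrary choices): a modest random sample gives a good estimate of the mean of a much larger population.

```python
import random

# A hypothetical population of a million values; we estimate its mean from a sample.
population = [x % 97 for x in range(1_000_000)]
sample = random.sample(population, 10_000)

true_mean = sum(population) / len(population)
estimate = sum(sample) / len(sample)
print(true_mean, estimate)  # the sample estimate lands close to the true mean
```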

In order to use randomness in computations one needs random numbers, and a computer can be used to produce pseudorandom numbers. In the mid-1940s von Neumann devised a method for generating pseudorandom numbers. Various methods have been suggested for determining how close a particular sequence of pseudorandom numbers is to a truly random one. In 1919 von Mises proposed a criterion based on limiting frequencies of 0s and 1s: not only should the original sequence have limiting frequency 1/2, but so should any subsequence extracted from it by means of a reasonable procedure. In 1940 Church made this precise by requiring the "reasonable procedure" to be given by a recursive function. In 1966 Martin-Löf proposed, in his thesis, the definition that a random sequence is one that passes all effective sequential statistical tests. This idea is still being debated.
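The method von Neumann proposed, the middle-square method, squares the current value and takes its middle digits as the next value. A minimal Python sketch (the seed and digit count are arbitrary choices):

```python
def middle_square(seed, count, digits=4):
    """Von Neumann's middle-square method: square the current value and
    keep its middle digits as the next pseudorandom value."""
    values = []
    x = seed
    for _ in range(count):
        squared = str(x * x).zfill(2 * digits)          # pad to 2*digits characters
        mid = len(squared) // 2
        x = int(squared[mid - digits // 2 : mid + digits // 2])
        values.append(x)
    return values

print(middle_square(5731, 10))  # first ten pseudorandom values from seed 5731
```

In practice such sequences eventually fall into short cycles, which is one reason more sophisticated generators were developed.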

Mathematics has long concerned itself with questions about existence. When something is proved to exist, the proof can be of two types: one either exhibits the object in question or shows by a logical argument that the object must exist.

Around 1910 Brouwer led the intuitionist school of mathematics, which rejected proofs of existence by contradiction. This was the first of several constructivist schools of mathematics. A further level of refinement of this idea is to demand that a particular algorithm produce the object within a useful period of time.

In number theory there is a distinction between effective and ineffective results. An ineffective result establishes existence without providing bounds that would allow one to determine specific solutions by means of a computer.

Another issue was raised by Appel and Haken's 1976 solution of the four-color problem, which had been proposed in 1852 by Francis Guthrie, a student of De Morgan. Their solution requires a computer because of the huge number of cases that need to be examined. Does such an argument actually constitute a proof?

In the early 1700s mathematical analysis lacked foundational clarity. Nowadays almost all working mathematicians are trained in, and concerned with, the production of rigorous arguments that justify their conclusions. Here the development of rigor in analysis, that is, in the differential and integral calculus, is examined. The common feature of work in this area was the use of infinities: quantities infinite in number or in size, as well as infinitesimally small, or infinitesimal, objects.

The Taylor series expansion provoked many questions. The series is named after Newton's disciple Brook Taylor. It expresses a function as an infinite sum whose coefficients are obtained by repeatedly differentiating the function.
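In modern notation (not spelled out in the text above), the Taylor expansion of a function f about a point a reads:

```latex
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n
     = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots
```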

In the eighteenth century the calculus was very successful, but there were still outstanding issues. One particularly troublesome series is 1 − 1 + 1 − 1 + 1 − 1 + …, whose partial sums alternate between 1 and 0 yet which formal manipulation of series expansions suggested should be assigned the value 1/2. Various people tried to clarify the notion of the differential used in the calculus.

Euler used various arguments that are not rigorous to "prove" the equality of various series expansions, and controversial matters remained. In the early nineteenth century Fourier introduced the idea that an arbitrarily shaped curve could serve as the starting position of a vibrating string, and that such a curve could be written as a sum of trigonometric functions. This eventually led to the study of broken graphs and then of discontinuous functions.
