Archive for March, 2010

II.5 The Development of Mathematical Rigor in Mathematical Analysis: Weierstrass and His School, The Aftermath of Weierstrass and Riemann: Pages 125 – 128.

Saturday, March 13th, 2010

From the 1860s until 1890 Weierstrass delivered in Berlin a cycle of four lecture courses on mathematical analysis.  These lectures were attended by many mathematicians and had an indirect influence through the circulation of lecture notes.  Hermite and Jordan in France also incorporated aspects of his teaching.  Weierstrass's approach followed that of Cauchy: he banished the idea of motion from limit processes and considered functions of complex variables.  He did not incorporate set theory into his ideas.

He introduced the idea of uniform continuity and used the now-standard style of epsilon-delta (ε-δ) proofs.  To quote Weierstrass: "After the assumption of an arbitrarily small quantity ε, a bound δ for x may be found, such that for every value of x for which |x| < δ, the corresponding value of |y| will be less than ε".
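
In modern notation, and for the specific case of continuity at a point (the quantifiers below are the standard textbook formulation rather than Weierstrass's own wording), the definition reads:

\[
f \text{ is continuous at } x_0 \iff \forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x : \; |x - x_0| < \delta \implies |f(x) - f(x_0)| < \varepsilon .
\]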

He arithmetized analysis with a very axiomatic and stepwise approach to the subject.  He generated counterexamples to illustrate difficulties and to distinguish between different kinds of analytical behavior.  One of his famous counterexamples is an everywhere continuous but nowhere differentiable function, namely f(x) = Σ b^n cos(a^n x).  The series converges uniformly for b < 1, but the resulting function is not differentiable at any x if ab > 1 + (3/2)π.  He provided many other examples and counterexamples.  From the 1880s the style of rigorous argument used by the Weierstrass school was thrust upon mathematics generally.

The ideas of Weierstrass and Riemann forced a new rigor onto mathematical analysis.  Dedekind advanced their ideas using what are now called Dedekind cuts, which define each real number by dividing the rational numbers into two classes.  These ideas prompted discussion by Cantor, E. Heine, and Frege.  Leopold Kronecker denied the existence of the reals, relying instead on finite sets.  These ideas influenced Brouwer and Kurt Hensel.

From 1890 to 1910 the foundational framework of analysis shifted toward the theory of sets.  This shift originated in the work of Cantor, a student of Weierstrass.  Cantor studied how to distinguish between different infinite sets.  He proved that the rational and the algebraic numbers are countably infinite while the reals are uncountable.  This led to a hierarchy of infinite sets of different cardinalities.

Cantor began to realize that set theory could serve as a foundational tool for all of mathematics.  Around 1900 Lebesgue and Borel tied set theory to the calculus in a very concrete and intimate way.

In the 1890s Hilbert provided an axiomatic approach to geometry, which helped place a new emphasis on mathematical theories as axiomatic structures in the foundations of analysis of the early twentieth century.  Peano in Italy had similar aims.  Hilbert's student Ernst Zermelo worked on axiomatizing set theory.

Russell's famous paradox in set theory is this: if S is the set of all sets that do not contain themselves, then S can neither be a member of itself nor fail to be one.  Zermelo's theory sought to avoid this difficulty by not attempting to define what a set is.
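
Written out in modern notation, the paradox is the following contradiction:

\[
S = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad S \in S \iff S \notin S .
\]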

By 1910 Weyl referred to mathematics as the science of “set membership” rather than the science of “quantity”.

However, Zermelo’s axioms of set theory lacked a consistency proof. 

Wiener in 1914 provided a set-theoretic definition of an ordered pair.  This was followed by a set-theoretic definition of a function.
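
Wiener's original 1914 encoding is slightly more elaborate, but the now-standard version (due to Kuratowski a few years later) is

\[
(a,b) = \{\{a\},\{a,b\}\},
\]

and a function f from X to Y is then simply a set f ⊆ X × Y such that for each x in X there is exactly one y in Y with (x, y) ∈ f.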

The basic elements of analysis (derivatives, integrals, series, and the existence and convergence behavior of series), along with the axioms for the reals and for set theory, were well formulated by the early twentieth century.  Further debates about the infinitesimally small and the infinitely large were mostly over.

Further research on this was provided by Robinson in 1961.  He created a number system obtained by adjoining infinitesimals to the real numbers.  The enlarged system satisfies the axioms of an ordered field, and it contains positive elements that are smaller than 1/n for every positive integer n.  These infinitesimals seem to satisfy Leibniz's ultimate goal of a theory of infinitesimals that is part of the same structure as the reals.  This approach, however, has never been widely accepted as a working foundation for analysis.

II.5 Rigor in Mathematical Analysis: Cauchy, Riemann, the Integral and Counterexamples: Pages 123 – 124

Friday, March 12th, 2010

Cauchy recast the definition of the convergence of an infinite series as the existence of the limit of the sequence of partial sums.  This unified the approach for series of numbers and series of functions, and it is part of what is called the "arithmetization of analysis".  He defined a continuous function as one for which "an infinitely small increase of the variable produces an infinitely small increase of the function itself".
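
In modern notation, this is the familiar definition still in use:

\[
\sum_{n=1}^{\infty} a_n = s \quad\text{means}\quad \lim_{N \to \infty} s_N = s, \qquad\text{where } s_N = a_1 + a_2 + \cdots + a_N .
\]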

Cauchy understood the importance of jumps in a function and their relationship to the integral of the function.  He did, however, assume continuity when defining the definite integral in his lectures.

Cauchy’s ideas became standard in France and in French schools.  Abel, Dirichlet and Riemann all studied Cauchy’s ideas.

Bolzano, a Bohemian priest, and Weierstrass, in Germany, also independently developed ideas related to those of Cauchy.

Riemann generalized the ideas of Cauchy to functions that are not necessarily continuous, defining the Riemann integral in his second thesis.  He provided an example of a function that is discontinuous in every interval but can still be integrated; the integral thus has points of non-differentiability in every interval.  "Riemann's definition rendered problematic the inverse relationship between differentiation and integration …".  This and the use of other counterexamples flourished at this time.  The definition was published only in 1867, after Riemann's death.

Gaston Darboux published an expository version of the Riemann integral in 1873.  The rigor used by Riemann was later appreciated by the Weierstrass school.  Cantor also later utilized the work of Cauchy in the 1870s in his investigations of point sets.

The Dirichlet problem is: "Given a function g, defined on the boundary of a closed region in the plane, does there exist a function f that satisfies the Laplace partial differential equation in the interior and takes the same values as g on the boundary?"  Riemann asserted that the answer to this question is "yes".  Weierstrass published a counterexample in 1870.  In 1900 Hilbert expressed ideas, with more precise and broader hypotheses, that rehabilitated the Dirichlet problem.
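
In symbols, for a region Ω with boundary ∂Ω, the problem asks for a function f with

\[
\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0 \ \text{ in the interior of } \Omega,
\qquad f = g \ \text{ on } \partial\Omega .
\]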

II.5 Responses from the Late Eighteenth Century, Lagrange, The First Half of the Nineteenth Century, Cauchy: Pages 121 – 122.

Wednesday, March 10th, 2010

Colin Maclaurin in his 1742 textbook “A Treatise of Fluxions” attempted to clarify the foundations of calculus and do away with infinitesimals.  He influenced D’Alembert and they both rejected the infinitesimals as real quantities. 

In the eighteenth century the methods of the calculus separated themselves from the applications of mechanics and physics.  Lagrange focused on power series expansions and the calculus.  Through this focus the term analytic function became associated with the existence of a convergent Taylor series representation of a function.  Lagrange did not see a series expansion as based on the limit process.  This approach was improved upon by Cauchy.

In the first half of the nineteenth century Cauchy provided clearer definitions of the limit, of continuity, and of the derivative.

II.4 Recursive Functions, Effective Calculability, Turing Machines, Properties of Algorithms, Iteration versus Recursion, The Tower of Hanoi, Extended Euclid Algorithm, Complexity, Euclid Algorithm Complexity, Modern Aspects of Algorithms, Algorithms and Chance, Pseudorandom Numbers, The Influence of Algorithms on Contemporary Mathematics, Constructivist Schools and Effective Results, II.5 The Development of Rigor in Mathematical Analysis, Eighteenth Century Approaches and Techniques and Euler: Pages 112 – 120

Tuesday, March 9th, 2010

Doubly inductive functions are not primitive recursive but are effectively computable.  An example of this is the Ackermann function.  A doubly inductive function similar to Ackermann's is defined by: (i) A(1,y) = 2 + y for every y, (ii) A(x,1) = 2 for every x, and (iii) A(x+1,y+1) = A(x,A(x+1,y)) whenever x ≥ 1 and y ≥ 1.  For every primitive recursive function f there is some x for which A(x,y) grows faster than f(y).  A new definition of recursive functions, broader than the primitive recursive functions, was provided by each of Gödel, Church and Kleene.  Kleene added a third method of construction called minimization.  Under this new definition it turns out that the Ackermann function and the function A above, as well as all functions that one can write a computer program to compute, are recursive.  This gives the formal definition of computability: the computable functions are exactly the recursive ones.
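
A minimal Python sketch of the function A defined by rules (i)-(iii) above (the memoization is only a practical convenience; the recursion itself follows the three rules directly):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def A(x, y):
    """The doubly inductive function defined by rules (i)-(iii)."""
    if x == 1:
        return 2 + y                    # (i)   A(1, y) = 2 + y
    if y == 1:
        return 2                        # (ii)  A(x, 1) = 2
    return A(x - 1, A(x, y - 1))        # (iii) A(x, y) = A(x-1, A(x, y-1))

# A(2, y) = 2y and A(3, y) = 2**y, so the values grow very quickly:
assert A(2, 5) == 10 and A(3, 4) == 16
```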

Church’s thesis is that recursive functions are exactly those that are effectively calculable.

In 1936 Turing described the Turing machine.  Turing-machine-computable functions and recursive functions are the same.  Using the concept of Turing machines, Hilbert's tenth problem was answered in the negative: there is no mechanical process that can decide, for every Diophantine equation, whether it has integer solutions.

The difference between iteration and recursion is as follows.  In iteration one computes the first terms and then successively determines the later terms from them; the confusion between the two notions arises because descriptions of iteration often use the word recursion.  In recursion the procedure calls itself with smaller values of its variables.  For computing the factorial n!, iteration seems the simpler approach; both are sketched below.
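
A minimal Python sketch of the two approaches to n!:

```python
def factorial_iterative(n):
    """Iteration: repeat a simple step, building the product up from 1 to n."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_recursive(n):
    """Recursion: the procedure calls itself with a smaller value of the variable."""
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

assert factorial_iterative(5) == factorial_recursive(5) == 120
```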

The solution to the Tower of Hanoi problem is best seen by recursion.  The problem goes back to Edouard Lucas in 1884.  One has a tower of holed disks on a peg, increasing in size from top to bottom, and one must move the entire pile to another peg so that it is again in increasing size from top to bottom.  There are three pegs in total, no disk may ever be placed on a smaller disk, and disks may only be moved one at a time from one peg to another.

If one has n disks then the Tower of Hanoi problem requires 2^n - 1 moves to solve.  The recursive solution also requires more and more memory as n becomes larger; there is another solution to the problem that requires less memory.  A recursive sketch follows.
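
A minimal Python sketch of the recursive solution (the peg names are arbitrary labels):

```python
def hanoi(n, source, target, spare):
    """Move n disks from source to target, printing each move."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)          # move the top n-1 disks out of the way
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)          # put them back on top of disk n

hanoi(3, "A", "C", "B")                          # prints 2**3 - 1 = 7 moves
```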

There is an extension of Euclid's algorithm related to Bezout's lemma.  Euclid's greatest common divisor (gcd) algorithm is a recursive algorithm that works by successively replacing a pair of integers by one of the integers and a remainder.  Bezout's lemma states that for any pair of positive integers (a,b) there exist integers u and v such that ua + vb = d = gcd(a,b).  The extended algorithm finds such u and v along with the gcd of the two integers; a sketch appears below.
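
A minimal Python sketch of the extended algorithm, returning the gcd together with the Bezout coefficients u and v:

```python
def extended_gcd(a, b):
    """Return (d, u, v) with d = gcd(a, b) and u*a + v*b == d."""
    if b == 0:
        return a, 1, 0
    d, u, v = extended_gcd(b, a % b)
    # gcd(a, b) = gcd(b, a mod b); unwind the recursion to update the coefficients
    return d, v, u - (a // b) * v

d, u, v = extended_gcd(240, 46)
assert d == 2 and u * 240 + v * 46 == d
```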

Running an algorithm on a computer requires both time and space, so algorithms have both a time and a space complexity.  Space here refers to the maximum amount of memory required by the algorithm.  Complexity theory studies the amount of computational resources, time and space, required to carry out various tasks.

The complexity of Euclid's algorithm and of the extended Euclid's algorithm has been studied.  Pierre-Joseph-Etienne Finck in 1841 found the number of divisions needed for Euclid's algorithm to be bounded by 2 log_2(a) + 1.  This bound has since been improved to roughly log_φ(a), where φ is the golden ratio.  The space complexity of the extended Euclid's algorithm is also small if it is properly implemented.

It has come to be realized that randomness can be a very useful tool in algorithms.  One example of this is selecting a large random sample from an even larger population in order to estimate certain population parameters.  Randomness is also used in the development of the notion of a quantum algorithm.

In order to use randomness in computations one uses random numbers.  A computer can be used to produce pseudorandom numbers.  In the mid 1940s von Neumann created a method for generating pseudorandom numbers.  Various methods have been suggested for determining how close a particular set of pseudorandom numbers is to being random.  Von Mises proposed a criterion based on the limiting frequency of 0s and 1s in 1919: not only should the original sequence have limiting frequency equal to 1/2, but so should any subsequence extracted by means of a reasonable procedure.  In 1940 Church made this criterion more precise by using recursive functions for the "reasonable procedure".  In 1966 Martin-Löf proposed in his thesis the definition that a random sequence is one that satisfies all effective statistical sequential tests.  This idea is still being debated.
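
Von Neumann's method was the well-known "middle-square" generator: square the current value and take the middle digits as the next value.  A minimal Python sketch (the seed and the number of digits are arbitrary illustrative choices):

```python
def middle_square(seed, count, digits=4):
    """Generate `count` pseudorandom numbers by von Neumann's middle-square method."""
    values = []
    x = seed
    for _ in range(count):
        squared = str(x * x).zfill(2 * digits)                    # pad the square to 2*digits digits
        mid = len(squared) // 2
        x = int(squared[mid - digits // 2 : mid + digits // 2])   # keep the middle digits
        values.append(x)
    return values

print(middle_square(seed=5735, count=5))
```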

Mathematics has long concerned itself with questions about existence.  There are two types of answers when something is proved to exist: one either exhibits the item in question or proves by a logical argument that the object must exist without exhibiting it.

Around 1910 Brouwer led the intuitionist school of mathematics, which rejected the notion of proof by contradiction.  This was the first of several constructivist schools of mathematics.  A further level of refinement of this idea is to require that a particular algorithm be made to work in a useful period of time.

 In number theory there is a distinction between effective and ineffective results.  Ineffective results do not allow one to determine specific solutions by means of a computer. 

Another issue was raised by the 1976 solution by Appel and Haken of the four color problem, which had been proposed in 1852 by Francis Guthrie, a student of de Morgan.  Their solution requires a computer because of the huge number of cases that need to be examined.  One asks whether this actually constitutes a proof.

In the early 1700s mathematical analysis lacked foundational clarity.  Nowadays almost all working mathematicians are trained in, and concerned with, the production of rigorous arguments that justify their conclusions.  Here the rigor of mathematics in the field of analysis, the differential and integral calculus, is examined.  The common feature of the work in this area was the use of infinities: infinitely large quantities as well as infinitesimally small, or infinitesimal, objects.

The Taylor series expansion provoked many questions.  The series was named after Newton's disciple Brook Taylor and involves an infinite sequence of repeated differentiations of a function.
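
In modern notation, the expansion about a point a is

\[
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^n
     = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \cdots .
\]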

In the eighteenth century the calculus was very successful, but there were still outstanding issues.  One particularly troublesome series is 1 - 1 + 1 - 1 + 1 - 1 + … .  Various people tried to clarify the notion of the differential used in the calculus.

Euler used various arguments that are not rigorous to "prove" the equality of various series expansions, and controversial matters remained.  Fourier, in the early 19th century, introduced the idea that an arbitrarily shaped curve could be the starting point for a vibrating string, the curve being represented as a sum of trigonometric functions.  This eventually led to the study of broken graphs and then of discontinuous functions.

II.4 Finiteness, Historical Examples, Euclid’s Algorithm: Iteration, The Method of Archimedes to Calculate Pi: Approximation and Finiteness, The Newton-Raphson Method: Recurrence Formulas, Does an Algorithm Always Exist, Hilbert’s Tenth Problem: The Need for Formalization, Recursive Functions: Church’s Thesis, Pages 106 – 111

Friday, March 5th, 2010

Originally “algorithm” referred to the arithmetic “calculation based on the decimal notation for the integers”.  By the 17th century D’Alembert was using the term more generally to refer “to methods in algebra and other calculation procedures”.  It “came to  mean any process of systematic calculation that could be carried out by means of very precise rules”.    Incorporating finiteness into the definition of an algorithm gives: “An algorithm is a set of finitely many rules for manipulating a finite amount of data in order to produce a result in a finite number of steps”.  This is not a mathematically precise definition.

Iteration is used in algorithms.  Iteration is the repetition of simple procedures.  Long hand multiplication is an example of this iteration. 

Euclid's algorithm of the 3rd century B.C. determines the greatest common divisor of two integers using iteration.  The basic algorithm uses alternating subtraction.  A modification uses division with remainder and runs in polynomial time rather than exponential time; both variants are sketched below.  A generalization of the division process allows the algorithm to be used on the ring of Gaussian integers, numbers of the form a + bi where a and b are integers.  The algorithm may also be used on non-integers and leads to the concept of continued fractions.  This is called a discrete algorithm.
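
A minimal Python sketch of both variants:

```python
def gcd_subtraction(a, b):
    """Euclid's basic form: repeatedly subtract the smaller number from the larger."""
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
    return a

def gcd_division(a, b):
    """The faster form: replace (a, b) by (b, a mod b) until the remainder is 0."""
    while b != 0:
        a, b = b, a % b
    return a

assert gcd_subtraction(1071, 462) == gcd_division(1071, 462) == 21
```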

Archimedes in the third century B.C. used inscribed and circumscribed regular polygons to bracket the area of a circle and hence to estimate π.  This is a numerical algorithm because it is used to approximate in a finite number of steps rather than to calculate exactly in a finite number of steps.
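
A minimal Python sketch of the idea, using the standard perimeter-doubling recurrences for a unit circle and starting, as Archimedes did, from hexagons (the number of doublings is an arbitrary choice):

```python
import math

def archimedes_pi(doublings=5):
    """Bracket pi between half-perimeters of inscribed and circumscribed polygons."""
    inscribed = 6.0                        # perimeter of the inscribed regular hexagon
    circumscribed = 4 * math.sqrt(3)       # perimeter of the circumscribed regular hexagon
    for _ in range(doublings):
        # doubling the number of sides tightens both bounds
        circumscribed = 2 * inscribed * circumscribed / (inscribed + circumscribed)
        inscribed = math.sqrt(inscribed * circumscribed)
    return inscribed / 2, circumscribed / 2    # lower and upper bounds for pi

lower, upper = archimedes_pi()
print(lower, upper)                            # the interval shrinks toward pi
```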

Around 1670 Newton created a method to determine roots of polynomial equations.  The method uses an initial approximation and tangents, i.e. derivatives, of the polynomial.  The method, with a modification supplied by Raphson in 1690, is called the Newton-Raphson method and is said to converge quadratically.  The set of initial values whose sequences of iterates converge to a given root is called its domain of attraction.  In 1918 Julia studied these domains of attraction for cubic polynomials; they are now known to be fractal sets.  Using this method to approximate square roots was known to Heron of Alexandria in the first century.  The Newton-Raphson method starts with an initial value a_0 and a function f(x), and then iterates using the formula a_{n+1} = a_n - f(a_n)/f'(a_n).  This is called the recurrence formula or recurrence relation.
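
A minimal Python sketch, applying the recurrence to f(x) = x^2 - c so that the iterates approach the square root of c (the starting value and iteration count are arbitrary choices):

```python
def newton_sqrt(c, a=1.0, steps=6):
    """Approximate sqrt(c) with the Newton-Raphson recurrence a -> a - f(a)/f'(a)."""
    for _ in range(steps):
        a = a - (a * a - c) / (2 * a)   # same as (a + c/a) / 2; converges quadratically
    return a

print(newton_sqrt(2))                   # close to 1.41421356...
```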

In 1900, at the Second International Congress of Mathematicians, Hilbert proposed a list of 23 influential problems.  Hilbert's 10th problem is: "given a Diophantine equation a process is sought by which it can be determined, in a finite number of operations, whether the equation is solvable in integral numbers".  To solve this problem one is required to define what is meant by an algorithm.

Different people worked on this problem of determining precisely what is meant by an algorithm in a mathematical context.  These included Leibniz in the 17th century; Charles Babbage, Boole, Frege and Peano in the 19th century; and Gödel, Church and Stephen Kleene in the 1930s.  The final result was the notion of recursive functions.  A primitive recursive function is one that has an inductive definition.  New functions can be created by substitution and by primitive recursion, and a primitive recursive function is any function that can be built from the initial stock of functions using these two operations; a small illustration follows.
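
A minimal Python sketch of how primitive recursion builds new functions from an initial stock: addition is defined by recursion on the successor function, and multiplication by recursion on addition (the function names are just for illustration):

```python
def successor(x):
    """One of the initial functions: S(x) = x + 1."""
    return x + 1

def add(x, y):
    """Addition by primitive recursion: add(x, 0) = x, add(x, S(y)) = S(add(x, y))."""
    return x if y == 0 else successor(add(x, y - 1))

def multiply(x, y):
    """Multiplication by primitive recursion: mult(x, 0) = 0, mult(x, S(y)) = add(mult(x, y), x)."""
    return 0 if y == 0 else add(multiply(x, y - 1), x)

assert add(3, 4) == 7 and multiply(3, 4) == 12
```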

II.2 1800 Shift of Focus, Bolyai and Lobachevskii, Acceptance of Non-Euclidean Geometry, Convincing Others, Looking Ahead, II.3 The Development of Abstract Algebra, Old Babylon and Hellenistic Era, The Medieval Islamic World, The Latin West, Algebra is Born, The Search for Roots of Algebraic Equations, Exploring the Behavior of Polynomials in n Unknowns, The Quest to Understand the Properties of Numbers, Modern Algebra, II.4 Algorithms, What is an Algorithm?, and Abacists and Algorists, Pages 88 – 105.

Thursday, March 4th, 2010

Gauss was suspicious of the parallel postulate and thought that there might be a geometry other than the Euclidean one.  Geometry was divided into the cases where the angles of a triangle summed to greater than two right angles, called G, equal to two right angles, called E, and less than two right angles, called L.  The E case is Euclidean geometry.  The G case was proved not to exist.  The L case loomed.

The fame for the discovery of non-Euclidean geometry goes to Bolyai (1831) and Lobachevskii (1829).  Analogous to the circle were the L-curve of Bolyai and the horocycle of Lobachevskii.  Both arguments were made in three-dimensional geometry, where Bolyai described the F-surface and Lobachevskii the horosphere.  The question became: which geometry is "true"?  During their lifetimes their ideas were mostly neglected, and Gauss did not support them publicly during his lifetime.

When Gauss died in 1855, unpublished papers showed his support for the ideas of Bolyai and Lobachevskii.  Gauss's student Riemann advanced the ideas through his development of manifolds.  Riemann died in 1866, and by this time Eugenio Beltrami had also developed the same ideas.  Minding had constructed a pseudosphere with constant negative curvature; this was rediscovered by Liouville and developed further by Codazzi.  In 1871 Klein carried these ideas further.  Klein became a full professor at age 23.

Klein moved the emphasis away from the figures of geometry to the rigid-body transformations of rotation, translation and reflection.  Poincare reformulated Beltrami's disk model to make it conformal, meaning that angles in non-Euclidean geometry were represented by the same angles in the model.  Arguments ensued, and it was suggested that one could not determine whether light rays were straight or whether angles failed to sum as expected.  Poincare stated that in a particular situation there were two possible conclusions: either light rays are straight and the geometry of space is non-Euclidean, or light rays are somehow curved and space is Euclidean.  It was decided that there was no logical way to choose between these positions; this view was called conventionalism.  Federigo Enriques was a prominent critic of conventionalism.  A discussion ensued on whether there was a difference, and whether it mattered, given that the law of gravity could not be altered but masses could be moved and the effects studied.  Enriques argued that one could decide whether a property was geometrical or physical by seeing whether we had any control over it.

In 1899 Hilbert axiomatized geometry.  In 1915 Einstein proposed his general theory of relativity which is in a large part a geometric theory of gravity.  He used Riemann’s work on manifolds to describe gravity as a kind of curvature in the four-dimensional manifold of spacetime.

Abstract algebra started with the usual high school variables and constants represented by letters of the alphabet and morphed into the study of groups, rings and fields by the research mathematician.  Abstract algebra includes the abstract structures of groups, rings and fields, defined in terms of a few axioms and built up of substructures such as subgroups, ideals and subfields.  Also included are maps such as homomorphisms and automorphisms.  What follows traces the relationship between high school algebra, which includes the analysis of polynomial equations, and modern or abstract algebra.

First and second degree polynomial equations can be found in Old Babylonian cuneiform texts of around 2000 B.C.  The solutions in these texts always followed a geometric interpretation of the equations.  Euclid and Archimedes continued along this geometric line.  Diophantus wrote a text called Arithmetica, in which any single solution was regarded as the answer, and he did not always follow a geometric interpretation of a solution method.

Al-Khwarizmi in Baghdad advanced many of the old mathematical ideas.  The title of one of his books contained the word for completion, "al-jabr", from which the word algebra was passed down to us.  He made explicit the relationships between geometric areas and lines on the one hand and multiplications, additions and subtractions on the other.  This suggested a move away from geometric solutions and towards algebraic solutions of problems.

Around 1100 A.D., in a book entitled "al-jabr", Omar Khayyam analyzed a form of the cubic equation.  Al-Karaji synthesized this work and the work of Diophantus.  He, and Brahmagupta four hundred years earlier, had techniques for finding integer solutions to Pell's equations.  Pell's equations are equations of the form ax^2 + b = y^2, where a and b are integers and a is not a square.

After the fall of the Roman Empire in the Latin West, Fibonacci, in Italy around 1202, presented Al-Khwarizmi's work almost verbatim in a book called "Liber Abbaci".  In 1494 the Italian Luca Pacioli published a compendium of all known mathematics, one of the earliest printed mathematical texts.  This book highlighted the question "Could algorithmic solutions be determined for the various cases of the cubic?"  Cardano and his student Ludovico Ferrari answered this in the affirmative for the cubic and extended it to the quartic.

Algebra was popularized with the translation of Diophantus's "Arithmetica" in the 1560s.  Viete in 1591 was concerned with the law of homogeneity, that is, with the units attached to his quantities.  He also used letters to designate numbers.

After Viete, both Fermat and Descartes removed the problem of the law of homogeneity.  Descartes understood that there are n roots to an n-th degree polynomial, which is the content of the fundamental theorem of algebra.  Descartes highlighted the fundamental theorem of algebra and the solution of polynomial equations of degree greater than four.  D'Alembert, Euler and Gauss were all concerned with the fundamental theorem of algebra.

The search for roots of algebraic equations propelled algebra through the 1700s and 1800s.  This search advanced the use of radicals and the idea of solvability by radicals.  Quadratics, cubics and quartics had been solved.  The insolvability of the quintic was shown by Abel in the 1820s.  Modular arithmetic was developed by Gauss, who in 1801 showed that the 17th roots of unity are constructible.  He also developed cyclic groups and the idea of a primitive element, or generator, of a group.

Galois around 1830 formulated a theoretical process to determine whether or not an equation was solvable.  He used fields and groups and included automorphisms and the idea of invariance.  In 1893 Heinrich Weber gave abstract definitions for the ideas of a field and a group.

Leibniz was interested in equations with multiple unknowns.  He used what was later called the determinant.  The determinant is associated with an n by n square array of numbers, or matrix.  Cramer, Vandermonde and Laplace also studied determinants.  Sylvester originally developed matrices to transform variables linearly.  Cayley explored these and eventually developed the area of vector spaces.  Sylvester coined the term invariant, which was further used by Boole.  This area was further developed by Gotthold Eisenstein, Hermite, Otto Hesse, Paul Gordan and Alfred Clebsch.  Hilbert in the 1880s and 1890s advanced their ideas and shifted the emphasis toward abstract modern algebra.

It is the work of Hilbert that was used in my PhD thesis to develop multidimensional smoothing splines.

In the 6th century B.C. the Pythagoreans had defined a perfect number as a positive integer which is the sum of its proper divisors: 6 = 1 + 2 + 3 and 28 = 1 + 2 + 4 + 7 + 14.  Complex numbers were used by Cardano and Bombelli in the sixteenth century.  In the 17th century Fermat claimed that he could prove that x^n + y^n = z^n, for n an integer greater than 2, has no nontrivial solution in the integers.  This is known as Fermat's last theorem.

In the 1770s Euler had used a type of complex number with integer coefficients to prove Fermat's last theorem for n = 3.  Gauss extended this to complex numbers with integer coefficients in the 1820s; these are called Gaussian integers and are closed under addition, subtraction and multiplication.  He also defined notions of unit, prime, and norm in order to prove an analogue of the fundamental theorem of arithmetic.
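
For reference, the norm in question is the familiar one, and it is multiplicative, which is what drives the factorization theory:

\[
N(a+bi) = a^2 + b^2, \qquad N(zw) = N(z)\,N(w), \qquad \text{units: } \pm 1, \pm i .
\]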

Quaternions were developed by Hamilton in 1843.  The quaternions are not commutative and form a four-dimensional number system.

Further generalizations to n dimensional matrices and vector spaces were developed.

The classification of finite simple groups inspired the set-theoretic and axiomatic work of Cantor and Hilbert.  The classic 1930 textbook "Moderne Algebra" by van der Waerden describes the modern algebra that continues to characterize algebraic thought today.

A precise everyday definition of "algorithm" is not available; informally it means a rule, technique, procedure or method.  A formal definition was only achieved in the 20th century.

Abacists are those who use an abacus to calculate; the Chinese counting frame is an abacus.  Our number system today (base 10) is positional rather than additive (such as the Roman numerals).  The arithmetic operations of the decimal number system were labeled algoritmus, and algorists used these algoritmus methods to perform calculations.

The origin of the word algorithm is Arabic: it came from a distortion of the name al-Khwarizmi, the author of the oldest known work on algebra, in the 9th century.  The title of that work translates to "The compendious book on calculation by completion and balancing", and it gave rise to the word "algebra" from the word "al-jabr" in the Arabic title "al-Kitab al-mukhtasar fi hisab al-jabr wa'l-muqabala".

II.1 Are Lengths Numbers?, Decimal Place Value, People Want a Number, Equal Status to All Numbers, Real, False and Imaginary, Number Systems Old and New, II.2 Geometry, Naive Geometry and The Greek Formulation, Pages 78 – 85

Wednesday, March 3rd, 2010

The Greeks figured out that lengths may not be commensurable.  They discovered incommensurable magnitudes.  The common example was the side and diagonal of a square.  They demonstrated that there is no common unit that could be used to represent the side and the diagonal as a product of this unit and a whole number.  Of course, they discovered that √2 and 1 were incommensurable or that √2 is irrational.
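
The classical argument, in modern dress, is a short proof by contradiction:

\[
\sqrt{2} = \tfrac{p}{q} \ (\text{lowest terms}) \implies p^2 = 2q^2 \implies p = 2r \implies q^2 = 2r^2,
\]

so q would also be even, contradicting the assumption that p/q was in lowest terms.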

They did not like to work with lengths of this sort and used ratios instead.  They did not consider lengths to be numbers.  That may not seem reasonable to us today but it made sense to them back then (fourth century B.C.).  Eventually this system of ratios gave way to a generalized notion of numbers.

In the 5th century A.D. in India the system of 9 symbols with base 10 was created and used.  A place marker to indicate an unused decimal position was also used.  This place marker, or "nothing", eventually morphed into the use of a zero.

By the 9th century the decimal system had made its way to Baghdad of the Islamic world.  It was used in a popular book about Indian numeration by Al-Khwarizmi.  A few centuries later the decimal system became very popular in Europe.

Al-Khwarizmi in another book on algebra states “When I considered what people generally want in calculating, I found that it is always a number.”

In 1585 a Flemish mathematician and engineer, Stevin, popularized decimal fractions in a booklet whose title translates to "The Tenth".  He used his numbers for all positive numbers (and lengths), often realizing that they were only terminating decimal approximations to more precise numbers.  After the invention of the logarithm, sine and cosine tables appeared in decimal form and these cemented the use of the decimal representation of numbers.

In his 1572 book "Algebra", the mathematician and engineer Bombelli used radicals to solve cubic equations.  About 50 years later Albert Girard and Descartes said equations may have three types of roots: true (positive), false (negative) and imaginary (complex).  The understanding culminated with Gauss establishing the fundamental theorem of algebra, that every polynomial equation has a complete set of roots in the complex number system.

It became clear that one must be concerned with what number system one was using.  Numbers of the form a + bi with a and b both integers were explored by Gauss and Kummer as being similar to the integers.  Galois defined and used rational numbers.  Johann Lambert in the 18th century established that e and π are irrational and conjectured that they are not roots of a polynomial equation and hence are transcendental numbers.  Subsequently Cantor showed that the vast majority of real numbers are transcendental.

Hamilton developed the 4 dimensional quaternions in 1843 which are not commutative.

Galois introduced finite fields and function theorists worked with fields of functions.

Early in the 20th century Kurt Hensel introduced the p-adic numbers; these were built from the rational numbers with a special role given to the prime number p.

Eventually these theories were brought together by Noether and the subject was known as “abstract algebra”.  Cayley created octonions.

Modern geometry was influenced by Hilbert and Einstein in the early 1900s.  For over two thousand years geometry had been defined by the ideas of Euclid, laid out in his book "Elements".

 Naive geometry is that of length, width and depth of our everyday experience.  This experience can be axiomatized into a mathematical subject.  The ideas of length, straightness and angle had to be understood before developing the mathematical theory. 

Egypt and Babylonia had ideas about geometry, as evidenced by the large cities they built.  Euclid of Alexandria, around 300 B.C., wrote the definitive text on geometry, entitled "Elements".

The Elements consists of a sequence of books, 13 in all.  They cover the study of plane figures (triangles, quadrilaterals and circles), the Pythagorean theorem, the theory of ratio and proportion, the theory of similar figures, whole numbers and elementary number theory, lengths of the form √(a ± √b), and three-dimensional geometry.  Included are the proof that there are infinitely many prime numbers, the construction of the five regular solids, and the proof that there are no others.

The logic used for the proofs in the Elements is exquisite.  It does not use circularity, has clear and acceptable inference rules and adequate definitions.

There is discussion of parallel lines and how they fit into the axiomatic system.

The mathematical space defined by the Elements is in fact infinite and so one may question whether or not it was intended as a simple idealization of the physical world.

I.4 and II.1: What Do You Find in a Mathematical Paper? and The Origins of Modern Mathematics, From Numbers to Number Systems, Numbers in Early Mathematics, Pages 73 – 77.

Tuesday, March 2nd, 2010

In a mathematical paper one is trying to establish mathematical statements.  The most important mathematical statements are called theorems; there are also propositions, lemmas and corollaries.  A theorem is a mathematical statement that is regarded as intrinsically interesting.  A proposition is like a theorem but not as interesting.  Lemmas are statements that serve as intermediate subgoals in the argument of the paper.  A corollary of a mathematical statement is a statement that follows easily from it.

A proof is used to establish these statements.  A proof is a sequence of mathematical statements written in a formal language: the first few statements are the initial assumptions or premises, each remaining statement follows from the earlier ones by means of simple and clear logical rules, and the final statement in the sequence is the statement to be proved.

Definitions, problems and conjectures are also included in a mathematical paper.

Prime palindromes are used as an example of a problem that is rather artificial from a mathematical point of view.  It depends upon the base that is used, and reversal is not even well defined, as trailing zeroes become leading zeroes of the reversed number.  Thus, because of trailing zeroes, the reverse of the reverse of a number may be a different number from the one started with.

The next 80 pages are concerned with the origins of modern mathematics.

Numbers are initially used as adjectives.  When the same number, or adjective, is used for many different nouns it eventually takes on a life of its own.  In this way it becomes an entity in itself and finally a member of a system of such numbers.

In early mathematics fractions were not trivial to deal with.  In Egypt fractions tended to be thought of as sums of reciprocals of integers.  In Mesopotamia fractions were written in base 60, a sexagesimal place value system.  Both of these systems had their issues; an exact value for 1/7 was not possible in the sexagesimal system.  A surviving Egyptian papyrus includes the number 14 and 1/4 and 1/56 and 1/97 and 1/194 and 1/388 and 1/679 and 1/776, which in modern notation is the number 14 and 28/97.  And 4,000 years later I confirmed this, in a few seconds, with the software Mathematica!
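
The same check takes a few lines of Python with exact rational arithmetic:

```python
from fractions import Fraction

parts = [14] + [Fraction(1, d) for d in (4, 56, 97, 194, 388, 679, 776)]
total = sum(parts, Fraction(0))
assert total == 14 + Fraction(28, 97)   # i.e. 1386/97
print(total)
```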

The fact that we still have 60 minutes in an hour and 60 seconds in a minute goes back to the Babylonian sexagesimal fractions of about 4,000 years ago.

I.4: More Determining Whether Different Mathematical Properties are Compatible, Working With Arguments that are not Fully Rigorous, Conditional Results, Numerical Evidence, Illegal Calculations, Finding Explicit Proofs and Algorithms, Pages 67 – 72.

Monday, March 1st, 2010

A group, G, is called finitely generated if there is a finite subset of elements of G, say X, such that every element of G can be written as a product of elements of X and their inverses.  The set of all 2 by 2 matrices with determinant one and integer entries is called SL_2(Z), and SL_2(Z) is finitely generated.
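
For example, a well-known generating set for SL_2(Z) consists of the two matrices

\[
S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad
T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix},
\]

so that every element of SL_2(Z) is a product of copies of S, T and their inverses.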

An element, x, of a group, G, is said to be of finite order if there is some power of x that equals the identity of the group G.  The smallest such power is called the order of x.

In a group in which every element other than the identity has order 2, the group is Abelian, that is, ab = ba for all a and b.  This shows that the general property of every element having order 2 implies the Abelian property; a short proof is sketched below.
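
The proof is a one-line calculation.  Since x^2 = e for every x, each element is its own inverse, and therefore

\[
ab = (ab)^{-1} = b^{-1}a^{-1} = ba .
\]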

A finitely generated group G has a minimal set of generators if every element of the group can be formed from the generators and none of the generators can be omitted.

For a group G in which every element has order 2 and which has a minimal set of k generators, one can define a standard form.  This is done by writing each element as a product of generators in the same order as the generators are listed and by eliminating or reducing any powers greater than 1 of a repeated generator.  Thus G has at most 2^k elements, and in particular G is finite.

So a finitely generated group in which every element has order 2 must be finite.

The Burnside problem is: if G is finitely generated and every element of G has order dividing n, must G be finite?  The answer is yes for n = 3.  It was shown by Ivanov in 1992 that G does not have to be finite for n ≥ 13.  It is still not known whether a group with 2 generators in which every element has order 5 must be finite.

There are arguments that are not fully mathematically rigorous.  These include conditional results, numerical evidence and illegal calculations.

The Riemann hypothesis is the most famous unsolved problem in mathematics.  It is equivalent to the statement that π(n) and li(n) differ by at most c√n log(n) for some constant c, where π(n) is the number of primes up to n and li(n) = ∫_0^n 1/log(t) dt.  If true, this would mean that li(n) is a good approximation to π(n).  Although it is unproven, many arguments depend upon the Riemann hypothesis.

Another unresolved problem is whether the complexity classes P and NP are equal.

 Algebraic number theory has many conjectural statements.

The Goldbach conjecture states that every even number greater than or equal to 4 is the sum of two primes.  There is numerical evidence for this, as it has been checked for every even number up to 10^14.  There is an even stronger version of Goldbach's conjecture.
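
A minimal Python sketch of the kind of check that produces such numerical evidence (practical only for small ranges; the large-scale verifications use far more efficient sieves):

```python
def is_prime(n):
    """Trial-division primality test; adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_pair(n):
    """Return a pair of primes summing to the even number n, or None if none exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

assert all(goldbach_pair(n) is not None for n in range(4, 10_000, 2))
```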

It is sometimes the case that nonrigorous calculations lead to conjectures that may or may not be true.  Physicists sometimes provide such arguments, and they often appear to lead to true conjectures.  These arguments may include letting something tend to zero that does not make sense at zero, or taking a limit over integers as though they were continuously varying real numbers.

There may be proofs that show the existence of some object but give no clue as to how to find it.  The real numbers are dominated by transcendental numbers, yet only a few of them are known explicitly, for example π.

One can sometimes find clever ideas to overcome difficulties.  An example is provided of finding volumes of convex objects in high dimensions.