I’ve been nibbling through Douglas R. Hofstadter’s Pulitzer Prize-winning book, Gödel, Escher, Bach. In Chapter XVII he mentions the mathematician Srinivasa Ramanujan (1887-1920) who had a talent for extremely fast mathematical analysis. This class of mind (including so-called idiot savants) he called lightning calculators. In this passage he discusses the unlikelihood that such minds have access to resources or processes outside of general recursive functions:
One could probably make a nice plot showing how the time taken by a lightning calculator varies with the size of the numbers involved, and the operations involved, and from it deduce some features of the algorithms employed.
This got me thinking about my own mathematical algorithms. By that term I mean the series of atomic steps taken to produce a result such as the sum
14 + 38 = 52 or the quotient
39 ÷ 3 = 13.
I vaguely remember being taught to add two-digit numbers in grade school. The process begins with the numbers stacked so that the ones and tens places are vertically aligned. A horizontal line is drawn to separate the stack from the result below. Starting with the right-most column, we sum each column and place the result below the stack. If the result is two digits, place the “ones” numeral below the current column and the “tens” numeral above the next column to the left. (“Carry the one.”) Here is the final product, showing the work done:
  1      ← carry the one
  14
+ 38
----
  52
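A minimal sketch of this right-to-left procedure in Python (the function name and digit-list representation are my own illustration, not from the post):

```python
def add_right_to_left(a, b):
    """Sum two non-negative integers column by column,
    starting from the ones place and carrying as needed."""
    # Split each number into decimal digits, least significant first.
    da = [int(d) for d in str(a)][::-1]
    db = [int(d) for d in str(b)][::-1]
    result = []
    carry = 0
    for i in range(max(len(da), len(db))):
        col = carry
        col += da[i] if i < len(da) else 0
        col += db[i] if i < len(db) else 0
        result.append(col % 10)   # "ones" numeral goes below this column
        carry = col // 10         # "carry the one" to the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))
```

For example, `add_right_to_left(14, 38)` walks the same two columns as the worked stack above: 4 + 8 yields 2 carry 1, then 1 + 3 + 1 yields 5.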
That is the most basic algorithm in my mathematical toolbox. You probably have a similar algorithm that you use to add a column of numbers. Maybe you visualize the stack and proceed consciously through the steps, or perhaps you have trained your mind to produce mathematical results at a subconscious level. Hofstadter’s point was that everyone’s result must be produced, at some level, by a series of simple steps he called a general recursive function (Church-Turing Thesis).
Reading about this, I realized that my own addition algorithm proceeds not from right to left but from left to right. Whereas the standard method begins with the least significant digits, my method begins with the most significant digits. (Let’s leave the Freudian isomorphisms out of this discussion, interesting though they may be.) Here is the way I add numbers:
Stack the numbers as before. Sum the left-most column and write the result below. (Begin loop.) Sum the next column and if it exceeds one digit, increment the previous result and append the new result. (End loop.)
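The same steps can be sketched in Python. Everything here is my own framing (function name, digit lists, and the simplifying assumption that both numbers have the same width):

```python
def add_left_to_right(a, b):
    """Sum two equal-width numbers column by column from the most
    significant digit, incrementing earlier output digits on overflow."""
    da = [int(d) for d in str(a)]
    db = [int(d) for d in str(b)]
    assert len(da) == len(db), "sketch assumes equal-width numbers"
    result = []
    for x, y in zip(da, db):
        col = x + y
        if col >= 10:
            # Column exceeded one digit: increment the result written
            # so far, rippling left through any 9s...
            carry = 1
            j = len(result) - 1
            while carry and j >= 0:
                result[j] = (result[j] + 1) % 10
                carry = 1 if result[j] == 0 else 0
                j -= 1
            if carry:
                result.insert(0, 1)
            col -= 10
        result.append(col)  # ...then append the new column's ones digit
    return int("".join(str(d) for d in result))
```

On 14 + 38, this writes 4 (from 1 + 3), then hits 4 + 8 = 12, bumps the 4 up to 5, and appends the 2: 52.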
I wonder whether my algorithm can produce results with fewer operations on average. I guess that if the likelihood of column sums exceeding one digit falls below a certain threshold, my method will be faster (completing in fewer operations). Perhaps one method is easier for minds having a specific learning preference, i.e., visual, auditory, or tactile.
Here’s homework for the curious: write a program that compares these algorithms in terms of number of operations to sum every possible set of two, three, and four numbers having two, three, and four digits. If my method is faster for some class of sums, such as those having an instantly recognizable feature like a low occurrence of digits greater than five, would the extra steps of recognizing such a class and selecting the most appropriate algorithm improve the overall speed of doing sums?
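One way the homework might start, for the smallest case of two two-digit numbers. The counting scheme is entirely my own assumption (one operation per column sum, plus one per carry or per output digit touched on overflow); a different scheme could easily flip the result:

```python
from itertools import product

def ops_right_to_left(a, b):
    """One op per column sum; one extra op per carry into the next column."""
    da, db = [int(d) for d in str(a)], [int(d) for d in str(b)]
    ops, carry = 0, 0
    for x, y in zip(reversed(da), reversed(db)):
        ops += 1                      # sum the column
        carry = (x + y + carry) // 10
        ops += carry                  # "carry the one"
    return ops

def ops_left_to_right(a, b):
    """One op per column sum; one extra op per output digit touched
    (incremented or prepended) when a column overflows."""
    da, db = [int(d) for d in str(a)], [int(d) for d in str(b)]
    ops, result = 0, []
    for x, y in zip(da, db):
        ops += 1                      # sum the column
        col = x + y
        if col >= 10:
            carry, j = 1, len(result) - 1
            while carry:
                ops += 1              # touch a digit of the partial result
                if j < 0:
                    result.insert(0, 1)
                    carry = 0
                else:
                    result[j] = (result[j] + 1) % 10
                    carry = 1 if result[j] == 0 else 0
                    j -= 1
            col -= 10
        result.append(col)
    return ops

# Tally which method needs fewer operations over all two-digit pairs.
wins = {"left_to_right": 0, "right_to_left": 0, "tie": 0}
for a, b in product(range(10, 100), repeat=2):
    l, r = ops_left_to_right(a, b), ops_right_to_left(a, b)
    wins["left_to_right" if l < r else "right_to_left" if r < l else "tie"] += 1
print(wins)
```

Extending the tally to three- and four-digit operands, and to stacks of more than two numbers, is left as the homework proper.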
My hope is that somebody can produce objective proof that my summing algorithm is not always slower than the right-to-left method taught to me in school. If not, I might be afflicted with mathematics disorder—an actual diagnosis in the DSM-IV. Pfizer?