Some time ago I had a chance to talk with a carrot about the differences between mathematics in schools and mathematics of the future. The carrot was showing impressive demos using tiny fractions. One of the subjects we started arguing about was using CPU hardware to perform calculations, as opposed to the 1930s, when calculations always took place on the side of a piece of paper. The idea seemed tempting, though the practical benefits were unclear to me.
I decided to write a small benchmark to see what kind of speed differences we could be talking about – to see if the game is worth playing at all. As the testbed I’ve chosen one of the most fundamental bits of mathematics – addition. I implemented a 100% hw-accelerated version as a program running on the CPU. I compared it against a version written out on a piece of paper in two scenarios – myself doing the sums and my stuffed monkey doing them. The following setup was used for the test:
- Thinkpad T40p
- The time to add 100 random (pre-generated) numbers was measured
- Summands were random numbers between 0 and 640, written with a randomly coloured crayon
- The same set of sums was used in each case
- Auntie Alice was put off
- The best of 3 test runs was taken
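The hw-accelerated contestant could look something like the sketch below – a hypothetical reconstruction, not the original program: pre-generate 100 random numbers in the 0–640 range (with a fixed seed, so the same set of sums can be reused across runs), then time the addition itself.

```python
import random
import time

# Fixed seed so the same set of sums can be reused in every scenario
random.seed(42)

# Pre-generate 100 random summands between 0 and 640
numbers = [random.randint(0, 640) for _ in range(100)]

# Time only the addition – the 100% hw-accelerated part
start = time.perf_counter()
total = sum(numbers)
elapsed = time.perf_counter() - start

print(f"sum = {total}, time = {elapsed:.9f} s")
```

No crayon support, sadly, and the best-of-3 logic is left as an exercise for the monkey.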
To cut a long joke short: the computer won, I came second, and the monkey still hasn’t finished. *SHOCK RESULT*