Fibonacci Redux

Time for more microbenchmarks! Someone just pointed me to this post, which describes another fibonacci shootout. It gave the brand new Ruby 1.9 VM a test drive. Ruby 1.9’s execution model is quite different from 1.8’s, featuring a highly optimized bytecode interpreter (codenamed YARV). I must admit that I had totally overlooked YARV; all the buzz around Rubinius distracted me.

YARV beats the pants off Ruby 1.8. It also outperforms Panda Smalltalk, though not by much. I have prepared a fresh benchmark. This time I present the fibonacci code as images, since WordPress refuses to let me format code sanely. I also got a bit carried away with GIMP and made the code look all pretty.

[Image: fib-rb190.png — the fibonacci benchmark in Ruby 1.9]

[Image: fib-st2.png — the fibonacci benchmark in Smalltalk]

Progress

I am having great fun working on my Smalltalk VM. I spent the last few weeks mainly on implementing a compacting garbage collector. There have also been some nice additions to the class library, including the Set, Dictionary, and System classes. What’s been particularly pleasing to me personally is that certain benchmarks have come out very fast, faster than Python or Ruby.

Fibonacci Benchmark

fibonacci
    self <= 2
        ifTrue: [^ 1]
        ifFalse: [^ (self - 2) fibonacci + (self - 1) fibonacci].

The naive recursive fibonacci function isn’t a very representative benchmark. Nevertheless, I am advertising the results here because I think it’s a good start. In my opinion, the fibonacci function does at least give a good measure of method invocation performance (i.e., the cost of changing the current context, or activation record, of an interpreter). The benchmarks below are all based on calculating the 32nd fibonacci number. I assume that the Python and Ruby binaries are compiled using the optimal CFLAGS; if that’s not the case, then I guess Ubuntu has a bit of a problem.

Squeak Smalltalk: 0.427s

The winner: Squeak. I can’t say exactly why it’s faster than Panda, but I suspect Squeak’s context management scheme is slightly more optimized at the moment.

Panda Smalltalk: 0.67s

Yes, that’s my VM, coming in a respectable second. As I stated above, there’s still work to be done on context management. The executable was compiled using “-g -O3 -fomit-frame-pointer”.

Python: 1.18s

Python has a bytecode interpreter and uses the C stack to store activation records, so its context management is probably an order of magnitude faster than Panda’s. I suspect Python is slower because it allocates an actual object for each integer. In Panda, integers are simply stuffed into pointers, which are tagged in the low-order bits so that the interpreter can differentiate between ordinary objects and integers.

Ruby: 4m0.2s

No excuse. This gives dynamic languages a bad name. I suspect the main performance killer here is that Ruby’s execution model is based on an abstract syntax tree rather than bytecodes. The problem with ASTs is that the nodes are spread throughout the heap, reducing locality; a lot of indirect memory references are required during execution, which slows things down. Hopefully Rubinius or JRuby will improve matters, since they include bytecode interpreters.

3 + 4

As some of you might have noticed, I have become somewhat of a Smalltalk weenie recently. This was a long time coming, and is basically a reaction to many years of tedious hacking in C. Now don’t get me wrong: C is a great language, and I still love it, but I have been using it in entirely inappropriate places.

After reading up on Smalltalk a few months ago, I was entranced, and quickly decided that I just had to build my own Smalltalk system. Since I didn’t want it to be tied to any sort of platform (i.e. Java), I chose to write it in C (C99 to be specific).

I started out with the compiler. In order to learn more about compiler theory, I wrote a Smalltalk source-to-bytecode compiler from the ground up. The compiler translates source code into a compact representation consisting of about 30 different bytecodes. Currently, all Smalltalk-80 syntax is supported (except for message cascades). Of course, the compiler needs to manipulate Smalltalk objects, so I also designed an object memory model (basically, how objects are stored in physical memory). All objects have the same header, consisting of book-keeping, hash, and class fields. I intend to reduce the number of header fields to two, though, since the hash field is often unnecessary. For example, instances of String will have a custom hash function, and will never need to access the hash field.
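A minimal C99 sketch of what such a three-field header might look like follows. The field and type names here are my own guesses for illustration; Panda’s actual definitions may well differ:

```c
#include <stdint.h>

/* Hypothetical ordinary object pointer: a machine word that either holds
   a tagged immediate or points at a heap object. */
typedef uintptr_t oop;

/* Hypothetical object header: every heap object begins with these three
   fields, as described in the text. */
struct object_header {
    uintptr_t bookkeeping; /* GC mark bits, size/format information */
    uintptr_t hash;        /* identity hash (the field slated for removal) */
    oop       klass;       /* pointer to the object's class */
};

/* Instance variables follow the header in memory; for example, a
   hypothetical two-slot Point object might be laid out as: */
struct point_object {
    struct object_header header;
    oop x;
    oop y;
};
```

Dropping the hash field would shrink every object by one word, which matters because small objects (points, associations, cons-like nodes) dominate a Smalltalk heap.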

As with most Smalltalk VMs (and possibly Python/Ruby), integers are represented using tagged pointers. This kind of pointer has a 1-bit tag in the low-order position. If the bit is 1, then the rest of the pointer contains a 31-bit integer; otherwise, the pointer simply points to a heap-allocated object. This arrangement increases performance greatly, as having to heap-allocate integer objects would absolutely kill performance. We can get away with it because, on most architectures, pointers are aligned on 4-byte boundaries, leaving the low-order 2 bits of each pointer unused.

Just this evening, the VM passed a critical milestone: being able to evaluate the expression “3 + 4”. This sounds trivial, but a whole lot of nuts and bolts have to be in place to make it work. A few hours after the “3 + 4” test, the VM was capable of executing unary methods, as well as handling local variables and activation records.

Below is some sample Smalltalk code showing what the VM can currently do: basically some arithmetic, local variable manipulation, and a method call to “increment”. Note that the method doIt belongs to the UndefinedObject class, and increment to SmallInteger.

[Image: smalltalk.png — sample Smalltalk code exercising the VM]

This work is licensed under an Attribution-ShareAlike 3.0 license.