Doug Bagley's well-known and contentious programming language "shootout" compares the performance of 30 different languages on 25 benchmark tasks, none of which takes more than a few hundred lines of code (and I don't think any of them requires more than 60 lines of lisp).
Such a comparison is not completely useless. It does give some sense of the expressive power of each language, and there really are projects that need to fit in a couple megabytes of RAM (and a test like Bagley's does give you at least a rough lower bound on memory requirements). And I suppose if you didn't know that most lisps compile to native code while Ruby is interpreted, CMUCL's time of 0.33 seconds on the matrix multiplication test compared to Ruby's 47.20 seconds would clue you in pretty quickly.
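(For a flavor of why: the matrix test is essentially a triple-nested loop, and a lisp version competitive with C needs the usual type and optimization declarations. Something in this spirit, though it's a sketch and not Bagley's actual benchmark code:)

    ;; Sketch of a shootout-style matrix multiply with declarations;
    ;; not Bagley's actual code, and the fixnum element type assumes
    ;; entries small enough that the sums don't overflow.
    (defun matmul (a b c n)
      (declare (type (simple-array fixnum (* *)) a b c)
               (type fixnum n)
               (optimize (speed 3) (safety 0)))
      (dotimes (i n c)
        (dotimes (j n)
          (let ((sum 0))
            (declare (type fixnum sum))
            (dotimes (k n)
              (incf sum (* (aref a i k) (aref b k j))))
            (setf (aref c i j) sum)))))

With declarations like these, a good native-code compiler like CMUCL's can open-code the array accesses and arithmetic, which is most of the gap between it and an interpreter on a loop-heavy test.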
But comparisons using tasks more representative of larger and more complicated applications (which is the space in which lisp is usually assumed to come into its own) would be helpful.
This set of slides by Ken Anderson (from 1994, spotted recently on the ll1-discuss list) compares C and lisp implementations of two machine learning benchmarks, CASCOR1 and Genetic Programming. The results are interesting, though not too surprising: both benchmarks took at least twice as many lines of C (about 8000) as lisp (about 3500). The lisp versions were initially considerably slower than the C implementation, but both ACL and CMUCL were eventually sped up to the point of being slightly faster than the gcc version (Lucid got close to gcc).
Another interesting statistic: for the GP benchmark, the C code spent over 30% of its time in malloc/free, while the lisp code spent about 15% of its time consing/gcing.
Ken Anderson has more information on lisp (and Java) performance. Other sources of information that I've found helpful are the section on optimization in Norvig's PAIP and various columns from Jon Bentley (collected in Programming Pearls and More Programming Pearls, which don't mention lisp but are pretty good).
Posted by jjwiseman at August 06, 2002 09:39 AM

Apropos Programming Pearls: in there is a fine example of data structure optimization. A telephone application needs to plough through 4 million telephone numbers to find the first vacant one. The challenge in this otherwise simple task: make it work, as efficiently as possible, on a machine without 4 million machine words of memory.
The author's first idea is a merge sort over the telephone numbers. The more efficient alternative is a bit array, with a single bit per number marking it "vacant" or "taken".
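(A minimal sketch of the bit-array version in Common Lisp, assuming the numbers are dense integers below 4 million; the names are mine, not Bentley's:)

    ;; One bit per phone number: 4 million bits is about 500KB,
    ;; versus 4 million machine words for sorting the numbers.
    (defparameter *taken*
      (make-array 4000000 :element-type 'bit :initial-element 0))

    (defun mark-taken (number)
      (setf (sbit *taken* number) 1))

    (defun first-vacant ()
      "Index of the first number still vacant, or NIL if all are taken."
      (position 0 *taken*))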
So my question (or, see it as a challenge): is there a way to optimize a program's data structures to trade performance against memory? I would be happy with a "simple" optimizer for vector element sizes, like the one Bentley did in his head. Or is there a CL construct for this and I'm just too clueless to find it?
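(For concreteness, the manual mechanism CL does provide is specialized arrays: MAKE-ARRAY's :ELEMENT-TYPE requests a size, and UPGRADED-ARRAY-ELEMENT-TYPE reports what the implementation will actually store. A sketch of that mechanism, which is a knob rather than an optimizer:)

    ;; Ask what the implementation would really use for a 3-bit element.
    (upgraded-array-element-type '(unsigned-byte 3))
    ;; => an implementation-chosen supertype, e.g. (UNSIGNED-BYTE 4),
    ;;    or T in an implementation with no small specialized arrays

    ;; One byte per element in any implementation with 8-bit arrays.
    (make-array 1000 :element-type '(unsigned-byte 8) :initial-element 0)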
Posted by: Andreas Fuchs on August 6, 2002 12:15 PM