October 11, 2004
Brent Fulgham has revived Doug Bagley's original Great Computer Language Shootout, “with new languages and revised to work with modern compilers.”
The versions of SBCL and CMUCL they've used are at most a few months old, and the shootout does include tests of newLisp.
Posted by jjwiseman at October 11, 2004 01:04 PM
That's a bit depressing: it used to be that if you weighted LOC heavily on the scorecard and added a sprinkling of performance, the Lisp compilers would rank high or in first place. Now they're not even close... Anyone know what happened?
The other language implementations are forging ahead?
I don't think cmucl or sbcl have very good instruction schedulers (increasingly important on modern machines), and their register allocators have needed work.
Newlisp doesn't look like much of a contender though, does it?
They used to be high up there because a bunch of us did the obvious o-p-timizations and broke good code to make sure the lisp code was as fragile as C and almost as fast. Under the new management, some of the benchmark code apparently no longer works. I am not entirely sure all are being run correctly either. I'll take a look when I have a large chunk of time again. Brent Fulgham is very receptive to feedback and code, so feel free guys. (There's an obvious algorithmic improvement to be tried for the moments/order statistics code in particular AFAIR.)
The "questionable"ness of the O word above is hilarious if intentional BTW. Made me laugh out loud, either way.
Don't worry, Lua's losing under the new management too.
The two main culprits appear to be:
1) The Lua core now lives in a shared library rather than in the main program. While arguably Debianly-correct, it now eats all the PIC costs (unlike perl). In addition, newer versions of gcc need -fno-crossjumping to avoid pessimizing many bytecode interpreters. Perhaps this kind of issue is a good thing; it will encourage people to fix performance problems in the Debian implementations of their pet language.
2) The scoring system feels less intuitive now. There's a substantial penalty for missing tests, and the logarithmic scoring means that the relative ranking of two languages can be drastically affected by improvements in a third-party leader.
Also, I'm not sure that having squads of experts performing heroic optimi$ations on code in their pet language is helping us much in understanding the relative strengths and weaknesses of languages. People on lua-l have been tweaking scripts in ways not intuitive to amateurs, and I'm dead certain other language fans have been as well.
Stuff like the hard-core optimi$ed C matrix math for Python isn't particularly revealing about the languages themselves so much as "how many extension libraries could you get into Debian." I've got ~100 lines of C code that fix Lua's string concat problem, and I could probably snipe matrix math in a weekend too, but so what.
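For reference, the usual fix for quadratic string concatenation is a buffer that grows geometrically, so n appends cost amortized O(n) instead of the O(n^2) you get from reallocating and copying on every concat. A rough sketch of the idea in C — the Buf type and buf_* names are invented for illustration here, not the actual ~100-line patch:

```c
#include <stdlib.h>
#include <string.h>

/* Growable string buffer with doubling capacity. Reallocation happens
 * only O(log n) times over n appends, so total copying is O(n). */

typedef struct {
    char  *data;
    size_t len;   /* bytes used, excluding the NUL */
    size_t cap;   /* bytes allocated */
} Buf;

static void buf_init(Buf *b) {
    b->cap = 16;
    b->len = 0;
    b->data = malloc(b->cap);
    b->data[0] = '\0';
}

static void buf_append(Buf *b, const char *s) {
    size_t n = strlen(s);
    if (b->len + n + 1 > b->cap) {
        while (b->len + n + 1 > b->cap)
            b->cap *= 2;                     /* geometric growth */
        b->data = realloc(b->data, b->cap);
    }
    memcpy(b->data + b->len, s, n + 1);      /* copies the NUL too */
    b->len += n;
}
```

The same trick is what interpreters typically hide behind a string-builder API so that naive script-level concatenation loops stop being a benchmark trap.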
"optimi$ed C matrix math for Python isn't particularly revealing about the languages"
Jay, make your opinion heard on the shootout mailing-list and/or send a message to the 'new management' from the webform.