September 23, 2002

Somehow, I just couldn't resist coding up the Luhn algorithm, for which Joey deVilla is collecting implementations.

(defun luhn (string)
  (let* ((checksum 0)
         (cc-length (length string))
         (double-p (evenp cc-length)))
    (dotimes (i cc-length)
      (let ((digit-value (digit-char-p (schar string i))))
        (when double-p
          (setf digit-value (* 2 digit-value))
          (when (> digit-value 9)
            (decf digit-value 9)))
        (incf checksum digit-value)
        (setf double-p (not double-p))))
    (zerop (mod checksum 10))))

It's not clever, it's not pretty. It's just straightforward. The only slightly interesting thing might be using digit-char-p to get the numeric value of digit characters (e.g., (digit-char-p #\2) => 2).
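For comparison, here's roughly the same logic in Python (my own sketch mirroring the Lisp version above, not the implementation from Joey's page):

```python
def luhn(s):
    """Return True if the digit string s passes the Luhn check."""
    checksum = 0
    # Double the first digit when the length is even, so that every
    # second digit counted from the right ends up doubled.
    double = len(s) % 2 == 0
    for ch in s:
        d = int(ch)
        if double:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
        double = not double
    return checksum % 10 == 0
```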

Once I had this micro function, I then of course could not resist some micro benchmarking. I feel so dirty.

So I compared the run times of the Java and Python versions from Joey's page with Lisp. I tried both ACL and OpenMCL. The following table shows relative times, using the Java time as a reference.

Java   Lisp (OpenMCL)   Lisp (ACL)   Python
1.0    2.36             3.60         56.3

Not so bad. Python is slow, but it's not native code (though it should be noted that just for fun I tried clisp, which compiles to bytecode, and it was still 3x as fast as Python).

So then, of course, I had to optimize. All I did was throw in some fixnum declarations (sort of the equivalent of doing arithmetic with ints instead of Integers in Java), then tell the compiler to optimize for speed at the expense of all else.

Java   Lisp (OpenMCL)   Lisp (ACL)   Python
1.0    1.58             1.97         56.3

A pretty impressive showing from Java, and not bad for Lisp either.

Though if the bottleneck you're facing is in your credit card validation routines, you can't be all that badly off.

Posted by jjwiseman at September 23, 2002 10:21 PM

I'm no lisp expert but this is what I got after cutting and pasting jjw's code, compiling it, and then running it with (time (luhn "my credit card #")):

Evaluation took:
  0.0 seconds of real time
  0.0 seconds of user run time
  0.0 seconds of system run time
  0 page faults and
  0 bytes consed.


So I'm curious: is my benchmarking data any good, or did I bungle the benchmarking process?

Posted by: Charlie Mac on September 24, 2002 06:51 AM

I forgot to mention that my previous comment was done in CMUCL 18d.

Posted by: Charlie Mac on September 24, 2002 06:52 AM

Well, with SBCL I get the following

Java: 1.0

Lisp (SBCL 0.7.7): 0.70

So my conclusion is:

A pretty impressive showing from Lisp, and not bad for Java either.

Posted by: Frederic Dumont on September 24, 2002 07:23 AM

To get results that have a hope of being meaningful, you need to actually measure some time. I did 100,000 iterations, so something like this is what you want:

(time (dotimes (i 100000) (luhn "my cc #")))
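In Python, the equivalent harness would be something like this (my own sketch; the luhn function here is just the straightforward implementation, standing in for whichever version you're timing):

```python
import time

def luhn(s):
    """Straightforward Luhn check, for timing purposes."""
    checksum = 0
    double = len(s) % 2 == 0
    for ch in s:
        d = int(ch)
        if double:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
        double = not double
    return checksum % 10 == 0

# Run enough iterations that the elapsed time is actually measurable.
start = time.time()
for _ in range(100000):
    luhn("49927398716")
elapsed = time.time() - start
print("100,000 iterations in %.3f seconds" % elapsed)
```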

Frederic: Heh. I have to admit that I was curious how cmucl or sbcl would do, but I couldn't easily install them on my Mac (yet).

And just in case you're curious, the Java time was 613 ms for 100,000 iterations on an 800 MHz G4. Which may or may not have been in power saving reduced speed mode, I don't remember.

This whole thing is so pointless, isn't it?

Posted by: jjwiseman on September 24, 2002 08:47 AM

The use of digit-char-p is nice, but I doubt it is efficient. With the C-like approach (subtracting char-codes), SBCL is about 5 times faster than Java (1.3.1), though this was on a different machine than the previous test, so the numbers may not be directly comparable.

And John, the whole point is to prove that Lisp is faster than Java :)

Posted by: Frederic Dumont on September 24, 2002 10:05 AM

The thing I thought was a little neat about using digit-char-p was that it is character-set independent. The CLHS says that (char-code #\0) is less than (char-code #\1), but not that they are consecutive.

digit-char-p will work for ASCII, EBCDIC, Unicode, etc.

However, this is firmly in the category of "totally irrelevant" since the characters 0-9 are in fact consecutive in both EBCDIC and Unicode, and I can't think of any other character sets that are in common use (I think the AS/400 uses EBCDIC, so it actually is pretty common).

Frederic, I am slightly curious to see how Python on your machine would compare to Java, if you have time to kill.


Posted by: jjwiseman on September 24, 2002 10:22 AM

Ok, here are the complete results (1,000,000 iterations):

Java: 1.0 (1.07)

SBCL: 0.2 (0.2)

Python: 221 (3'56)

I don't know why Python performs so badly here.

Posted by: Frederic Dumont on September 24, 2002 01:14 PM

Using cmucl 18d, optimizing for speed, and using char-code instead of digit-char-p, I got:

Java: 1.0

C++, compiled "-O2": 0.34

Lisp: 0.30

Put another way:

Lisp: 1.0

C++: 1.11

Java: 3.27

The C++ and Java versions had to do a fork & exec, but running them several times in a row should've cached them, minimizing this overhead.

Not /at all/ bad for Lisp. :)

Posted by: Larry Clapp on September 24, 2002 02:51 PM

Yikes, fork + exec. We're measuring different things, then. My Java timing code, e.g., was something like

for (i = 0; i < 100000; i++) { luhn(cc); }

Posted by: jjwiseman on September 24, 2002 06:19 PM

Hi, I read your Luhn algorithm in Lisp and I'm quite surprised that its run time is less in Java. Are you sure you were using Java? ;)
Tell me a little about the comparison. I am working mostly with web apps, but I tend to think outside the bun when it comes to conventional algorithms whenever I'm inspired by a clever one! I hope our interests may lead to useful cooperation.

Posted by: Medo Bassyou on May 24, 2004 12:32 AM