July 08, 2002
The New York Times has an interesting
article on Richard Wallace and his alicebot automated chat software.
Richard Wallace sounds like a smart guy, but sadly suffers from depression and manic depression, along with some related drama (e.g., a restraining order barring him from the UC Berkeley campus). Possibly most unfortunate, however, is that he's spent so much time on Alice, which is based on ideas that Marvin Minsky describes as "basically wrong". I'm with Minsky--Alice looks like a fancy Eliza.
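For those who haven't looked under the hood: the "fancy Eliza" charge is about architecture. Eliza-style programs (and ALICE's AIML, at much larger scale) are stimulus-response machines: ranked patterns matched against the input, with canned response templates. Here's a minimal sketch of that idea in Python; the specific rules are invented for illustration, not drawn from ALICE's actual category set.

```python
import re

# A toy Eliza-style responder: ranked (pattern, template) pairs.
# ALICE's AIML works on the same principle, just with tens of
# thousands of hand-written categories instead of three.
RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0}?"),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return the template for the first matching pattern,
    with captured text substituted in."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT
```

Note there's no state, no model of the world, and no representation of meaning anywhere in the loop--which is roughly Minsky's complaint.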
Posted by jjwiseman at July 08, 2002 11:02 AM
But that's the point; how "wrong" can it be, if it works (by the Turing Test definition) better than anything else?
Isn't that like saying that Rodney Brooks' robots aren't "right" because their individual state machines are too "simple" to represent intelligence, and therefore they can't "know" anything? (Maybe Marvin doesn't like Rodney's robots either...)
Whether or not the Turing Test is really a sufficient test of artificial intelligence (and I'm sure it isn't), if the ability to make "idle" conversation is going to be necessary in a "humanoid" robot, wouldn't this be an adequate way of doing it?
Just 'cause Minsky doesn't like it, doesn't mean it is without merit. I've got to give the guy credit (as one engineer to another) just for getting it to work so well. It certainly deserves the term "intelligent agent" more than anything else I've seen... ;)
Alice may work great as a chatbot, but I think that can only take you so far. The architecture of a good chatbot may have nothing to do with a general model of intelligence. Brooks' robots work pretty well at getting around in unknown and dynamic environments; Alice does well at chatting. I don't think either one can be said to *know* anything or will be very useful for making more intelligent machines.
It reminds me a little of this thought experiment (I think Kris Hammond mentioned this to me; I probably got all the details wrong): Imagine a program that contained statistics on stuttering. Text spoken by the program is indistinguishable from utterances by a human stutterer.
Do you learn anything about the language structures in the human brain from such a program? Do you learn anything about human language at all?
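The thought experiment above can be sketched in a few lines. The probabilities below are made up purely for illustration--the point is that a table of surface statistics reproduces the *behavior* without containing any model of why stuttering happens.

```python
import random

# Hypothetical surface statistics: probability that a word starting
# with a given sound gets a repeated onset. Invented numbers, not
# real speech data.
STUTTER_PROB = {"s": 0.4, "t": 0.3}

def stutterize(sentence, rng=random):
    """Reproduce the surface statistics of stuttering with no model
    of the underlying speech mechanism."""
    out = []
    for word in sentence.split():
        p = STUTTER_PROB.get(word[0].lower(), 0.05)
        if rng.random() < p:
            out.append(word[0] + "-" + word[0] + "-" + word)
        else:
            out.append(word)
    return " ".join(out)
```

The output can fool a listener, yet inspecting the program tells you nothing about the brain--which is the analogy being drawn to Alice and conversation.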