After reading Lanier's article, there was some discussion about the potential of the field of Artificial Intelligence and the perception that it didn't seem to have produced any brilliantly new ideas about "general reasoning". Machine learning techniques from the 1960s are, in some senses, still the state of the art. Or are they? With this background, we thought it would be interesting to spend some time trying to find something new and good in AI. The Emergent Epistemology Salon met on December 16, 2007 to discuss...
Why Heideggerian AI failed and how fixing it would require making it more Heideggerian
Quoting from Hubert L. Dreyfus's text (links added):
As luck would have it, in 1963, I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation (CS)...
As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly, from the philosophers. They had taken over Hobbes' claim that reasoning was calculating, Descartes' mental representations, Leibniz's idea of a "universal characteristic" – a set of primitives in which all knowledge could be expressed – Kant's claim that concepts were rules, Frege's formalization of such rules, and Russell's postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.
At the same time, I began to suspect that the critical insights formulated in existentialist armchairs, especially Heidegger's and Merleau-Ponty's, were bad news for those working in AI laboratories – that, by combining rationalism, representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to reenact a failure.
Dreyfus's proposed solutions are (very generally) to "eliminate representation" by building "behavior-based robots" and to program the ready-to-hand (Heidegger's term for equipment as it shows up in transparent, skilled use, rather than as an object of explicit reflection).
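To make "behavior-based" concrete: the robots Dreyfus has in mind are in the spirit of Rodney Brooks's subsumption architecture, layered sense-act loops with no central world model. Here's a minimal toy sketch of that idea (my illustration, not code from Dreyfus or Brooks; all the names are made up):

```python
import random

def sense_obstacle():
    # Stand-in for a real sensor; here, an obstacle shows up 20% of the time.
    return random.random() < 0.2

def avoid():
    # Lowest layer: reflexively turn away from whatever was sensed.
    print("turn away")

def wander():
    # Higher layer: drift forward when nothing lower preempts it.
    print("move forward")

def step():
    # Subsumption-style arbitration: the lower layer preempts the higher one.
    if sense_obstacle():
        avoid()
    else:
        wander()

for _ in range(5):
    step()
```

The point of the sketch is what's absent: nowhere does the program build or consult a representation of the room it is moving through.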
In our discussion we spent a good chunk of time reviewing Heidegger. Heidegger scholars are likely to be horrified by the simplification, but one useful way we found to connect his ideas to more prosaic concepts was this: Heidegger takes something like flow, in Csikszentmihalyi's sense, as the primary psychological state people are usually in, and as the prototypical experience on which to build the rest of philosophy (and, by extension through Dreyfus, the mental architecture that AI research should target).
There was discussion around difficulties with Dreyfus's word choices. Dreyfus is against "symbols" and "representation", but he seems to mean something more particularly philosophical than run-of-the-mill computer scientists might assume. It's hard to see how he could be objecting to 1's and 0's working as pointers, instructions, and a way of representing regular expressions... or how he could object to clusters of neurons that encode or enable certain psychological states and happen to work as neurological intermediaries between perceptions and actions. In some sense these are symbols, but probably not in the sense Dreyfus objects to. There's a temptation to be glib and say "Oh yeah, symbol grounding is a good idea."
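To pin down the mundane computer-science sense of "symbol" (my sketch, not anything from Dreyfus's text): in the fragment below, the same sorts of bits serve as a representation of a regular expression, as pointers into a string, and as keys selecting actions, with no pretense that the machine understands any of it.

```python
import re

# The same kinds of bits play different roles depending on use, which is all
# "symbol" needs to mean at this level.
pattern = re.compile(r"ready-to-hand")   # a regex, represented as an object
text = "hammering is ready-to-hand until the hammer breaks"
match = pattern.search(text)
print(match.start(), match.group())      # numbers used as pointers into the text

# A number can just as well index a table of actions: a "symbol" in the weak sense.
actions = {0: "perceive", 1: "act"}
state = 1
print(actions[state])
```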
One side track I thought was interesting was the degree to which object-oriented programming can be seen as a way for programmers to create explicit affordances over data: by writing methods that dangle off of objects, a programmer hides potentially vast amounts of detail while handing other programmers something directly graspable for solving their own problems.
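A minimal sketch of that point (all the names here are hypothetical): the method is the handle a caller grasps, the way a doorknob affords turning, while the detail behind it stays hidden.

```python
class LogFile:
    """All names hypothetical; the shape of the idea is the point."""

    def __init__(self, lines):
        self._lines = lines  # the raw detail, kept private by convention

    def error_count(self):
        # The affordance: an obvious thing to do with a LogFile, usable
        # without knowing how the lines are stored or scanned.
        return sum(1 for line in self._lines if "ERROR" in line)

log = LogFile(["INFO ok", "ERROR disk full", "ERROR timeout"])
print(log.error_count())  # -> 2
```

The caller never touches `_lines`; `error_count()` is something to do with a LogFile, not a description of how logs are stored.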
Lastly, it's amusing that others were blogging about Heideggerian AI just after we discussed it. The subject must be in the air :-)