Wednesday, April 30, 2008

Cogent Confabulation

Continuing the "new ideas in artificial intelligence" theme from the Heidegger meeting and the Baum meeting, on Sunday, January 27, 2008, the Emergent Epistemology Salon met to talk about an idea that one of our members was exposed to by taking a class from the author.

Cogent Confabulation
Robert Hecht-Nielsen

A new model of vertebrate cognition is introduced: maximization of cogency P(αβγδ | φ). This model is shown to be a direct generalization of Aristotelian logic, and to be rigorously related to a calculable quantity. A key aspect of this model is that in Aristotelian logic information environments it functions logically. However, in non-Aristotelian environments, instead of finding the conclusion with the highest probability of being true (a popular past model of cognition); this model instead functions in the manner of the 'duck test;' by finding that conclusion which is most supportive of the truth of the assumed facts.

When we sat down to talk about the idea there were two seemingly unrelated ideas going on. One had to do with probabilistic reasoning and the other had to do with the uses, setup, and general engineering orientation of thinking software.

First, there appeared to be the rather simple idea of training a model on a linear data set so that it can use what it's seen in the past to predict "the next symbol/word/letter/whatever that should show up" independently for the i-th symbol back (for i from 1 to N). Brutally simple Markov modeling would look up the last N symbols all together and be done with it, but the memory cost of the simple Markov model would be roughly (numSymbols)^N, whereas Hecht-Nielsen's cogent confabulator needs only one symbol-by-symbol co-occurrence table per lag, roughly N*(numSymbols)^2. Nonetheless, this didn't seem like a particularly "deep" idea. Whether it is a good idea or not depends in part on the structural patterns in the data, and there is already reconstructability analysis to help there.
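To make the memory comparison concrete, here is a minimal sketch in Python of that pairwise-lag tally scheme. It is our reconstruction, not Hecht-Nielsen's code; the names (N, counts, train, conditional) are ours.

```python
from collections import defaultdict

# One independent co-occurrence table per lag i in 1..N, so memory grows
# like N*(numSymbols)^2 instead of the full Markov model's (numSymbols)^N.
N = 3  # how many symbols back we condition on

# counts[i-1][context_symbol][next_symbol] = tally for lag i
counts = [defaultdict(lambda: defaultdict(int)) for _ in range(N)]

def train(sequence):
    """Tally, for each lag i, how often each symbol follows at distance i."""
    for t, nxt in enumerate(sequence):
        for i in range(1, N + 1):
            if t - i >= 0:
                counts[i - 1][sequence[t - i]][nxt] += 1

def conditional(i, ctx, nxt):
    """Estimate P(next symbol | symbol at lag i was ctx) from the tallies."""
    table = counts[i - 1][ctx]
    total = sum(table.values())
    return table.get(nxt, 0) / total if total else 0.0

train(list("abracadabra"))
print(conditional(1, "a", "b"))  # 0.5: half the continuations of "a" are "b"
```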

Second, there was the idea of cogency, the new standard proposed by Hecht-Nielsen for generating a "theory" from co-occurrence tallies. The goal was not to use new data to update the probability of a theory according to the standard understanding of Bayesian rationality. The way he puts it is that, instead of maximizing a posteriori probability, humans instead maximize a priori probability: rather than seeking the conclusion most probable given the assumed facts, they seek the conclusion that makes the assumed facts most probable. This "probabilistically backwards" number that a theory maximizes is called "cogency". In other words, a cogent confabulator is supposed to leap to something like "the obvious but kind of dumb theory" in a defiance-with-a-twist of the growing Bayesian orthodoxy. There's an attempt to support the claim with an everyday example: given the phrase "company rules forbid taking", the technically most likely word to follow should be a common function word like "the", whereas humans are likely to guess something like "naps".
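A hedged illustration of the contrast, assuming (our simplification, not a claim about Hecht-Nielsen's exact implementation) that the assumed facts are conditionally independent given the conclusion: cogency scores a candidate conclusion by the likelihood of the facts under it, while the Bayesian answer also multiplies in the conclusion's prior. The numbers below are invented purely so the two criteria pick different winners on the "company rules forbid taking" example.

```python
# Invented numbers for illustration only.
facts = ["company", "rules", "forbid", "taking"]
candidates = ["the", "naps"]
prior = {"the": 0.07, "naps": 1e-7}  # "the" is vastly more common a priori
p_fact_given = {(f, "the"): 0.02 for f in facts}   # made-up P(fact | conclusion)
p_fact_given.update({(f, "naps"): 0.3 for f in facts})

def cogency(phi):
    """P(assumed facts | conclusion), under our independence assumption."""
    prod = 1.0
    for fact in facts:
        prod *= p_fact_given[(fact, phi)]
    return prod

def unnormalized_posterior(phi):
    """Bayes: P(conclusion | facts) up to a constant = prior * likelihood."""
    return prior[phi] * cogency(phi)

print(max(candidates, key=cogency))                 # -> "naps" (the duck test)
print(max(candidates, key=unnormalized_posterior))  # -> "the"  (the Bayesian pick)
```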

Hecht-Nielsen claims that this is a general cortical theory, and his writings include gestures toward neurobiology, but we didn't discuss those aspects of his writings.

Most of our substantive discussion revolved around understanding the algorithm, and then looking for refinements of and uses for a cogent confabulator, to get a sense of why it might be as important as Hecht-Nielsen claims it is. When we first started discussing the papers we weren't that impressed by the ideas. It's a small paper, not that different from a lot of things you can find elsewhere. But the productivity of the conversation was impressive relative to those expectations.

For example, the mechanism might be helpful for giving an easily calculated notion of "obviousness to a listener" that a speaker could then correct for in the course of producing maximally informative speech. Both sides could run a computationally cheap confabulation; the speaker would then compare the impression the conversation would leave to the actual model she was trying to convey. The uttered speech could be something slightly more carefully chosen that corrects what's wrong with the (presumptively) mutual confabulation of where the conversation was going. Neat :-)
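A speculative sketch of that loop follows; everything in it (confabulate, mismatch, the candidate list) is a hypothetical stand-in for whatever cheap shared model and distance measure the two sides actually have.

```python
def most_informative_utterance(context, intended_meaning, candidates,
                               confabulate, mismatch):
    """Pick the utterance whose predicted (confabulated) uptake lands
    closest to what the speaker actually means."""
    def score(utterance):
        predicted_takeaway = confabulate(context + [utterance])
        return mismatch(predicted_takeaway, intended_meaning)
    return min(candidates, key=score)

# Toy stand-ins just so the sketch runs end to end:
toy_confabulate = lambda ctx: ctx[-1]  # listener fixates on the last word
toy_mismatch = lambda takeaway, meaning: 0 if takeaway == meaning else 1

print(most_informative_utterance(
    ["company", "rules", "forbid", "taking"], "naps",
    ["the", "naps"], toy_confabulate, toy_mismatch))  # -> naps
```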

2 comments:

Jennifer Rodriguez-Mueller said...

In the intervening time between the salon and the time that I composed this blog entry I've returned a number of times to the general idea of cogent confabulation in my thoughts and noticed the agreement between cogency and a number of documented biases in human reasoning that, if you squint, seem reasonably consistent with the single basic mechanism of jumping to one obvious conclusion after another with relatively few pauses to take in a larger picture and consider alternatives. It makes me suspect that there's something here worth looking at more deeply.

This Brazen Teacher said...

Girl, you have some brain over here you know that?

If my puny cerebellum could wrap its fibers around "Emergent Epistemology" and all that it entails, I would leave my own commentary.

However, since your IQ would probably kick my IQ's ass, I will refrain and say only: "Thank you for visiting my corner of the world, and sharing a bit with the readers there."

Take care,
Brazen