
Monday, April 28, 2008

Evolutionary Economics of Intelligence (Take Two)

After the Cybernetic Totalism talk, the Emergent Epistemology Salon wanted to hunt around for something brilliantly new in the ballpark of "general reasoning" that could actually be implemented. To that end we looped back to something we talked about back in May of 2007 and met again on Jan 13, 2008 to talk about...

Manifesto for an Evolutionary Economics of Intelligence
Eric B. Baum, 1998

PARTIAL ABSTRACT: We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must solve subproblems, and must collaborate on solution of the overall problem. However a collection of rational agents will only collaborate if appropriate structure is imposed. We give a result, analogous to the First Theorem of Welfare Economics, that shows how to impose such structure. That is, we describe how to use economic principles to assign credit and ensure that a collection of rational (but possibly computationally limited) agents will collaborate on reinforcement learning. Conversely, we survey catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economics, and artificial intelligence programs, when such structure was not enforced.

We conjecture that simulated economies can evolve to reinforcement learn in complex environments in feasible time scales, starting from a collection of agents which have little knowledge and hence are *not* rational. We support this with two implementations of learning models based on these principles.


Compared to the previous discussion of this paper, we were more focused on the algorithm itself than on the broader claims about the relevance of economics to artificial intelligence. We were also more focused on a general theme the group has been following - the ways that biases in an optimizing process are (or are not) suited to the particularities of a given learning problem.

We covered some of the same ideas related to the oddness that a given code fragment within the system needs both domain knowledge and "business sense" in order to survive. Brilliant insights that are foolishly sold for less than their CPU costs might be deleted, and at the same time the potential for "market charlatanism" might introduce hiccups in the overall system's ability to learn. By analogy to the real world, is it easier to invent an economically viable fusion technology or to defraud investors with a business that falsely claims to have an angle on such technology?

We also talked about the reasons real economies are so useful - they aggregate information from many contexts and agents into a single data point (price) that can be broadcast to all contexts to help agents in those contexts solve their local problems more effectively. It's not entirely clear how well the analogy from real economies maps onto the idea of a general learning algorithm. You already have a bunch of agents in the real world. And there's already all kinds of structure (physical distance and varied resources and so on) in the world. The scope of the agents is already restricted to what they have at hand, and the expensive problem that economics solves is "getting enough of the right information to all the dispersed agents". In a computer, with *random access* memory, the hard part is discovering and supporting structure in giant masses of data in the first place. It seemed that economic inspiration might be a virtue in the physical world because of the necessities that world imposes. Perhaps something more cleanly mathematical would be better inside a computer?

Finally, there was discussion around efforts to re-implement the systems described in the paper and how different re-implementation choices might improve or hurt the performance.
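To make the mechanics we kept referring to a little more concrete, here is a minimal, hypothetical sketch (my own simplification for illustration, not Baum's implementation) of the kind of auction-based credit assignment the paper describes: agents bid for control, the winner pays its bid to the previous owner, acts on the world, and collects any external reward, so over time only agents whose actions are worth more than they pay can survive. The names `Agent`, `run_auction`, and `reward_fn` are all invented for the sketch.

```python
# Toy sketch of auction-based credit assignment in the spirit of Baum's
# "evolutionary economics" - a simplified illustration, not his code.
class Agent:
    def __init__(self, action):
        self.action = action   # some transformation of the world state
        self.wealth = 1.0      # agents that go broke drop out of the auctions

    def bid(self, state):
        # A real agent would bid its estimate of the value of acting in this
        # state; here the bid is just a fixed fraction of current wealth.
        return 0.1 * self.wealth


def run_auction(agents, world, reward_fn, steps=100):
    previous_owner = None
    for _ in range(steps):
        solvent = [a for a in agents if a.wealth > 0]
        if not solvent:
            break
        # Each solvent agent bids for the right to act; the highest bid wins.
        winner = max(solvent, key=lambda a: a.bid(world))
        price = winner.bid(world)
        winner.wealth -= price
        # The winner pays the previous owner, propagating credit backwards
        # along the chain of agents that set up the current situation.
        if previous_owner is not None:
            previous_owner.wealth += price
        world = winner.action(world)
        # External reward from the "world" goes to the agent that earned it.
        winner.wealth += reward_fn(world)
        previous_owner = winner
    return agents
```

Even in a toy like this, an agent needs both "domain knowledge" (an action that actually moves the world toward reward) and "business sense" (bids that don't overpay for the privilege of acting), which is exactly the tension the group kept circling back to.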

Wednesday, February 13, 2008

Heideggerian A.I.

After reading Lanier's article there was some discussion about the potential of the field of Artificial Intelligence and the perception that it didn't seem to have produced any brilliantly new ideas about "general reasoning". Machine learning techniques from the 60's are in some senses still the state of the art. Or are they? With this background, we thought it would be interesting to spend some time trying to find something new and good in AI. The Emergent Epistemology Salon met on December 16, 2007 to discuss...

Why Heideggerian AI failed and how fixing it would require making it more Heideggerian
Quoting from Hubert L. Dreyfus's text (links added):

As luck would have it, in 1963, I was invited by the RAND Corporation to evaluate the pioneering work of Alan Newell and Herbert Simon in a new field called Cognitive Simulation (CS)...

As I studied the RAND papers and memos, I found to my surprise that, far from replacing philosophy, the pioneers in CS had learned a lot, directly and indirectly from the philosophers. They had taken over Hobbes' claim that reasoning was calculating, Descartes' mental representations, Leibniz's idea of a "universal characteristic" (a set of primitives in which all knowledge could be expressed), Kant's claim that concepts were rules, Frege's formalization of such rules, and Russell's postulation of logical atoms as the building blocks of reality. In short, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.

At the same time, I began to suspect that the critical insights formulated in existentialist armchairs, especially Heidegger's and Merleau-Ponty's, were bad news for those working in AI laboratories: that, by combining rationalism, representationalism, conceptualism, formalism, and logical atomism into a research program, AI researchers had condemned their enterprise to reenact a failure.

Dreyfus's proposed solutions are (very generally) to "eliminate representation" by building "behavior based robots" and to program the ready-to-hand.

In our discussion we spent a good chunk of time reviewing Heidegger. Heidegger scholars are likely to be horrified by the simplification, but one useful way we found to connect the ideas to more prosaic concepts was to say that Heidegger took something like flow, in Csikszentmihalyi's sense, as the primary psychological state people are usually in and as the prototypical experience on which to build the rest of philosophy (and, by extension through Dreyfus, the mental architecture that AI research should target).

There was discussion around difficulties with Dreyfus's word choices. Dreyfus is against "symbols" and "representation", but he seems to mean something more particularly philosophical than run-of-the-mill computer scientists might assume. It's hard to see how he could be objecting to 1's and 0's working as pointers and instructions and a way of representing regular expressions... or how he could object to clusters of neurons that encode/enable certain psychological states and happen to work as neurological intermediaries between perceptions and actions. In some sense these are symbols, but probably not in the sense Dreyfus is against. There's a temptation to be glib and say "Oh yeah, symbol grounding is a good idea."

One side track I thought was interesting was the degree to which object oriented programming could be seen as a way for programmers to create explicit affordances over data: methods that dangle off of objects hide potentially vast amounts of detail while exposing a few actions other programmers can use in the course of solving other problems.
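A throwaway illustration of that point (the class and its names are made up, not from anything we read): the object offers a couple of verbs and keeps the machinery behind them out of sight, which is roughly what an affordance does for a tool.

```python
# Hypothetical example: an object exposes a few "affordances" (methods)
# while hiding the internal detail of how they are carried out.
class Door:
    def __init__(self):
        self._latch_engaged = True    # internal detail, hidden from callers
        self._hinge_friction = 0.3    # ditto

    def open(self):
        # Callers just "open the door"; the latch and hinge bookkeeping
        # stays behind the method.
        self._latch_engaged = False
        return "door swings open"

    def close(self):
        self._latch_engaged = True
        return "door clicks shut"


# Another programmer can use the affordance without knowing the details:
print(Door().open())
```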

Lastly, it's amusing that others were blogging about Heideggerian AI just after we discussed it. The subject must be in the air :-)

Thursday, December 6, 2007

A Theory of Cortical Function: Hierarchical Temporal Memory (HTM)

On Sunday, November 4th 2007 the Emergent Epistemology Salon tried an experiment in meeting to discuss videos rather than text. The videos were of talks by Jeff Hawkins regarding his work developing a quantitative and biologically plausible general theory explaining the human cortex. His proposed "cortical algorithm" is named Hierarchical Temporal Memory (HTM). Here are some videos of him talking about this stuff:

A 20-minute chat at TED (which appears to have been given when the actual algorithm was only in the intuition stage) entitled "Brain science is about to fundamentally change computing".


"Prospects and Problems of Cortical Theory" given at UC Berkeley on October 7, 2005 - it's a little over an hour long and gives all the basics of the theory. (Warning: the words don't perfectly sync with the images... it's a pretty good talk but the medium imposes on you a little.)


This talk, "Hierarchical Temporal Memory: Theory and Implementation" is less chatty and spends more time on the theory. There's a pitch at the end for the software tools his startup wrote.


Significant parts of this material are also covered in his 2004 book On Intelligence and he has working algorithms (designed by Dileep George) with publicly accessible code (if you don't mind some non-standard licensing) that you can find by clicking around the website of his startup company, Numenta. The code is implemented in C++ with an eye towards scaling to clusters and has Python bindings.
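For anyone who wants a feel for the core intuition without wading through Numenta's code, here is a toy, single-node sketch of my own (a drastic simplification, not the HTM algorithm; as I understand it the real nodes cluster a Markov graph of learned "coincidences" and pass beliefs up and down a hierarchy). The sketch only shows the two ingredients Hawkins keeps emphasizing: memorizing distinct spatial patterns, and grouping patterns that tend to follow one another in time so a parent node can see a slowly changing name for a fast-changing input. `ToyNode` and its methods are invented for illustration.

```python
# Toy sketch of one HTM-style node: spatial pooling (memorize distinct input
# patterns) plus temporal grouping (cluster patterns that follow one another
# in time). A drastic simplification, not Numenta's implementation.
from collections import defaultdict

class ToyNode:
    def __init__(self):
        self.patterns = []                   # "spatial pool": distinct inputs seen so far
        self.transitions = defaultdict(int)  # counts of pattern i followed by pattern j
        self.prev = None

    def _pattern_id(self, pattern):
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        return self.patterns.index(pattern)

    def learn(self, pattern):
        i = self._pattern_id(tuple(pattern))
        if self.prev is not None:
            self.transitions[(self.prev, i)] += 1
        self.prev = i

    def temporal_groups(self, threshold=2):
        # Merge patterns that frequently follow each other into groups; the
        # group label is the stable "name" a parent node would receive.
        parent = list(range(len(self.patterns)))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for (i, j), count in self.transitions.items():
            if count >= threshold:
                parent[find(i)] = find(j)
        return [find(i) for i in range(len(self.patterns))]
```

The union-find merge is the crudest possible stand-in for the temporal pooling step, but it gets the flavor across: sequences that reliably co-occur in time collapse into a single label, and it is that label, not the raw input, that moves up the hierarchy.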

Our actual discussion of the material was weaker than normal. Mostly we went over the algorithms and talked about whether they might be able to capture various kinds of cognition and/or concept formation. Part of the problem may have been that we turned out to be a much more literate crowd than a video-watching crowd.

Sunday, May 27, 2007

Manifesto for an Evolutionary Economics of Intelligence

On May 8, 2007 the Emergent Epistemology Salon met to talk about:

Manifesto for an Evolutionary Economics of Intelligence
Eric B. Baum, 1998
http://www.whatisthought.com/manif5.ps

PARTIAL ABSTRACT: We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must solve subproblems, and must collaborate on solution of the overall problem. However a collection of rational agents will only collaborate if appropriate structure is imposed. We give a result, analogous to the First Theorem of Welfare Economics, that shows how to impose such structure. That is, we describe how to use economic principles to assign credit and ensure that a collection of rational (but possibly computationally limited) agents will collaborate on reinforcement learning. Conversely, we survey catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economics, and artificial intelligence programs, when such structure was not enforced.

We conjecture that simulated economies can evolve to reinforcement learn in complex environments in feasible time scales, starting from a collection of agents which have little knowledge and hence are *not* rational. We support this with two implementations of learning models based on these principles.

In our discussion we went over content from the paper and ranged into related tangents. We considered the degree to which economies could be said to have goals, and what the goal of the actual global economy might be if it were considered as an intelligent agent that conserves property rights for its agents so that they are forced to accomplish goals or perish. We also spent some time discussing the degree to which "the ability to have invented an economy in the first place and then created the science of economics" was something that simulated agents should have if algorithms inspired by human economies were to display the amazing results that human economies generate. This trended into a discussion of "agents with good insights but bad business sense", charlatanism, and the possible effects of similar issues on the results of Baumian AI architectures (and the actual economy).

A.I. as a Positive and Negative Factor in Global Risk

Starting out with some back history :-)

Anna and I (more her than me, she's the inspired one :-)) schemed up something we ended up calling the Emergent Epistemology Salon (EES) for lack of a less painfully trendy name. The first get together was back on March 25, 2007 and as things have bounced along we kept telling ourselves it would help us write stuff and work up a "big picture" if we had a place to post thoughts and links and whatnot.

So, I'm going to start a tag called "Discussion" and every time we meet (that I have time for it) I'll post a link to what we talked about under that tag. That should ensure at least one post to the blog every two weeks or so... Here was the first thing we talked about back in March:

Artificial Intelligence as a Positive and Negative Factor in Global Risk
Eliezer Yudkowsky, 2006

An essay on the dangers and benefits of (succeeding at) building a general artificial intelligence. Of general theoretical interest is the discussion of the human reasoning biases that seem to lead many people to radically overestimate the degree to which they "understand intelligence".

--

I'm still trying to figure out what the right dynamics for "the blog version of the salon" should be. The meeting was pretty good and two of us (me and another person) each wrote up a summary of where the discussion went. Those summaries might be worth posting? Or not? Or maybe it would be invasive of private conversation? I think I'll ask first and then post my summary here as a comment unless someone vetoes the idea.