Sunday, May 27, 2007

Manifesto for an Evolutionary Economics of Intelligence

On May 8, 2007, the Emergent Epistemology Salon met to talk about:

Manifesto for an Evolutionary Economics of Intelligence
Eric B. Baum, 1998
http://www.whatisthought.com/manif5.ps

PARTIAL ABSTRACT: We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must solve subproblems, and must collaborate on solution of the overall problem. However a collection of rational agents will only collaborate if appropriate structure is imposed. We give a result, analogous to the First Theorem of Welfare Economics, that shows how to impose such structure. That is, we describe how to use economic principles to assign credit and ensure that a collection of rational (but possibly computationally limited) agents will collaborate on reinforcement learning. Conversely, we survey catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economics, and artificial intelligence programs, when such structure was not enforced.

We conjecture that simulated economies can evolve to reinforcement learn in complex environments in feasible time scales, starting from a collection of agents which have little knowledge and hence are *not* rational. We support this with two implementations of learning models based on these principles.
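
To make the economic machinery a little more concrete, here is a rough Python sketch of the sort of simulated economy the abstract describes, loosely in the spirit of Baum's Hayek systems rather than a transcription of them. All of the names (Agent, run_economy, bid_rule, act_rule, reward_fn) are illustrative assumptions, not Baum's code; the point is just the structure the paper argues for: control of the world is auctioned, the winner pays the previous owner, money is conserved, and only the current owner may change the state, so an agent profits only by leaving the world in a condition that later agents (or the final reward) will pay for.

    class Agent:
        def __init__(self, bid_rule, act_rule, wealth=1.0):
            self.bid_rule = bid_rule  # state -> bid: what control of the world is worth to me
            self.act_rule = act_rule  # state -> new state: my subproblem solver
            self.wealth = wealth

    def run_economy(agents, initial_state, reward_fn, steps=100):
        state = initial_state
        prev_owner = None
        for _ in range(steps):
            # Auction: every solvent agent bids for control of the world
            # (an agent cannot bid more than it owns).
            bids = [(min(a.bid_rule(state), a.wealth), a)
                    for a in agents if a.wealth > 0]
            if not bids:
                break
            bid, winner = max(bids, key=lambda b: b[0])

            # The winner pays its bid to the previous owner: this is the
            # credit assignment step, and it conserves money.
            winner.wealth -= bid
            if prev_owner is not None:
                prev_owner.wealth += bid

            # Only the current owner may act on the world (property rights),
            # and it collects whatever external reward the new state earns.
            state = winner.act_rule(state)
            winner.wealth += reward_fn(state)
            prev_owner = winner

        # Agents that went broke drop out; a fuller system would also let
        # profitable agents spawn mutated offspring here.
        agents[:] = [a for a in agents if a.wealth > 0]
        return state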

In our discussion we went over content from the paper and ranged into related tangents. We considered the degree to which economies can be said to have goals, and what the goal of the actual global economy might be if it were viewed as an intelligent agent that conserves property rights for its agents so that they are forced to accomplish goals or perish. We also spent some time discussing whether "the ability to have invented an economy in the first place and then created the science of economics" is something simulated agents would need if algorithms inspired by human economies are to display the impressive results that human economies generate. That led into a discussion of "agents with good insights but bad business sense", charlatanism, and the possible effects of similar issues on the results of Baumian AI architectures (and on the actual economy).