Sunday, May 27, 2007

Nativist Evolution (take one)

On April 20, 2007 the EES met to talk about evolution in a way that wasn't totally focused on "natural selection" but more on the sources and kinds of variation that are available to it (and why). We basically bit off more than we could chew because there are a lot of neat ideas lurking here that our conversation looped around but never really brought into focus. (If you're wondering about the "nativism" in the title, you'll be in the right ballpark if you imagine idealism, platonism, or better still, looking for ways to apply psychological nativism to "evolution interpreted as a mindful process".)

How the Leopard Changed Its Spots: The Evolution of Complexity

Brian C. Goodwin, 2001

The link will take you to "the sorts of fragments of text that might induce you to buy the book", which are enough to smell the ideas (if not see them fully realized). There's a lot in here documenting the easy-to-implement structures and patterns that you might argue evolution "discovered" (if you're disposed to see things that way), like fractals and so on.

The Jigsaw Model: An Explanation for the Evolution of Complex Biochemical Systems and the Origin of Life
John F. McGowan, 2000

An essay on "how things might be set up so that single mutations generate correlated traits". The paper is interestingly (to me, anyway) steeped in creationist conceptualizations of evolution (the text is pro-evolution... but it's striking in taking creationist objections to "blind evolution" as raising a point that can be addressed by thinking about ways biological traits might hypothetically be encoded).

The Rate of Compensatory Mutation in the DNA Bacteriophage ΦX174
Art Poon and Lin Chao, 2005

...and the creationist resonances and hand-waving were just sort of asking for a counterpoint with some actual quantitative biology, in case you were somehow thinking that real-world biological systems are brittle due to "irreducible complexity" rather than robust and fixable :-P

Manifesto for an Evolutionary Economics of Intelligence

On May 8, 2007 the Emergent Epistemology Salon met to talk about:

Manifesto for an Evolutionary Economics of Intelligence
Eric B. Baum, 1998

PARTIAL ABSTRACT: We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must solve subproblems, and must collaborate on solution of the overall problem. However a collection of rational agents will only collaborate if appropriate structure is imposed. We give a result, analogous to the First Theorem of Welfare Economics, that shows how to impose such structure. That is, we describe how to use economic principles to assign credit and ensure that a collection of rational (but possibly computationally limited) agents will collaborate on reinforcement learning. Conversely, we survey catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economics, and artificial intelligence programs, when such structure was not enforced.

We conjecture that simulated economies can evolve to reinforcement learn in complex environments in feasible time scales, starting from a collection of agents which have little knowledge and hence are *not* rational. We support this with two implementations of learning models based on these principles.

In our discussion we went over content from the paper and ranged into related tangents. We considered issues having to do with the degree to which economies could be said to have goals, and what the goal of the actual global economy might be if considered as an intelligent agent conserving property rights for its agents so that they might be forced to accomplish goals or perish. We also spent some time discussing the degree to which "the ability to have invented an economy in the first place and then created the science of economics" was something that simulated agents should have if algorithms inspired by human economies were to display the amazing results that human economies generate. This trended into a discussion of "agents with good insights but bad business sense", charlatanism, and possible effects of similar issues on the results of Baumian AI architectures (and the actual economy).
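To make the core mechanism concrete: the paper's central move is to enforce conservation of money so that credit flows backward along the chain of agents that contributed to a reward. Here's a toy sketch of that idea (this is my illustrative reconstruction, not Baum's actual implementation; the class names, starting wealth, and random bidding policy are all placeholders I made up):

```python
import random

class Agent:
    """A toy agent in a Baum-style artificial economy (illustrative)."""
    def __init__(self, name):
        self.name = name
        self.wealth = 10.0  # arbitrary starting capital

    def bid(self, state):
        # A real agent would estimate the value of taking over the
        # computation from the current state; here we just bid a
        # random fraction of our wealth as a placeholder policy.
        return random.uniform(0.0, self.wealth * 0.5)

def run_episode(agents, steps, final_reward):
    """Auction off control of the computation at each step, then pay
    the external reward to whoever owns it at the end. Each winner
    pays its bid to the previous owner, so money is conserved inside
    the system and credit propagates back through the chain of
    agents that set up the final success."""
    owner = None  # current "property owner" of the computation
    for _ in range(steps):
        bids = {a: a.bid(None) for a in agents}
        winner = max(bids, key=bids.get)
        if owner is not None:
            winner.wealth -= bids[winner]  # winner pays its bid...
            owner.wealth += bids[winner]   # ...to the previous owner
        owner = winner
    if owner is not None:
        owner.wealth += final_reward  # environment pays the final owner
    return owner
```

The point of the conservation constraint is visible even in this toy: the total wealth of the population changes only by the externally injected reward, so an agent can profit in the long run only by buying in cheap and selling on a state that later agents find valuable.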

The Wisdom Economy

The second meeting wasn't exactly two weeks later... But on April 22, 2007 the EES met to talk about something we lumped under the label of the "Wisdom Economy" (though that term doesn't speak very well to the algorithmic angle we were focusing on). These were the readings for the meeting:

TOOL: The Open Opinion Layer
Hassan Masum, 2002

Shared opinions drive society: what we read, how we vote, and where we shop are all heavily influenced by the choices of others. However, the cost in time and money to systematically share opinions remains high, while the actual performance history of opinion generators is often not tracked. This article explores the development of a distributed open opinion layer, which is given the generic name of TOOL. Similar to the evolution of network protocols as an underlying layer for many computational tasks, we suggest that TOOL has the potential to become a common substrate upon which many scientific, commercial, and social activities will be based. Valuation decisions are ubiquitous in human interaction and thought itself. Incorporating information valuation into a computational layer will be as significant a step forward as our current communication and information retrieval layers.

Automated Collaborative Filtering and Semantic Transports
Alexander Chislenko, 1997

This essay focuses on the conceptualization of the issues, comparisons of current technological developments to other historical/evolutionary processes, future of automated collaboration and its implications for economic and social development of the world, and suggestions of what we may want to pursue and avoid. Explanations of the workings of the technology and analysis of the current market are not my purpose here, although some explanations and examples may be appropriate.
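Neither reading includes pseudocode, but the core mechanism both are building on, automated collaborative filtering, is simple to sketch: predict how much a user will value an item from the opinions of other users with similar taste. This is a generic textbook version I wrote for illustration (the rating data and function names are made up, not drawn from either paper):

```python
from math import sqrt

def similarity(ratings_a, ratings_b):
    """Cosine similarity computed over the items both users rated."""
    common = set(ratings_a) & set(ratings_b)
    if not common:
        return 0.0
    dot = sum(ratings_a[i] * ratings_b[i] for i in common)
    norm_a = sqrt(sum(ratings_a[i] ** 2 for i in common))
    norm_b = sqrt(sum(ratings_b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def predict(user, item, all_ratings):
    """Similarity-weighted average of other users' ratings for item.
    Returns None if no other user has rated the item."""
    num = den = 0.0
    for other, ratings in all_ratings.items():
        if other == user or item not in ratings:
            continue
        w = similarity(all_ratings[user], ratings)
        num += w * ratings[item]
        den += abs(w)
    return num / den if den else None
```

The "open opinion layer" idea in the Masum piece is roughly this mechanism plus the extra step of tracking the performance history of the opinion sources themselves, so the weights can reflect earned reputation rather than just taste overlap.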

A.I. as a Positive and Negative Factor in Global Risk

Starting out with some back history :-)

Anna and I (more her than me, she's the inspired one :-)) schemed up something we ended up calling the Emergent Epistemology Salon (EES) for lack of a less painfully trendy name. The first get together was back on March 25, 2007 and as things have bounced along we kept telling ourselves it would help us write stuff and work up a "big picture" if we had a place to post thoughts and links and whatnot.

So, I'm going to start a tag called "Discussion" and every time we meet (that I have time for it) I'll post a link to what we talked about under that tag. That should ensure at least one post to the blog every two weeks or so... Here was the first thing we talked about back in March:

Artificial Intelligence as a Positive and Negative Factor in Global Risk
Eliezer Yudkowsky, 2006

An essay on the dangers and benefits of (succeeding at) building a general artificial intelligence. Of general theoretical interest is the discussion of the human reasoning biases that seem to lead many people to radically overestimate the degree to which they "understand intelligence".


I'm still trying to figure out what the right dynamics for "the blog version of the salon" should be. The meeting was pretty good and two of us (me and another person) wrote up summaries of where the discussion went. Those summaries might be worth posting? Or not? Or maybe posting them would be invasive of a private conversation? I think I'll ask first and then post my summary here as a comment unless someone vetoes the idea.