As UCSD's quarter winds down, the Emergent Epistemology Salon met on June 3, 2007 and talked about the ideas in the final paper of Justin, one of our members. The conversation was about "games", taking the idea very broadly to include actual games like chess but also (fuzzy definition ahead!) nearly any interaction that can be simplified to the point that non-human participation starts being feasible. Games offer their participants "symbolic bottlenecks".
Three snippets from a very early draft of Justin's writing:
Groups of humans give rise to a vast array of organizational patterns found in no other system in the natural world. This has, of late, given rise to the suggestion that humans implement "group intentionality" in one way or another.
...
Chess is primarily a person-person interaction taking place over a board in physical space. It can also be played through the mail, by phone, over computers, or simply between persons calling out moves. Meaning is created during a chess game by the rule-constrained manipulation of pieces on a board, however these are instantiated. The game of chess organizes human behavior in complex and interesting ways across space and time.
The last century saw the rise of computerized chess engines. Computers, though hideously inefficient and unable to do lots of other things, now kick ass at chess.
...
Traditionally, language games are viewed as giving rise to exclusively human-human interactions. Certainly human-human interactions form a large part of our game-playing activity. But there is nothing essential to the structure of most games that requires a human opponent/participant. And even if this was a requirement, groups of humans can and do participate in games all the time.
Maybe more importantly, humans care about and invest in games (consider the parliaments of the world, religions, etc). If we find ourselves struggling in a game with entities that are not obviously reducible to one agent (the phone company, NIMBY, Quebec), or entities that are non-human (the phone company's automated devil box), we care, because the moves in the game matter to us.
There is a sense in which our investiture in games opens the door to intervention by entities other than individual humans.
If we are committed to the belief that many meaningful human activities are best understood as game-like interactions, and we discover non-human entities that can play our games with us (or even impose them upon us), what are we to make of the activity that results from our interaction with these entities? Is it meaningful in the same way human-human interaction is meaningful? What does it say about the other party? What does it say about us?
Friday, June 8, 2007
Sunday, May 27, 2007
Nativist Evolution (take one)
On April 20, 2007 the EES met to talk about evolution in a way that wasn't totally focused on "natural selection" but more on the sources and kinds of variation that are available to it (and why). We basically bit off more than we could chew because there are a lot of neat ideas lurking here that our conversation looped around but never really brought into focus. (If you're wondering about the "nativism" in the title, you'll be in the right ballpark if you imagine idealism, platonism, or better still, looking for ways to apply psychological nativism to "evolution interpreted as a mindful process".)
How the Leopard Changed Its Spots: The Evolution of Complexity
Brian C. Goodwin, 2001
The link will take you to "the sorts of fragments of text that might induce you to buy the book", which are enough to smell the ideas (if not see them fully realized). There's a lot in here documenting the easy-to-implement structures and patterns that you might argue evolution "discovered" (if you're disposed to see things that way), like fractals and so on.
The Jigsaw Model: An Explanation for the Evolution of Complex Biochemical Systems and the Origin of Life
John F. McGowan, 2000
An essay on "how things might be set up so that single mutations generate correlated traits". The paper is interestingly (to me anyway) steeped in creationist conceptualizations of evolution (the text is pro-evolution... but it's striking in taking creationist objections to "blind evolution" as having a point able to be addressed by thinking about ways biological traits might hypothetically be encoded).
The Rate of Compensatory Mutation in the DNA Bacteriophage ΦX174
Art Poon and Lin Chao, 2005
...and the creationist resonances and hand-waving were just sort of asking for a counterpoint from actual, quantitative biology, in case you were somehow thinking that real-world biological systems are brittle (thanks to "irreducible complexity") rather than robust and fixable :-P
Manifesto for an Evolutionary Economics of Intelligence
On May 8, 2007 the Emergent Epistemology Salon met to talk about:
Manifesto for an Evolutionary Economics of Intelligence
Eric B. Baum, 1998
http://www.whatisthought.com/manif5.ps
PARTIAL ABSTRACT: We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must solve subproblems, and must collaborate on solution of the overall problem. However a collection of rational agents will only collaborate if appropriate structure is imposed. We give a result, analogous to the First Theorem of Welfare Economics, that shows how to impose such structure. That is, we describe how to use economic principles to assign credit and ensure that a collection of rational (but possibly computationally limited) agents will collaborate on reinforcement learning. Conversely, we survey catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economics, and artificial intelligence programs, when such structure was not enforced.
We conjecture that simulated economies can evolve to reinforcement learn in complex environments in feasible time scales, starting from a collection of agents which have little knowledge and hence are *not* rational. We support this with two implementations of learning models based on these principles.
In our discussion we went over content from the paper and ranged into related tangents. We considered the degree to which economies can be said to have goals, and what the goal of the actual global economy might be if it were considered as an intelligent agent conserving property rights for its agents so that they are forced to accomplish goals or perish. We also spent some time discussing whether "the ability to have invented an economy in the first place and then created the science of economics" was something simulated agents would need if algorithms inspired by human economies were to display the amazing results that human economies generate. This led into a discussion of "agents with good insights but bad business sense", charlatanism, and the possible effects of similar issues on Baumian AI architectures (and on the actual economy).
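Baum's economic credit-assignment idea can be caricatured in a few lines of code: agents bid at auction for control of the state, the winner pays its bid to the previous owner, and the environment pays reward to whoever owns the state when a goal is reached. The toy below is my own invention for illustration (the step-counting task, the goal payout, the bid ranges, and the agent count are all made up, and Baum's systems also evolve the agent population, which is omitted here); it just shows the conservation-of-money backbone that the paper argues keeps the agents honest:

```python
import random

class Agent:
    """One rule in the simulated economy: it bids for control and accrues wealth."""
    def __init__(self, action, bid):
        self.action = action   # step size this agent takes when in control
        self.bid = bid         # price it offers for control of the state
        self.wealth = 10.0     # starting endowment

def run_economy(steps=100, seed=0):
    rng = random.Random(seed)
    agents = [Agent(rng.choice([1, 2, 3]), rng.uniform(0.1, 1.0))
              for _ in range(8)]
    total_reward = 0.0
    pos, owner = 0, None
    for _ in range(steps):
        # Auction: among agents that can afford their own bid, highest bid wins.
        solvent = [a for a in agents if a.wealth >= a.bid]
        if not solvent:
            break
        winner = max(solvent, key=lambda a: a.bid)
        if owner is not None:
            # The winner buys the state from the previous owner, so credit
            # flows backward along the chain of decisions (money is conserved).
            winner.wealth -= winner.bid
            owner.wealth += winner.bid
        owner = winner
        pos += winner.action
        if pos >= 10:
            # Goal reached: the environment injects new money as reward.
            owner.wealth += 5.0
            total_reward += 5.0
            pos, owner = 0, None
    return agents, total_reward
```

Because every bid is an internal transfer, total wealth changes only by the reward the environment injects, which is the property the "Welfare Economics" analogy in the paper turns on.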
Labels:
Algorithms,
Discussion,
Evolution
The Wisdom Economy
The second meeting wasn't exactly two weeks later... But on April 22, 2007 the EES met to talk about something we lumped under the label of the "Wisdom Economy" (though that term doesn't speak very well to the algorithmic angle we were focusing on). These were the readings for the meeting:
TOOL: The Open Opinion Layer
http://www.firstmonday.org/issues/issue7_7/masum/
2002, Hassan Masum
Shared opinions drive society: what we read, how we vote, and where we shop are all heavily influenced by the choices of others. However, the cost in time and money to systematically share opinions remains high, while the actual performance history of opinion generators is often not tracked. This article explores the development of a distributed open opinion layer, which is given the generic name of TOOL. Similar to the evolution of network protocols as an underlying layer for many computational tasks, we suggest that TOOL has the potential to become a common substrate upon which many scientific, commercial, and social activities will be based. Valuation decisions are ubiquitous in human interaction and thought itself. Incorporating information valuation into a computational layer will be as significant a step forward as our current communication and information retrieval layers.
Automated Collaborative Filtering and Semantic Transports
http://www.lucifer.com/~sasha/articles/ACF.html
1997, Alexander Chislenko
This essay focuses on the conceptualization of the issues, comparisons of current technological developments to other historical/evolutionary processes, future of automated collaboration and its implications for economic and social development of the world, and suggestions of what we may want to pursue and avoid. Explanations of the workings of the technology and analysis of the current market are not my purpose here, although some explanations and examples may be appropriate.
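Chislenko's "automated collaborative filtering" boils down to predicting what you would think of an item from the opinions of people who have agreed with you in the past. A minimal user-based sketch (the rating data, user names, and item names are all made up; real systems add mean-centering, confidence weighting, and much larger neighborhoods):

```python
from math import sqrt

# Toy rating data: user -> {item: score on a 1-5 scale}.
ratings = {
    "ann": {"doc_a": 5, "doc_b": 3, "doc_c": 4},
    "bob": {"doc_a": 4, "doc_b": 2, "doc_c": 5, "doc_d": 4},
    "cam": {"doc_a": 1, "doc_b": 5, "doc_d": 2},
}

def similarity(u, v):
    """Cosine similarity computed over the items both users have rated."""
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][i] * ratings[v][i] for i in shared)
    nu = sqrt(sum(ratings[u][i] ** 2 for i in shared))
    nv = sqrt(sum(ratings[v][i] ** 2 for i in shared))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    pairs = [(similarity(user, v), r[item])
             for v, r in ratings.items() if v != user and item in r]
    total = sum(s for s, _ in pairs)
    return sum(s * r for s, r in pairs) / total if total else None
```

Here `predict("ann", "doc_d")` leans toward bob's rating of 4 rather than cam's 2, because ann's past ratings track bob's much more closely, which is the whole trick.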
A.I. as a Positive and Negative Factor in Global Risk
Starting out with some back history :-)
Anna and I (more her than me, she's the inspired one :-)) schemed up something we ended up calling the Emergent Epistemology Salon (EES) for lack of a less painfully trendy name. The first get together was back on March 25, 2007 and as things have bounced along we kept telling ourselves it would help us write stuff and work up a "big picture" if we had a place to post thoughts and links and whatnot.
So, I'm going to start a tag called "Discussion" and every time we meet (that I have time for it) I'll post a link to what we talked about under that tag. That should ensure at least one post to the blog every two weeks or so... Here was the first thing we talked about back in March:
Artificial Intelligence as a Positive and Negative Factor in Global Risk
Eliezer Yudkowsky, 2006
An essay on the dangers and benefits of (succeeding at) building a general artificial intelligence. Of general theoretical interest is the discussion of the human reasoning biases that seem to lead many people to radically overestimate the degree to which they "understand intelligence".
--
I'm still trying to figure out what the right dynamics for "the blog version of the salon" should be. The meeting was pretty good and two of us (me and another person) wrote up a summary of where the discussion went. Those summaries might be worth posting? Or not? Or maybe it would be invasive of private conversation? I think I'll ask first and then post my summary here as a comment unless someone vetoes the idea.
Labels:
Algorithms,
Discussion,
Society