Sunday, May 27, 2007

A.I. as a Positive and Negative Factor in Global Risk

Starting out with some backstory :-)

Anna and I (more her than me, she's the inspired one :-)) schemed up something we ended up calling the Emergent Epistemology Salon (EES), for lack of a less painfully trendy name. The first get-together was back on March 25, 2007, and as things have bounced along we kept telling ourselves it would help us write things up and work toward a "big picture" if we had a place to post thoughts, links, and whatnot.

So, I'm going to start a tag called "Discussion", and every time we meet (when I have time for it) I'll post a link to what we talked about under that tag. That should ensure at least one post to the blog every two weeks or so... Here's the first thing we talked about back in March:

Artificial Intelligence as a Positive and Negative Factor in Global Risk
Eliezer Yudkowsky, 2006

An essay on the dangers and benefits of (succeeding at) building a general artificial intelligence. Of general theoretical interest is its discussion of the human reasoning biases that seem to lead many people to radically overestimate the degree to which they "understand intelligence".

--

I'm still trying to figure out what the right dynamics for "the blog version of the salon" should be. The meeting was pretty good, and two of us (me and another person) each wrote up a summary of where the discussion went. Those summaries might be worth posting? Or not? Or would that be invasive of a private conversation? I think I'll ask first and then post my summary here as a comment unless someone vetoes the idea.
