Thursday, December 6, 2007

A Theory of Cortical Function: Hierarchical Temporal Memory (HTM)

On Sunday, November 4th 2007 the Emergent Epistemology Salon tried an experiment in meeting to discuss videos rather than text. The videos were of talks by Jeff Hawkins regarding his work developing a quantitative and biologically plausible general theory explaining the human cortex. His proposed "cortical algorithm" is named Hierarchical Temporal Memory (HTM). Here are some videos of him talking about this stuff:

A 20-minute chat at TED (which appears to have been given when the actual algorithm was only in the intuition stage) entitled "Brain science is about to fundamentally change computing".


"Prospects and Problems of Cortical Theory" given at UC Berkeley on October 7, 2005 - it's a little over an hour long and gives all the basics of the theory. (Warning: the words don't perfectly sync with the images... it's a pretty good talk but the medium imposes on you a little.)


This talk, "Hierarchical Temporal Memory: Theory and Implementation" is less chatty and spends more time on the theory. There's a pitch at the end for the software tools his startup wrote.


Significant parts of this material are also covered in his 2004 book On Intelligence and he has working algorithms (designed by Dileep George) with publicly accessible code (if you don't mind some non-standard licensing) that you can find by clicking around the website of his startup company, Numenta. The code is implemented in C++ with an eye towards scaling to clusters and has Python bindings.
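If you want a feel for the algorithm without wading through the Numenta code, here's a toy, single-node sketch in Python (my own simplification, not Numenta's implementation) of the two moves the talks keep coming back to: memorize recurring spatial patterns, then group together patterns that tend to follow one another in time.

# A toy, single-node sketch of two ideas at the heart of HTM-style learning
# as I understand them from the talks: (1) memorize frequently seen spatial
# patterns ("coincidences"), and (2) group together patterns that tend to
# follow one another in time, so the group a pattern belongs to can serve
# as the node's output to the next level of the hierarchy. This is NOT
# Numenta's implementation -- just an illustration of the intuition.
from collections import defaultdict

class ToyHTMNode:
    def __init__(self):
        self.patterns = []                      # memorized spatial patterns
        self.transitions = defaultdict(int)     # (i, j) -> count of pattern i followed by j
        self.prev = None

    def _pattern_index(self, pattern):
        pattern = tuple(pattern)
        if pattern not in self.patterns:
            self.patterns.append(pattern)
        return self.patterns.index(pattern)

    def learn(self, pattern):
        idx = self._pattern_index(pattern)
        if self.prev is not None:
            self.transitions[(self.prev, idx)] += 1
        self.prev = idx

    def temporal_groups(self, threshold=1):
        # Greedily merge patterns connected by frequent transitions
        # (a crude stand-in for the temporal pooling step).
        parent = list(range(len(self.patterns)))
        def find(i):
            while parent[i] != i:
                i = parent[i]
            return i
        for (i, j), count in self.transitions.items():
            if count >= threshold:
                parent[find(i)] = find(j)
        groups = defaultdict(list)
        for i, p in enumerate(self.patterns):
            groups[find(i)].append(p)
        return list(groups.values())

# Feed the node a looping sequence; patterns that follow each other
# end up in the same temporal group.
node = ToyHTMNode()
for _ in range(10):
    for p in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        node.learn(p)
print(node.temporal_groups())

In the full system, many such nodes are stacked into a hierarchy, so that the temporal group a lower node settles on becomes part of the spatial input to the node above it.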

Our actual discussion of the material was weaker than normal. Mostly we went over the algorithms and talked about whether they might be able to capture various kinds of cognition and/or concept formation. Part of the problem may have been that we turned out to be a much more literate crowd than a video-watching crowd.

Law's Order

It's been a while since I posted anything, but that's mostly for lack of time to post rather than for lack of meetings worth posting about. We held a salon on Sunday, October 14th 2007 to discuss a book by David Friedman (son of Milton):

Law's Order
(The above link goes to the full online text, alternately try Amazon or Google Books)

From the Google Books blurb: "Suppose legislators propose that armed robbers receive life imprisonment. Editorial pages applaud them for getting tough on crime. Constitutional lawyers raise the issue of cruel and unusual punishment. Legal philosophers ponder questions of justness. An economist, on the other hand, observes that making the punishment for armed robbery the same as that for murder encourages muggers to kill their victims. This is the cut-to-the-chase quality that makes economics not only applicable to the interpretation of law, but beneficial to its crafting. Drawing on numerous commonsense examples, in addition to his extensive knowledge of Chicago-school economics, David D. Friedman offers a spirited defense of the economic view of law. He clarifies the relationship between law and economics in clear prose that is friendly to students, lawyers, and lay readers without sacrificing the intellectual heft of the ideas presented. Friedman is the ideal spokesman for an approach to law that is controversial not because it overturns the conclusions of traditional legal scholars--it can be used to advocate a surprising variety of political positions, including both sides of such contentious issues as capital punishment--but rather because it alters the very nature of their arguments. For example, rather than viewing landlord-tenant law as a matter of favoring landlords over tenants or tenants over landlords, an economic analysis makes clear that a bad law injures both groups in the long run."

The book covered basic economic concepts such as economic efficiency, externalities, and Coase's Theorem. The author's honesty about how much work and thinking it takes to get the concept of "property" right is charming and intellectually productive.

When something is owned, in a rather deep sense what's owned is not simply "a thing" but a complicated bundle of rights related to the thing. With "my land", for example, there are: the right to build on the land, the right to not have certain things built on neighboring land, the right to control the movement of physical objects through the airspace above the land, the right to the minerals beneath the surface, the right to have it supported by neighboring land, the right to make loud noises on the edge of the land, and so on and so forth.

Once you acknowledge that rights are being bundled in potentially arbitrary ways, the discussion opens up to questions about how rights relating to different things should be bundled, who should initially own the bundles, and what sorts of transfer schemes should hold. Mr. Friedman takes a position on what should be happening, but it's rather complicated. Chapter 5 has a "spaghetti diagram" showing a variety of possible initial assignments of different kinds of rights, further ramified by the relative costs and benefits that accrue to various outcomes after rights have been renegotiated by various means (up front purchase contract, after the fact court dispute, etc).

The "thing that should happen" isn't a single way of doing things but a situationally sensitive rule that requires estimation of the costs and benefits parties on each side of an allocation of rights faces, and further guesses about who is likely to be able to see how many of the costs and benefits (the parties involved, the courts, etc), recognition of the number of people owning various rights and how many people any particular agent would have to negotiate with in order to get anything useful, and estimates of transaction costs (like the cost of making all these estimates) to boot.

If the initial assignment of rights is done poorly, various game theoretic barriers to collective action can arise. Given the complexity of the decision, this is not necessarily a conclusion that inspires happiness and hope. Mr. Friedman discusses institutions for working through these issues, including an examination of the claim that the best institution for achieving economically efficient outcomes in the long run is common law.

Our discussion of the book ranged rather widely. One of the juiciest veins of thought we found was in the question of bundling rights in novel ways and trying to understand how they might be rebundled by the market over time. For example, the idea of "salesright" (inspired by copyright) was a sort of "horrifying or amazing" concept that fell out of the discussions. Salesright would be "the right to a sale given certain propaganda efforts". If one company advertised at you, and you ended up buying something in their industry (when you wouldn't otherwise have done so) but you bought it from one of their competitors... in some sense the competitor has "stolen a sale" that "rightfully" belonged to the company that paid for the advertisement. (And you thought patents and copyright were bad :-P)

Another theme we examined (one that Mr. Friedman mostly ignored) was the similarity between the questions of rights bundling and what, in modern philosophy, is known as Goodman's new problem of induction.

Saturday, September 29, 2007

Nativist Chemistry?

After a summer hiatus, the salon met on September 23rd to discuss a book by Stuart Kauffman:

Investigations

A quote from decomplexity's Amazon review: Kauffman's start point is autocatalysis: that it is very likely that self-reproducing molecular systems will form in any large and sufficiently complex chemical reaction. He then goes on to investigate what qualities a physical system must have to be an autonomous agent. His aim is to define a new law of thermodynamics for those systems such as the biosphere that may be hovering in a state of self-organised criticality and are certainly far from thermodynamic equilibrium. This necessitates a rather more detailed coverage of Carnot work cycles and information compressibility than was covered in passing in his earlier books. It leads to the idea that a molecular autonomous agent is a self-reproducing molecular system capable of carrying out one or more work cycles.

But Kauffman now pushes on further into stranger and uncharted territory. The Universe, he posits, is not yet old enough to have synthesised more than a minute subset of the total number of possible proteins. This leads to the fundamental proposition that the biosphere of which we are part cannot have reached all its possible states. The ones not yet attained - the `adjacent possible' as Kauffman terms it - are unpredictable since they are the result of the interaction of the large collection of autonomous agents: us - or rather our genes - and all the other evolving things in the external world. His new fourth law of thermodynamics for self-constructing systems implies that they will try to expand into the `adjacent possible' by trying to maximise the number of types of events that can happen next.

The book covers more than that (see the rest of the quoted review) but we focused on the early part of the book. Kauffman points out that he wasn't really doing science, and he's right about that. However, he had a number of ideas arranged in a sequence that made some sense to us... the trick was that in between the ideas there appeared to be high-flying prose and analogies to mathematical concepts where perhaps the details of the original math weren't being faithfully imported. Or maybe they were and we just couldn't see it? It was interesting to hypothesize the existence of mechanical connections and see if we could reconstruct some of them.

We spent some time with autocatalytic sets and looked into some assumptions about what it took to make them work right (various kinds of neighborliness of molecular species, differential rates of reaction, etc) especially the presumption that real chemistry possessed the "algorithmic generosity" required. It inspired an interesting analogy to the "debates" between working biologists and theists promoting "intelligent design"... one could imagine people insisting that autocatalysis was a sufficient "algorithm" to explain biogenesis while another group insisted that chemistry had to work in certain ways for the algorithm to successfully operate and that the fact that chemistry did work in such ways was evidence that it had been "designed".
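For anyone who wants to poke at the "algorithmic generosity" question directly, here's a toy Python model (a drastic simplification, mine rather than Kauffman's): molecules are short binary strings, reactions glue two strings together, and each molecule catalyzes each reaction with some probability p. Whether the set of molecules "catches fire" depends on how generous you make p, which is roughly the question we kept circling.

# A toy version of Kauffman-style autocatalysis (my simplification, not his
# model): molecules are short binary strings, reactions concatenate two
# molecules, and each molecule catalyzes each reaction with probability p.
# Starting from a "food set" of short strings, we keep adding any product
# whose reaction is catalyzed by something already in the set. The point:
# whether the set explodes depends on how generous the chemistry is, i.e.
# on the catalysis probability p.
import itertools, random

def grow(p, max_len=6, seed=0):
    rng = random.Random(seed)
    present = {"0", "1", "01", "10"}             # the food set
    catalyzes = {}                               # (catalyst, reaction) -> bool, sampled lazily
    def is_catalyzed(reaction, molecules):
        return any(catalyzes.setdefault((m, reaction), rng.random() < p)
                   for m in molecules)
    changed = True
    while changed:
        changed = False
        for a, b in itertools.product(list(present), repeat=2):
            product = a + b
            if len(product) <= max_len and product not in present:
                if is_catalyzed((a, b), present):
                    present.add(product)
                    changed = True
    return len(present)

for p in (0.0001, 0.01, 0.2):
    print(f"catalysis probability {p}: set grows to {grow(p)} molecules")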

There was also some discussion around Kauffman's claims that the processes or parameters of evolution could not be pre-stated or predicted in any meaningful way. It seems that he was inspired by theorems about computability but it would have been nice if he'd spent more time wondering if the axioms involved in those theorems really applied to biology at the level of biology that humans are interested in. It appeared that he believed biological systems were doing something "non algorithmic" in the sense that you'd have to know every detail of everything to predict what an ecosystem (or an economy) would think up next. It would have been nice if his analogy for scientific theories was "lossy compressions of reality with possibly calculable error bounds" instead of something more pristine. (Mysterians were mentioned as having a vaguely similar attitude towards cognition... seeming to want to find something that was impossible to understand.)

Sunday, July 15, 2007

Nativist Evolution (take two)

On July 15, 2007 the emergent epistemology salon met to talk about:

The theory of facilitated variation

Gerhart & Kirschner, May 2007

This theory concerns the means by which animals generate phenotypic variation from genetic change. Most anatomical and physiological traits that have evolved since the Cambrian are, we propose, the result of regulatory changes in the usage of various members of a large set of conserved core components that function in development and physiology. Genetic change of the DNA sequences for regulatory elements of DNA, RNAs, and proteins leads to heritable regulatory change, which specifies new combinations of core components, operating in new amounts and states at new times and places in the animal. These new configurations of components comprise new traits. The number and kinds of regulatory changes needed for viable phenotypic variation are determined by the properties of the developmental and physiological processes in which core components serve, in particular by the processes' modularity, robustness, adaptability, capacity to engage in weak regulatory linkage, and exploratory behavior. These properties reduce the number of regulatory changes needed to generate viable selectable phenotypic variation, increase the variety of regulatory targets, reduce the lethality of genetic change, and increase the amount of genetic variation retained by a population. By such reductions and increases, the conserved core processes facilitate the generation of phenotypic variation, which selection thereafter converts to evolutionary and genetic change in the population. Thus, we call it a theory of facilitated phenotypic variation.

This paper is, roughly, an eight page long abstract for Gerhart & Kirschner's book "The Plausibility of Life". It covers a lot of ground idea-wise, with entire chapters in the book compressed down to a few paragraphs in the paper. The paper has a really high idea-to-word density (which is great in some ways) but if you're looking for elaborated concrete examples to ground the theory or inspire your own intuitions, the book is probably the place to go.

A lot of our discussion revolved around laying out the theory of G & K and trying to find equivalent patterns (of conserved structures reused and able to interact via the influence of thin regulatory signals) in the processes of science and the algorithms of machine learning.
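For concreteness, here's a toy Python sketch (my construction, not Gerhart & Kirschner's) of the central claim: if phenotypes are built by regulatory wiring over conserved core components, then random changes to the wiring are viable far more often than random changes to the components themselves. The numbers are made up; the asymmetry between the two kinds of mutation is the point.

# A toy illustration (mine, not Gerhart & Kirschner's): core "modules" are
# fixed, well-tested vectors of gene products; the regulatory genome is a
# small on/off matrix saying which module runs in which tissue. A phenotype
# is "viable" if every tissue expresses at least one module, no tissue
# exceeds a dosage budget, and no expressed module is broken.
import random

rng = random.Random(1)
N_MODULES, N_TISSUES, BUDGET = 8, 5, 3
core = [[rng.randint(1, 3) for _ in range(4)] for _ in range(N_MODULES)]  # conserved parts

def viable(regulation, modules):
    for tissue in regulation:                      # one row of on/off flags per tissue
        active = [modules[i] for i, on in enumerate(tissue) if on]
        if not active or len(active) > BUDGET:
            return False
        if any(min(m) <= 0 for m in active):       # a damaged module is lethal wherever it runs
            return False
    return True

def random_regulation():
    return [[rng.random() < 0.25 for _ in range(N_MODULES)] for _ in range(N_TISSUES)]

# Start from a viable ancestor and compare two kinds of single mutations.
ancestor = next(r for r in iter(random_regulation, None) if viable(r, core))

def mutate_regulation(reg):
    reg = [row[:] for row in reg]
    t, m = rng.randrange(N_TISSUES), rng.randrange(N_MODULES)
    reg[t][m] = not reg[t][m]                      # flip one regulatory link
    return viable(reg, core)

def mutate_core():
    modules = [m[:] for m in core]
    i = rng.randrange(N_MODULES)
    modules[i][rng.randrange(4)] -= 3              # damage one conserved component
    return viable(ancestor, modules)

trials = 2000
print("viable after regulatory mutation:", sum(mutate_regulation(ancestor) for _ in range(trials)) / trials)
print("viable after core mutation:      ", sum(mutate_core() for _ in range(trials)) / trials)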

Thursday, June 28, 2007

Manifesto for a Material Study of Learning

Continuing in the vein of "winding down the school year", the EES met on June 26 (there was a BBQ on Sunday, so we rescheduled) to talk about a 24 page final paper, written by Anna, full of [insert better example than this here] notes and dense enough to expand into a book, if it isn't broken into six papers instead :-)

The abstract:

I propose a program for seeking thick, data-rich analogies between learning systems. The goal is to understand why evolution is able to design species; why animals are able to acquire useful behavior patterns; and why communities of scientists are able to find predictively useful theories. Each individual system is already being studied by a large number of system-specific specialists. My proposal offers a framework for enrolling these system-specific research efforts into a single, larger endeavor.

Specifically, I argue that each of the above systems can be understood as a procedure of natural selection undertaken on a certain kind of fitness landscape with a certain kind of variation-making. I argue that if we make the fitness landscape and variation-making central objects of study, we will be able to move past the thin cross-system models that have previously been offered to make rich contact with the data.


Having talked about the paper, we found the abstract especially abstract relative to the content of the paper. For example, there's no mention in the abstract of the No Free Lunch Theorem or Occam's Razor or Grue or...
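To put one bit of the abstract in concrete terms, here's a toy Python sketch (mine, not Anna's): on the very same fitness landscape, the same hill-climbing loop either works or stalls depending on what kinds of variants the variation-making is allowed to propose.

# A toy sketch (mine, not the paper's) of why the variation-making scheme
# deserves to sit alongside the fitness landscape as an object of study:
# identical landscape, identical selection rule, very different outcomes
# depending on how variants are generated.
import random

rng = random.Random(0)
TARGET = [rng.randint(0, 1) for _ in range(40)]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def flip_one_bit(genome):
    child = genome[:]
    i = rng.randrange(len(child))
    child[i] = 1 - child[i]
    return child

def resample_everything(genome):
    return [rng.randint(0, 1) for _ in genome]

def hill_climb(make_variant, steps=2000):
    genome = [0] * len(TARGET)
    for _ in range(steps):
        child = make_variant(genome)
        if fitness(child) >= fitness(genome):
            genome = child
    return fitness(genome)

print("small local variation:", hill_climb(flip_one_bit), "/", len(TARGET))
print("wild global variation:", hill_climb(resample_everything), "/", len(TARGET))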

Friday, June 8, 2007

Games as Symbolic Bottlenecks

As UCSD's quarter winds down, the Emergent Epistemology Salon met on June 3, 2007 and talked about the ideas in the final paper of Justin, one of our members. The conversation was about "games", taking the idea very broadly to include actual games like chess but also (fuzzy definition ahead!) nearly any interaction that can be simplified to the point that non-human participation starts being feasible. Games offer their participants "symbolic bottlenecks".

Three snippets from a very early draft of Justin's writing:

Groups of humans give rise to a vast array of organizational patterns found in no other system in the natural world. This has, of late, given rise to the suggestion that humans implement "group intentionality" in one way or another.

...

Chess is primarily a person-person interaction taking place over a board in physical space. It can also be played through the mail, by phone, over computers, or simply between persons calling out moves. Meaning is created during a chess game by the rule-constrained manipulation of pieces on a board, however these are instantiated. The game of chess organizes human behavior in complex and interesting ways across space and time.

The last century saw the rise of computerized chess engines. Computers, though hideously inefficient and unable to do lots of other things, now kick ass at chess.


...

Traditionally, language games are viewed as giving rise to exclusively human-human interactions. Certainly human-human interactions form a large part of our game-playing activity. But there is nothing essential to the structure of most games that requires a human opponent/participant. And even if this was a requirement, groups of humans can and do participate in games all the time.

Maybe more importantly, humans care about and invest in games (consider the parliaments of the world, religions, etc). If we find ourselves struggling in a game with entities that are not obviously reducible to one agent (the phone company, NIMBY, Quebec), or entities that are non-human (the phone company's automated devil box), we care, because the moves in the game matter to us.

There is a sense in which our investiture in games opens the door to intervention by entities other than individual humans.


If we are committed to the belief that many meaningful human activities are best understood as game-like interactions, and we discover non-human entities that can play our games with us (or even impose them upon us), what are we to make of the activity that results from our interaction with these entities? Is it meaningful in the same way human-human interaction is meaningful? What does it say about the other party? What does it say about us?

Sunday, May 27, 2007

Nativist Evolution (take one)

On April 20, 2007 the EES met to talk about evolution in a way that wasn't totally focused on "natural selection" but more on the sources and kinds of variation that are available to it (and why). We basically bit off more than we could chew because there are a lot of neat ideas lurking here that our conversation looped around but never really brought into focus. (If you're wondering about the "nativism" in the title, you'll be in the right ballpark if you imagine idealism, platonism, or better still, looking for ways to apply psychological nativism to "evolution interpreted as a mindful process".)

How the Leopard Changed Its Spots: The Evolution of Complexity

Brian C. Goodwin, 2001

The link will take you to "the sorts of fragments of text that might induce you to buy the book", which are enough to smell the ideas (if not see them fully realized). There's a lot in here documenting the easy-to-implement structures and patterns that you might argue evolution "discovered" (if you're disposed to see things that way), like fractals and so on.

The Jigsaw Model: An Explanation for the Evolution of Complex Biochemical Systems and the Origin of Life
John F. McGowan, 2000

An essay on "how things might be set up so that single mutations generate correlated traits". The paper is interestingly (to me anyway) steeped in creationist conceptualizations of evolution (the text is pro-evolution... but it's striking in taking creationist objections to "blind evolution" as having a point able to be addressed by thinking about ways biological traits might hypothetically be encoded).

The Rate of Compensatory Mutation in the DNA Bacteriophage {PHI}X174
Art Poon and Lin Chao, 2005

...and the creationist resonances and hand waving were just sort of asking for a counterpoint from "actual real quantitative biology", in case you were somehow thinking that real world biological systems are brittle due to "irreducible complexity" rather than robust and fixable :-P
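As promised above, here is a toy Python sketch of a jigsaw-flavored encoding (my construction, not McGowan's actual model): several visible traits are computed from a few shared underlying parameters, so a single mutation to a shared parameter shifts multiple traits at once, in a correlated way.

# A toy "jigsaw"-flavored encoding (my construction, not McGowan's model):
# visible traits are computed from shared underlying parameters, so one
# mutation to a shared parameter shifts several traits together, while a
# mutation to a trait-specific parameter moves only one trait.
import random

rng = random.Random(42)

def phenotype(params):
    limb_length = 2.0 * params["growth_factor"] + params["limb_bias"]
    jaw_size    = 1.5 * params["growth_factor"] + params["jaw_bias"]
    rib_count   = round(10 + 4 * params["segmentation"])
    return {"limb_length": limb_length, "jaw_size": jaw_size, "rib_count": rib_count}

genome = {"growth_factor": 1.0, "limb_bias": 0.5, "jaw_bias": 0.2, "segmentation": 0.5}
print("ancestor:", phenotype(genome))

# One mutation to the shared parameter moves limb length AND jaw size together.
mutant = dict(genome, growth_factor=genome["growth_factor"] + rng.gauss(0, 0.3))
print("shared-parameter mutant:", phenotype(mutant))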

Manifesto for an Evolutionary Economics of Intelligence

On May 8, 2007 the Emergent Epistemology Salon met to talk about:

Manifesto for an Evolutionary Economics of Intelligence
Eric B. Baum, 1998
http://www.whatisthought.com/manif5.ps

PARTIAL ABSTRACT: We address the problem of reinforcement learning in ultra-complex environments. Such environments will require a modular approach. The modules must solve subproblems, and must collaborate on solution of the overall problem. However a collection of rational agents will only collaborate if appropriate structure is imposed. We give a result, analogous to the First Theorem of Welfare Economics, that shows how to impose such structure. That is, we describe how to use economic principles to assign credit and ensure that a collection of rational (but possibly computationally limited) agents will collaborate on reinforcement learning. Conversely, we survey catastrophic failure modes that can be expected in distributed learning systems, and empirically have occurred in biological evolution, real economics, and artificial intelligence programs, when such structure was not enforced.

We conjecture that simulated economies can evolve to reinforcement learn in complex environments in feasible time scales, starting from a collection of agents which have little knowledge and hence are *not* rational. We support this with two implementations of learning models based on these principles.

In our discussion we went over content from the paper and ranged into related tangents. We considered issues having to do with the degree to which economies could be said to have goals, and what the goal of the actual global economy might be if it were considered as an intelligent agent conserving property rights for its agents so that they might be forced to accomplish goals or perish. We also spent some time discussing the degree to which "the ability to have invented an economy in the first place and then created the science of economics" was something that simulated agents should have if algorithms inspired by human economies were to display the amazing results that human economies generate. This led into a discussion of "agents with good insights but bad business sense", charlatanism, and the possible effects of similar issues on the results of Baumian AI architectures (and the actual economy).
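For anyone who wants the gist of the auction mechanism in code, here's a loose Python sketch (inspired by the paper, not Baum's actual "Hayek" implementation): agents bid for the right to act on the world, and the winner pays its bid to the previous winner, so an agent profits only if whoever buys control next (or the external reward) pays more than the agent itself paid for control.

# A loose sketch of the paper's central auction idea (not Baum's actual
# "Hayek" implementation): agents bid for the right to act on the world,
# and each winner's payoff is whatever the *next* winner bids (or the final
# external reward) minus its own bid. Payments flow backward along the
# chain of winners, so an agent profits only by setting up a state that
# later agents, or the world, will pay well for -- a market-style answer
# to the credit assignment problem.
import random

rng = random.Random(0)
# Toy world: a chain of states 0..N_STATES; "advance" moves one step right,
# "reset" sends the world back to the start; reaching the end pays GOAL_REWARD.
N_STATES, GOAL_REWARD = 5, 10.0

class Agent:
    def __init__(self, state, action):
        self.state, self.action = state, action       # acts only in its home state
        self.bid_estimate = 0.1                        # what it thinks control is worth
        self.wealth = 1.0

    def update(self, income):
        # Wealth changes by income received minus the bid paid; the bid
        # estimate moves toward the income actually received for control.
        profit = income - self.bid_estimate
        self.wealth += profit
        self.bid_estimate += 0.2 * profit

agents = [Agent(s, a) for s in range(N_STATES) for a in ("advance", "reset")]

for episode in range(300):
    state, prev_winner = 0, None
    while state < N_STATES:
        bidders = [a for a in agents if a.state == state]
        winner = max(bidders, key=lambda a: a.bid_estimate + rng.gauss(0, 0.05))
        if prev_winner is not None:
            prev_winner.update(income=winner.bid_estimate)   # paid by the next owner
        prev_winner = winner
        state = state + 1 if winner.action == "advance" else 0
        if state == 0:
            break                                            # episode ruined by a reset
    if prev_winner is not None:
        prev_winner.update(income=GOAL_REWARD if state == N_STATES else 0.0)

for s in range(N_STATES):
    best = max((a for a in agents if a.state == s), key=lambda a: a.bid_estimate)
    print(f"state {s}: best agent action={best.action}, learned bid={best.bid_estimate:.2f}")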

The Wisdom Economy

The second meeting wasn't exactly two weeks later... But on April 22, 2007 the EES met to talk about something we lumped under the label of the "Wisdom Economy" (though that term doesn't speak very well to the algorithmic angle we were focusing on). These were the readings for the meeting:

TOOL: The Open Opinion Layer
http://www.firstmonday.org/issues/issue7_7/masum/
2002, Hassan Masum

Shared opinions drive society: what we read, how we vote, and where we shop are all heavily influenced by the choices of others. However, the cost in time and money to systematically share opinions remains high, while the actual performance history of opinion generators is often not tracked. This article explores the development of a distributed open opinion layer, which is given the generic name of TOOL. Similar to the evolution of network protocols as an underlying layer for many computational tasks, we suggest that TOOL has the potential to become a common substrate upon which many scientific, commercial, and social activities will be based. Valuation decisions are ubiquitous in human interaction and thought itself. Incorporating information valuation into a computational layer will be as significant a step forward as our current communication and information retrieval layers.

Automated Collaborative Filtering and Semantic Transports
http://www.lucifer.com/~sasha/articles/ACF.html
1997, Alexander Chislenko

This essay focuses on the conceptualization of the issues, comparisons of current technological developments to other historical/evolutionary processes, future of automated collaboration and its implications for economic and social development of the world, and suggestions of what we may want to pursue and avoid. Explanations of the workings of the technology and analysis of the current market are not my purpose here, although some explanations and examples may be appropriate.
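Since "automated collaborative filtering" is doing a lot of the work in both readings, here's a minimal Python sketch of the basic move (my toy example, not either author's system): predict how much you'd like something from the opinions of people whose past ratings correlate with yours.

# A minimal collaborative filtering sketch (my toy example, not either
# author's system): score an unseen item for a user by taking a
# similarity-weighted average of other users' ratings of that item.
from math import sqrt

ratings = {  # user -> {item: score}
    "alice": {"matrix": 5, "amelie": 1, "memento": 4},
    "bob":   {"matrix": 4, "amelie": 2, "memento": 5, "clueless": 1},
    "carol": {"matrix": 1, "amelie": 5, "clueless": 4},
}

def similarity(a, b):
    # Cosine similarity over the items both users have rated.
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    va = [ratings[a][i] for i in shared]
    vb = [ratings[b][i] for i in shared]
    dot = sum(x * y for x, y in zip(va, vb))
    return dot / (sqrt(sum(x * x for x in va)) * sqrt(sum(y * y for y in vb)))

def predict(user, item):
    votes = [(similarity(user, other), r[item])
             for other, r in ratings.items()
             if other != user and item in r]
    total_weight = sum(w for w, _ in votes)
    return sum(w * score for w, score in votes) / total_weight if total_weight else None

print("alice on clueless:", predict("alice", "clueless"))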

A.I. as a Positive and Negative Factor in Global Risk

Starting out with some back history :-)

Anna and I (more her than me, she's the inspired one :-)) schemed up something we ended up calling the Emergent Epistemology Salon (EES) for lack of a less painfully trendy name. The first get together was back on March 25, 2007 and as things have bounced along we kept telling ourselves it would help us write stuff and work up a "big picture" if we had a place to post thoughts and links and whatnot.

So, I'm going to start a tag called "Discussion" and every time we meet (that I have time for it) I'll post a link to what we talked about under that tag. That should ensure at least one post to the blog every two weeks or so... Here was the first thing we talked about back in March:

Artificial Intelligence as a Positive and Negative Factor in Global Risk
Eliezer Yudkowsky, 2006

An essay on the dangers and benefits of (succeeding at) building a general artificial intelligence. Of general theoretical interest is the discussion of the human reasoning biases that seem to lead many people to radically overestimate the degree to which they "understand intelligence".

--

I'm still trying to figure out what the right dynamics for "the blog version of the salon" should be. The meeting was pretty good, and two of us (me and another person) each wrote up a summary of where the discussion went. Those summaries might be worth posting? Or not? Or maybe it would be invasive of private conversation? I think I'll ask first and then post my summary here as a comment unless someone vetoes the idea.