Hi William,

A Von Neumann computer is just a machine. Its only purpose is to compute. When 
you get into higher-level purpose, you have to go up a level to the stuff being 
computed. Even then, the purpose is in the mind of the programmer. The only way 
to talk coherently about purpose within the computation is to simulate 
self-organized, embodied systems.

And I applaud your intuition to make the whole system intelligent. One of my 
biggest criticisms of traditional AI philosophy is over-emphasis on the agent. 
Indeed, the ideal simulation, in my mind, is one in which the boundary between 
agent and environment is blurry. In nature, for example, at low-enough levels 
of description it is impossible to find a boundary between the two, because the 
entities at that level are freely exchanged.

You are right that starting with bacteria is too indirect, if your goal is to 
achieve AGI in something like decades. It would certainly take an enormous 
amount of time and computation to get from there to human-level AI and beyond, 
perhaps a hundred years or more. But you're asking: aren't there shortcuts we 
can take that don't limit the field of potential intelligence in important 
ways?

For example, starting with bacteria means we have to let multi-cellular 
organisms evolve on their own in a virtual geometry. That project alone is an 
enormous challenge. So let's skip it and go right to the multi-cellular design. 
The trouble is, our design of the multi-cellular organism is limiting: 
alternative designs become impossible. The question at that point is, are we 
excluding any important possibilities for intelligence if we build our 
low-level assumptions about what is necessary to support it into the design? 
In what ways is our designed brain leaving out some key to adapting to 
unforeseen domains?

One of the basic threads of scientific progress is the ceaseless denigration of 
the idea that there is something special about humans. Pretending that we can 
solve AGI by mimicking top-down high-level human reasoning is another example 
of that kind of hubris, and eventually, that idea will fall too. 

Terren 



--- On Mon, 6/30/08, William Pearson <[EMAIL PROTECTED]> wrote:

> > Ben,
> >
> > I agree, an evolved design has limits too, but the key difference
> > between a contrived design and one that is allowed to evolve is that
> > the evolved critter's intelligence is grounded in the context of its
> > own 'experience', whereas the contrived one's intelligence is
> > grounded in the experience of its creator, and subject to the
> > limitations built into that conception of intelligence. For example,
> > we really have no idea how we arrive at spontaneous insights (in the
> > shower, for example). A chess master suddenly sees the game-winning
> > move. We can be fairly certain that often, these insights are not
> > the product of logical analysis. So if our conception of
> > intelligence fails to explain these important aspects, our designs
> > based on those conceptions will fail to exhibit them. An evolved
> > intelligence, on the other hand, is not limited in this way, and has
> > the potential to exhibit intelligence in ways we're not capable of
> > comprehending.
> 
> I'm seeking to do something halfway between what you suggest (from
> bacterial systems to human alife) and AI. I'd be curious to know
> whether you think it would suffer from the same problems.
> 
> First, are we agreed that the von Neumann model of computing has no
> hidden bias in its problem-solving capabilities? It might be able to
> do some jobs more efficiently than others, and need lots of memory to
> do others, but it is not particularly suited to either learning chess
> or running down a gazelle, which means it can be reprogrammed to do
> either.
> 
> However, it has no guide to what it should be doing, so it can become
> virus-infested or subverted. It has a purpose, but we can't
> explicitly define it. So let us try to put in the most minimal guide
> that we can, so that we don't give it a specific goal, just a
> tendency to favour certain activities or programs. How to do this?
> Form an economy based on reinforcement signals: programs that get
> more reinforcement can outbid the others for control of system
> resources.
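>
> As a minimal sketch, such an economy might look like the following
> (the class, the bidding rule, and the reward function here are all
> illustrative assumptions, not a fixed design):
>
>     class Program:
>         def __init__(self, name, credit=10.0):
>             self.name = name
>             self.credit = credit  # accumulated reinforcement
>
>         def bid(self):
>             # stake a fixed fraction of credit for the next time slice
>             return 0.1 * self.credit
>
>     def run_economy(programs, reward, steps=100):
>         for _ in range(steps):
>             # the highest bidder wins control of resources this step
>             winner = max(programs, key=lambda p: p.bid())
>             stake = winner.bid()
>             winner.credit -= stake           # the bid is spent
>             winner.credit += reward(winner)  # reinforcement signal
>         return programs
>
>     # a toy reward that favours one activity over another:
>     progs = run_economy([Program("chess"), Program("forage")],
>                         lambda p: 2.0 if p.name == "chess" else 0.5)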
> 
> This is obviously reminiscent of Tierra and a million and one other
> alife systems. The difference is that I want the whole system to
> exhibit intelligence. Any form of variation is allowed, from random
> mutation to getting in programs from the outside. It should be able
> to change the whole system, from the OS level up, based on that
> variation.
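>
> Continuing that sketch, a variation step might look like this (again
> purely illustrative; it reuses the Program class above, and the
> clone/import rules are assumptions):
>
>     import random
>
>     def vary(programs, outside_pool):
>         # random variation: clone a program, splitting its credit
>         # with the copy, which can then diverge on its own
>         parent = random.choice(programs)
>         parent.credit /= 2
>         programs.append(Program(parent.name + "-mut", parent.credit))
>         # ...or admit a ready-made program from outside the system
>         if outside_pool and random.random() < 0.1:
>             programs.append(random.choice(outside_pool))
>         return programs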
> 
> I agree that we want the systems we make to be free of our design
> constraints in the long term, that is, eventually to correct all the
> errors, oversimplifications, and gaps we left. But I don't see the
> need to go all the way back to bacteria. Even then you would need to
> design the system correctly in terms of chemical concentrations. I
> think both would count as the passive approach* to helping solve the
> problem; yours is just more indirect than it needs to be.
> 
>   Will Pearson
> 
> *
> http://www.mail-archive.com/agi@v2.listbox.com/msg11399.html


      

