Hello Terren,

> A Von Neumann computer is just a machine. Its only purpose is to compute.
> When you get into higher-level purpose, you have to go up a level to the 
> stuff being computed. Even then, the purpose is in the mind of the programmer.

What I don't see is why your simulation escapes this problem whereas my
architecture doesn't. Read the post linked in the previous message if
you want to understand more about the philosophy of the system.

>The only way to talk coherently about purpose within the computation is to 
>simulate self-organized, embodied systems.

I don't think you are quite getting my system. Suppose you had a bunch
of programs that did the following:

1) created new programs by trial and error, by taking statistics of
variables, or by accepting arbitrary code from the outside;
2) communicated with each other to find programs that perform services
they need;
3) bid for computer resources, so that a program which loses its memory
resources is, in a way, selected against.

Would this be sufficiently self-organised? If not, why not? And the
computer programs would be as embodied as your virtual creatures; they
would just be embodied within a tacit economy rather than an artificial
chemistry. A rough sketch of what I mean follows.
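
Here is a minimal Python sketch of such a tacit economy. Everything in
it (the Program and Economy classes, the credit and bidding rules, the
toy "service") is a hypothetical illustration of the three properties
above, not my actual architecture:

import random

class Program:
    def __init__(self, code):
        self.code = code        # stand-in for real behaviour
        self.credit = 10.0      # currency earned by providing services

    def mutate(self):
        # (1) create a new program by trial and error: copy and perturb
        child = Program(list(self.code))
        i = random.randrange(len(child.code))
        child.code[i] += random.choice([-1, 1])
        return child

    def serve(self, request):
        # (2) a toy service: quality is closeness of our code's sum
        # to the requested value
        return -abs(sum(self.code) - request)

class Economy:
    def __init__(self, n=20, memory_slots=20):
        self.memory_slots = memory_slots
        self.programs = [Program([random.randint(0, 9) for _ in range(4)])
                         for _ in range(n)]

    def step(self):
        # (2) programs request services and pay the best provider
        for client in self.programs:
            request = random.randint(0, 40)
            provider = max(self.programs, key=lambda p: p.serve(request))
            if client is not provider and client.credit >= 1.0:
                client.credit -= 1.0
                provider.credit += 1.0
        # (1) successful programs spawn mutated offspring
        for p in list(self.programs):
            if p.credit > 15:
                p.credit -= 5
                self.programs.append(p.mutate())
        # (3) bid for memory: the lowest-credit programs lose their
        # slots and are, in effect, selected against
        self.programs.sort(key=lambda p: p.credit, reverse=True)
        self.programs = self.programs[:self.memory_slots]

if __name__ == "__main__":
    eco = Economy()
    for _ in range(100):
        eco.step()
    print("survivors:", len(eco.programs),
          "best credit:", round(max(p.credit for p in eco.programs), 1))

Nothing here has any fixed goal; which programs persist is decided
entirely by the flow of credit between them.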

> And I applaud your intuition to make the whole system intelligent. One of my 
> biggest criticisms of traditional AI philosophy is over-emphasis on the 
> agent. Indeed, the ideal simulation, in my mind, is one in which the boundary 
> between agent and environment is blurry.  In nature, for example, at 
> low-enough levels of description it is impossible to find a boundary between 
> the two, because the entities at that level are freely exchanged.
>
> You are right that starting with bacteria is too indirect, if your goal is to 
> achieve AGI in something like decades. It would certainly take an enormous 
> amount of time and computation to get from there to human-level AI and 
> beyond, perhaps a hundred years or more. But you're asking: aren't there 
> shortcuts we can take that don't limit the field of potential intelligence in 
> important ways?

If you take this attitude, you have to ask yourself whether
implementing your simulation on a classical computer cuts off the
ability to create intelligence. Perhaps quantum effects are important
to whether a system can produce intelligence; protein folding, for
example, probably wouldn't behave the same without them.

At some point you have to simplify. I'm going to give my system as many
degrees of freedom to vary as a stored-program computer (or as near as
I can make it), while having the internal programs self-organise and
vary in ways that would make a normal stored-program computer unstable.
Any simulation you run on a computer cannot have any more degrees of
freedom than that.

> For example, starting with bacteria means we have to let multi-cellular 
> organisms evolve on their own in a virtual geometry. That project alone is an 
> enormous challenge. So let's skip it and go right to the multi-cellular 
> design. The trouble is, our design of the multi-cellular organism is 
> limiting. Alternative designs become impossible.

What do you mean by design here? Do you mean an abstract multicellular
model, or design in the sense of what Tom Ray did with his first
self-replicator, hand-crafting an artificial genome? (You do know
Tierra, right? I can use it as a common language.) I can see problems
with the first in restricting degrees of freedom, but with the second
the degrees of freedom are still there to be acted on by the pressures
of variation within the system. Even though Tom Ray built one
particular type of replicator, his creatures still managed to replicate
in other ways; the one I remember is parasites stealing other
organisms' replication machinery.

Let's say you started with an artificial chemistry. You could design a
replicator within that chemistry, then test it and check that variation
is working properly. Then design a multicellular variant by changing
its genome. It could still slip back to single-cellularity and find a
different route to multicellularity. The degrees of freedom do not go
away the second a human starts to design something (else genetically
modified foods would not be such a thorny issue); you just have to
allow the forces of variation to act upon them. A sketch of this
design-and-test loop follows.
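
A minimal sketch of that loop in Python. The genome encoding, the
mutation rate, and the "aaa" stickiness gene are all invented for
illustration; the point is only that a hand-made edit remains subject
to the same mutational forces as everything else:

import random

MUTATION_RATE = 0.01
ALPHABET = "abcd"

def replicate(genome):
    # copying is imperfect, so variation is built into the chemistry
    return "".join(random.choice(ALPHABET)
                   if random.random() < MUTATION_RATE else base
                   for base in genome)

def is_multicellular(genome):
    # hypothetical marker: cells carrying "aaa" stick together
    return "aaa" in genome

# step 1: design a replicator (here, just an arbitrary seed genome)
ancestor = "bcdbcdbcdbcd"

# step 2: test that variation is working
population = [ancestor]
for _ in range(200):
    population = [replicate(g) for g in population for _ in (0, 1)][:500]
print("distinct genomes after 200 generations:", len(set(population)))

# step 3: design a multicellular variant by editing the genome directly
designed = "aaa" + ancestor[3:]
print("designed variant multicellular?", is_multicellular(designed))

# the edit can still be undone or rediscovered by the same forces:
# descendants may lose "aaa" (back to single cells) or regain it
lineage = designed
for _ in range(1000):
    lineage = replicate(lineage)
print("descendant still multicellular?", is_multicellular(lineage))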

> The question at that point is, are we excluding any important possibilities 
> for intelligence if we build in our assumptions about what is necessary to 
> support it, on a low-level basis? In what ways is our designed brain leaving 
> out some key to adapting to unforeseen domains?

Just apply a patch :P Or, more seriously, have an architecture capable
of supporting a self-patching system; a sketch of what I mean follows.
I have no fixed design for an AI myself. Intelligence means winning,
and winning requires flexibility.
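
A minimal sketch of the self-patching idea, assuming only that
components are looked up by name at call time (the component names are
hypothetical):

class PatchableSystem:
    def __init__(self):
        self.components = {}   # name -> callable, rebindable at runtime

    def patch(self, name, fn):
        # installing or replacing a component is just rebinding a name
        self.components[name] = fn

    def call(self, name, *args):
        # indirection through the table is what makes patching safe:
        # callers never hold a direct reference to the old code
        return self.components[name](*args)

system = PatchableSystem()
system.patch("decide", lambda x: x + 1)   # original behaviour
print(system.call("decide", 41))          # 42
system.patch("decide", lambda x: x * 2)   # apply a patch while running
print(system.call("decide", 21))          # 42, via the new code path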

> One of the basic threads of scientific progress is the ceaseless denigration 
> of the idea that there is something special about humans. Pretending that we 
> can solve AGI by mimicking top-down high-level human reasoning is another 
> example of that kind of hubris, and eventually, that idea will fall too.

Agreed. However, I am not mimicking top-down, high-level human
reasoning. I am attempting to mimic the concepts of low-level neural
plasticity, neural Darwinism, and the dopamine system, or at least the
closest I can get to them efficiently on the hardware we have, while
still keeping what I think of as the spirit of these concepts. The toy
below shows the kind of combination I mean.
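
A minimal sketch of those three ideas working together, with the task
(learn the sign of the input), the learning rate, and all the other
numbers invented for illustration: a Hebbian-style plasticity step, a
global dopamine-like reward signal gating it, and Darwinian selection
among units:

import random

random.seed(0)
units = [{"w": random.uniform(-1, 1), "score": 0.0} for _ in range(10)]

for trial in range(2000):
    x = random.uniform(-1, 1)
    target = 1.0 if x > 0 else -1.0
    for u in units:
        y = 1.0 if u["w"] * x > 0 else -1.0
        reward = 1.0 if y == target else -1.0   # dopamine-like signal
        u["w"] += 0.05 * reward * x * y         # reward-gated Hebbian step
        u["score"] += reward
    if trial % 200 == 199:
        # neural-darwinism flavour: the worst unit is replaced by a
        # perturbed copy of the best one
        units.sort(key=lambda u: u["score"], reverse=True)
        units[-1] = {"w": units[0]["w"] + random.gauss(0, 0.1),
                     "score": 0.0}

best = max(units, key=lambda u: u["score"])
print("best unit weight:", round(best["w"], 2))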

  Will Pearson

