Hi Will,

--- On Mon, 6/30/08, William Pearson <[EMAIL PROTECTED]> wrote:
> > The only way to talk coherently about purpose within the computation
> > is to simulate self-organized, embodied systems.
> 
> I don't think you are quite getting my system. If you had a bunch of
> programs that did the following:
>
> 1) created new programs, by trial and error and by taking statistics of
> variables or getting arbitrary code from the outside;
> 2) communicated with each other to try and find programs that perform
> services they need;
> 3) bid for computer resources; if a program loses its memory resources,
> it is selected against, in a way.
>
> Would this be sufficiently self-organised? If not, why not? And the
> computer programs would be as embodied as your virtual creatures. They
> would just be embodied within a tacit economy, rather than an
> artificial chemistry.

It boils down to your answer to the question: how are resources ultimately 
allocated to the programs? If you're the one specifying the allocation, via 
some heuristic or rule, then the purpose is driven by you. If resource 
allocation is handled by some self-organizing method (this wasn't clear in the 
article you provided), then I'd say the system's purpose is self-defined.
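For concreteness, here is one way the bidding scheme you describe might be sketched. This is a minimal toy, not your design: the class names, the half-of-funds bid rule, and the sealed-bid auction mechanism are all my own illustrative assumptions.

```python
class Program:
    """A toy participant in the tacit economy."""
    def __init__(self, name, funds):
        self.name = name
        self.funds = funds  # credits earned by selling services to peers

    def bid(self):
        # Illustrative rule: offer half of current funds for memory.
        return self.funds / 2

def allocate_memory(programs, slots):
    """Sealed-bid auction: the highest bidders win memory slots.
    Losing programs are 'selected against' by losing their memory."""
    ranked = sorted(programs, key=lambda p: p.bid(), reverse=True)
    winners, losers = ranked[:slots], ranked[slots:]
    for p in winners:
        p.funds -= p.bid()  # winners pay what they bid
    return winners, losers

pool = [Program("A", 10.0), Program("B", 5.0), Program("C", 1.0)]
winners, losers = allocate_memory(pool, slots=2)
```

The question above then becomes: where do the credits come from? If they flow only from programs paying one another for services, the allocation is self-organizing; if a designer hands them out by rule, the designer's purpose leaks in.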

As for embodiment, my question is: how do your programs receive input? 
Embodiment, as I define it, requires that inputs are merely reflections of 
state variables, not labeled in any way; i.e., we can't pre-define ontologies. 
The embodied entity starts from the most unstructured state possible and 
self-structures whatever inputs it receives.

That said, you may very well be doing that and be creating embodied programs in 
this way... if so, that's cool because I hadn't considered that possibility and 
I'll be interested to see how you fare.
 
> > You are right that starting with bacteria is too indirect, if your
> > goal is to achieve AGI in something like decades. It would certainly
> > take an enormous amount of time and computation to get from there to
> > human-level AI and beyond, perhaps a hundred years or more. But you're
> > asking, aren't there shortcuts we can take that don't limit the field
> > of potential intelligence in important ways?
> 
> If you take this attitude you would have to ask yourself whether
> implementing your simulation on a classical computer is not cutting off
> the ability to create intelligence. Perhaps quantum effects are
> important in whether a system can produce intelligence. Protein folding
> probably wouldn't be the same.

Computation per se has little to do with the potential to create intelligent 
systems. Computation is only a framework that supports the simulation of 
virtual environments, in which intelligence may emerge. You could in principle 
build that computer out of tinker toys, or as an implementation of a Turing 
machine in Conway's Game of Life. The substrate doesn't matter, so long as it 
can compute.
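As a small illustration of substrate independence: Conway's Game of Life is nothing but a local rule on a grid, yet it supports structured, propagating patterns — one ingredient of its known Turing-completeness. A minimal sketch (the set-of-live-cells representation is my own choice):

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Game of Life on an unbounded grid,
    with live cells stored as a set of (row, col) pairs."""
    neighbor_counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation with exactly 3 live neighbors,
    # or with 2 if it is already alive.
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: after four generations it reappears shifted one cell
# diagonally, i.e. information propagates across the substrate.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
```

The rule knows nothing about gliders, let alone computation; both are emergent in the dynamics, which is the sense in which the substrate only needs to compute.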

As for quantum effects, it's possible there's something there with respect to 
protein folding, probable even. But I strongly distrust attempts to locate the 
non-deterministic behavior required of autonomous systems in the domain of 
quantum uncertainty. Every phenomenon above the scale of molecular dynamics is 
far too large to be affected by anything but statistical behavior; individual 
quantal events lose all practical meaning at that scale. Because intelligence, 
in my estimation, depends at least partially on global notions of emergence 
and complexity, quantum effects contribute absolutely nothing to my model.

> You have to at some point simplify. I'm going to have my system have as
> many degrees of freedom to vary as a stored program computer (or as
> near as I can make it), whilst having the internal programs
> self-organise and vary in ways that would make a normal stored program
> computer become unstable. Any simulations you do on a computer cannot
> have any more degrees of freedom.

I disagree, but would like to see your response to the above before diving into 
such esoterica.

> > For example, starting with bacteria means we have to let
> > multi-cellular organisms evolve on their own in a virtual geometry.
> > That project alone is an enormous challenge. So let's skip it and go
> > right to the multi-cellular design. The trouble is, our design of the
> > multi-cellular organism is limiting. Alternative designs become
> > impossible.
> 
> What do you mean by design here? Do you mean an abstract multicellular
> cell model, or do you mean design in the sense of what Tom Ray did with
> his first self-replicator, by creating an artificial genome? (You do
> know Tierra, right? I can use this as a common language.) I can see
> problems with the first in restricting degrees of freedom, but with the
> second, the degrees of freedom are still there to be acted on by the
> pressures of variation within the system. Even though Tom Ray built a
> certain type of replicator, they still managed to replicate in other
> ways; the one I can remember is stealing other programs' replication
> machinery as parasites.

But the way that artificial genome is interpreted, to build the entity, is 
fixed, which limits the degrees of freedom. You might still be right, and the 
degrees of freedom you lose may turn out to be unimportant. That's the great 
hope of anyone who takes shortcuts. It's a gamble, because you can't know 
beforehand whether your design cuts off something important.

Even starting with single-celled entities is a shortcut that may prove too 
costly. You're right that you have to simplify somewhere if you want to 
accelerate a process that took nature billions of years. By starting at a low 
enough level, my aim is to avoid limiting the simulation by my own conception 
of intelligence.

> Let's say you started with an artificial chemistry. You could then
> design within that chemistry a replicator, then test that replicator
> and see if the variation is working okay. Then design a multicellular
> variant by changing its genome. It could still slip back to
> single-cellularity and find a different way to multicellularity. The
> degrees of freedom do not go away the second a human starts to design
> something (else genetically modified foods would not be such a thorny
> issue); you just have to allow the forces of variation to be able to
> act upon them.

If you had a system that could evolve either single or multicellular entities 
you would have already solved the problem of creating an environment that could 
support both (maximum degree of freedom), which is the hard part. You didn't 
actually take any shortcuts there.

> > The question at that point is, are we excluding any important
> > possibilities for intelligence if we build in our assumptions about
> > what is necessary to support it, on a low-level basis? In what ways is
> > our designed brain leaving out some key to adapting to unforeseen
> > domains?
> 
> Just apply a patch :P Or have an architecture that is capable of
> supporting a self-patching system. I have no fixed design for an AI
> myself. Intelligence means winning; winning requires flexibility.

Agreed.

> > One of the basic threads of scientific progress is the ceaseless
> > denigration of the idea that there is something special about humans.
> > Pretending that we can solve AGI by mimicking top-down high-level
> > human reasoning is another example of that kind of hubris, and
> > eventually, that idea will fall too.
> 
> Agreed. However, I am not mimicking top-down high-level human
> reasoning. I am attempting to mimic the concepts of low-level neural
> plasticity, neural Darwinism and the dopamine system. Or at least the
> closest I can get efficiently on the hardware we have, whilst still
> keeping what I think of as the spirit of these concepts.

I didn't mean to imply that was your approach. I was just getting on a soapbox 
there. 

Best,
Terren


      

