Thanks Terren!

    Good stuff!

Onward!

Stephen

On 1/9/2012 2:40 PM, terren wrote:
For Stephen and anyone else interested, I asked the following to Steve Grand
regarding the capacity of his Grandroids to do self-modeling:

"Quick question (and forgive me if this has already come up) - do you think
the grandroids will have the capacity for self-modeling?  If so, is there
something in the way you will design the brains (as such, from the bottom
up) that will somehow encourage self-modeling?  I'm working on the
assumption that that is something you wouldn't be designing in explicitly,
but I also know that you are realistic about tradeoffs involved between
design and emergence."

And his response:

"They'll certainly (all being well!) develop a model of their own body and
how it works. How far that will extend, though, is a tricky question.
Basically the system learns by observation of itself. At first it observes
how its senses tend to change over time and how initially random motor
actions alter the environment and sensation. Later it will observe itself
doing simple motor responses to things and develop higher level
understanding of the sensation-action-sensation loop. Whether in principle
it could go on to observe its own thoughts and reflect on them in a more
cognitive way I don't know. Right now I'll be impressed when it just manages
to learn how to look in a chosen direction, but I think the principle
extends quite a long way, even if the practice can't keep up with it!"
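Grand's sensation-action-sensation loop can be sketched in a few lines. The toy below is purely illustrative and has nothing to do with the actual Grandroids code: the "world" is a 1-D position, the "sensation" is that position, and the agent learns a forward model of its own motor actions simply by observing their consequences, starting from random movements:

```python
import random

# Hypothetical toy world: the agent's "sensation" is its 1-D position,
# and each motor action shifts it. This only illustrates the
# sensation-action-sensation loop, not Grand's implementation.
ACTIONS = [-1, 0, 1]

# Forward model: predicted change in sensation for each action,
# learned purely from self-observation of random actions.
model = {a: 0.0 for a in ACTIONS}
ALPHA = 0.2  # learning rate

position = 0
for _ in range(500):
    action = random.choice(ACTIONS)   # initially random motor action
    before = position
    position += action                # the world responds
    observed_change = position - before
    # Nudge the prediction toward what was actually observed.
    model[action] += ALPHA * (observed_change - model[action])

# After enough self-observation, the model predicts each action's effect.
print({a: round(v, 2) for a, v in model.items()})
```

Nothing here encodes "move left means position decreases"; the agent discovers the effect of each action by watching itself act, which is the first stage Grand describes.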

Terren



terren wrote:
As far as I understand it, if grandroids are capable of self-modeling, it would not be programmed in beforehand but rather emerge somehow. But I'm not sure; I'll ask.
On Jan 1, 2012 2:30 PM, "Stephen P. King" <stephe...@charter.net> wrote:

Hi,

    Does Steve Grand's game include self-modeling?

Onward!

Stephen

On 1/1/2012 10:32 AM, Craig Weinberg wrote:

On Jan 1, 8:29 am, Terren Suydam <terren.suy...@gmail.com> wrote:

Steve Grand's latest project, an artificial-life game called Grandroids,
does just that. The bottom layer (substitution level) is an artificial
chemistry and biology, including analogues to DNA, metabolism, cells
(including neurons, of course), hormones, and so on. He's concentrating on
building a very robust and dynamic set of base components that will be
assembled from the DNA in ways that result in an artificial animal... an
animal that has no behaviors programmed in by Steve or anyone else.
Whatever it does will be completely emergent.
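The bottom-up idea Terren describes, a genome specifying low-level components whose collective dynamics produce behavior nobody wrote, can be caricatured in code. Everything below is hypothetical and drastically simplified (the "genome" just encodes neuron thresholds and weights); it is not based on the Grandroids codebase:

```python
import random

# A loose sketch of the bottom-up architecture: a "genome" specifies
# low-level components (here, just neuron parameters), and behavior is
# whatever the assembled network happens to do. All names and structures
# are hypothetical, for illustration only.

def make_genome(n_neurons=8, seed=0):
    rng = random.Random(seed)
    # Each "gene" encodes one neuron: a firing threshold and
    # connection weights to every peer neuron.
    return [{"threshold": rng.uniform(0.2, 0.8),
             "weights": [rng.uniform(-1, 1) for _ in range(n_neurons)]}
            for _ in range(n_neurons)]

def step(genome, state):
    # One update of the assembled network. No behavior is programmed in;
    # the dynamics follow solely from the genome-specified components.
    new_state = []
    for gene in genome:
        drive = sum(w * s for w, s in zip(gene["weights"], state))
        new_state.append(1.0 if drive > gene["threshold"] else 0.0)
    return new_state

genome = make_genome()
state = [1.0] + [0.0] * 7   # arbitrary initial stimulation
for _ in range(10):
    state = step(genome, state)
print(state)  # whatever pattern emerges was never coded explicitly
```

The point of the sketch is only structural: the designer writes the chemistry (here, `make_genome` and `step`), not the behavior, which is exactly the tradeoff between design and emergence the thread is about.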

He's still building it, so a lot of stuff has to be proved out, but if all
goes right, these animals will display coherent, apparently goal-directed
behaviors in such a way that the most parsimonious explanation of what's
happening is that a new layer of "psychology" has emerged from the
computational substrate.

Even if Steve fails, it is at least possible in principle to see how that
could happen.

Happy new year!

If Steve fails, it will also be possible to see how that principle
falls short in reality and bring functionalism to its inevitable dead
end.

Craig



--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.
