I think that we humans offer something that will entice AGI not to decimate
us all.
Human biology is comparable to a program, as Kurzweil keeps repeating.

AGI might enjoy living out experiences biologically.
Humans might find themselves serving as semi-autonomous sensor nodes.
Just as we like to make our games and toys more interesting, AGI might
like to see radically enhanced humans as an extension of itself, just as
we conceive of our toys as an extension of ourselves.
Humans, in effect, become the basic unit of computronium.

Morris Johnson


On 12/12/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>
> --- postbus <[EMAIL PROTECTED]> wrote:
>
> > Dear fellow minds,
> >
> > After editing the book "Nanotechnology, towards a molecular construction
> > kit" (1998), I have become a believer in strong AI. As a result, I still
> > worry about an upcoming "war against the machines" leading to our
> > destruction or enslavement. Robots will simply evolve beyond us. Until a
> > few days ago, I believed this war and outcome to be inevitable.
>
> It doesn't work that way.  There will be no war because you won't know
> you are enslaved.  The AI could just reprogram your brain so you want to
> do its bidding.
>
> > However, there may be a way out. What thoughts do any of you have
> > concerning the following line of reasoning:
> >
> > First, human values have evolved along the model of Clare Graves. Maybe
> > you have heard of his work under the name "Spiral Dynamics". Please look
> > into it if you haven't. To me, it has been an eye opener.
> > Second, a few days ago it dawned on me that intelligent robots might
> > follow the same spiral evolution of values:
> >
> > 1. The most intelligent robots today are struggling for their survival
> > in the lab (survival). Next, they would develop a sense of:
> > 2. a tribe
> > 3. glory & kingdom (here comes the war...)
> > 4. order (the religious robots in Battlestar Galactica, which triggered
> > this idea in the first place)
> > 5. discovery and entrepreneurship (materialism)
> > 6. social compassion ("robot hippies")
> > 7. systemic thinking
> > 8. holism.
> >
> > In other words, if we guide robots/AI quickly and safely into the value
> > system of order (4) and help them evolve further, they might not kill us
> > but become our companions in the universe. N.B. This is quite different
> > from installing Asimov's laws: the robots need to be able to develop
> > their own set of values.
> >
> > Anyone?
>
> If AI follows the same evolutionary path as humans have followed, then it
> does not follow that the AI will be compassionate toward humans any more
> than humans are compassionate toward lower animals.  Evolution is a
> competitive algorithm.  Animals eat animals of other species.  AI would
> not be compassionate toward humans unless that compassion increased its
> fitness.  But when AI becomes vastly more intelligent, we will be of no
> use to it.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=75907663-8fc066
