Aleksei Riikonen wrote:
On 10/22/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
My own opinion is that the first AGI systems to be built will have
extremely passive, quiet, peaceful "egos" that feel great empathy for
the needs and aspirations of the human species.

It sounds rather optimistic to expect that creating great empathy within
an artificial system will be mastered at the same time as creating
general intelligence.

I find it more probable that the first AGIs will be rather
emotionless. It seems unnecessarily risky to try to succeed at both
tasks at once: creating potentially human-surpassing intelligence and
creating harmonious emotions.

I'm not sure it would be possible to create an AGI that was emotionless. One whose emotions were very different from those of humans, yes; that is what I would predict, since we don't understand our own emotions very well. But emotions are necessary shortcuts for evaluating choices and selecting between alternatives. Time is never long enough to allow a full computation of the logically proper thing to do. Not for any conceivable computer, including a chunk of computronium the size of the moon.
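To make the shortcut point concrete, here is a toy back-of-the-envelope sketch (my illustration, not anything from the thread): with b options per step, planning d steps ahead, exhaustively scoring every complete plan costs b**d evaluations, while a cheap one-step heuristic, the role emotions play here, scores only b options at each of the d steps.

```python
# Toy sketch: why a chooser needs evaluation shortcuts.
# The function names and numbers are illustrative assumptions,
# not anything defined in the original discussion.

def exhaustive_evaluations(branching: int, depth: int) -> int:
    """Complete plans an exhaustive chooser must score: b**d."""
    return branching ** depth

def heuristic_evaluations(branching: int, depth: int) -> int:
    """Evaluations a greedy one-step heuristic performs: b*d."""
    return branching * depth

# A modest 10 options per step, looking 20 steps ahead:
print(exhaustive_evaluations(10, 20))  # 100000000000000000000 plans
print(heuristic_evaluations(10, 20))   # 200 quick judgements
```

Even these modest numbers put exhaustive evaluation eighteen orders of magnitude beyond the heuristic, which is the sense in which no conceivable computer has time for the full computation.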

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56861763-ed2632
