Relevant to this thread is the following link: 

 

http://www.nytimes.com/2007/03/04/magazine/04evolution.t.html?ref=magazine&pagewanted=print

 

Ed Porter

 

-----Original Message-----
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Sunday, December 09, 2007 1:50 PM
To: agi@v2.listbox.com
Subject: RE: [agi] AGI and Deity

 

This example is looking at it from a moment in time. The evolution of
intelligence in man has some relation to his view of deity. Before government
and science there was religion. Deity, knowledge, and perhaps human
intelligence are entwined. For example, some taboos evolved as defenses
against disease: burying the dead, not eating certain foods, etc. Science
didn't exist at the time. Deity was a sort of peer-to-peer, lossily
compressed, semi-holographic knowledge base hosted and built by human mobile
agents and agent systems. Now it is evolving into something else. But humans
may readily swap out their deities for AGIs, and then uploading can replace
heaven :-)

 

An AGI, as it reads through text related to man's deities, could start
wondering about Pascal's wager. It depends on many factors... Still, though, I
think AGIs will have to run into the same sorts of issues.

 

John

 

 

From: J Marlow [mailto:[EMAIL PROTECTED] 

Here's the way I like to think of it: we have different methods of thinking
about the systems in our environment, different sorts of models. One type of
model that we humans have (with the possible exception of autistics) is the
ability to model another system as a person like ourselves; it's easier to
predict what it will do if we attribute motives and goals to it. I think a lot
of our ideas about God/gods/goddesses come from a tendency to try to predict
the behavior of nature using agent models; so farmers attribute human
emotions, like spite or anger, to nature when the weather doesn't help the
crops.
So, assuming that is a big factor in how/why we developed religions, it is
possible that an AI could have a similar problem if it tried to describe too
many events using its 'agency' models. But I think an AI near or better than
human level could probably see that there are simpler (or more accurate)
explanations, and so reject predictions made on the basis of those models.
Then again, a completely rational AI may believe in Pascal's wager...
Josh
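
The usual worry with Pascal's wager, put in expected-utility terms, is that
any nonzero credence in the deity gets multiplied by an enormous (or
unbounded) payoff, so the wager term dominates no matter how small the
probability. Here is a minimal sketch of that arithmetic; the credence, the
payoffs, and the helper function are purely illustrative assumptions for the
example, not anyone's actual model:

# Minimal sketch of the expected-utility form of Pascal's wager.
# All numbers are illustrative assumptions, not claims about any real agent.

def expected_utility(p_exists, payoff_if_exists, payoff_if_not):
    """Expected value of a choice, given a credence that the deity exists."""
    return p_exists * payoff_if_exists + (1 - p_exists) * payoff_if_not

p = 1e-9  # even a vanishingly small credence

# "Wager": astronomically large payoff if the deity exists, small cost if not.
eu_wager = expected_utility(p, payoff_if_exists=1e15, payoff_if_not=-1)

# "Don't wager": roughly neutral either way in this toy version.
eu_no_wager = expected_utility(p, payoff_if_exists=0, payoff_if_not=0)

print(eu_wager, eu_no_wager)  # ~1e6 vs. 0: the payoff swamps the tiny probability

Whether a "completely rational" AI would actually reason this way presumably
depends on whether it bounds its utilities or discounts such low-probability,
high-payoff claims.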

 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=74072432-07fa77
