Since I voiced my concern earlier with the automatic AGI => Singularity assumption (give me 1000 times more time than Einstein to think up relativity theory and I still couldn't; give me 1000 times more data and I'll be seeing less, not more, of the forest), let me add some corollaries to, and musings about, Jef's argument:
(1) If we confine a super-AGI by force to a single problem situation, or even to our own limited environment, for "long enough" (ignoring the ethical slavery aspect for a moment), won't it go crazy - just as many geniuses go crazy, or at the very least become very eccentric, after a relatively short life of intensive intellectual creativity?
(2) Will we recognize the difference between AGI genius and AGI craziness even at an early stage in its life? We hardly recognize it in human geniuses (and remember that the parameters in a normal human need only be slightly off before (s)he is considered crazy - it will be hard enough to get the parameters right for our human-level AGI).
(3) Once (or if) it goes off into its own super-intelligence space (likely in intellectual domains such as maths), I doubt that we will ever be able to recognize what it is doing (try reading an advanced maths, physics or theology/philosophy book).
Jean-Paul Van Belle
 
>>> "Jef Allbright" <[EMAIL PROTECTED]> 2007/04/15 21:40:06 >>>
>While such a machine intelligence will quickly far exceed human
>capabilities, from its own perspective it will rapidly hit a wall due
>to having exhausted all opportunities for effective interaction with
>its environment.  It could then explore an open-ended possibility
>space à la Schmidhuber, but such exploration will be increasingly
>detached from "intelligence" in an effective sense.
>>On 4/15/07, Pei Wang <[EMAIL PROTECTED]> wrote:
>> However, to me "Singularity" is a stronger claim than "superhuman
>> intelligence". It implies that the intelligence of AI will increase
>> exponentially, over a period shorter than what we can perceive or
>> understand. That is what I'm not convinced of.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email