Ben Goertzel wrote:
>Note that I don't begrudge Google the label "AI" --  IMO it's just as much
>AI as most of the stuff in Russell & Norvig (the classic AI text).  I just
>begrudge it the title "AGI" ;-) ... Of course, I realize that with finite processing
>power there is no truly general intelligence.  But there are levels of generality
>in intelligence, and Google's is quite low.
 
>Next, your message seems to imply that commercial success or popularity
>are a decent measure of scientific or engineering quality or interestingness.
>This is just not the case, as is shown by very many examples in recent and
>less recent history.
 
Agreed.
 
>Finally, you suggest that an in-development AGI should necessarily have
>powerful applications -- e.g. that a 60%-complete AGI implementation should
>be 60% as useful as a complete AGI.   It's just not true.  There are plenty of
>other areas of science and engineering where this kind of "continuity" doesn't
>hold either.  A 60%-complete spacecraft doesn't fly anywhere, and probably
>has very little commercial value   -- so what? 
 
Yes, this continuity certainly doesn't hold for spacecraft or a good many other engineering endeavors, but the analogy to them is flawed. If it held universally, it would have been impossible for an AGI -- more specifically, for us -- to evolve: a 60%-complete human-AGI, such as, say, a chimp, would presumably have been too stupid to succeed in its environment, since it has a good deal less than 60% of human-level general intelligence. If evolution was able to produce commercially successful spin-offs on its way to AGI, why can't you?
 
Paul Fidika
 
----- Original Message -----
Sent: Wednesday, March 16, 2005 7:40 AM
Subject: RE: [agi] Google as a strong AI

 
Paul,
 
Note that I don't begrudge Google the label "AI" --  IMO it's just as much AI as most of the stuff in Russell & Norvig (the classic AI text).  I just begrudge it the title "AGI" ;-) ... Of course, I realize that with finite processing power there is no truly general intelligence.  But there are levels of generality in intelligence, and Google's is quite low.
 
Next, your message seems to imply that commercial success or popularity are a decent measure of scientific or engineering quality or interestingness.  This is just not the case, as is shown by very many examples in recent and less recent history.
 
Finally, you suggest that an in-development AGI should necessarily have powerful applications -- e.g. that a 60%-complete AGI implementation should be 60% as useful as a complete AGI.   It's just not true.  There are plenty of other areas of science and engineering where this kind of "continuity" doesn't hold either.  A 60%-complete spacecraft doesn't fly anywhere, and probably has very little commercial value   -- so what? 
 
-- Ben
