Maybe you could give me one example from the history of technology where 
machines "ran" before they could "walk"? Where they started complex rather than 
simple? Or indeed from evolution of any kind? Or from human development? Where 
children started doing complex mental operations - logic, say, or maths or the 
equivalent - before they could speak? Or started running before they could 
control their arms, roll over, crawl, sit up, haul themselves up, stand up, 
totter - just went straight to running?**

A bottom-up approach, I would have to agree, clearly isn't obvious to AGI-ers. 
But then there are very few AGI-ers who have much sense of history or 
evolution. It's so much easier to engage in sci-fi fantasies about future, 
top-down AGIs.

It's HARDER to think about where AGI starts; it requires serious application to 
the problem.

And frankly, until you or anyone else has a halfway viable account of where AGI 
will or can start, and what uses it will serve, speculation about whether it's 
worth building complex, sci-fi AGIs is a waste of your valuable time.

**PS Note, BTW - a distinction that eludes most AGI-ers - that a present 
computer program doing logic or maths or chess is a fundamentally and massively 
different thing from a human or AGI doing the same, just as a current program 
doing NLP is totally different from a human using language. In all these cases, 
humans (and real AGIs to come) don't merely manipulate meaningless patterns of 
numbers; they relate the symbols first to concepts and then to real-world 
referents - massively complex operations totally beyond current computers.

The whole history of AI/would-be AGI shows the terrible price of starting 
complex - with logic/maths/chess programs, for example - without a clue about 
how intelligence has to be developed from very simple origins, step by step, in 
order to actually understand these activities.



From: Steve Richfield 
Sent: Friday, August 06, 2010 4:52 PM
To: agi 
Subject: Re: [agi] Epiphany - Statements of Stupidity


Mike,

Your reply flies in the face of two obvious facts:
1.  I have little interest in what is called AGI here. My interests lie 
elsewhere, e.g. uploading, Dr. Eliza, etc. I posted this piece for several 
reasons: it is directly applicable to Dr. Eliza, and it casts a shadow on 
future dreams of AGI. I was hoping that those people who have thought things 
through regarding AGIs might have some thoughts here. Maybe those people don't 
(yet) exist?!
2.  You seem to think that a "walk before you run" approach, basically a 
bottom-up approach to AGI, is the obvious one. It sure isn't obvious to me. 
Besides, if my "statements of stupidity" theory is true, then why even bother 
building AGIs, since we won't be able to meaningfully discuss things with them?

Steve
==========

On Fri, Aug 6, 2010 at 2:57 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

  STEVE: I have posted plenty about "statements of ignorance", our probable 
inability to comprehend what an advanced intelligence might be "thinking", 

  What will be the SIMPLEST thing that will mark the first sign of AGI? Given 
that there are zero but zero examples of AGI.

  Don't you think it would be a good idea to begin at the beginning? With 
"initial AGI"? Rather than "advanced AGI"? 





-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
