On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>  [BTW Sloman's quote is a month old]
>

Are you sure it was A. Sloman who wrote or said that?  From where I'm
sitting it looks like it was Margaret Boden who wrote it.  But then again, I
am one of those people who sometimes make mistakes.
Jim Bromer


On Thu, Jun 24, 2010 at 12:52 PM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

>  [BTW Sloman's quote is a month old]
>
> I think he means what I do - the end-problems that an AGI must face. Please
> name me one true AGI end-problem being dealt with by any AGI-er - apart from
> the toybox problem.
>
> As I've repeatedly said, AGI-ers simply don't address or discuss AGI
> end-problems.  And they do indeed start with "solutions" - just as you are
> doing - re the TSP problem and the problem of combinatorial complexity, both
> of which have in fact nothing to do with AGI, and for neither of which can
> you provide a single example of a relevant AGI problem.
>
> One could not make up this total avoidance of the creative problem.
>
> And AGI-ers are not just shockingly but obscenely narrow in their
> disciplinarity and the range of their problem interests - maths, logic,
> standard narrow AI computational problems, NLP, a little robotics, and
> that's about it - with, by my rough estimate, some 90% of human and
> animal real-world problemsolving of no interest to them. That especially
> includes their chosen key fields of language, conversation and vision, all
> of which are much more the province of the *arts* than the sciences when it
> comes to AGI.
>
> The fact that creative, artistic problemsolving presents a totally
> different paradigm to that of programmed, preplanned problemsolving, is of
> no interest to them - because they lack what educationalists would call any
> kind of metacognitive (& interdisciplinary) "scaffolding" to deal with it.
>
> It doesn't matter to them that programming itself, and developing new
> formulae and theorems (in other words, all the forms of creative maths,
> logic, programming, science and technology - the very problemsolving upon
> which they absolutely depend) also come under "artistic problemsolving".
>
> So there is a major need for broadening AI & AGI education both in terms of
> culturally creative problemsolving and true culture-wide
> multidisciplinarity.
>
>
>
>
>
>  *From:* Jim Bromer <jimbro...@gmail.com>
> *Sent:* Thursday, June 24, 2010 5:05 PM
> *To:* agi <agi@v2.listbox.com>
> *Subject:* Re: [agi] The problem with AGI per Sloman
>
> Both of you are wrong.  (Where did that quote come from, by the way? What
> year did he write or say that?)
>
> An inadequate understanding of the problems is exactly what has to
> be expected of researchers (both professional and amateur) when they are
> facing a completely novel pursuit.  That is why we have endless discussions
> like these.  What happened over and over again in AI research is that the
> amazing advances in computer technology always seemed to suggest that
> similar advances in AI must be just over the horizon.  And the reality is
> that there have been major advances in AI.  In the 1970s a critic stated
> that he wouldn't believe that AI was possible until a computer was able to
> beat him in chess.  Well, guess what happened and guess what conclusion he
> did not derive from the experience.  One of the problems with critics is
> that they can be as far off as those whose optimism is absurdly unwarranted.
>
> If a broader multi-disciplinary effort were the key to creating AGI, we
> would have AGI by now.  It should be clear to anyone who examines the
> history of AI or the present-day reach of computer programming that a
> multi-disciplinary effort is not the key to creating effective AGI.  Computers
> have become pervasive in modern-day life, and if it were just a matter of
> getting people with different kinds of interests involved, it would have
> been done by now.  It is a little like saying that the key to safe deep-sea
> drilling is to rely on the expertise of companies that make billions and
> billions of dollars and which stand to lose billions by mistakes.  While
> that should make sense, if you look a little more closely, you can see that
> it doesn't quite work out that way in the real world.
>
> Jim Bromer
>
> On Thu, Jun 24, 2010 at 7:33 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:
>
>>  "One of the problems of AI researchers is that too often they start off
>> with an inadequate
>> understanding of the *problems* and believe that solutions are only a few
>> years away. We need an educational system that not only teaches techniques
>> and solutions, but also an understanding of problems and their difficulty —
>> which can come from a broader multi-disciplinary education. That could speed
>> up progress."
>> A. Sloman
>>
>> (& who else keeps saying that?)
>>
>



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=8660244-6e7fb59c
Powered by Listbox: http://www.listbox.com
