It’s not the *meaning* of pattern that is relevant, Ben. It’s *examples* of how 
patterns have any relevance to AGI that you have to produce – and you never 
have produced any, in this conversation or any other, or in any of your 
voluminous texts.

You have a very vague theory of AGI, IOW – yet another “magic sauce” – that has 
no empirical foundation whatsoever. And the same is true of every theory behind 
OpenCog.

Babbage *was* an empirical thinker – you ain’t. Being empirical – producing and 
discussing evidence for your theories – is not “sinking so low” – it’s the mark 
of a serious technologist, as distinct from an academic fantasist.

From: Ben Goertzel 
Sent: Friday, December 28, 2012 12:59 AM
To: AGI 
Subject: Re: [agi] The Only Test of AGI


While I seem to be in a mood to waste more time than I usually would on AGI 
list discussions, I'm not going to sink so low as to have yet another debate 
about the meaning of "pattern" with Mike Tintner ;p 

Mikey my friend, we will just have to agree to disagree.  You can have it your 
way, and I will have it the right way ;)

-- Ben G


On Thu, Dec 27, 2012 at 7:18 PM, Mike Tintner <[email protected]> wrote:

  You seem to have forgotten, Ben, that “magic sauce” is a term used by many 
AGI-ers for the “missing but crucial ingredient of AGI” which an AGI 
project builder claims he will produce at a later stage from his present outline 
but never does.

  You’ve also forgotten how many times you have indeed promised that a missing 
but crucial ingredient of AGI would be produced at a later stage, in some 
publication of yours – and never have produced it.

  You have now provided a QED by indicating yet another magic ingredient of AGI, 
this time a pattern recognition system. You have never produced any actual 
examples of how patterns are relevant to AGI problems, and never will – you have 
just waffled generally. This is complete (if fairly widely held) nonsense –

  By definition, an essential requirement of AGI is to solve problems about 
actions and environments that do NOT fit existing patterns. By definition, an 
AGI must acquire new skills and undertake new kinds of actions – something that 
presently defies all narrow AI programs. This is a problematic business – 
precisely because the new skill/action is NOT like already known actions – does 
NOT fit any existing, known pattern. Mastering a new skill – say the actions 
of table tennis after tennis, or vice versa – presents difficulties precisely 
because the arm and body actions do not fit the same patterns. 
Mastering/understanding physics after chemistry is difficult precisely because 
the laws of one do not easily fit the laws/patterns of the other. And so it is 
with every subject area in organized knowledge.

  Being able to have a conversation with one person is difficult precisely 
because he does NOT fit the conversational patterns of others. Everyone is 
different. Talking to Jim about AGI is different from talking to Matt, or Pei, 
or Aaron, etc. etc. – because each one has a different approach to AGI, each 
one is individual, and each one’s idiosyncrasies have to be gradually 
identified in order to talk to them. They may share some common elements, but 
overall they are very different.

  Nor do the conversations or posts of any individual taken altogether fit a 
distinct pattern. We may have distinctive “styles” of conversation, but those 
styles are fluid schemas at best, and nothing like the precise patterns you are 
talking about – and result in very diverse, multiform posts. Check your own 
posts in this or any other thread. I defy you to identify overall patterns.

  I know that you have never actually applied your pattern theories to actual 
AGI problems, just as it was clear from your book on creativity that you had 
never applied your creativity theories to any actual creative problems that you 
had independently researched.

  You work by adapting other people’s theories – and unfortunately for you, 
none of them apply to building a real AGI, especially patterns and pattern 
recognition.

  P.S. Some indication of how AGIs actually adapt to the new is given in our 
talk of “a period of adjustment” being required, of “getting the hang” of 
things, and of “finding our feet” after stumbling and groping around. All of 
this adapting to the new has nothing to do with pattern recognition.

  A GOS must be designed to enable a robot to endlessly *develop* its actions 
– endlessly move along *new* lines – not fit its actions to the same old 
patterns and lines. That’s narrow AI.



  From: Ben Goertzel 
  Sent: Thursday, December 27, 2012 10:56 PM
  To: AGI 
  Subject: Re: [agi] The Only Test of AGI



    Neither Ben nor anyone else in AGI is directly addressing the problem of a 
take-off system – or indeed has a clue – which is why you can immediately write 
off OpenCog and other such efforts. They have absolutely nothing to do with 
AGI/take-off – which is also why Ben et al have always resisted any form of 
test – they always have and always will fail any test of take-off/generality. 
(It’s not just me, BTW – many have remarked that Ben et al’s “magic sauce” is 
not there – not even the idea of one.)


  The idea that a "magic sauce" is needed for AGI is a mystical delusion, 
redolent of vitalism in biology...

  The statement that I resist any form of test is a bald-faced lie. I don't 
think that testing a completed human-level AGI is a particularly hard problem, 
and I think it's a useful thing to do. The Turing Test is an OK one (if it goes 
on for an hour or more), or the test of having a robot pass the third grade, 
etc. etc. I am skeptical of quantitative metrics for early-stage partial 
progress toward human-level AGI, because I haven't yet seen any that aren't 
either

  -- requiring a system already 80% of the way to human-level AGI

  -- too easily game-able by narrow-AI systems written especially to pass the 
test

  ...

  The magic of general intelligence is simply this: A pattern recognition 
system that can recognize patterns in its environment and *itself*, including 
patterns regarding which actions tend to achieve which objectives in which 
contexts.

  The challenge of general intelligence is: Recognizing a sufficient scope of 
patterns, within a relevant and broad set of contexts, within the limited 
compute resources available....
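  The idea can be caricatured in a few lines – a toy sketch only, with 
hypothetical names, not OpenCog or any real AGI design: a table that tallies 
which actions tend to achieve the objective in which contexts, and prefers the 
best-scoring action while occasionally exploring.

```python
from collections import defaultdict
import random

class ToyPatternAgent:
    """Toy caricature of 'recognize patterns regarding which actions
    tend to achieve which objectives in which contexts'. It keeps
    success/trial counts per (context, action) pair and exploits the
    action with the best observed success rate."""

    def __init__(self, actions):
        self.actions = actions
        # stats[(context, action)] = [successes, trials]
        self.stats = defaultdict(lambda: [0, 0])

    def act(self, context, explore=0.1):
        if random.random() < explore:
            return random.choice(self.actions)  # explore occasionally
        # exploit: pick the action with the best success rate seen here
        def score(action):
            successes, trials = self.stats[(context, action)]
            return successes / trials if trials else 0.0
        return max(self.actions, key=score)

    def observe(self, context, action, succeeded):
        record = self.stats[(context, action)]
        record[0] += int(succeeded)
        record[1] += 1
```

  After a few hundred trial-and-error interactions, such an agent settles on 
whichever action pays off in each context – which is exactly the part Mike 
disputes can scale to genuinely novel contexts.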

  Meeting this challenge seems, so far as I can tell, to require a fairly 
complex and multifaceted software system with interdependent parts, which makes 
building AGI a major engineering and algorithmic challenge.

  That's not as romantic as daydreaming about some "magic sauce" that you can 
just pour into your robot's head to make its wiring or its software get smart 
-- but it's the reality...

  -- Ben G








-- 
Ben Goertzel, PhD
http://goertzel.org

"My humanity is a constant self-overcoming" -- Friedrich Nietzsche



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=21088071&id_secret=21088071-58d57657
Powered by Listbox: http://www.listbox.com
