Jeez, no AI program can understand *two* consecutive *sentences* in a text - 
can't understand any text, period - can't understand language, period. And you 
want an AGI that can understand a *story*. You don't seem to understand that 
this requires, cognitively, a fabulous, massively evolved, highly educated, 
hugely complex set of powers.

No AI can understand a photograph of a scene, period - a crowd scene, a house 
by the river. Programs are hard put to recognize any objects other than those 
in v. standard positions. And you want an AGI that can understand a *movie*. 

You don't seem to realise that we can't take the smallest AGI *step* yet - and 
you're fantasising about a super-evolved AGI globetrotter.

That's why Benjamin & I tried to focus on v. v. simple tests - & they're still 
way too complex, & they (or comparable tests) will have to be refined down 
considerably for anyone interested in practical, as opposed to sci-fi fantasy, 
AGI.

I recommend looking at PackBots and other military robots, hospital robots and 
the like, and asking how we can free them from their human masters and give 
them the very simplest capacities to rove and handle the world independently - 
like handling, and travelling on, rocks.

Anyone dreaming of computers or robots that can follow "Gone with the Wind" or 
become a (real) child scientist in the foreseeable future - pace Ben - has no 
realistic understanding of what is involved.

From: deepakjnath 
Sent: Sunday, July 18, 2010 9:04 PM
To: agi 
Subject: Re: [agi] Of definitions and tests of AGI


Let me clarify. As you all know, there are some things computers are good at 
doing and some things that humans can do but a computer cannot.

One of the tests I was thinking about recently is to show two movies to the 
AGI. Both movies would have the same story, but one would be a totally 
different remake of the other, probably in a different language and setting. 
If the AGI is able to understand the subplot and say that the storyline is 
similar in the two movies, then it could be a good test for AGI.

The ability of a system to understand its environment and underlying subplots 
is an important requirement of AGI.
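
(Purely as an illustration, here is a minimal Python sketch of what a harness 
for this two-movie test might look like. Nothing below is an existing system: 
the Film record, the CandidateSystem interface, and its watch/same_story 
methods are all hypothetical. The point is only that the test scores abstract 
story equivalence, not surface similarity.)

from dataclasses import dataclass

@dataclass
class Film:
    title: str
    video_path: str  # e.g. two remakes of one story in different languages
    language: str

class CandidateSystem:
    """Stand-in for the system under test (hypothetical interface)."""

    def watch(self, film: Film) -> str:
        """Return the system's own abstract summary of the film's plot."""
        raise NotImplementedError

    def same_story(self, plot_a: str, plot_b: str) -> bool:
        """Judge whether two plot summaries tell the same story."""
        raise NotImplementedError

def two_movie_test(system: CandidateSystem, original: Film,
                   remake: Film, unrelated: Film) -> bool:
    """Pass iff the system matches the remake to the original
    but does not match an unrelated control film."""
    p_orig = system.watch(original)
    p_remake = system.watch(remake)
    p_control = system.watch(unrelated)
    return (system.same_story(p_orig, p_remake)
            and not system.same_story(p_orig, p_control))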

Deepak


On Mon, Jul 19, 2010 at 1:14 AM, Mike Tintner <tint...@blueyonder.co.uk> wrote:

  Please explain/expound freely why you're not "convinced" - and indicate what 
you expect - and I'll reply, but it may not be till tomorrow.

  Re your last point, there definitely is no consensus on a general 
problem/test OR a definition of AGI.

  One flaw in your expectations seems to be a desire for a single test - 
almost by definition, there is no such thing as

  a) a single test - i.e. there should be at least a dual or serial test - 
having passed any given test, like the rock or toy test, the AGI must be 
presented with a new "adjacent" test for which it has had no preparation, 
like, say, building with cushions or sandbags, or packing with fruit (and 
neither the rock nor the toy test states that clearly). A sketch of this 
serial protocol follows after this list.

  b) one kind of test - this is an AGI, so it should be clear that if it can 
pass one kind of test, it has the basic potential to go on to many different 
kinds, and it doesn't really matter which kind of test you start with - that 
is partly the function of having a good definition of AGI.
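
  (A minimal Python sketch of the serial protocol in (a). The task names and 
the solve() interface are illustrative assumptions, not any existing system:)

import random

PREPARED_TASK = "build a wall from the rocks given"
ADJACENT_TASKS = [  # no specific preparation allowed for these
    "build a wall from cushions",
    "build a wall from sandbags",
    "pack a crate with mixed fruit",
]

def serial_test(solve) -> bool:
    """solve(task: str) -> bool is the candidate system (assumed API).
    It may have prepared for PREPARED_TASK, but the adjacent task is
    drawn at random, so no specific preparation is possible."""
    if not solve(PREPARED_TASK):
        return False
    return solve(random.choice(ADJACENT_TASKS))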


  From: deepakjnath 
  Sent: Sunday, July 18, 2010 8:03 PM
  To: agi 
  Subject: Re: [agi] Of definitions and tests of AGI


  So if I have a system that is close to AGI, I have no way of really knowing 
it, right?

  Even if I believe that my system is a true AGI, there is no way of 
convincing others irrefutably that this system is indeed an AGI and not just 
an advanced AI system.

  I have read the toy box problem and the rock wall problem, but I am sure not 
many people will be convinced.

  I wanted to know if there is any consensus on a general problem which can be 
solved, and only solved, by a true AGI. Without such a test bench, how will we 
know whether we are moving closer to or further from our goal? There is no 
map.

  Deepak




  On Sun, Jul 18, 2010 at 11:50 PM, Mike Tintner <tint...@blueyonder.co.uk> 
wrote:

    I realised that what is needed is a *joint* definition *and* range of 
tests of AGI.

    Benjamin Johnston has submitted one valid test - the toy box problem (see 
archives).

    I have submitted another, still simpler, valid test - build a rock wall 
from the rocks given (or fill a hole in the earth with rocks).

    However, I see that there are no valid definitions of AGI that explain 
what AGI is generally, and why these tests are indeed tests of AGI. Google it 
- there are v. few definitions of AGI or Strong AI, period.

    The most common - AGI is human-level intelligence - is an embarrassing 
non-starter: what distinguishes human intelligence? No explanation offered.

    The other two are also inadequate, if not as bad. Ben's: "solves a variety 
of complex problems in a variety of complex environments". Nope - so does a 
multitasking narrow AI; complexity does not distinguish AGI. Ditto Pei's - 
something to do with "insufficient knowledge and resources...". "Insufficient" 
is open to narrow-AI interpretations and reducible to mathematically 
calculable probabilities or uncertainties. That doesn't distinguish AGI from 
narrow AI.

    The one thing we should all be able to agree on (but who can be sure?) is 
that:

    **an AGI is a general intelligence system, capable of independent 
learning**

    i.e. capable of independently learning new activities/skills with minimal 
guidance or even, ideally, with zero guidance (as humans and animals are) - 
and thus acquiring a "general", "all-round" range of intelligence.

    This is an essential AGI goal - the capacity to keep entering and 
mastering new domains of both mental and physical skills WITHOUT being 
specially programmed each time - that crucially distinguishes it from narrow 
AIs, which have to be individually programmed anew for each new task. Ben's 
AGI dog exemplified this in a v. simple way - the dog is supposed to be able 
to learn to fetch a ball with only minimal instruction, as real dogs do - they 
can learn a whole variety of new skills with minimal instruction. But I am 
confident Ben's dog can't actually do this.

    However, the independent-learning definition, while focussing on the 
distinctive AGI goal, is still not detailed enough by itself.

    It requires further identification of the **cognitive operations** which 
distinguish AGI, and which are exemplified by the above tests.

    [I'll stop there for interruptions/comments & continue another time].

    P.S. Deepakjnath,

    It is vital to realise that the overwhelming majority of AGI-ers do not 
*want* an AGI test - Ben has never gone near one, and is merely typical in 
this respect. I'd put almost all AGI-ers here in the same league as the US 
banks, which only want mark-to-fantasy rather than mark-to-market tests of 
their assets.




  -- 
  cheers,
  Deepak





-- 
cheers,
Deepak




