Ben,

Again, this provokes some playful developments.

As I think you may have more or less noted, the goals of the whole thread and 
of most people responding are somewhat ill-defined (which, in this context, is 
fine).

(And the following relates to the adjacent thread too.) The human mind doesn't 
start with - isn't started by - goals (nor should any AGI); it starts with 
*drives*.

You have drives for food, warmth, activity (as you note, for "mental 
exercise/activity") and more... which are extremely general and can each be 
satisfied in an infinity of ways.

You then have to *specify* goals for your drives. These are still very general, 
albeit a level more specific - they do at least point to some kind of action - 
and then have to be more and more precisely specified: "I'm hungry... right, I 
want Chinese... right, I'll go to Chang's..." And then you specify strategies, 
tactics and moves.

But humans, again and again, plunge into many activities with mixed, 
conflicting drives and ill-specified or *unspecified goals*. What exactly were 
you or I doing in formulating our different posts? Goals were often being 
redefined ad hoc, or formulated for the first time further down the line, after 
we'd started.

And this is a characteristic - sometimes a failing, sometimes an adaptive 
advantage - of much, if not most, human activity. We enter many activities with 
confused goals, and often fail to define them satisfactorily at all. I 
criticise current AGI, as you know (and remember it consists of highly 
developed, highly advanced projects), for having no very practical definition 
(and therefore goal) of "intelligence" or of the problems it wants to solve. 
You, on your side, insist that you don't have to have such precisely defined 
goals - your intuitive (and, by definition, ill-defined) sense of intelligence 
will do. The specific argument doesn't matter here; the point is that it 
illustrates how the goals of a general intelligence are, and have to be, 
continually "played" with: a) sometimes not defined at all, b) sometimes half- 
or ill-defined, c) usually mixed, and d) continuously provisional and in 
*creative development* - with the frequent disadvantage, evidenced by a 
trillion undergrad essays, that goals may be way too ill-defined.




  Ben: I wrote a blog post enlarging a little on the ideas I developed in my 
response to the "playful AGI" thread...

  See

  http://multiverseaccordingtoben.blogspot.com/2008/08/logic-of-play.html

  Some of the new content I put there:

  "
  Still, I have to come back to the tendency of play to give rise to goal drift 
... this is an interesting twist that apparently relates to the wildness and 
spontaneity that exists in much playing. Yes, most particular forms of play do 
seem to arise via the syllogism I've given above. Yet, because it involves 
activities that originate as simulacra of goals that go BEYOND what the mind 
can currently do, play also seems to have an innate capability to drive the 
mind BEYOND its accustomed limits ... in a way that often transcends the goal G 
that the play-goal G1 was designed to emulate....

  This brings up the topic of meta-goals: goals that have to do explicitly with 
goal-system maintenance and evolution. It seems that playing is in fact a 
meta-goal, quite separately from the fact of each instance of playing generally 
involving an imitation of some other specific real-life goal. Playing is a 
meta-goal that should be valued by organisms that value growth and spontaneity 
... including growth of their goal systems in unpredictable, adaptive ways....
  "

  -- Ben G


  On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:


    About play... I would argue that it emerges in any sufficiently 
generally-intelligent system
    that is faced with goals that are difficult for it ... as a consequence of 
other general cognitive
    processes...

    If an intelligent system has a goal G which is time-consuming or difficult 
to achieve ...

    it may then synthesize another goal G1 which is easier to achieve

    We then have the uncertain syllogism

    Achieving G implies reward
    G1 is similar to G
    |-
    Achieving G1 implies reward

    As links between goal-achievement and reward are to some extent modified by 
uncertain
    inference (or analogous process, implemented e.g. in neural nets), we thus 
have the
    emergence of "play" ... in cases where G1 is much easier to achieve than G 
...

    Of course, if working toward G1 is actually good practice for working 
toward G, this may give the intelligent
    system (if it's smart and mature enough to strategize) or evolution impetus 
to create
    additional bias toward the pursuit of G1
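
    (A minimal, illustrative sketch of this reward-transfer-plus-bias idea in 
Python; the goal features, the Jaccard similarity measure, and the 
reward/difficulty numbers below are made-up assumptions for this email, not 
anything from an actual system:)

    # Toy model of the uncertain syllogism: G1 inherits a (discounted)
    # expected reward from its similarity to the hard goal G, and since G1
    # is much easier, pursuing it wins on expected reward per unit effort,
    # i.e. "play" is favored. Everything here is illustrative.

    def similarity(g1, g2):
        # Jaccard overlap of goal "features"; a stand-in for whatever
        # similarity-assessment the real system would use.
        a, b = set(g1["features"]), set(g2["features"])
        return len(a & b) / len(a | b)

    def expected_reward(goal, known_rewards, goals):
        # Reward known directly, or inferred from the most similar rewarded goal.
        if goal["name"] in known_rewards:
            return known_rewards[goal["name"]]
        return max(similarity(goal, goals[n]) * r for n, r in known_rewards.items())

    G  = {"name": "G",  "features": ["hunt", "outdoors", "aim", "chase"], "difficulty": 0.9}
    G1 = {"name": "G1", "features": ["aim", "chase", "toy"],              "difficulty": 0.2}

    goals = {g["name"]: g for g in (G, G1)}
    known_rewards = {"G": 1.0}      # "Achieving G implies reward"

    def value_rate(goal):
        # Expected reward per unit of difficulty/effort.
        return expected_reward(goal, known_rewards, goals) / goal["difficulty"]

    print(value_rate(G))   # ~1.11
    print(value_rate(G1))  # 2.0: the easier, similar goal G1 is favored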

    In this view, play is a quite general structural phenomenon ... and the 
play that human kids do with blocks and sticks and so forth is a special case, 
oriented toward ultimate goals G involving physical manipulation

    And the knack in gaining anything from play is in appropriate 
similarity-assessment ... i.e. in measuring similarity between G and G1 in such 
a way that achieving G1 actually teaches things useful for achieving G

    So for any goal-achieving system that has long-term goals which it can't 
currently effectively work directly toward, play may be an effective strategy...

    In this view, we don't really need to design an AI system with play in 
mind.  Rather, if it can explicitly or implicitly carry out the above 
inference, concept-creation and subgoaling processes, play should emerge from 
its interaction w/ the world...

    ben g




    On Tue, Aug 26, 2008 at 8:20 AM, David Hart <[EMAIL PROTECTED]> wrote:

      On 8/26/08, Mike Tintner <[EMAIL PROTECTED]> wrote:
        Is anyone trying to design a self-exploring robot or computer? Does 
this principle have a name?

      Interestingly, some views on AI advocate specifically prohibiting 
self-awareness and self-exploration as a precaution against the development of 
unfriendly AI. In my opinion, these views erroneously transfer familiar human 
motives onto 'alien' AGI cognitive architectures - there's a history of 
discussing this topic  on SL4 and other places.

      I believe, however, that most approaches to designing AGI (those that do 
not specifically prohibit self-aware and self-explorative behaviors) take for 
granted, and indeed intentionally promote, self-awareness and self-exploration 
at most stages of AGI development. In other words, efficient and effective 
recursive self-improvement (RSI) requires self-awareness and self-exploration. 
If any term exists to describe a 'self-exploring robot or computer', that term 
is RSI. Coining a lesser term for 'self-exploring AI' may be useful in some 
proto-AGI contexts, but I suspect that 'RSI' is ultimately a more useful and 
meaningful term.

      -dave













  -- 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  "Nothing will ever be attempted if all possible objections must be first 
overcome " - Dr Samuel Johnson



