On 5/1/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

As I said to Ben, the crucial cultural background here is that intelligence
and creativity have not been properly defined in any sphere. There is no
consensus about types of problems, about the difference between AI and AGI,
or, more crucially, between divergent and convergent intelligence, etc. etc.
So I don't agree that you can assume that a given AI architecture or system
will be able to solve a whole set of problems.

I don't assume that, and that is exactly why I listed five different
working definitions in the paper I mentioned, and argued that none of
them can completely replace the others.

And a large part of my point is that the question "what does it do?"
SHOULD be obvious, but isn't, because there's clearly a whole
producer-centric culture within AI/AGI of ignoring it.

As this debate shows, what is considered "obvious" by different people
obviously differs. ;-)

P.S. I think I see that one problem I'm having communicating with both
you and Ben is that you're both working within a fading dichotomy of AI
(specific, well-defined problems) vs. AGI (general problem-solving,
which supposedly doesn't have to be defined), and that you keep pushing
me into the AI camp.

I'm saying you do have to define what your AGI will do - but define it
as a tree: 1) a general class of problems, supported by 2) examples of
specific types of problem within that class. I'm calling for something
different from the traditional alternatives here.
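
(A minimal sketch of that kind of tree, in Python - the class names and
example problems below are purely hypothetical illustrations, not
anyone's actual taxonomy:)

from dataclasses import dataclass, field

@dataclass
class ProblemClass:
    """A node in the tree: a general class of problems, with
    concrete example problems and more specific subclasses."""
    name: str
    examples: list = field(default_factory=list)
    subclasses: list = field(default_factory=list)

def show(node, depth=0):
    """Print the tree, one class per line, examples indented beneath."""
    print("  " * depth + node.name)
    for ex in node.examples:
        print("  " * (depth + 1) + "e.g. " + ex)
    for sub in node.subclasses:
        show(sub, depth + 1)

# Hypothetical content, purely for illustration:
claims = ProblemClass(
    name="ill-defined, open-ended problems",
    examples=["designing a new product", "planning a novel experiment"],
    subclasses=[
        ProblemClass(
            name="pattern discovery in unfamiliar data",
            examples=["finding regularities in genomic data"],
        ),
    ],
)
show(claims)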

I doubt that anyone is doing much thinking about general CLASSES of
problems. I've been trying to do it in my posts.

I have made it very clear in my papers why I don't want to define
intelligence by the set of practical problems it can solve. It is fine
for you to disagree, but at least you should try to see why I take
this position before claiming it to be wrong "for obvious reasons".

Pei



----- Original Message -----
From: "Pei Wang" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Tuesday, May 01, 2007 7:08 PM
Subject: Re: [agi] The role of uncertainty


> On 5/1/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
>>
>> The difficulty here is that the problems to be solved by an AI or AGI
>> machine are NOT agreed upon or well-defined. We cannot just take Pei's
>> NARS, say, or Novamente, and say: well, obviously it will apply to all
>> these different kinds of problems. No doubt it will apply to many. But
>> you have to explain. You have to classify the problems.
>
> We indeed have done that. What you suggested is exactly what I called
> "Capability-AI" in
> http://nars.wang.googlepages.com/wang.AI_Definitions.pdf . I agree
> that it is closer to many people's intuitive understanding of
> "intelligence" --- after all, we judge other people's intelligence by
> what practical problems they can solve. However, this understanding
> has serious limitations, as analyzed in the paper, as well as shown by
> the history of AI, since your idea is quite close to mainstream AI.
>
> Again, I'm not really trying to convince you, but to show you that if
> some AGI researchers don't do what you consider "obvious", they may
> have some considerations which cannot be simply rejected as obviously
> wrong.
>
> Pei
>
>> Indeed, you will at some point be able to (or can already) describe
>> different AI architectures almost as engines - but it's bringing all
>> those problems together that is the hard part - a mixture of a
>> psychological and a philosophical problem.
>>
>> Background here: the fact that psychologists are still arguing about
>> whether g - general intelligence - exists is a reflection of the
>> difficulties here - the unsolved problems of defining problems.
>> However, those difficulties are not that great, nor insuperable.
>>
>> Not much point in arguing further here - all I can say now is TRY it -
>> try focussing your work the other way round - I'm confident you'll find
>> it makes life vastly easier and more productive.  Defining what it does
>> is just as essential for the designer as for the consumer.
>>
>>
>>
>> ----- Original Message -----
>> From: Benjamin Goertzel
>> To: agi@v2.listbox.com
>> Sent: Tuesday, May 01, 2007 5:57 PM
>> Subject: Re: [agi] The role of uncertainty
>>
>>
>>
>> >
>> > P.S. This is a truly weird conversation. It's like you're saying:
>> > "Hell, it's a box, why should I have to tell you what my box does?"
>> > Only insiders care what's inside the box. The rest of the world wants
>> > to know what it does - and that's the only way they'll buy it and pay
>> > attention to it - and the only reason they should. Life's short.
>>
>>
>> Well, I am not trying to sell the Novamente Cognition Engine to the
>> average Joe as ANYTHING, because it is not finished.
>>
>> When it is finished, I will still not try to sell it to the average Joe
>> (or Mike ;-) as a purpose-specific product, because it is not one.
>>
>> What I will try to sell to people are purpose-specific products, such
>> as virtual pets that they can train, or software systems they can use
>> (if they're biologists) to find patterns in their data, etc.  I
>> understand that what people want to pay for are purpose-specific
>> products.  However, what will enable the construction of a wide variety
>> of purpose-specific products is a general-purpose AGI engine...
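>>
>> (As a toy sketch of that division of labor, in Python - AGIEngine,
>> VirtualPet and BioPatternFinder below are hypothetical illustrative
>> names, not the actual Novamente API - the "one engine, many products"
>> idea might look like this:)
>>
>> # Hypothetical sketch only: one general-purpose engine underneath,
>> # several purpose-specific products wrapped around it.
>> class AGIEngine:
>>     """Stand-in for a general engine: accumulates experience and
>>     answers queries against what it has learned."""
>>     def __init__(self):
>>         self.memory = []
>>     def learn(self, item):
>>         self.memory.append(item)
>>     def query(self, predicate):
>>         return [m for m in self.memory if predicate(m)]
>>
>> class VirtualPet:
>>     """Purpose-specific product #1: a trainable pet."""
>>     def __init__(self, engine):
>>         self.engine = engine
>>     def train(self, command, behavior):
>>         self.engine.learn((command, behavior))
>>     def obey(self, command):
>>         seen = self.engine.query(lambda m: m[0] == command)
>>         return seen[-1][1] if seen else None
>>
>> class BioPatternFinder:
>>     """Purpose-specific product #2: pattern search over a dataset."""
>>     def __init__(self, engine):
>>         self.engine = engine
>>     def load(self, records):
>>         for r in records:
>>             self.engine.learn(r)
>>     def find(self, predicate):
>>         return self.engine.query(predicate)
>>
>> # Both products are thin wrappers; the engine underneath is the same,
>> # which is the point of building the engine first.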
>>
>> To use a rough analogy, suppose it was a long time ago and I was
>> developing the world's first internal combustion engine.  Then we
>> could argue...
>>
>> Mike: What are you working on, Ben?
>>
>> Ben: I'm building an internal combustion engine
>>
>> Mike: What does it do?
>>
>> Ben: Well, it's a device in which rapid oxidation of gas and air occurs
>> in a confined space called a combustion chamber. This exothermic
>> reaction of a fuel with an oxidizer creates gases of high temperature
>> and pressure, which are permitted to expand. The defining feature of an
>> internal combustion engine is that useful work is performed by the
>> expanding hot gases acting directly to cause movement of the piston
>> inside the cylinder.
>>
>> Mike: What?
>>
>> Ben: Well, you burn stuff in a closed chamber and it makes pistons
>> move up and down.
>>
>> Mike: Oh.  Well, who the hell would want to buy something that does
>> that?  No one wants to watch pistons move up and down, at least not in
>> my neck of the woods.
>>
>> Ben: Well, you can use it for all sorts of different things.
>>
>> Mike: Like what?
>>
>> Ben: Well, to power a car, or a locomotive, or an electrical generator
>> ... or even a backpack helicopter.  Maybe a robot.  A lawnmower.
>>
>> Mike: Ok, so if you want to get your engine built, you need to set a
>> specific goal.  For instance, your goal could be to build a lawnmower.
>>
>> Ben: Well, that could be a good incremental goal -- to make a small
>> version of my engine to power a lawnmower.  But no particular goal is
>> going to encapsulate all the applications of the engine.  The main
>> point is that I'm building an engine that lets you burn fuel and thus
>> create mechanical work -- and this can be used for all sorts of
>> different things.
>>
>> Mike: But, if you want people to buy it, you have to tell them what it
>> will do for them.  No one wants to buy a machine that sits in their
>> living room and makes pistons bob up and down.
>>
>> Ben: Ok, look.  This conversation is getting frustrating.  I'm going to
>> close the email window and get back to work.
>>
>> Mike: Darn, this conversation is getting frustrating.  I don't want to
>> buy a bunch of exothermic reactions, I want to buy something that does
>> something specific for me.
>>
>> ***
>>
>>
>> -- Ben
>>
>>
>> You could ask them for the specific purpose of the generator, and they
>> would say: "Well, it can be used to power light bulbs, or computers, or
>> cars, or refrigerators, etc. etc."  And yet, none of these particular
>> applications summarizes what it does.  What it does is generate
>> electricity, which can then be used in a lot of applications.