Re: Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Ben Goertzel

Hi Richard,


I don't really want to get too sidetracked, but even if Immerman's
analysis were correct, would this make a difference to the way that Eric
was using NP-Hard, though?


No, Immerman's perspective on complexity classes doesn't really affect
your objections...

Firstly, the descriptive-complexity characterization of complexity
classes is **still** about what happens as n gets large.

So it doesn't affect one of the key objections that both you and Pei
have to using concepts from computational complexity theory to analyze
AGI: that AGI systems don't have to deal with general classes of
problems whose size tends to infinity; they have to deal with
**particular** problems of bounded size.  For instance, if an AGI
is good at learning **human** language, it may not matter how its
language learning capability scales when dealing with languages
falling into the same grammar category as human language whose
grammars have sizes tending to infinity.  If an AGI is good at solving
path-finding problems in real life, it may not matter how its
worst-case path-finding capability scales when dealing with paths
between n cities as n tends to infinity  Etc.  In fact there are
decent qualitative arguments that most of the algorithms used by human
cognition (insofar as it makes sense to say that human cognition uses
"algorithms", which is another issue, as Pei has noted) are
**exponential time** in terms of their scaling as problem size
approaches infinity ... but the point is that they are tuned to give
tractable performance for the problem-instances that humans generally
encounter in real life...
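
To make that concrete, here is a minimal sketch (my illustration, not
anything from the literature under discussion) of an exponential-time
procedure that is nonetheless instantaneous at the bounded instance
sizes that actually arise:

from itertools import combinations

def subset_sum(values, target):
    # Exhaustive search over all 2^n subsets: exponential-time scaling
    # in the worst case, but perfectly tractable for the small n of
    # everyday instances.
    for r in range(len(values) + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # instant for n = 6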

Secondly, Immerman's analysis doesn't affect the fact that the
formalization of "language learning" referred to by Eric Baum is only
tenuously related to the actual cognitive phenomenon of human language
learning.

On the other hand, Immerman's analysis does **suggest** (not
demonstrate) that there could be some cognitive meaningfulness to the
classes P and NP.

For instance, if someone were to show that the learning of languages
in the same general category as human natural languages ("natural-like
languages")...

-- can be naturally represented using existential second-order logic

but

-- cannot be naturally represented using first-order logic with recursion

this would be interesting, and would match up naturally with the
hypothesis that "natural-like language" learning is in NP but not in P.

On the other hand, this kind of analysis would only be really
cognitively meaningful in the context of an explanation of how this
formalization of language learning is related to actual cognitive
language learning  I happen to think that such an explanation
**could** be formulated; but no one has really done so, so far.  That
is, no one has given a formalization encompassing the embodied, social
semantics and pragmatics of language learning (as discussed e.g. in
Tomasello's excellent recent book "Constructing a Language"); and in
the absence of such a formalization, formal discussions of "grammar
learning" are not convincingly connected to real cognitive language
learning.

-- Ben



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Ben Goertzel

It's just that "problem X is NP-hard" means roughly "Any problem Y in
NP is polynomial-time reducible to problem X", and your example did
not seem to exemplify this...

All your example seemed to show was a problem that was solvable
in polynomial time (class P, not NP-hard)

However, this is irrelevant to your main conceptual point, which as I
understood it was that theorems regarding the scaling behavior of the
worst-case complexity of problems as problem size n goes to infinity
are pragmatically irrelevant...

[I'm not sure I fully agree with your conceptual point, but that's
another issue.  I used to agree but when I encountered Immerman's
descriptive complexity theory, I started wavering.  Immerman showed
e.g. that

-- P, the class of problems solvable in polynomial time, corresponds
to languages recognizable by first-order logic plus a recursion
operator

-- NP, the class of problems whose solutions are checkable in
polynomial time, corresponds to languages recognized by existential
second-order logic (second-order logic with second-order existential
but not universal quantification)

This is interesting and suggests that these complexity classes could
possibly have some fundamental cognitive meaning, even though such a
meaning is not obvious from their standard definitions...]
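
[For reference, these are Fagin's theorem (1974) and the Immerman-Vardi
theorem, respectively:

$$ \mathrm{NP} = \exists\mathrm{SO}, \qquad \mathrm{P} = \mathrm{FO(LFP)}, $$

where ∃SO is existential second-order logic, FO(LFP) is first-order
logic with a least-fixed-point (recursion) operator, and the P
characterization assumes finite structures equipped with a linear
order.]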

-- Ben



On 11/24/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:
> Richard,
>
> I know it's peripheral to your main argument, but in this example ...
>
>> Suppose that the computational effort that evolution needs to build
>> "different sized" language understanding mechanisms scales as:
>>
>> 2.5 * (N/7 + 1)^6 planet-years
>>
>> ... where "different sized" is captured by the value N, which is the
>> number of conceptual primitives used in the language understanding
>> mechanism, and a "planet-year" is one planet worth of human DNA randomly
>> working on the problem for one year.  (I am plucking this out of the
>> air, of course, but that doesn't matter.)
>>
>> Here are the resource requirements for this polynomial resource function:
>>
>> N   R
>>
>> 1   2.23E+000
>> 7   6.40E+001
>> 10  2.05E+002
>> 50  2.92E+005
>> 100 1.28E+007
>> 300 7.12E+009
>>
>> (N = Number of conceptual primitives)
>> (R = resource requirement in planet-years)
>>
>> I am assuming that the appropriate measure of size of problem is number
>> of conceptual primitives that are involved in the language understanding
>> mechanism (a measure picked at random, and as far as I can see, as
>> likely a measure as any, but if you think something else should be the
>> N, be my guest).
>>
>> If there were 300 conceptual primitives in the human LUM, resource
>> requirement would be 7 billion planet-years.  That would be bad.
>>
>> But if there are only 7 conceptual primitives, it would take 64 years.
>> Pathetically small and of no consequence.
>>
>> The function is polynomial, so in a sense you could say this is an
>> NP-hard problem.
>
> I don't think you're using the term "NP-hard" correctly.
>
> http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP
>
> "
> The class P consists of all those decision problems that can be solved
> on a deterministic sequential machine in an amount of time that is
> polynomial in the size of the input; the class NP consists of all
> those decision problems whose positive solutions can be **verified**
> in polynomial time given the right information.
> "
>
> [This page also reviews, and agrees with, many of your complaints
> regarding the intuitive interpretation of P as easy and NP as hard]
>
> http://en.wikipedia.org/wiki/NP-hard
>
> "
> In computational complexity theory, NP-hard (Non-deterministic
> Polynomial-time hard) refers to the class of decision problems H such
> that for every decision problem L in NP there exists a polynomial-time
> many-one reduction to H, written L ≤p H.  If H itself is in NP, then H is
> called NP-complete.
> "

I'd certainly welcome clarification, and I may have gotten this wrong...
but I'm not quite sure where you are directing my attention here.

Are you targeting the fact that NP-Hard is defined with respect to
decision problems, or to the reduction aspect?

My understanding of NP-hard is that it does strictly only apply to
decision problems ... but what I was doing was trying to interpret the
loose sense in which Eric himself was using NP-hard, so if I have
stretched the definition a little, I would claim I was inheriting
something that was already stretched.

But maybe that was not what you meant.  I stand ready to be corrected,
if it turns out I have goofed.



Richard Loosemore.



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Ben Goertzel

Richard,

I know it's peripheral to your main argument, but in this example ...


Suppose that the computational effort that evolution needs to build
"different sized" language understanding mechanisms scales as:

2.5 * (N/7 + 1)^6 planet-years

... where "different sized" is captured by the value N, which is the
number of conceptual primitives used in the language understanding
mechanism, and a "planet-year" is one planet worth of human DNA randomly
working on the problem for one year.  (I am plucking this out of the
air, of course, but that doesn't matter.)

Here are the resource requirements for this polynomial resource function:

N   R

1   2.23E+000
7   6.40E+001
10  2.05E+002
50  2.92E+005
100 1.28E+007
300 7.12E+009

(N = Number of conceptual primitives)
(R = resource requirement in planet-years)

I am assuming that the appropriate measure of size of problem is number
of conceptual primitives that are involved in the language understanding
mechanism (a measure picked at random, and as far as I can see, as
likely a measure as any, but if you think something else should be the
N, be my guest).

If there were 300 conceptual primitives in the human LUM, resource
requirement would be 7 billion planet-years.  That would be bad.

But if there are only 7 conceptual primitives, it would take 64 years.
Pathetically small and of no consequence.

The function is polynomial, so in a sense you could say this is an
NP-hard problem.
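
[A quick check of the arithmetic here: the tabulated R values are
reproduced by (N/7 + 1)^6; the 2.5 prefactor in the formula as stated
does not appear to have been applied when the table was generated.  A
minimal Python sketch that regenerates the table:

def planet_years(n):
    # Matches the table above; note the absence of the 2.5 prefactor.
    return (n / 7 + 1) ** 6

for n in (1, 7, 10, 50, 100, 300):
    print(f"{n:3d}  {planet_years(n):.2E}")

]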


I don't think you're using the term "NP-hard" correctly.

http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP

"
The class P consists of all those decision problems that can be solved
on a deterministic sequential machine in an amount of time that is
polynomial in the size of the input; the class NP consists of all
those decision problems whose positive solutions can be **verified**
in polynomial time given the right information.
"

[This page also reviews, and agrees with, many of your complaints
regarding the intuitive interpretation of P as easy and NP as hard]

http://en.wikipedia.org/wiki/NP-hard

"
In computational complexity theory, NP-hard (Non-deterministic
Polynomial-time hard) refers to the class of decision problems H such
that for every decision problem L in NP there exists a polynomial-time
many-one reduction to H, written L ≤p H.  If H itself is in NP, then H is
called NP-complete.
"

-- Ben G



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Eric Baum

>> > I don't think the proofs depend on any special assumptions about
>> > the nature of learning.
>>
>> I beg to differ.  IIRC the sense of "learning" they require is
>> induction over example sentences.  They exclude the use of real
>> world knowledge, in spite of the fact that such knowledge (or at
>> least ) are posited to play a significant role in the learning
>> of grammar in humans.  As such, these proofs say nothing whatsoever
>> about the learning of NL grammars.

I fully agree the proofs don't take into account such stuff.
And I believe such stuff is critical.  Thus I've never claimed
language learning was proved hard; I've just suggested evolution
could have encrypted it.

The point I began with was: if there are lots of different locally
optimal codings for thought, it may be hard to figure out which one is
programmed into the mind, and thus language learning could be a hard
additional problem on top of producing an AGI.  The AGI has to
understand what the word "foobar" means, and thus it has to have (or
build) a code module meaning "foobar" that it can invoke with this
word.  If it has a different set of modules, it might be sunk in
communication.

My sense about grammars for natural language is that there are lots
of different equally valid grammars that could be used to communicate.
For example, there are the grammars of English and of Swahili.  One
isn't better than the other.  And there is a wide variety of other
kinds of grammars that might be just as good, that aren't even used in
natural language, because evolution chose one convention at random.
Figuring out what that convention is, is hard; at least, linguists have
tried hard to do it and failed.
And this grammar stuff is pretty much on top of the meanings of
the words.  It serves to disambiguate, for example for error correction
in understanding.  But you could communicate pretty well in pidgin,
without it, so long as you understand the meanings of the words.

The grammar-learning results (as well as the experience of linguists,
who've tried very hard to build a model for natural grammar) are,
I think, indicative that this problem is hard, and it seems that
this problem is superimposed on top of the real-world-knowledge aspect.



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Eric Baum

Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:

>> > I don't know what you mean by incrementally updateable,
>> > but if you look up the literature on language learning, you will
>> > find that learning various sorts of relatively simple grammars
>> > from examples, or even if memory serves examples and queries, is
>> > NP-hard.  Try looking for Dana Angluin's papers back in the 80's.
>>
>> No, a thousand times no.  (Oh, why do we have to fight the same
>> battles over and over again?)
>>
>> These proofs depend on assumptions about what "learning" is, and
>> those assumptions involve a type of learning that is stupider than
>> stupid.

Ben> I don't think the proofs depend on any special assumptions about
Ben> the nature of learning.

Ben> Rather, the points to be noted are:

Ben> 1) these are theorems about the learning of general grammars in a
Ben> certain class, as n (some measure of grammar size) goes to
Ben> infinity

Ben> 2) NP-hard is about worst-case time complexity of learning
Ben> grammars in that class, of size n

These comments are of course true of any NP-hardness result.
They are reasons why the NP-hardness result does not *prove* (even
if P!=NP) that the problem is insuperable.

However, the way to bet is generally that the problem is actually
hard. Ch. 11 of WIT? gives some arguments why.

If you don't believe that, you shouldn't rely on encryption.
Encryption has all the above weaknesses in spades, and, what's more,
it's not even proved secure given P!=NP; that requires additional
assumptions.

In addition to the hardness results, there has been considerable
effort by linguists at modelling natural grammars, which has failed,
thus also providing evidence that the problem is hard.


Ben> So the reason these results are not cognitively interesting is:

Ben> 1) real language learning is about learning specific grammars of
Ben> finite size, not parametrized classes of grammars as n goes to
Ben> infinity

Ben> 2) even if you want to talk about learning over parametrized
Ben> classes, real learning is about average-case rather than
Ben> worst-case complexity, anyway (where the average is over some
Ben> appropriate probability distribution)

Ben> -- Ben G


>> Any learning mechanism that had the ability to do modest analogy
>> building across domains, and which had the benefit of primitives
>> involving concepts like "on", "in", "through", "manipulate",
>> "during", "before" (etc etc) would probably be able to do the
>> grammar learning, and in any case, the proofs are completely
>> incapable of representing the capabilities of such learning
>> mechanisms.
>> 
>> Such ideas have been (to coin a phrase) debunked every which way
>> from Sunday. ;-)
>> 
>> 
>> Richard Loosemore




Re: Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Ben Goertzel

> I don't think the proofs depend on any special assumptions about the
> nature of learning.

I beg to differ.  IIRC the sense of "learning" they require is induction
over example sentences.  They exclude the use of real world knowledge,
in spite of the fact that such knowledge (or at least ) are posited to
play a significant role in the learning of grammar in humans.  As such,
these proofs say nothing whatsoever about the learning of NL grammars.

I agree they do have other limitations, of the sort you suggest below.


Ah, I see  Yes, it is true that these theorems are about grammar
learning in isolation, not taking into account interactions between
semantics, pragmatics and grammar, for example...

ben



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Ben Goertzel

> I don't know what you mean by incrementally updateable,
> but if you look up the literature on language learning, you will find
> that learning various sorts of relatively simple grammars from
> examples, or even if memory serves examples and queries, is NP-hard.
> Try looking for Dana Angluin's papers back in the 80's.

No, a thousand times no.  (Oh, why do we have to fight the same battles
over and over again?)

These proofs depend on assumptions about what "learning" is, and those
assumptions involve a type of learning that is stupider than stupid.


I don't think the proofs depend on any special assumptions about the
nature of learning.

Rather, the points to be noted are:

1) these are theorems about the learning of general grammars in a
certain class, as n (some measure of grammar size) goes to infinity

2) NP-hard is about worst-case time complexity of learning grammars in
that class, of size n

So the reason these results are not cognitively interesting is:

1) real language learning is about learning specific grammars of
finite size, not parametrized classes of grammars as n goes to
infinity

2) even if you want to talk about learning over parametrized classes,
real learning is about average-case rather than worst-case complexity,
anyway (where the average is over some appropriate probability
distribution)
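
In symbols, the point-2 distinction: an NP-hardness proof constrains
only the worst case,

$$ T_{\mathrm{worst}}(n) = \max_{|x| = n} T(x), \qquad T_{\mathrm{avg}}(n) = \mathbb{E}_{x \sim D_n}[\,T(x)\,], $$

whereas the cognitively relevant quantity is the average over the
distribution D_n of instances actually encountered; the two can
diverge arbitrarily.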

-- Ben G



Any learning mechanism that had the ability to do modest analogy
building across domains, and which had the benefit of primitives
involving concepts like "on", "in", "through", "manipulate", "during",
"before" (etc etc) would probably be able to do the grammer learning,
and in any case, the proofs are completely incapable of representing the
capabilities of such learning mechanisms.

Such ideas have been (to coin a phrase) debunked every which way from
Sunday. ;-)


Richard Loosemore




Re: Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Matt Mahoney
Ben Goertzel <[EMAIL PROTECTED]> wrote:
>I am afraid that it may not be possible to find an initial project that is both
>
>* small
>* clearly a meaningfully large step along the path to AGI
>* of significant practical benefit

I'm afraid you're right.  It is especially difficult because there is a long 
history of small (i.e. narrow-AI) projects that appear superficially to be 
meaningful steps toward AGI.  Sometimes it is decades before we discover that 
they don't scale.
 
-- Matt Mahoney, [EMAIL PROTECTED]




RE: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Jef Allbright
Eric Baum wrote: 

> As I and Jef and you appear to agree, extant Intelligence 
> works because it exploits structure *of our world*; there is 
> and can be (unless P=NP or some such radical and unlikely 
> possibility) no such thing as a "General" Intelligence that 
> works in all worlds.

I'm going to risk being misunderstood again over a subtle point of
clarification:

I think we are in practical agreement on the point quoted above, but I
think that a more coherent view would avoid the binary distinction and
instead place general intelligence at the end of a scale where, with
diminishing exploitation of regularities in the environment,
computational requirements become increasingly intractable.

- Jef



Re: Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Ben Goertzel

Eric wrote:

The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer-years and
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should producing a human-level AI be cheaper than decoding the
genome?) And of course, it has to scale, in the sense that you have to
be able to prove with < $10^7 (preferably < $10^6 ) that the
methodology works (as was the case more or less with the genome.)
This, it seems to me, requires a first project much more limited
than understanding most of English, yet of significant practical
benefit. I'm wondering if someone has a good proposal.


I am afraid that it may not be possible to find an initial project that is both

* small
* clearly a meaningfully large step along the path to AGI
* of significant practical benefit

My observation is that for nearly all practical tasks, either

a) it is a fairly large amount of work to get them done within an AGI
architecture

or

b) narrow-AI methods can do them pretty well with a much smaller
amount of work than it would take to do them within an AGI
architecture

I suspect there are fundamental reasons for this, even though current
computer science and AI theory don't let us articulate these reasons
clearly at this stage.

So, I think that, in terms of proving the value of AGI research, we
will likely have to settle for a combination of:

a) an interim task that is relatively small, and is clearly along the
path to AGI, and is impressive in itself but is not necessarily of
large practical benefit unto itself.

b) interim tasks that are of practical value, and utilize AGI-related
ideas, but may also be achievable (with different strengths and
weaknesses) using narrow-AI methods

As an example of (a), I suggest robustly learning to carry out a number
of Piagetian concrete-operational-level tasks in a simulation world.

As an example of (b), I suggest natural language question answering in a
limited domain.

Alternate suggestions of tasks are solicited and much valued ... any
suggestions??  ;-)

Ben



Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eric Baum

Ben> Jef wrote:
>> As I see it, the present key challenge of artificial intelligence
>> is to develop a fast and frugal method of finding fast and frugal
>> methods,

Ben> However, this in itself is not possible.  There can be a fast
Ben> method of finding fast and frugal methods, or a frugal method of
Ben> finding fast and frugal methods, but not a fast and frugal method
Ben> of finding fast and frugal methods ... not in general ...

>> in other words to develop an efficient time-bound algorithm for
>> recognizing and compressing those regularities in "the world"
>> faster than the original blind methods of natural evolution.

Ben> This paragraph introduces the key restriction -- "the world",
Ben> i.e. the particular class of environments in which the AI is
Ben> biased to operate.

As I and Jef and you appear to agree, extant Intelligence works 
because it exploits structure *of our world*;
there is and can be (unless P=NP or some such radical and 
unlikely possibility) no such thing as a "General" Intelligence 
that works in all worlds.

Ben> It is possible to have a fast and frugal method of finding {fast
Ben> and frugal methods for operating in environments in class X} ...

Ben> [However, there can be no fast and frugal method for producing
Ben> such a method based solely on knowledge of the environment X ;-)
Ben> ]

I am unsure what you mean by this.  Maybe what you are saying is that
it's not going to be possible by writing down a simple algorithm and
running it for a week on a PC.  This I agree with.

The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer-years and 
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should producing a human-level AI be cheaper than decoding the
genome?) And of course, it has to scale, in the sense that you have to
be able to prove with < $10^7 (preferably < $10^6 ) that the
methodology works (as was the case more or less with the genome.)
This, it seems to me, requires a first project much more limited
than understanding most of English, yet of significant practical 
benefit. I'm wondering if someone has a good proposal.
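
[For scale, one way the $3Bn figure could decompose -- the per-unit
rates here are my assumptions, not Eric's:

$$ 10^4 \ \text{programmer-years} \times \$250\mathrm{k} \;+\; 10^6 \ \text{PC-years} \times \$500 \;=\; \$2.5\,\mathrm{Bn} + \$0.5\,\mathrm{Bn} = \$3\,\mathrm{Bn} $$

]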


Ben> One of my current sub-projects is trying to precisely formulate
Ben> conditions on the environment under which it is the case that
Ben> Novamente's particular combination of AI algorithms is "fast and
Ben> frugal at finding fast and frugal methods for solving
Ben> environment-relevant problems"   I believe I know how to do
Ben> so, but proving my intuitions rigorously will be a bunch of work
Ben> which I don't have time for at the moment ... but the task will
Ben> go on my (long) queue...

Ben> -- Ben



Re: RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Ben Goertzel

Jef wrote:

As I see it, the present key challenge of artificial intelligence is to
develop a fast and frugal method of finding fast and frugal methods,


However, this in itself is not possible.  There can be a fast method
of finding fast and frugal methods, or a frugal method of finding fast
and frugal methods, but not a fast and frugal method of finding fast
and frugal methods ... not in general ...


in
other words to develop an efficient time-bound algorithm for recognizing
and compressing those regularities in "the world" faster than the
original blind methods of natural evolution.


This paragraph introduces the key restriction -- "the world", i.e. the
particular class of environments in which the AI is biased to operate.

It is possible to have a fast and frugal method of finding {fast and
frugal methods for operating in environments in class X} ...

[However, there can be no fast and frugal method for producing such a
method based solely on knowledge of the environment X ;-)  ]

One of my current sub-projects is trying to precisely formulate
conditions on the environment under which it is the case that
Novamente's particular combination of AI algorithms is "fast and
frugal at finding fast and frugal methods for solving
environment-relevant problems"    I believe I know how to do so,
but proving my intuitions rigorously will be a bunch of work which I
don't have time for at the moment ... but the task will go on my
(long) queue...

-- Ben



Re: Re: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread Ben Goertzel

How much of the Novamente system is meant to be autonomous, and how much
will be responding only to external stimulus such as a question or a task
given externally?

Is it intended after a while to run "on its own", where it would be up 24
hours a day, exploring potentially some by itself, or more of a contained AI
to be called up as needed?


Yes, it is intended to run permanently and autonomously, of course...

Although at the moment it is being utilized in more of a task-focused
way... this is because it is still in development...

ben



Re: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread James Ratcliff
Ben Goertzel <[EMAIL PROTECTED]> wrote:

> Hi,
>
> On 11/6/06, James Ratcliff <[EMAIL PROTECTED]> wrote:
>> Ben,
>>   I think it would be beneficial, at least to me, to see a list of
>> tasks.  Not as a "defining" measure in any way.  But as a list of work
>> items that a general AGI should be able to complete effectively.
>
> I agree, and I think that this requires a lot of care.  Carefully
> articulating such a list is on my agenda for the first half of next
> year (not that it will take full-time for 6 months, it will be a
> "background task").  My approach will be based on porting a number of
> basic ideas from human developmental psychology into the
> non-human-like-AGI-acting-in-a-simulation-world domain, but will also
> be useful beyond this particular domain...
>
>> My thoughts on a list like this is that it should be marked in
>> increasing levels of difficulty, so an initial AGI should have the
>> ability to complete the first level of tasks and so on.
>
> Agreed, although most tasks will have the notion of "partial
> completion" rather than being binary in nature.
>
>> Ex: One item of AI task is a simple question answering ability, that
>> can respond with an answer currently in the Knowledge base of the
>> system.  A more expansive item would require the QA task to go and
>> get outside information.
>
> This seems not to be a very well-specified task ;-) ... the problem is
> that it refers to the internal state of the AI system (its "knowledge
> base"), whereas the tasks I will define will refer only to the
> system's external behaviors given various sets of stimuli...
>
> -- Ben

I'm very focused here on its knowledge base, as that is the main module
I am working on.  I am priming it with a large amount of extracted
information from news and other texts, and its first task will be to
answer basic questions, and model the knowledge correctly there, before
it can go on into deeper areas of reasoning and behaviour.  I want the
internal state to have a somewhat stable start before it goes forward
into other areas.

How much of the Novamente system is meant to be autonomous, and how much
will be responding only to external stimulus such as a question or a
task given externally?

Is it intended after a while to run "on its own", where it would be up
24 hours a day, exploring potentially some by itself, or more of a
contained AI to be called up as needed?

James Ratcliff

Thank You
James Ratcliff
http://falazar.com




Re: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread Ben Goertzel

Hi,

On 11/6/06, James Ratcliff <[EMAIL PROTECTED]> wrote:

Ben,
  I think it would be beneficial, at least to me, to see a list of tasks.
Not as a "defining" measure in any way.  But as a list of work items that a
general AGI should be able to complete effectively.


I agree, and I think that this requires a lot of care.  Carefully
articulating such a list is on my agenda for the first half of next
year (not that it will take full-time for 6 months, it will be a
"background task").  My approach will be based on porting a number of
basic ideas from human developmental psychology into the
non-human-like-AGI-acting-in-a-simulation-world domain, but will also
be useful beyond this particular domain...


 My thoughts on a list like this is that it should be marked in increasing
levels of difficulty, so an initial AGI should have the ability to complete
the first level of tasks and so on.


Agreed, although most tasks will have the notion of "partial
completion" rather than being binary in nature.


Ex: One item of AI task is a simple question answering ability, that can
respond with an answer currently in the Knowledge base of the system.
A more expansive item, would require the QA task to go and get outside
information.


This seems not to be a very well-specified task ;-) ... the problem is
that it refers to the internal state of the AI system (its "knowledge
base"), whereas the tasks I will define will refer only to the
system's external behaviors given various sets of stimuli...

-- Ben



Re: Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread James Ratcliff
Ben,

I think it would be beneficial, at least to me, to see a list of tasks.
Not as a "defining" measure in any way.  But as a list of work items that
a general AGI should be able to complete effectively.  I started on a
list, and pulled some information off the net before, but never completed
one.

My thoughts on a list like this is that it should be marked in increasing
levels of difficulty, so an initial AGI should have the ability to
complete the first level of tasks and so on.

One problem I am running into here and in other discussions is the
separation between online AI and robotic AI, and I have had to restrict a
lot of the research I am doing to online AI, and only a small amount of
simulated robotic AI.  There are many tasks that are not possible with
non-robotic AI, and I would like to see the different classes of these
tasks, so I could correctly model the system to handle the wide variety
of the behaviors necessary.

Ex: One item of AI task is a simple question answering ability, that can
respond with an answer currently in the Knowledge base of the system.
A more expansive item would require the QA task to go and get outside
information.

James Ratcliff

Ben Goertzel <[EMAIL PROTECTED]> wrote:

> > I am happy enough with the long-term goal of independent scientific
> > and mathematical discovery...
> >
> > And, in the short term, I am happy enough with the goals of carrying
> > out the (AGISim versions of) the standard tasks used by development
> > psychologists to study childrens' cognitive behavior...
> >
> > I don't see a real value to precisely quantifying these goals,
> > though...
>
> To give an example of the kind of short-term goal that I think is
> useful, though, consider the following.
>
> We are in early 2007 (if all goes according to plan) going to teach
> Novamente to carry out a game called "iterated Easter Egg hunt" --
> basically, to carry out an Easter Egg hunt in a room full of other
> agents ... and then do so over and over again, modeling what the other
> agents do and adjusting its behavior accordingly.
>
> Now, this task has a bit in common with the game Hide-and-Seek.  So,
> you'd expect that a Novamente instance that had been taught iterated
> Easter Egg Hunt, would also be good at hide-and-seek.  So, we want to
> see that the time required for an NM system to learn hide-and-seek
> will be less if the NM system has previously learned to play iterated
> Easter Egg hunt...
>
> This sort of goal is, I feel, good for infant-stage AGI education
> However, I wouldn't want to try to turn it into an "objective IQ
> test."  Our goal is not to make the best possible system for playing
> Easter Egg hunt or hide and seek or fetch or whatever
>
> And, in terms of language learning, our initial goal will not be to
> make the best possible system for conversing in baby-talk...
>
> Rather, our goal will be to make a system that can adequately fulfill
> these early-stage tasks, but in a way that we feel will be
> indefinitely generalizable to more complex tasks.
>
> This, I'm afraid, highlights a general issue with formal quantitative
> intelligence measures as applied to immature AGI systems/minds.  Often
> the best way to achieve some early-developmental-stage task is going
> to be an overfitted, narrow-AI type of algorithm, which is not easily
> extendable to address more complex tasks.
>
> This is similar to my complaint about the Hutter Prize.  Yah, a
> superhuman AGI will be an awesome text compressor.  But this doesn't
> mean that the best way to achieve slightly better text compression
> than current methods is going to be **at all** extensible in the
> direction of AGI.
>
> Matt, you have yet to convince me that seeking to optimize interim
> quantitative milestones is a meaningful path to AGI.  I think it is
> probably just a path to creating milestone-task-overfit narrow-AI
> systems without any real AGI-related expansion potential...
>
> -- Ben

Thank You
James Ratcliff
http://falazar.com




Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-04 Thread Ben Goertzel

On 11/4/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 11/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I of course don't think that SHRDLU vs. AGISim is a fair comparison.

Agreed. SHRDLU didn't even try to solve the real problems - for the simple
and sufficient reason that it was impossible to make a credible attempt at
such on the hardware of the day. AGISim (if I understand it correctly) does.
Oh, I'm sure the current implementation makes fatal compromises to fit on
today's hardware - but the concept doesn't have an _inherent_ plateau the
way SHRDLU did, so it leaves room for later upgrade. It's headed in the
right compass direction.


Actually, I phrased my comment somewhat imprecisely.

What I should have said is: I don't think that "SHRDLU's Blocks World"
versus AGISim is a fair comparison  These are both environments
for AI's ...

The absurd comparison between AI systems would be SHRDLU versus Novamente ;-)

ben g



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Russell Wallace
On 11/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I of course don't think that SHRDLU vs. AGISim is a fair comparison.

Agreed. SHRDLU didn't even try to solve the real problems - for the simple
and sufficient reason that it was impossible to make a credible attempt at
such on the hardware of the day. AGISim (if I understand it correctly)
does. Oh, I'm sure the current implementation makes fatal compromises to
fit on today's hardware - but the concept doesn't have an _inherent_
plateau the way SHRDLU did, so it leaves room for later upgrade. It's
headed in the right compass direction.

> And, deciding which AGI is smarter is not important either -- no more
> important than deciding whether Ben, Matt or Pei is smarter.  Who cares?

Agreed. In practice the market will decide: which system ends up doing
useful things in the real world, and therefore getting used? Academic
judgements of which is smarter are, well, academic.




Re: Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Ben Goertzel

I am happy enough with the long-term goal of independent scientific
and mathematical discovery...

And, in the short term, I am happy enough with the goals of carrying
out the (AGISim versions of) the standard tasks used by development
psychologists to study childrens' cognitive behavior...

I don't see a real value to precisely quantifying these goals, though...


To give an example of the kind of short-term goal that I think is
useful, though, consider the following.

We are in early 2007 (if all goes according to plan) going to teach
Novamente to carry out a game called "iterated Easter Egg hunt" --
basically, to carry out an Easter Egg hunt in a room full of other
agents ... and then do so over and over again, modeling what the other
agents do and adjusting its behavior accordingly.

Now, this task has a bit in common with the game Hide-and-Seek.  So,
you'd expect that a Novamente instance that had been taught iterated
Easter Egg Hunt, would also be good at hide-and-seek.  So, we want to
see that the time required for an NM system to learn hide-and-seek
will be less if the NM system has previously learned to play iterated
Easter Egg hunt...
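
A minimal sketch of how that transfer effect could be quantified; the
agent interface here (make_agent, train_to_criterion) is hypothetical,
not Novamente's actual API:

def transfer_ratio(make_agent, pretrain_task, target_task, trials=10):
    # Mean time-to-criterion on the target task for fresh agents,
    # divided by the mean for agents pre-trained on another task.
    # A ratio > 1 means the pre-training (e.g. iterated Easter Egg
    # hunt) transferred to the target task (e.g. hide-and-seek).
    fresh = [make_agent().train_to_criterion(target_task)
             for _ in range(trials)]
    pretrained = []
    for _ in range(trials):
        agent = make_agent()
        agent.train_to_criterion(pretrain_task)
        pretrained.append(agent.train_to_criterion(target_task))
    return (sum(fresh) / trials) / (sum(pretrained) / trials)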

This sort of goal is, I feel, good for infant-stage AGI education
However, I wouldn't want to try to turn it into an "objective IQ
test."  Our goal is not to make the best possible system for playing
Easter Egg hunt or hide and seek or fetch or whatever

And, in terms of language learning, our initial goal will not be to
make the best possible system for conversing in baby-talk...

Rather, our goal will be to make a system that can adequately fulfill
these early-stage tasks, but in a way that we feel will be
indefinitely generalizable to more complex tasks.

This, I'm afraid, highlights a general issue with formal quantitative
intelligence measures as applied to immature AGI systems/minds.  Often
the best way to achieve some early-developmental-stage task is going
to be an overfitted, narrow-AI type of algorithm, which is not easily
extendable to address more complex tasks.

This is similar to my complaint about the Hutter Prize.  Yah, a
superhuman AGI will be an awesome text compressor.  But this doesn't
mean that the best way to achieve slightly better text compression
than current methods is going to be **at all** extensible in the
direction of AGI.

Matt, you have yet to convince me that seeking to optimize interim
quantitative milestones is a meaningful path to AGI.  I think it is
probably just a path to creating milestone-task-overfit narrow-AI
systems without any real AGI-related expansion potential...

-- Ben



Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Ben Goertzel

Another reason for measurements is that it makes your goals concrete.  How do you define "general 
intelligence"?  Turing gave us a well defined goal, but there are some shortcomings.  The Turing test is 
subjective, time consuming, isn't appropriate for robotics, and really isn't a good goal if it means 
deliberately degrading performance in order to appear human.  So I am looking for "better" tests.  
I don't believe the approach of "let's just build it and see what it does" is going to produce 
anything useful.



I am happy enough with the long-term goal of independent scientific
and mathematical discovery...

And, in the short term, I am happy enough with the goals of carrying
out the (AGISim versions of) the standard tasks used by development
psychologists to study childrens' cognitive behavior...

I don't see a real value to precisely quantifying these goals, though...

Ben G



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Matt Mahoney
- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Friday, November 3, 2006 9:28:24 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages

>I do not agree that having precise quantitative measures of system
>intelligence is critical, or even important to AGI.

The reason I ask is not just to compare different systems (which you can't 
really do if they serve different purposes), but also to measure progress.  
When I experiment with language models, I often try many variations, tune 
parameters, etc., so I need a quick test to see if what I did worked.  I can do 
that very quickly using text compression.  I can test tens or hundreds of 
slightly different models per day and make very precise measurements.  Of 
course it is also useful that I can tell if my model works better or worse than 
somebody else's model that uses a completely different method.
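
As a quick illustration of this kind of fast, repeatable metric, here is
a minimal sketch using Python's standard zlib as a stand-in compressor
(a real language model under test would take the compressor's place):

import zlib

def bits_per_character(text):
    # Compressed size in bits divided by character count: a crude but
    # fast proxy for how much of the text's regularity is captured.
    data = text.encode("utf-8")
    return 8 * len(zlib.compress(data, 9)) / len(data)

# Lower is better; small model changes show up immediately in the number.
print(bits_per_character("the quick brown fox jumps over the lazy dog " * 100))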

There does not seem to be much cooperation on this list toward the goal of 
achieving AGI.  Everyone has their own ideas.  That's OK.  The purpose of 
having a metric is not to make it a race, but to help us communicate what works 
and what doesn't so we can work together while still pursuing our own ideas.  
Papers on language modeling do this by comparing different algorithms and 
reporting the results by word perplexity.  So you don't have to re-experiment 
with various n-gram backoff models, LSA, statistical parsers, etc.  You already 
know a lot about what works and what doesn't.
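
(For reference, the two reporting conventions are interconvertible: if H
is a model's cross-entropy on the test text in bits per word, then

$$ \mathrm{perplexity} = 2^{H}, $$

so compression numbers in bits per word and perplexity numbers measure
the same underlying quantity.)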

Another reason for measurements is that it makes your goals concrete.  How do 
you define "general intelligence"?  Turing gave us a well defined goal, but 
there are some shortcomings.  The Turing test is subjective, time consuming, 
isn't appropriate for robotics, and really isn't a good goal if it means 
deliberately degrading performance in order to appear human.  So I am looking 
for "better" tests.  I don't believe the approach of "let's just build it and 
see what it does" is going to produce anything useful.

 
-- Matt Mahoney, [EMAIL PROTECTED]






Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Ben Goertzel

It does not help that words in SHRDLU are grounded in an artificial world.  Its 
failure to scale hints that approaches such as AGI-Sim will have similar 
problems.  You cannot simulate complexity.


I of course don't think that SHRDLU vs. AGISim is a fair comparison.

Among other counterarguments: the idea is that AGI systems trained in
AGISim may then be able to use their learning to operate in the
physical world, controlling robots similar to their AGISim simulated
robots

With this in mind, we have plans to eventually integrate the Pyro
robotics control toolkit with AGISim (likely extending Pyro in the
process), so that the same code can be used to control physical robots
as AGISim simulated robots...

Now, you can argue that this just won't work, because (you might say)
there is nothing in common between learning
perception-cognition-action in a simulated world like AGISim, and
learning the same thing in the physical world.  You might argue that
the relative lack of richness in perceptual stimuli and motoric
control makes a tremendous qualitative difference.  OK, I admit I
cannot rigorously prove this sort of argument false  Nor can you
prove it true.   As with anything else in AGI, we must to some extent
go on intuition until someone develops a real mathematical theory of
pragmatic AGI, or someone finally creates a working AGI based on their
intuition.

But at least, you must admit there is a plausible argument to be made
that effective AGI operation in a somewhat realistic simulation world
can transfer to similar operation in the physical world.  We are not
talking about SHRDLU here.  We are talking about a system that
perceives simulated visual stimuli and has to recognize objects as
patterns in these stimuli; that acts in the world by sending movement
commands to joints; etc.  Problems posed to the system need to be
recognized by the system in terms of these sensory and motoric
primitives, analogously to what happens with a system embedded in the
physical world via a physical body.


In a similar way, SHRDLU performed well in its artificial, simple world.  But 
how would you measure its performance in a real world?


I believe I have addressed this by noting that AGI performance is
intended to be portable from AGISim into the physical world.

Of course, with any simulated environment there is always the risk of
creating an AGI or AI system that is overfit to that simulated
environment.  However, being aware of that risk, I don't feel it is
going to be that difficult to avoid it.


If we are going to study AGI, we need a way to perform tests and measure 
results.  It is not just that we need to know what works and what doesn't.  The 
systems we build will be too complex to know what we have built.  How would you 
measure them?  The Turing test is the most widely accepted, but it is somewhat 
subjective and not really appropriate for an AGI with sensorimotor I/O.  I have 
proposed text compression.  It gives hard numbers, but it seems limited to 
measuring ungrounded language models.  What else would you use?  Suppose that 
in 10 years, NARS, Novamente, Cyc, and maybe several other
systems all claim to have solved the AGI problem.  How would you test
their claims?  How would you decide the winner?


I do not agree that having precise quantitative measures of system
intelligence is critical, or even important to AGI.

And, deciding which AGI is smarter is not important either -- no more
important than deciding whether Ben, Matt or Pei is smarter.  Who
cares?  Different systems may have different strengths and weaknesses,
so that "who is smarter" often explicitly comes down to a subjective
value judgment  We may ask who is likely to be better at carrying
out some particular problem-solving task; we may say that A is
generically smarter than B if A is better than B at carrying out
*every* problem-solving task (Pareto optimality, sorta), but this is
not a very useful notion in practice.

Once we have an AGI that can hold an English conversation that appears
to trained human scientists to be intelligent and creative, and that
makes original discoveries in science or mathematics, then the
question of whether it is "intelligent" or not will cease to be very
interesting.  That is our mid-term goal with Novamente.  I don't see
why quantitative measures of intelligence are necessary or even useful
along the path to getting there.

-- Ben



Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Ben Goertzel

Luke wrote:

It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
representation that is underlying it all. Simply the first-order logic or
usual programming styles are different from everyday communication.
Switching to Lojban might remove the remaining syntax errors, but
I don't see how it can help with this bigger problem. Ben, do you think
using Lojban can really substantially help or are you counting on Agi-Sim
world and Novamente architecture in general, and want to use Lojban
just to simplify language analysis?


Above all I am counting on the Novamente architecture in general

However, I do think the Lojban language, properly extended, has a lot of power.

Following up on the excellent point you made: I do think that a mode
of communication combining aspects of programming with aspects of
commonsense natural language communication can be achieved -- and that
this will be a fascinating thing.

However, I think this can be achieved only AFTER one has a reasonably
intelligent proto-AGI system that can take semantically
slightly-imprecise statements and automatically map them into fully
formalized programming-type statements.

Lojban has no syntactic ambiguity but it does allow semantic ambiguity
as well as extreme semantic precision.

Using Lojban for programming would involve using its capability for
extreme semantic precision; using it for commonsense communication
involves using its capability for judiciously controlled semantic
ambiguity.  Using both these capabilities together in a creative way
will be easier with a more powerful AI back end...

E.g., you'd like to be able to outline the obvious parts of your code
in a somewhat ambiguous way (but still, using Lojban, much less
ambiguously than would be the case in English), and have the AI figure
out the details.  But then, the tricky parts of the code would be
spelled out in detail using full programming-language-like precision.

Of course, it may be that once the AGI is smart enough to be used in
this way, it's only a short time after that until the AGI writes all
its own code and we become obsolete as coders anyway ;-)

-- Ben



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Lukasz Kaiser

Hi.


It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a programming language
syntax based on Lojban, using his Speagram program interpreter
framework.

The nice thing about Lojban is that it does have the flexibility to be
used as a pragmatic programming language (tho no one has done this yet),
**or** to be used to describe everyday situations in the manner
of a natural language


Yes, in my opinion this **OR** should really be underlined. And I think
this is a very big problem -- you can talk about programming *or* talk
in everyday manner, but hardly both at the same time.

I could recently feel the pain as a friend of mine worked on using
Speagram in Wengo (an open source VoIP client) for language-based
control of different commands and actions. The problem is that, even
if you manage to get through parsing, context, disambiguation, add
some meaningful interaction etc., you end up with a set of commands
that is very hard to extend for non-programmer. So basically you can
activate a few pre-programmed commands in a quite-natural language
*and* you can add new commands in a naturally looking programming
language. But, even though this is internally the same language, there
is no way to say that you can program in a way that feels natural.

It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
representation that is underlying it all. Simply the first-order logic or
usual programming styles are different from everyday communication.
Switching to Lojban might remove the remaining syntax errors, but
I don't see how it can help with this bigger problem. Ben, do you think
using Lojban can really substantially help or are you counting on Agi-Sim
world and Novamente architecture in general, and want to use Lojban
just to simplify language analysis?

- lk



Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Ben Goertzel

Hi,


I think an interesting goal would be to teach an AGI to write software.  If I 
understand your explanation, this is the same problem.


Yeah, it's the same problem.

It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a programming language
syntax based on Lojban, using his Speagram program interpreter
framework.

The nice thing about Lojban is that it does have the flexibility to be
used as a pragmatic programming language (tho no one has done this
yet), **or** to be used to describe everyday situations in the manner
of a natural language

> How could such an AGI be built?   What would be its architecture?
> What learning algorithm?  What training data?  What computational
> cost?

Well, I think Novamente is one architecture that can achieve this...
But I do not know what the computational cost will be, as Novamente is
too complicated to support detailed theoretical calculations of its
computational cost in realistic situations.  I have my estimates of
the computational cost, but validating them will have to wait till the
project progresses further...

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Ben Goertzel

Yes, teaching an AI in Esperanto would make more sense than teaching
it in English ... but it would not serve the same purpose as teaching it
in Lojban++ and a natural language in parallel...

In fact, an ideal educational programme would probably be to use, in parallel

-- an Esperanto-based, rather than English-based, version of Lojban++
-- Esperanto

However, I hasten to emphasize that this whole discussion is (IMO)
largely peripheral to AGI.

The main point is to get the learning algorithms and knowledge
representation mechanisms right.  (Or if the learning algorithm learns
its own KRs, that's fine too...).  Once one has what seems like a
workable learning/representation framework, THEN one starts talking
about the right educational programme.  Discussing education in the
absence of an understanding of internal learning algorithms is perhaps
confusing...

Before developing Novamente in detail, I would not have liked the idea
of using Lojban++ to help teach an AGI, for much the same reasons that
you are now complaining.

But now, given the specifics of the Novamente system, it turns out
that this approach may actually make teaching the system considerably
easier -- and make the system more rapidly approach the point where it
can rapidly learn natural language on its own.

To use Eric Baum's language, it may be that by interacting with the
system in Lojban++, we human teachers can supply the baby Novamente
with much of the "inductive bias" that humans are born with, and that
helps us humans learn natural languages relatively easily...

I guess that's a good way to put it.  Not that learning Lojban++ is a
substitute for learning English, but rather that the knowledge gained via
interaction in Lojban++ may be a substitute for human babies'
language-focused and spacetime-focused inductive bias.

Of course, Lojban++ can be used in this way **only** with AGI systems
that combine
-- a robust reinforcement learning capability
-- an explicitly logic-based knowledge representation

But Novamente does combine these two factors.

I don't expect to convince you that this approach is a good one, but
perhaps I have made my motivations clearer, at any rate.  I am
appreciating this conversation, as it is pushing me to verbally
articulate my views more clearly than I had done before.

-- Ben G



On 11/2/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:

- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages

>Here is how I intend to use Lojban++ in teaching Novamente.  When
>Novamente is controlling a humanoid agent in the AGISim simulation
>world, the human teacher talks to it about what it is doing.  I would
>like the human teacher to talk to it in both Lojban++ and English, at
>the same time.  According to my understanding of Novamente's learning
>and reasoning methods, this will be the optimal way of getting the
>system to understand English.  At once, the system will get a
>perceptual-motor grounding for the English sentences, plus an
>understanding of the logical meaning of the sentences.  I can think of
>no better way to help a system understand English.  Yes, this is not
>the way humans do it. But so what?  Novamente does not have a human
>brain, it has a different sort of infrastructure with different
>strengths and weaknesses.

What about using "baby English" instead of an artificial language?  By this I 
mean simple English at the level of a 2- or 3-year-old child.  Baby English has many of
the properties that make artificial languages desirable, such as a small vocabulary, 
simple syntax and lack of ambiguity.  Adult English is ambiguous because adults can use 
vast knowledge and context to resolve ambiguity in complex sentences.  Children lack 
these abilities.

I don't believe it is possible to map between natural and structured language 
without solving the natural language modeling problem first.  I don't believe 
that having structured knowledge or a structured language available makes the 
problem any easier.  It is just something else to learn.  Humans learn natural 
language without having to learn structured languages, grammar rules, knowledge 
representation, etc.  I realize that Novamente is different from the human 
brain.  My argument is based on the structure of natural language, which is 
vastly different from artificial languages used for knowledge representation.  
To wit:

- Artificial languages are designed to be processed (translated or compiled) in 
the order: lexical tokenization, syntactic parsing, semantic extraction.  This 
does not work for natural language.  The correct order is the order in which 
children learn: lexical, semantics, syntax.  Thus we have successful language 
models that extract semantics without syntax...
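
As a toy illustration of what "semantics without syntax" can mean, here
is a deliberately simplistic Python sketch (a bag-of-words co-occurrence
counter, not one of the successful models alluded to above):

# Toy model: word "semantics" as co-occurrence statistics, with no
# syntactic parsing anywhere.
from collections import Counter
from itertools import combinations

sentences = [
    "the dog chased the cat",
    "the cat chased the mouse",
    "the dog ate the bone",
]

cooc = Counter()
for s in sentences:
    words = set(s.split())
    for pair in combinations(sorted(words), 2):
        cooc[pair] += 1

# Frequently co-occurring words come out as semantically associated,
# even though no parse tree was ever built.
print(cooc[("cat", "chased")])   # 2
print(cooc[("bone", "chased")])  # 0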

Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

I know people can learn Lojban, just like they can learn CycL or LISP.  Let's
not repeat these mistakes.  This is not training, it is programming a knowledge
base.  This is narrow AI.

-- Matt Mahoney, [EMAIL PROTECTED]


You seem not to understand the purpose of using Lojban to help teach an AI.

Of course it is not a substitute for teaching an AI a natural language.

It is simply a tool to help beef up the understanding of certain types
of AI systems to the point where they are ready to robustly understand
natural language...  Just because humans don't learn this way doesn't
mean some kinds of AIs shouldn't.  And, just because Cyc is
associated with a poor theory of AI education, doesn't mean that all
logic-based AI systems are.  (Similarly, just because backprop NNs
are associated with a poor theory of AI education, doesn't mean all NN
systems necessarily are.)

Here is how I intend to use Lojban++ in teaching Novamente.  When
Novamente is controlling a humanoid agent in the AGISim simulation
world, the human teacher talks to it about what it is doing.  I would
like the human teacher to talk to it in both Lojban++ and English, at
the same time.  According to my understanding of Novamente's learning
and reasoning methods, this will be the optimal way of getting the
system to understand English.  At once, the system will get a
perceptual-motor grounding for the English sentences, plus an
understanding of the logical meaning of the sentences.  I can think of
no better way to help a system understand English.  Yes, this is not
the way humans do it. But so what?  Novamente does not have a human
brain, it has a different sort of infrastructure with different
strengths and weaknesses.
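
As a minimal sketch of what one such paired training datum might look
like, schematically (the record format and field names are invented for
illustration, not Novamente's actual internal representation, and the
Lojban++ line is illustrative rather than vetted):

# Schematic training record: one utterance given in English and
# Lojban++, with its logical reading and its AGISim perceptual context.
training_datum = {
    "english": "You are grabbing the red ball.",
    "lojban_pp": "do ca grab le red ball",  # illustrative only, not vetted Lojban++
    "logical_form": "present(e) & grab(e, addressee, x) & ball(x) & red(x)",
    "percepts": {"agent": "humanoid_01", "object": "ball_07", "color": "red"},
}

The point is that each utterance arrives with both its grounding (the
percepts) and its logical reading, so the English side can be learned by
alignment rather than decoded from scratch.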

If it results in general intelligence, it is not "narrow AI".   The
goal of this teaching methodology is to give Novamente a general
conceptual understanding, using which it can flexibly generalize its
understanding to progressively more and more complex situations.

This is not what we are doing yet, mainly because we still lack a
Lojban++ parser (just a matter of a few man-months of effort, but we
have other priorities), but it is in the queue and we will get there in
time, as resources permit...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

Eliezer wrote:

"Natural" language isn't.  Humans have one specific idiosyncratic
built-in grammar, and we might have serious trouble learning to
communicate in anything else - especially if the language was being used
by a mind quite unlike our own.


Well, some humans have learned to communicate in Lojban quite
effectively.  It's slow and sometimes painful and sometimes
delightful, but definitely possible, and there is no NL syntax
involved...


Even a "programming language" is still
something that humans made, and how many people do you know who can
*seriously*, not-jokingly, think in syntactical C++ the way they can
think in English?


One (and it's not me)

ben g

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

Hi,


Which brings up a question -- is it better to use a language based on
term or predicate logic, or one that imitates (is isomorphic to) natural
languages?  A formal language imitating a natural language would have the
same kinds of structures that almost all natural languages have:  nouns,
verbs, adjectives, prepositions, etc.  There must be a reason natural
languages almost always follow the pattern of something carrying out some
action, in some way, and if transitive, to or on something else.  On the
other hand, a logical language allows direct translation into formal logic,
which can be used to derive all sorts of implications (not sure of the
terminology here) mechanically.


I think the Lojban strategy -- of parsing into formal logic -- is the
best approach, because the NL categories that you mention are wrapped
up with all sorts of irritating semantic ambiguities...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

For comparison, here are some versions of

"I saw the man with the telescope"

in Lojban++ ...

[ http://www.goertzel.org/papers/lojbanplusplus.pdf ]

1)
mi pu see le man sepi'o le telescope
"I saw the man, using the telescope as a tool"

2)
mi pu see le man pe le telescope
"I saw the man who was with the telescope, and not some other man"

3)
mi pu see le man ne le telescope
"I saw the man, and he happened to be with the telescope"

4)
mi pu saw le man sepi'o le telescope
"I carried out a sawing action on the man, using the telescope as a tool"

Each of these can be very simply and unambiguously translated into
predicate logic, using the Lojban++ cmavo ("function words") as
semantic primitives; a worked example follows the notes below.

Some notes on Lojban++ as used in these very simple examples:

-- "pu" is an article indicating past tense.
-- "mi" means me/I
-- "sepi'o" means basically "the following item is used as a tool in the
predicate under discussion"
-- "le" is sort of like "the"
-- "pe" marks association
-- "ne" marks incidental association
-- in example 4, the parser must figure out that the action rather
than object meaning of "saw" is intended because two arguments are
provided (mi, and "le man")
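
As the worked example promised above: sentence 1 might come out as an
event-style logical form along the following lines (one plausible
encoding; the actual predicate inventory would be fixed by the semantic
mapper):

  \exists e, x, y \; [ \mathrm{see}(e) \land \mathrm{past}(e) \land
      \mathrm{agent}(e, \mathrm{speaker}) \land \mathrm{object}(e, x) \land
      \mathrm{man}(x) \land \mathrm{tool}(e, y) \land \mathrm{telescope}(y) ]

Sentence 2 would drop the tool relation and instead attach the telescope
restrictively to the man, e.g. \mathrm{associatedWith}(x, y); sentence 3
would use an incidental variant of that predicate; and sentence 4 would
simply swap in the sawing predicate for \mathrm{see}.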

Anyway, I consider the creation of a language that is suitable for
human-computer communication about everyday or scientific phenomena,
and that is minimally ambiguous syntactically and semantically, to be
a solved problem.  It was already basically solved by Lojban, but
Lojban suffers from a shortage-of-vocabulary issue which Lojban++
remedies.

There is a need for someone to write a Lojban++ parser and semantic
mapper, but this is a straightforward though definitely not trivial
task.
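
To give a feel for the shape of that task, here is a deliberately tiny
Python sketch, under drastic simplifying assumptions (a fixed cmavo
table, English content words, keyword scanning instead of the real
Lojban grammar):

# Toy parse-then-map step for the telescope examples above.  Grossly
# simplified: a real Lojban++ parser would implement the full Lojban
# grammar, not a keyword scan.

CMAVO = {"sepi'o": "tool", "pe": "assoc", "ne": "incidental"}

def parse(sentence):
    form = {"args": []}
    slot = None  # pending semantic role announced by a cmavo
    for tok in sentence.split():
        if tok == "pu":
            form["tense"] = "past"
        elif tok in CMAVO:
            slot = CMAVO[tok]
        elif tok == "le":
            continue  # description marker; next content word is a noun
        elif tok == "mi":
            form["args"].append("speaker")
        elif slot is not None:
            form[slot] = tok  # fill the announced role
            slot = None
        elif "relation" not in form:
            form["relation"] = tok  # first bare content word is the predicate
        else:
            form["args"].append(tok)
    return form

print(parse("mi pu see le man sepi'o le telescope"))
# {'args': ['speaker', 'man'], 'tense': 'past',
#  'relation': 'see', 'tool': 'telescope'}

From a structure like this, the mapping into the predicate-logic form
sketched earlier is mostly mechanical.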

As discussed before, I feel the use of Lojban++ may be valuable in
order to help with the early stages of teaching an AGI.  I disagree
that "if an AGI system is smart, it can just learn English."  Human
babies take a long time to learn English or other natural languages,
and they have the benefit of some as yet unknown amount of inbuilt
wiring ("inductive bias") to help them.  There is nothing wrong with
taking explicit steps to make it easier to transform a powerful
learning system into an intelligent, communicative mind...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]