Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread John Scanlon
The development of real AI is a progressive evolutionary process.  The 
ability to use natural languages, with even a minimum of fluency,  is simply 
beyond the capacity of any AI technology that exists today.  A para-natural 
language can communicate all the essential meanings of a natural language 
without the intractable messiness, and can be parsed easily like any other 
computer language.  It's the best choice for the current primitive state of 
AI technology.  The development of human-level natural-language abilities 
will take as much time as the development of human-level intelligence, and 
this will not happen right away.  Dumb, to less dumb, to somewhat smart, to 
smart is a necessary progression.


- Original Message - 
From: "Richard Loosemore" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 31, 2006 9:08 AM
Subject: Re: [agi] Natural versus formal AI interface languages



John Scanlon wrote:
One of the major obstacles to real AI is the belief that knowledge of a 
natural language is necessary for intelligence.  A human-level 
intelligent system should be expected to have the ability to learn a 
natural language, but it is not necessary.  It is better to start with a 
formal language, with unambiguous formal syntax, as the primary interface 
between human beings and AI systems.  This type of language could be 
called a "para-natural formal language."  It eliminates all of the 
syntactical ambiguity that makes competent use of a natural language so 
difficult to implement in an AI system.  Such a language would also be a 
member of the class "fifth generation computer language."


Not true.  If it is too dumb to acquire a natural language then it is too 
dumb, period.


Richard Loosemore.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Richard Loosemore

John Scanlon wrote:
One of the major obstacles to real AI is the belief that knowledge of a 
natural language is necessary for intelligence.  A human-level 
intelligent system should be expected to have the ability to learn a 
natural language, but it is not necessary.  It is better to start with a 
formal language, with unambiguous formal syntax, as the primary 
interface between human beings and AI systems.  This type of language 
could be called a "para-natural formal language."  It eliminates all of 
the syntactical ambiguity that makes competent use of a natural language 
so difficult to implement in an AI system.  Such a language would also 
be a member of the class "fifth generation computer language."


Not true.  If it is too dumb to acquire a natural language then it is 
too dumb, period.


Richard Loosemore.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Pei Wang

Let's not confuse two statements:

(1) To be able to use a natural language (so as to pass the Turing
Test) is not a necessary condition for a system to be intelligent.

(2) A true AGI should have the potential to learn any natural language
(though not necessarily to the level of native speakers).

I agree with both of them, and I don't think they contradict each other.

Pei

On 10/31/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

John Scanlon wrote:
> One of the major obstacles to real AI is the belief that knowledge of a
> natural language is necessary for intelligence.  A human-level
> intelligent system should be expected to have the ability to learn a
> natural language, but it is not necessary.  It is better to start with a
> formal language, with unambiguous formal syntax, as the primary
> interface between human beings and AI systems.  This type of language
> could be called a "para-natural formal language."  It eliminates all of
> the syntactical ambiguity that makes competent use of a natural language
> so difficult to implement in an AI system.  Such a language would also
> be a member of the class "fifth generation computer language."

Not true.  If it is too dumb to acquire a natural language then it is
too dumb, period.

Richard Loosemore.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Chuck Esterbrook

On 10/31/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:

I guess the AI problem is solved, then.  I can already communicate with my
computer using formal, unambiguous languages.  It already does a lot of
things better than most humans, like arithmetic, chess, memorizing long
lists and recalling them perfectly...


Ben G. brought up an excellent example of language ambiguity at a
recent workshop:

"I saw the man with the telescope."

Does that mean:
(1) I saw the man and I used a telescope to do it.
(2) I saw the man, he had a telescope.
(3) I performed the action "to saw" using a telescope instead of using
a saw (presumably because I'm a dummy).

All three are completely different and also completely valid (unless
you throw in life experience, which knocks out 3). Just reformulating the
sentence in a more data-structure-like fashion helps immensely. Just
making something up here:

(1) I.saw(direct_object=man, using=telescope)
(2) I.saw(direct_object=(man, with=telescope))
(3) I.saw_cut(direct_object=man, using=telescope)

Getting more formal substantially lowers the work needed to obtain
correct meaning.
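
A minimal runnable sketch of this data-structure-like reformulation (in
Python; every class and field name here is invented for illustration, not part
of any existing system): once a reading is chosen, nothing is left to
disambiguate.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    name: str
    attribute: Optional[str] = None     # e.g. "has telescope"

@dataclass
class Clause:
    agent: str
    verb: str                           # "see" and "saw_cut" are distinct verbs, so no ambiguity
    direct_object: Entity
    instrument: Optional[str] = None

# The three readings become three different, explicit structures:
reading1 = Clause("I", "see", Entity("man"), instrument="telescope")
reading2 = Clause("I", "see", Entity("man", attribute="has telescope"))
reading3 = Clause("I", "saw_cut", Entity("man"), instrument="telescope")

for r in (reading1, reading2, reading3):
    print(r)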

I imagine that's what lojban and its variants are intended to
accomplish, although I haven't had time to check them out. I also
imagine they have a better approach than my off-the-cuff design.


If a machine can't pass the Turing test, then what is your definition of
intelligence?


The ability to learn in a variety of situations without having to be
re-engineered in each situation. Also, off-the-cuff, but I feel it's a
good start. For example, if we had software that could learn:
* List sorting
* Go
* Pong
* Basic Algebra
* etc.

*without* being hard-coded for them or being reprogrammed for anything
other than access to input, that would feel pretty darn general to me.
But without natural language, it would not be human level.
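
A rough sketch of what that might look like as an interface -- purely
hypothetical, nothing here comes from the thread; the point is only the shape
of the idea: one learner object, one generic driver loop, and nothing
task-specific ever added to the agent.

class GeneralLearner:
    """One agent for every task; it only ever sees observations, actions and rewards."""
    def observe(self, observation):
        raise NotImplementedError
    def act(self):
        raise NotImplementedError
    def reward(self, value):
        raise NotImplementedError

def run_task(agent, task, steps=1000):
    # The same driver is used for list sorting, Go, Pong, algebra, ...
    # Only 'task' changes; the agent is never reprogrammed.
    observation = task.reset()
    for _ in range(steps):
        agent.observe(observation)
        observation, reward = task.step(agent.act())
        agent.reward(reward)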

I think "human level intelligence" is a bigger, harder goal than
"general intelligence" and that the latter will come first. And I
would be damned impressed if someone had an AGI capable of all the
above even if I had to communicate in lojban to teach it new tricks.

-Chuck

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Matt Mahoney
I guess the AI problem is solved, then.  I can already communicate with my 
computer using formal, unambiguous languages.  It already does a lot of things 
better than most humans, like arithmetic, chess, memorizing long lists and 
recalling them perfectly...

If a machine can't pass the Turing test, then what is your definition of 
intelligence?

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message -
From: John Scanlon <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 8:48:43 AM
Subject: [agi] Natural versus formal AI interface languages


One of the major obstacles to real AI is the belief 
that knowledge of a natural language is necessary for 
intelligence.  A human-level intelligent system should be expected to 
have the ability to learn a natural language, but it is not necessary.  It 
is better to start with a formal language, with unambiguous formal 
syntax, as the primary interface between human beings and AI systems.  
This type of language could be called a "para-natural 
formal language."  It eliminates all of the syntactical ambiguity 
that makes competent use of a natural language so difficult to implement in an 
AI system.  Such a language would also be a member of the class "fifth 
generation computer language."
 
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

John --

See

lojban.org

and

http://www.goertzel.org/papers/lojbanplusplus.pdf

-- Ben G

On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:



One of the major obstacles to real AI is the belief that knowledge of a
natural language is necessary for intelligence.  A human-level intelligent
system should be expected to have the ability to learn a natural language,
but it is not necessary.  It is better to start with a formal language, with
unambiguous formal syntax, as the primary interface between human beings and
AI systems.  This type of language could be called a "para-natural
formal language."  It eliminates all of the syntactical ambiguity that makes
competent use of a natural language so difficult to implement in an AI
system.  Such a language would also be a member of the class "fifth
generation computer language."
 
 This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe
or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread John Scanlon
In the para-natural formal language I've developed, called Jinnteera, "I saw 
the man with the telescope." would be expressed for each meaning in a 
declarative phrase as:


1.  "I did see with a telescope the_man"
2.  "I did see the man which did have a telescope"
3.  "I saw with a telescope the_man" or "I use a telescope for action (saw 
the_man)  (where "saw" has the meaning of "saw a 2x4", never "see", which 
always takes the same form and means "to view")



- Original Message - 
From: "Chuck Esterbrook" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 31, 2006 12:58 PM
Subject: Re: [agi] Natural versus formal AI interface languages



On 10/31/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
I guess the AI problem is solved, then.  I can already communicate with 
my

computer using formal, unambiguous languages.  It already does a lot of
things better than most humans, like arithmetic, chess, memorizing long
lists and recalling them perfectly...


Ben G. brought up an excellent example of language ambiguity at a
recent workshop:

"I saw the man with the telescope."

Does that mean:
(1) I saw the man and I used a telescope to do it.
(2) I saw the man, he had a telescope.
(3) I performed the action "to saw" using a telescope instead of using
a saw (presumably because I'm a dummy).

All three are completely different and also completely valid (unless
you throw in life experience, which knocks out 3). Just reformulating the
sentence in a more data-structure-like fashion helps immensely. Just
making something up here:

(1) I.saw(direct_object=man, using=telescope)
(2) I.saw(direct_object=(man, with=telescope))
(3) I.saw_cut(direct_object=man, using=telescope)

Getting more formal substantially lowers the work needed to obtain
correct meaning.

I imagine that's what lojban and its variants are intended to
accomplish, although I haven't had time to check them out. I also
imagine they have a better approach than my off-the-cuff design.


If a machine can't pass the Turing test, then what is your definition of
intelligence?


The ability to learn in a variety of situations without having to be
re-engineered in each situation. Also, off-the-cuff, but I feel it's a
good start. For example, if we had software that could learn:
* List sorting
* Go
* Pong
* Basic Algebra
* etc.

*without* being hard-coded for them or being reprogrammed for anything
other than access to input, that would feel pretty darn general to me.
But without natural language, it would not be human level.

I think "human level intelligence" is a bigger, harder goal than
"general intelligence" and that the latter will come first. And I
would be damned impressed if someone had an AGI capable of all the
above even if I had to communicate in lojban to teach it new tricks.

-Chuck

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread John Scanlon

Ben,

   I did read your stuff on Lojban++, and it's the sort of language I'm 
talking about.  This kind of language lets the computer and the user meet 
halfway.  The computer can parse the language like any other computer 
language, but the terms and constructions are designed for talking about 
objects and events in the real world -- rather than for compilation into 
procedural machine code.


   Which brings up a question -- is it better to use a language based on 
term or predicate logic, or one that imitates (is isomorphic to) natural 
languages?  A formal language imitating a natural language would have the 
same kinds of structures that almost all natural languages have:  nouns, 
verbs, adjectives, prepositions, etc.  There must be a reason natural 
languages almost always follow the pattern of something carrying out some 
action, in some way, and if transitive, to or on something else.  On the 
other hand, a logical language allows direct  translation into formal logic, 
which can be used to derive all sorts of implications (not sure of the 
terminology here) mechanically.
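
As a toy illustration of the "derive implications mechanically" side (a sketch
of mine, not any particular system; the rule format is an assumption -- simple
Horn-clause-style transitivity over triples):

# Facts are (subject, relation, object) triples; rules map a fact set to new facts.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def transitivity(fs):
    # if X is_a Y and Y is_a Z, then X is_a Z
    return {(x, "is_a", z)
            for (x, r1, y1) in fs if r1 == "is_a"
            for (y2, r2, z) in fs if r2 == "is_a" and y1 == y2}

rules = [transitivity]

changed = True
while changed:                      # forward chaining to a fixed point
    changed = False
    for rule in rules:
        new = rule(facts) - facts
        if new:
            facts |= new
            changed = True

print(sorted(facts))                # now includes ('cat', 'is_a', 'animal')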



- Original Message - 
From: "Ben Goertzel" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 31, 2006 12:24 PM
Subject: Re: [agi] Natural versus formal AI interface languages



John --

See

lojban.org

and

http://www.goertzel.org/papers/lojbanplusplus.pdf

-- Ben G

On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:



One of the major obstacles to real AI is the belief that knowledge of a
natural language is necessary for intelligence.  A human-level 
intelligent
system should be expected to have the ability to learn a natural 
language,
but it is not necessary.  It is better to start with a formal language, 
with
unambiguous formal syntax, as the primary interface between human beings 
and

AI systems.  This type of language could be called a "para-natural
formal language."  It eliminates all of the syntactical ambiguity that 
makes

competent use of a natural language so difficult to implement in an AI
system.  Such a language would also be a member of the class "fifth
generation computer language."
 
 This list is sponsored by AGIRI: http://www.agiri.org/email To 
unsubscribe

or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]





Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Eliezer S. Yudkowsky

Pei Wang wrote:

Let's not confuse two statements:

(1) To be able to use a natural language (so as to pass the Turing
Test) is not a necessary condition for a system to be intelligent.

(2) A true AGI should have the potential to learn any natural language
(though not necessarily to the level of native speakers).

I agree with both of them, and I don't think they contradict each other.


"Natural" language isn't.  Humans have one specific idiosyncratic 
built-in grammar, and we might have serious trouble learning to 
communicate in anything else - especially if the language was being used 
by a mind quite unlike our own.  Even a "programming language" is still 
something that humans made, and how many people do you know who can 
*seriously*, not-jokingly, think in syntactical C++ the way they can 
think in English?


I certainly think that something could be humanish-level intelligent in 
terms of optimization ability, and not be able to learn English, if it 
had a sufficiently alien cognitive architecture - nor would we be able 
to learn its language.


Of course you can't be superintelligent and unable to speak English - 
*that* wouldn't make any sense.  I assume that's what you mean by "true 
AGI" above.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Matt Mahoney
Artificial languages that remove ambiguity like Lojban do not bring us any 
closer to solving the AI problem.  It is straightforward to convert between 
artificial languages and structured knowledge (e.g. first-order logic), but it 
is still a hard (AI-complete) problem to convert between natural and artificial 
languages.  If you could translate English -> Lojban -> English, then you could 
just as well translate, e.g. English -> Lojban -> Russian.  Without a natural 
language model, you have no access to the vast knowledge base of the Internet, 
or most of the human race.  I know people can learn Lojban, just like they can 
learn CycL or LISP.  Let's not repeat these mistakes.  This is not training, it 
is programming a knowledge base.  This is narrow AI.
 
-- Matt Mahoney, [EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread John Scanlon
Matt, I totally agree with you on Cyc and LISP.  To go further, I think Cyc 
is a dead end because of the assumption that intelligence is dependent on a 
vast store of knowledge, basically represented in a semantic net. 
Intelligence should start with the learning of simple patterns in images and 
some kind of language that can refer to them and their observed behavior. 
And this involves the training you are talking about.


But you don't quite understand the difference between a natural-like formal 
language and something like LISP.  I'm talking about a language that has 
formal syntax but most importantly has the full expressive power of a 
natural language (minus the idioms and aesthetic elements like poetry).


Now the training of such a system is the problem, and that's the problem 
that we're all working on.  I am just about finished with the parsing of my 
language, Jinnteera (in ANSI/ISO C++).  I have bitmaps coming in from 
clients to the intelligence engine and some image processing.  The next step 
is the semantic processing of the parse tree of incoming statements.  This 
system in no way has any intelligence yet, but it provides the initial 
framework for experimentation and the development of AI, using any internal 
intelligence algorithms of choice.
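
For readers who want the shape of such a framework without the C++ source,
here is a very rough Python skeleton -- none of these names come from
Jinnteera itself, and the internals are deliberately stubbed out, since the
"intelligence algorithms of choice" are exactly the part left open:

class AIShell:
    def handle_bitmap(self, bitmap_bytes):
        features = self.image_processing(bitmap_bytes)
        self.store(features)

    def handle_statement(self, text):
        tree = self.parse(text)                    # formal grammar -> parse tree
        meaning = self.semantic_processing(tree)   # the "next step" described above
        return self.respond(meaning)

    # --- stubs; replace with real image processing, parsing and semantics ---
    def image_processing(self, b): return {"size": len(b)}
    def store(self, features): pass
    def parse(self, text): return ("stmt", text.split())
    def semantic_processing(self, tree): return {"tokens": tree[1]}
    def respond(self, meaning): return "ok: %d tokens" % len(meaning["tokens"])

shell = AIShell()
print(shell.handle_statement("I did see with a telescope the_man"))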


It's basically an AI shell at the moment, and after some more development 
and polishing, I'm willing to share it with anyone who's interested.



- Original Message - 
From: "Matt Mahoney" <[EMAIL PROTECTED]>

To: 
Sent: Tuesday, October 31, 2006 9:03 PM
Subject: Re: [agi] Natural versus formal AI interface languages


Artificial languages that remove ambiguity like Lojban do not bring us any 
closer to solving the AI problem.  It is straightforward to convert between 
artificial languages and structured knowledge (e.g. first-order logic), but 
it is still a hard (AI-complete) problem to convert between natural and 
artificial languages.  If you could translate English -> Lojban -> English, 
then you could just as well translate, e.g. English -> Lojban -> Russian. 
Without a natural language model, you have no access to the vast knowledge 
base of the Internet, or most of the human race.  I know people can learn 
Lojban, just like they can learn CycL or LISP.  Let's not repeat these 
mistakes.  This is not training, it is programming a knowledge base.  This 
is narrow AI.


-- Matt Mahoney, [EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Charles D Hixson

John Scanlon wrote:

Ben,

   I did read your stuff on Lojban++, and it's the sort of language 
I'm talking about.  This kind of language lets the computer and the 
user meet halfway.  The computer can parse the language like any other 
computer language, but the terms and constructions are designed for 
talking about objects and events in the real world -- rather than for 
compilation into procedural machine code.


   Which brings up a question -- is it better to use a language based 
on term or predicate logic, or one that imitates (is isomorphic to) 
natural languages?  A formal language imitating a natural language 
would have the same kinds of structures that almost all natural 
languages have:  nouns, verbs, adjectives, prepositions, etc.  There 
must be a reason natural languages almost always follow the pattern of 
something carrying out some action, in some way, and if transitive, to 
or on something else.  On the other hand, a logical language allows 
direct  translation into formal logic, which can be used to derive all 
sorts of implications (not sure of the terminology here) mechanically.
The problem here is that when people use a language to communicate with 
each other they fall into the habit of using human, rather than formal, 
parsings.  This works between people, but would play hob with a 
computer's understanding (if it even had reasonable referents for most 
of the terms under discussion).


Also, notice one major difference between ALL human languages and 
computer languages:

Human languages rarely use many local variables, while computer languages do.
Even the words that appear to be local variables in human languages are 
generally references, rather than variables.


This is (partially) because computer languages are designed to describe 
processes, and human languages are quasi-serial communication 
protocols.  Notice that thoughts are not serial, and generally not 
translatable into words without extreme loss of meaning.  Human 
languages presume sufficient "understanding" at the other end of the 
communication channel to reconstruct a model of what the original 
thought might have been.


So.  Lojban++ might be a good language for humans to communicate to an 
AI with, but it would be a lousy language in which to implement that 
same AI.  But even for this purpose the language needs a "verifier" to 
ensure that the correct forms are being followed.  Ideally such a 
verifier would paraphrase the statement that it was parsing and emit 
back to the sender either an error message, or the paraphrased 
sentence.  Then the sender would check that the received sentence 
matched in meaning the sentence that was sent.  (N.B.:  The verifier 
only checks the formal properties of the language to ensure that they 
are followed.  It has no understanding, so it can't check the meaning.)
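
A minimal sketch of that round trip (toy grammar and vocabulary invented here;
the point is only that the verifier checks form and echoes a paraphrase,
leaving the meaning check to the sender):

VOCAB = {"subject": {"I", "robot"}, "verb": {"see", "move"}, "object": {"man", "box"}}

def verify(sentence):
    words = sentence.split()
    if len(words) != 3:
        return "ERROR: expected exactly: subject verb object"
    s, v, o = words
    if s not in VOCAB["subject"] or v not in VOCAB["verb"] or o not in VOCAB["object"]:
        return "ERROR: word outside the formal vocabulary"
    # Formal check passed; echo a paraphrase for the sender to compare
    # against the intended meaning.  No understanding is involved.
    return 'PARAPHRASE: the %s does "%s" to the %s' % (s, v, o)

print(verify("I see man"))        # PARAPHRASE: the I does "see" to the man
print(verify("I saw the man"))    # ERROR: expected exactly: subject verb object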


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Jiri Jelinek
John,

> One of the major obstacles to real AI is the belief that knowledge of a
> natural language is necessary for intelligence.

I agree.  And it's IMO nearly impossible for AGI to learn/understand NL when 
its only info source is NL.  We get some extra [meta] data from our senses 
when learning NL (which NL itself wasn't "designed" to cover), and that extra 
info is often critical for plugging new concepts into our 
mental-model-of-the-world with all the important (ATM available) links to 
other concepts.  BTW, note that the ancient list of 5 senses (reportedly by 
Aristotle) is pretty obsolete.  We have a lot more than 5, and all of them 
help us to really understand NL-labeled and NL-not-covered concepts.  So, 
practically, you IMO either need a bunch of (appropriately processed) 
human-like senses (= LOTS of work for developers), OR (if it's a [mostly] 
text-I/O-based AI) a certain degree of formalization (higher than NL) for the 
input, to get the meta data needed for decent understanding.  The first 
alternative IMO requires resources most of us don't have, so I go with the 
second option.  Such systems need to learn a lot using some kind of formalized 
input = too much system-teaching for the dev team, and I don't think a typical 
user would be eager to learn Lojban-like languages (which I see some issues 
with when it comes to meaning digging anyway), so I think an extra step is 
needed to really get "the computer and the user to meet" in a user-acceptable 
way (not exactly "halfway").  As some of the above implies, languages get 
clumsy when describing certain types of concepts.  That's why in my "wannabe" 
AGI (which is still more on paper than in a version control system), I'm 
trying to design a user-AI interface that has a couple of specialized (but 
easy to use) editors in addition to the language itself.

BTW, a fellow coder just asked me "Can I borrow your eyes?".  Obviously, NL is 
a mess.  Sure, AGI should be able to learn it, but 1) to learn it well, it 
requires already having a significant & well-structured KB, and 2) there is a 
LOT of very important problem solving that does not require being fluent in 
any NL.

Matt,

> I guess the AI problem is solved, then.  I can already communicate with my
> computer using formal, unambiguous languages.  It already does a lot of
> things better than most humans, like arithmetic, chess, memorizing long
> lists and recalling them perfectly...

AI is, AGI isn't.  You are talking about domain-specific systems that are 
unable to build "mental models" useful for general problem solving.

Sorry I did not have a chance to read all the related posts so far... I'll 
definitely get back to it later.  This stuff is IMO really important for AGI.

Sincerely,
Jiri Jelinek

On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:







One of the major obstacles to real AI is the belief 
that knowledge of a natural language is necessary for 
intelligence.  A human-level intelligent system should be expected to 
have the ability to learn a natural language, but it is not necessary.  It 
is better to start with a formal language, with unambiguous formal 
syntax, as the primary interface between human beings and AI systems.  
This type of language could be called a "para-natural 
formal language."  It eliminates all of the syntactical ambiguity 
that makes competent use of a natural language so difficult to implement in an 
AI system.  Such a language would also be a member of the class "fifth 
generation computer language."
 
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]






Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread BillK

On 11/1/06, Charles D Hixson wrote:

So.  Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI.  But even for this purpose the language needs a "verifier" to
ensure that the correct forms are being followed.  Ideally such a
verifier would paraphrase the statement that it was parsing and emit
back to the sender either an error message, or the paraphrased
sentence.  Then the sender would check that the received sentence
matched in meaning the sentence that was sent.  (N.B.:  The verifier
only checks the formal properties of the language to ensure that they
are followed.  It has no understanding, so it can't check the meaning.)




This discussion reminds me of a story about the United Nations
assembly meetings.
Normally when a representative is speaking, all the translation staff
are jabbering away in tandem with the speaker.
But when the German representative starts speaking they all fall
silent and sit staring at him.

The reason is that they are waiting for the verb to come along.   :)

Billk

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread James Ratcliff
The AGI really does need to be able to read and write English or another 
natural language to be decently useful; people are just NOT going to learn, or 
be impressed by, a machine that spurts out something incoherent (which they 
already can do).

It is surprising how little actual semantic ambiguity there is in well-written 
language such as news articles, especially when you take into account the 
statistical information of English.  It may occur, but not often.  The 
telescope/man example is the most ambiguous, but consider even the other 
example:

"He hit the boy with the bat"

You can statistically show that "hitting with a bat" is frequent, and assume 
it was the tool used (a toy sketch of this ranking follows at the end of this 
message).  If not, and even so, the AI should model both scenarios as 
possible.  Most of these ambiguities are removed by the additional context 
sentences around them, or people should just be trained to avoid these 
ambiguities in writing -- but not to learn another language.

Even without the ambiguity of the texts discussed here, there is no easy 
formula for mapping English or other sentences directly into any sort of 
database, using FOL or anything else.  This is something I am working on and 
am interested in currently.  I am currently seeing how many simple statements 
can be pulled from current news articles into an AI information center.

James

Matt Mahoney <[EMAIL PROTECTED]> wrote:

Artificial languages that remove ambiguity like Lojban do not bring us any 
closer to solving the AI problem.  It is straightforward to convert between 
artificial languages and structured knowledge (e.g. first-order logic), but it 
is still a hard (AI-complete) problem to convert between natural and 
artificial languages.  If you could translate English -> Lojban -> English, 
then you could just as well translate, e.g., English -> Lojban -> Russian.  
Without a natural language model, you have no access to the vast knowledge 
base of the Internet, or most of the human race.  I know people can learn 
Lojban, just like they can learn CycL or LISP.  Let's not repeat these 
mistakes.  This is not training, it is programming a knowledge base.  This is 
narrow AI.

-- Matt Mahoney, [EMAIL PROTECTED]

Thank You
James Ratcliff
http://falazar.com
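
A toy sketch of that statistical move (the counts are invented; a real system
would estimate them from a large corpus), deciding whether "with the bat"
attaches to the verb or to the object noun:

# corpus counts for (head, "with", noun) -- invented numbers for illustration
counts = {
    ("hit", "with", "bat"): 950,    # hitting done with a bat: common
    ("boy", "with", "bat"): 40,     # a boy who has a bat: rarer
}

def attach(verb, obj, prep, noun):
    v = counts.get((verb, prep, noun), 0)   # instrument reading
    n = counts.get((obj, prep, noun), 0)    # possession reading
    # As the post says, an AI should keep both readings; this only ranks them.
    return "instrument (attach to verb)" if v >= n else "possession (attach to object)"

print(attach("hit", "boy", "with", "bat"))   # -> instrument (attach to verb)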


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread James Ratcliff
Forgot to add: there is a large amount of syntactic and word-sense ambiguity, 
but there are some programs out there that handle that to a remarkable extent 
as well, and I believe they can be improved upon.  And for many tasks, I don't 
see any reason not to have some back-and-forth feedback in the loop for the AI.

The "smartest" response to the "I saw the man with the telescope." sentence to 
me would be simply:

AI: "Did you have the telescope or did the man?"
or "Was the man holding the telescope?"

James Ratcliff

James Ratcliff <[EMAIL PROTECTED]> wrote:

The AGI really does need to be able to read and write English or another 
natural language to be decently useful; people are just NOT going to learn, or 
be impressed by, a machine that spurts out something incoherent (which they 
already can do).  It is surprising how little actual semantic ambiguity there 
is in well-written language such as news articles, especially when you take 
into account the statistical information of English.  It may occur, but not 
often.  The telescope/man example is the most ambiguous, but even the other 
example: "He hit the boy with the bat" -- you can statistically show that 
"hitting with a bat" is frequent, and assume it was the tool used.  If not, 
and even so, the AI should model both scenarios as possible.  Most of these 
ambiguities are removed by the additional context sentences around them, or 
people should just be trained to avoid these ambiguities in writing -- but not 
to learn another language.  Even without the ambiguity of the texts discussed 
here, there is no easy formula for mapping English or other sentences directly 
into any sort of database, using FOL or anything else.  This is something I am 
working on and am interested in currently.  I am currently seeing how many 
simple statements can be pulled from current news articles into an AI 
information center.

James

Matt Mahoney <[EMAIL PROTECTED]> wrote:

Artificial languages that remove ambiguity like Lojban do not bring us any 
closer to solving the AI problem.  It is straightforward to convert between 
artificial languages and structured knowledge (e.g. first-order logic), but it 
is still a hard (AI-complete) problem to convert between natural and 
artificial languages.  If you could translate English -> Lojban -> English, 
then you could just as well translate, e.g., English -> Lojban -> Russian.  
Without a natural language model, you have no access to the vast knowledge 
base of the Internet, or most of the human race.  I know people can learn 
Lojban, just like they can learn CycL or LISP.  Let's not repeat these 
mistakes.  This is not training, it is programming a knowledge base.  This is 
narrow AI.

-- Matt Mahoney, [EMAIL PROTECTED]

Thank You
James Ratcliff
http://falazar.com



This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Charles D Hixson

BillK wrote:

On 11/1/06, Charles D Hixson wrote:

So.  Lojban++ might be a good language for humans to communicate to an
AI with, but it would be a lousy language in which to implement that
same AI.  But even for this purpose the language needs a "verifier" to
ensure that the correct forms are being followed.  Ideally such a
verifier would paraphrase the statement that it was parsing and emit
back to the sender either an error message, or the paraphrased
sentence.  Then the sender would check that the received sentence
matched in meaning the sentence that was sent.  (N.B.:  The verifier
only checks the formal properties of the language to ensure that they
are followed.  It has no understanding, so it can't check the meaning.)




This discussion reminds me of a story about the United Nations
assembly meetings.
Normally when a representative is speaking, all the translation staff
are jabbering away in tandem with the speaker.
But when the German representative starts speaking they all fall
silent and sit staring at him.

The reason is that they are waiting for the verb to come along.   :)

Billk
Yeah, it wouldn't be ideal for rapid interaction.  But it would help 
people to maintain adherence to the formal rules, and to notice when 
they weren't.


If you don't have feedback of this nature, the language will evolve 
different rules, more similar to those of natural languages.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Gregory Johnson
Perhaps there is a shortcut to all of this.

Provide the AGI with the hardware and software to jack into one or more human
brains and let the bio-software of the human brain be the language interface development tool.

I think we are creating some of this hardware.

This also puts AGI in a position to become reliant on humans to
interface with other  humans and perhaps also allows an AGI to
learn the virtues of carbon technology and the value of
continuing relationships with humans.

Some of the drivers that bring humans together  such as social
relations and sexual relations perhaps can be learned by an AGI
and  perhaps we can pussywhip
an antisocial AGI into a friendly AGI.

Remember the KISS rule: sometimes you can focus only on key areas with
enormous complexity and later discover that the result is far simpler than
originally envisioned.

Morris

On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:







One of the major obstacles to real AI is the belief 
that knowledge of a natural language is necessary for 
intelligence.  A human-level intelligent system should be expected to 
have the ability to learn a natural language, but it is not necessary.  It 
is better to start with a formal language, with unambiguous formal 
syntax, as the primary interface between human beings and AI systems.  
This type of language could be called a "para-natural 
formal language."  It eliminates all of the syntactical ambiguity 
that makes competent use of a natural language so difficult to implement in an 
AI system.  Such a language would also be a member of the class "fifth 
generation computer language."
 
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]






Re: [agi] Natural versus formal AI interface languages

2006-11-01 Thread Mark Nuzzolilo II




- Original Message - 
From: Gregory Johnson

> Provide the AGI with the hardware and software to jack into one or more
> human brains and let the bio-software of the human brain be the language
> interface development tool.

Jacking into the human brain?  That is hardly a shortcut to AGI, if we are to 
invent AGI in the next 30 or 40 years.  We are a long way off from being able 
to use the human brain the way you mention.

> Some of the drivers that bring humans together, such as social relations
> and sexual relations, perhaps can be learned by an AGI, and perhaps we can
> pussywhip an antisocial AGI into a friendly AGI.

Could you elaborate on this?  I don't see the reliability of comparing an 
AGI's motivations with human motivation.

Mark N
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Eric Baum


Pei> (2) A true AGI should have the potential to learn any natural
Pei> language (though not necessarily to the level of native
Pei> speakers).

This embodies an implicit assumption about language which is worth
noting.

It is possible that the nature of natural language is such that humans
could not learn it if they did not have the key preprogrammed in
genetically.

Much data supports, and many authors would argue, that humans have
preprogrammed genetically a predisposition, what I would call a strong
inductive bias, to learn grammar of a certain type. It is likely that
they would be unable to learn grammar nearly as fast as they do
without it, indeed it might be computationally intractable even were
they given many lifetimes.

Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
learned. (This explains how we can learn 10 words a day, which
children routinely do.) An AGI might in principle be built on top of some other
conceptual structure, and have great difficulty comprehending human
words-- mapping them onto its concepts, much less learning them.

Moreover, it is worth noting the possibility that the amount of
computation that might in principle be necessary for learning a
"natural language" can't be bounded as one might think.
Historically, natural language was a creation of evolution (or of
evolution plus human ingenuity, but since humans were a creation of
evolution, and in my view evolution may often work by creating mechanisms
that then lead to "or make" other discoveries, we can just consider
this for some purposes as a creation of evolution.)
Thus, you might posit that the amount of computation necessary for 
learning a natural language is bounded by the (truly vast) amount of 
computation that evolution could have devoted to the problem.
*But this does not follow*. 
Evolution did not "learn" natural language; it created it.
To the extent that language is an encryption system, evolution 
thus *chose* the encryption key, it did not have to decrypt it.
Thus in principle at least, learning a natural language without being 
given the key could be a very hard problem indeed, not something that
even evolution would have been capable of.

This is discussed in more detail in What is Thought?, ch 12 I believe.

Eric Baum

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread James Ratcliff
That's a totally different problem, and considering the massive hole in our 
current knowledge about how the human brain works, we would have some major 
problems in that area, though it is interesting.  One other problem there: 
what about two-way communications?  You are proposing to have the brain talk 
to the AGI and the AGI to the brain, correct?  You would need to put some 
pretty strict measures on exactly what and where those communications would 
go, because there are already some proven projects where they have computer 
circuits activating human parts such as arms or legs.  But if you turn a full 
AGI loose in someone's brain, there is no guarantee of friendliness, and there 
is a possibility it could grow in ability to tap into all other areas of the 
body, physically and mentally, and that would be really bad.

Interesting research on brain interactions though:
http://news.bbc.co.uk/2/hi/technology/3485918.stm
http://news.bbc.co.uk/2/hi/science/nature/1871803.stm

The last one is about a great project that actually embeds a circuit in a 
monkey's brain; they then go through a number of experiments, including 
playing a video game for rewards.  About two minutes into one session the 
female monkey just stops moving the joystick in her hand and keeps playing the 
game with her mind alone; she had noticed that she didn't need to use the 
joystick, just think about it.

This is what I want now for typing papers and programming, and it could also 
possibly be used as a teaching tool for AI: just start thinking all the 
information you know into the computer.  Basically this should work very 
similarly -- by thinking about typing, your brain could send the 'letters' to 
the computer instantly, just about as fast as you could think them.  It will 
be interesting to see how accurate and how fast that would actually be, and 
whether you could then transfer that upwards into words instead of just 
letters.

James Ratcliff

Gregory Johnson <[EMAIL PROTECTED]> wrote:

Perhaps there is a shortcut to all of this.  Provide the AGI with the hardware 
and software to jack into one or more human brains and let the bio-software of 
the human brain be the language interface development tool.  I think we are 
creating some of this hardware.  This also puts AGI in a position to become 
reliant on humans to interface with other humans, and perhaps also allows an 
AGI to learn the virtues of carbon technology and the value of continuing 
relationships with humans.  Some of the drivers that bring humans together, 
such as social relations and sexual relations, perhaps can be learned by an 
AGI, and perhaps we can pussywhip an antisocial AGI into a friendly AGI.  
Remember the KISS rule: sometimes you can focus only on key areas with 
enormous complexity and later discover that the result is far simpler than 
originally envisioned.

Morris

On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:

One of the major obstacles to real AI is the belief that knowledge of a 
natural language is necessary for intelligence.  A human-level intelligent 
system should be expected to have the ability to learn a natural language, but 
it is not necessary.  It is better to start with a formal language, with 
unambiguous formal syntax, as the primary interface between human beings and 
AI systems.  This type of language could be called a "para-natural formal 
language."  It eliminates all of the syntactical ambiguity that makes 
competent use of a natural language so difficult to implement in an AI system.  
Such a language would also be a member of the class "fifth generation computer 
language."

Thank You
James Ratcliff
http://falazar.com


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Russell Wallace
On 10/31/06, John Scanlon <[EMAIL PROTECTED]> wrote:







One of the major obstacles to real AI is the belief 
that knowledge of a natural language is necessary for 
intelligence.  A human-level intelligent system should be expected to 
have the ability to learn a natural language, but it is not necessary.  It 
is better to start with a formal language, with unambiguous formal 
syntax, as the primary interface between human beings and AI systems.  
This type of language could be called a "para-natural 
formal language."  It eliminates all of the syntactical ambiguity 
that makes competent use of a natural language so difficult to implement in an 
AI system.

Syntactic ambiguity isn't the problem.  The reason computers don't understand 
English is nothing to do with syntax, it's because they don't understand the 
world.

It's easy to parse "The cat sat on the mat" into

  sit
  cat
  on
  mat
  past

But the computer still doesn't understand the sentence, because it doesn't 
know what cats, mats and the act of sitting _are_.  (The best test of such 
understanding is not language - it's having the computer draw an animation of 
the action.)

> Such a language would also be a member of the class "fifth generation
> computer language."

It might form the basis of one, but the hard part would be designing and 
implementing the functionality, the knowledge, that would need to be shipped 
with the language to make it useful.
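
To underline the cat-and-mat point: producing that parsed structure really is
trivial -- a toy pattern-matcher is enough -- and nothing in the output tells
the machine what sitting, cats or mats are.  A minimal sketch (mine, purely
illustrative):

def toy_parse(sentence):
    # Handles only "The X sat on the Y." -- just enough to show the output shape.
    words = sentence.rstrip(".").split()
    assert words[0] == "The" and words[2:5] == ["sat", "on", "the"]
    return {"verb": "sit", "tense": "past", "agent": words[1], "location": words[5]}

print(toy_parse("The cat sat on the mat."))
# {'verb': 'sit', 'tense': 'past', 'agent': 'cat', 'location': 'mat'}
# The structure is all there; the grounding is not.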


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


RE: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Jef Allbright
Russell Wallace wrote: 
 
> Syntactic ambiguity isn't the problem. The reason computers don't 
> understand English is nothing to do with syntax, it's because they  
> don't understand the world.

 >  It's easy to parse "The cat sat on the mat" into 

 >  
 > sit 
 > cat 
 > on 
 > mat 
 > past  
 >  

> But the computer still doesn't understand the sentence, because it 
> doesn't know what cats, mats and the act of sitting _are_. (The best 
> test of such understanding is not language - it's having the 
> computer draw an animation of the action.) 

Russell, I agree, but it might be clearer if we point out that humans
don't understand the world either. We just process these symbols within
a more encompassing context.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Pei Wang

On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:



Pei> (2) A true AGI should have the potential to learn any natural
Pei> language (though not necessarily to the level of native
Pei> speakers).

This embodies an implicit assumption about language which is worth
noting.

It is possible that the nature of natural language is such that humans
could not learn it if they did not have the key preprogrammed in
genetically.

Much data supports, and many authors would argue, that humans have
preprogrammed genetically a predisposition, what I would call a strong
inductive bias, to learn grammar of a certain type. It is likely that
they would be unable to learn grammar nearly as fast as they do
without it, indeed it might be computationally intractable even were
they given many lifetimes.


I agree in general. The issue is about the nature of this key, and
whether it is specific to grammar learning only.


Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
learned. (This explains how we can learn 10 words a day, which
children routinely do.) An AGI might in principle be built on top of some other
conceptual structure, and have great difficulty comprehending human
words-- mapping them onto its concepts, much less learning them.


I think any AGI will need the ability to (1) use mental entities
(concepts) to summarize percepts and actions, and (2) use concepts
to extend past experience to new situations (reasoning). In this
sense, the categorization/learning/reasoning (thinking) mechanisms of
different AGIs may be very similar to each other, while the contents
of their conceptual structures are very different, due to the
differences in their sensors and effectors, as well as environments.

To me, language learning isn't carried out by a separate mechanism,
but by the general thinking process, since the task is the same: using
certain concepts (words, phrase, sentences, ...) in the places of
other concepts (mental images, internalized actions, as well as their
general and compound forms).

In summary, as far as the processing mechanism is concerned, any AGI
should have the power to learn any language. However, without a human
body and human experience, I don't think it will ever be able to use
the language as a native speaker. It will learn and comprehend the
word "cat" to an extent, though never the same as a human being ---
even human beings don't have it exactly the same way.

Of course, for any concrete language, it is probably always possible
to develop a special-purpose mechanism, which will handle the language
better than an AGI. As far as efficiency is concerned, I don't know
how much difference it will make.


Moreover, it is worth noting the possibility that the amount of
computation that might in principle be necessary for learning a
"natural language" can't be bounded as one might think.
Historically, natural language was a creation of evolution (or of
evolution plus human ingenuity, but since humans were a creation of
evolution, and in my view evolution may often work by creating mechanisms
that then lead to "or make" other discoveries, we can just consider
this for some purposes as a creation of evolution.)
Thus, you might posit that the amount of computation necessary for
learning a natural language is bounded by the (truly vast) amount of
computation that evolution could have devoted to the problem.
*But this does not follow*.
Evolution did not "learn" natural language; it created it.
To the extent that language is an encryption system, evolution
thus *chose* the encryption key, it did not have to decrypt it.


Well, I'd rather not take language as an encryption system, in the
sense that each word and sentence has a "true meaning", independent of
the language, and that to learn the language means to build a mapping
between words and their denotations. This semantics, to me, is the
root of many problems in language learning.


Thus in principle at least, learning a natural language without being
given the key could be a very hard problem indeed, not something that
even evolution would have been capable of.


Again, I fully agree that there is a "key", but I don't think of it in
the sense of an encryption key.


This is discussed in more detail in What is Thought?, ch 12 I believe.


I agree with many points you made about how the human mind gets its
ability. However, I'm still not convinced that an AGI must take the
same path. To me, an AGI only needs to be similar to the human mind in
certain (though important) aspects, rather than in all aspects,
therefore how the human mind got here is not necessarily the most
efficient way for an AGI to be designed.

Pei Wang


Eric Baum

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]




Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Matt Mahoney
- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages

>Here is how I intend to use Lojban++ in teaching Novamente.  When
>Novamente is controlling a humanoid agent in the AGISim simulation
>world, the human teacher talks to it about what it is doing.  I would
>like the human teacher to talk to it in both Lojban++ and English, at
>the same time.  According to my understanding of Novamente's learning
>and reasoning methods, this will be the optimal way of getting the
>system to understand English.  At once, the system will get a
>perceptual-motor grounding for the English sentences, plus an
>understanding of the logical meaning of the sentences.  I can think of
>no better way to help a system understand English.  Yes, this is not
>the way humans do it. But so what?  Novamente does not have a human
>brain, it has a different sort of infrastructure with different
>strengths and weaknesses.

What about using "baby English" instead of an artificial language?  By this I 
mean simple English at the level of a 2 or 3 year old child.  Baby English has 
many of the properties that make artificial languages desirable, such as a 
small vocabulary, simple syntax and lack of ambiguity.  Adult English is 
ambiguous because adults can use vast knowledge and context to resolve 
ambiguity in complex sentences.  Children lack these abilities.

I don't believe it is possible to map between natural and structured language 
without solving the natural language modeling problem first.  I don't believe 
that having structured knowledge or a structured language available makes the 
problem any easier.  It is just something else to learn.  Humans learn natural 
language without having to learn structured languages, grammar rules, knowledge 
representation, etc.  I realize that Novamente is different from the human 
brain.  My argument is based on the structure of natural language, which is 
vastly different from artificial languages used for knowledge representation.  
To wit:

- Artificial languages are designed to be processed (translated or compiled) in 
the order: lexical tokenization, syntactic parsing, semantic extraction.  This 
does not work for natural language.  The correct order is the order in which 
children learn: lexical, semantics, syntax.  Thus we have successful language 
models that extract semantics without syntax (such as information retrieval and 
text categorization), but not vice versa.

- Artificial language has a structure optimized for serial processing.  Natural 
language is optimized for parallel processing.  We resolve ambiguity and errors 
using context.  Context detection is a type of parallel pattern recognition.  
Patterns can be letters, groups of letters, words, word categories, phrases, 
and syntactic structures.  We recognize and combine perhaps tens or hundreds of 
patterns simultaneously by matching to perhaps 10^5 or more from memory.  
Artificial languages have no such mechanism and cannot tolerate ambiguity or 
errors.

- Natural language has a structure that allows incremental learning.  We can 
add words to the vocabulary one at a time.  Likewise for phrases, idioms, 
classes of words and syntactic structures.  Artificial languages must be 
processed by fixed algorithms.  Learning algorithms are unknown.

- Natural languages evolve slowly in a social environment.  Artificial 
languages are fixed according to some specification.

- Children can learn natural languages.  Artificial languages are difficult to 
learn even for adults.

- Writing in an artificial language is an iterative process in which the output 
is checked for errors by a computer and the utterance is revised.  Natural 
language uses both iterative and forward error correction.

By "natural language" I include man made languages like Esperanto.  Esperanto 
was designed for communication between humans and has all the other properties 
of natural language.  It lacks irregular verbs and such, but this is really a 
tiny part of a language's complexity.  A natural language like English has a 
complexity of about 10^9 bits.  How much information does it take to list all 
the irregularities in English like swim-swam, mouse-mice, etc?
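
A rough sanity check of that claim, under the illustrative assumption of a few thousand irregular forms at a few dozen characters each (these figures are guesses, not measurements from the thread):

    # Back-of-envelope estimate (assumed figures): how many bits would it
    # take just to list English irregularities like swim-swam, mouse-mice?
    irregular_entries = 5000      # assumed count of irregular forms
    chars_per_entry = 20          # assumed, e.g. "mouse -> mice"
    bits_per_char = 8             # plain ASCII, no compression
    listing_bits = irregular_entries * chars_per_entry * bits_per_char
    language_bits = 10**9         # the estimate for English used above
    print(listing_bits, "bits,", 100.0 * listing_bits / language_bits, "% of the total")
    # prints: 800000 bits, 0.08 % of the total -- a tiny part, as claimed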
 
-- Matt Mahoney, [EMAIL PROTECTED]




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Eliezer S. Yudkowsky

Pei Wang wrote:

On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:


Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
learned. (This explains how we can learn 10 words a day, which
children routinely do.) An AGI might in principle be built on top of 
some other

conceptual structure, and have great difficulty comprehending human
words-- mapping them onto its concepts, much less learning them.


I think any AGI will need the ability to (1) using mental entities
(concepts) to summarize percepts and actions, and (2) using concepts
to extend past experience to new situations (reasoning). In this
sense, the categorization/learning/reasoning (thinking) mechanisms of
different AGIs may be very similar to each other, while the contents
of their conceptual structures are very different, due to the
differences in their sensors and effectors, as well as environments.


Pei, I suspect that what Baum is talking about is - metaphorically 
speaking - the problem of an AI that runs on SVD talking to an AI that 
runs on SVM.  (Singular Value Decomposition vs. Support Vector 
Machines.)  Or the ability of an AI that runs on latent-structure Bayes 
nets to exchange concepts with an AI that runs on decision trees. 
Different AIs may carve up reality along different lines, so that even 
if they label their concepts, it may take considerable extra computing 
power for one of them to learn the other's concepts - it may not be 
"natural" to them.  They may not be working in the same space of easily 
learnable concepts.  Of course these examples are strictly metaphorical. 
 But the point is that human concepts may not correspond to anything 
that an AI can *natively* learn and *natively* process.


And when you think about running the process in reverse - trying to get 
a human to learn the AI's native language - then the problem is even 
worse.  We'd have to modify the AI's concept-learning mechanisms to only 
learn humanly-learnable concepts.  Because there's no way the humans can 
modify themselves, or run enough sequential serial operations, to 
understand the concepts that would be natural to an AI that used its 
computing power in the most efficient way.


A superintelligence, or a sufficiently self-modifying AI, should not be 
balked by English.  A superintelligence should carve up reality into 
sufficiently fine grains that it can learn any concept computable by our 
much smaller minds, unless P != NP and the concepts are genuinely 
encrypted.  And a self-modifying AI should be able to natively run 
whatever it likes.  This point, however, Baum may not agree with.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Pei Wang

On 11/2/06, Eliezer S. Yudkowsky <[EMAIL PROTECTED]> wrote:

Pei Wang wrote:
> On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:
>
>> Moreover, I argue that language is built on top of a heavy inductive
>> bias to develop a certain conceptual structure, which then renders the
>> names of concepts highly salient so that they can be readily
>> learned. (This explains how we can learn 10 words a day, which
>> children routinely do.) An AGI might in principle be built on top of
>> some other
>> conceptual structure, and have great difficulty comprehending human
>> words-- mapping them onto its concepts, much less learning them.
>
> I think any AGI will need the ability to (1) using mental entities
> (concepts) to summarize percepts and actions, and (2) using concepts
> to extend past experience to new situations (reasoning). In this
> sense, the categorization/learning/reasoning (thinking) mechanisms of
> different AGIs may be very similar to each other, while the contents
> of their conceptual structures are very different, due to the
> differences in their sensors and effectors, as well as environments.

Pei, I suspect that what Baum is talking about is - metaphorically
speaking - the problem of an AI that runs on SVD talking to an AI that
runs on SVM.  (Singular Value Decomposition vs. Support Vector
Machines.)  Or the ability of an AI that runs on latent-structure Bayes
nets to exchange concepts with an AI that runs on decision trees.
Different AIs may carve up reality along different lines, so that even
if they label their concepts, it may take considerable extra computing
power for one of them to learn the other's concepts - it may not be
"natural" to them.  They may not be working in the same space of easily
learnable concepts.  Of course these examples are strictly metaphorical.
  But the point is that human concepts may not correspond to anything
that an AI can *natively* learn and *natively* process.


That is why I tried to distinguish "content" from "mechanism" --- a
robot with sonar as the only sensor and wheels as the only effectors
surely won't categorize the environment in our concepts. However, I
tend to believe that the relations among the robot's concepts are more
or less what I call "inheritance", "similarity", and so on, and its
reasoning rules are not that different from the ones we use.

Can we understand such a language? I'd say "yes, to a certain extent,
though not fully", as far as there are ways for our experience to be
related to that of the robot.


A superintelligence, or a sufficiently self-modifying AI, should not be
balked by English.  A superintelligence should carve up reality into
sufficiently fine grains that it can learn any concept computable by our
much smaller minds, unless P != NP and the concepts are genuinely
encrypted.  And a self-modifying AI should be able to natively run
whatever it likes.  This point, however, Baum may not agree with.


I'm afraid that there are no "sufficiently fine grains" that can serve
as the common "atoms" of different sensorimotor systems. They may
categorize the same environment in incompatible ways, which cannot be
reduced to a common language with "more detailed" concepts.

Pei


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Lukasz Kaiser

Hi.


What about using "baby English" instead of an artificial language?


That seems to be good for experiments, but unluckily it does not seem to
have the benefits of real natural language, as there is neither a big body
of text written in baby English nor many people wanting to speak it to a machine.


- Artificial languages are designed to be processed (translated or compiled) in 
the order:
lexical tokenization, syntactic parsing, semantic extraction.


That is mostly true, but more and more artificial languages start to mix parsing
and semantics. Perhaps these should be called semi-artificial? But anyway,
what I have in mind are mostly programming or specification languages,
these feel a lot more artificial than Lobjan or Esperanto.


This does not work for natural language.  The correct order is the order in 
which
children learn: lexical, semantics, syntax.


I strongly disagree with this "correct order". First of all, the stages are all
concurrent; there are great experimental difficulties in research on children, and
large differences between (a) what children understand and what they produce,
(b) cultures and nationalities, and even (c) individual children. If you insist on
giving any order, I'd rather say that semantics is there even before the lexical
part, and in many cases you are compelled to say that language structure ("syntax")
comes before the lexical part (what I have in mind is that many children first
learn the melody of the language and only later the words; you can sometimes hear
them hold full, very convincing conversations before they master the pronunciation
of all consonants).


Artificial languages have no such mechanism and cannot tolerate ambiguity or 
errors.


This improved a bit in recent years. For example you can take a look at
Attempto Controlled English (http://www.ifi.unizh.ch/attempto/) - it's a formal
language translated to first-order logic, but resolves anaphoric references and
some other context-related things.


Artificial languages must be processed by fixed algorithms.


Fixed algorithms are only as fixed as the mind of the programmer. There
are modern formal languages that add new syntax with each declaration.
For example you can define a new grammatical rule for each new function
or constructor you declare. This is possible, is starting to be done, and seems
to have nothing to do with fixed algorithms.
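
As a minimal sketch of that idea (in Python, with invented syntax, not modeled on any particular real language), here is a parser whose rule set is extended by the declarations it reads:

    # Toy grammar that grows at parse time: "def NAME/ARITY" declares a new
    # prefix form, which later lines may then use.  Purely illustrative.
    rules = {"+": 2, "*": 2}            # initial grammar: two binary operators

    def parse(tokens):
        head = tokens.pop(0)
        if head in rules:               # known form: read as many arguments as its arity
            return [head] + [parse(tokens) for _ in range(rules[head])]
        return head                     # otherwise a plain atom

    def read_line(line):
        tokens = line.split()
        if tokens[0] == "def":          # a declaration extends the grammar itself
            name, arity = tokens[1].split("/")
            rules[name] = int(arity)
            return ("new rule", name, int(arity))
        return parse(tokens)

    print(read_line("+ 1 2"))           # ['+', '1', '2']
    print(read_line("def pair/2"))      # the grammar now has a new form
    print(read_line("pair x + y z"))    # ['pair', 'x', ['+', 'y', 'z']]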


Learning algorithms are unknown.


Well, learning grammar rules is very well known. I think it was even used
for learning regular-expression translators for languages a few decades ago.
Learning more complex things, such as context dependency, is more difficult, but
it is quite an active area of research and many algorithms are known, even if
none of them is perfect or good enough for natural language on the web.


- Writing in an artificial language is an iterative process in which the output 
is checked
for errors by a computer and the utterance is revised.  Natural language uses 
both
iterative and forward error correction.


Error correction in artificial languages can be thought of as similar to asking
additional questions to understand a statement in natural language. There is an
interaction going on in both cases. It is just that our programming languages have
very bad manners and report errors in horrible ways, but that's another problem.

With these arguments I do not want to convince you that artificial languages are
getting near the natural ones in ease of use, as this unluckily does not seem to
be the case. But we are getting better at the user-friendliness of artificial
(formal) languages, adding features like context dependency, dynamic grammar
learning, and better interaction for disambiguation and error correction. My
point is that even with all these features our formal languages still feel very
unnatural, and it seems this is because the _underlying semantic representation_
is not the one we consider natural. And I really doubt that you can create a
language that will feel natural without first working a lot on what it will be
translated to, and getting that at least more or less right. What do you think?

- lk

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Eric Baum


Eliezer> unless P != NP and the concepts are genuinely encrypted.  And

I am of course assuming P != NP, which seems to me a safe assumption.
If P = NP, and mind exploits that fact (which I don't believe) then 
we are at a serious handicap in producing an AGI till we understand
why P = NP, but it will become a lot easier afterward!  

I'm not, of course, saying that there was some "intent" or
evolutionary advantage to encryption, just that it very naturally
occurs. Evolution picks a grammar bias, for example. One is as good as
another, more or less, so it picks one. The AGI doesn't get the
privilege, though, it has to solve a learning problem, and such
learning problems are mostly known to be NP-hard. (We might, of
course, give it the grammar bias, rather than requiring it to learn
it, but alas, we don't know how to describe it... linguists study this
problem, but it has been too hard to solve...)

So Pei's comments are in some sense wishes. To be charitable--
maybe I should say beliefs supported by his experience.
But they are not established facts. It remains a possibility,
supported by reasonable evidence,
that language learning may be an intractable additional step
on top of building a program achieving other aspects of intelligence.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Matt Mahoney
I don't know enough about Novamente to say if your approach would work.  Using 
an artificial language as part of the environment (as opposed to a substitute 
for natural language) does seem to make sense.

I think an interesting goal would be to teach an AGI to write software.  If I 
understand your explanation, this is the same problem.  I want to teach the AGI 
two languages (English and x86-64 machine code), one to talk to me and the 
other to define its environment.  I would like to say to the AGI, "write a 
program to print the numbers 1 through 100", "are there any security flaws in 
this web browser?" and ultimately, "write a program like yourself, but smarter".

This is obviously a hard problem, even if I substitute a more "English-like" 
programming language like COBOL.  To solve the first example, the AGI needs an 
adult level understanding of English and arithmetic.  To solve the second, it 
needs a comprehensive world model, including an understanding of how people 
think and the things they can experience.  (If an embedded image can set a 
cookie, is this a security flaw?).  When it can solve the third, we are in 
trouble (topic for another list).

How could such an AGI be built?   What would be its architecture?  What 
learning algorithm?  What training data?  What computational cost?
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 2, 2006 3:45:42 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages

Yes, teaching an AI in Esperanto would make more sense than teaching
it in English ... but, would not serve the same purpose as teaching it
in Lojban++ and a natural language in parallel...

In fact, an ideal educational programme would probably be to use, in parallel

-- an Esperanto-based, rather than English-based, version of  Lojban++
-- Esperanto

However, I hasten to emphasize that this whole discussion is (IMO)
largely peripheral to AGI.

The main point is to get the learning algorithms and knowledge
representation mechanisms right.  (Or if the learning algorithm learns
its own KR's, that's fine too...).  Once one has what seems like a
workable learning/representation framework, THEN one starts talking
about the right educational programme.  Discussing education in the
absence of an understanding of internal learning algorithms is perhaps
confusing...

Before developing Novamente in detail, I would not have liked the idea
of using Lojban++ to help teach an AGI, for much the same reasons that
you are now complaining.

But now, given the specifics of the Novamente system, it turns out
that this approach may actually make teaching the system considerably
easier -- and make the system more rapidly approach the point where it
can rapidly learn natural language on its own.

To use Eric Baum's language, it may be that by interacting with the
system in Lojban++, we human teachers can supply the baby Novamente
with much of the "inductive bias" that humans are born with, and that
helps us humans to learn natural languages so relatively easily...

I guess that's a good way to put it.  Not that learning Lojban++ is a
substitute for learning English, rather that the knowledge gained via
interaction in Lojban++ may be a substitute for human babies'
language-focused and spacetime-focused inductive bias.

Of course, Lojban++ can be used in this way **only** with AGI systems
that combine
-- a robust reinforcement learning capability
-- an explicitly logic-based knowledge representation

But Novamente does combine these two factors.

I don't expect to convince you that this approach is a good one, but
perhaps I have made my motivations clearer, at any rate.  I am
appreciating this conversation, as it is pushing me to verbally
articulate my views more clearly than I had done before.

-- Ben G



On 11/2/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> - Original Message 
> From: Ben Goertzel <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Tuesday, October 31, 2006 9:26:15 PM
> Subject: Re: Re: [agi] Natural versus formal AI interface languages
>
> >Here is how I intend to use Lojban++ in teaching Novamente.  When
> >Novamente is controlling a humanoid agent in the AGISim simulation
> >world, the human teacher talks to it about what it is doing.  I would
> >like the human teacher to talk to it in both Lojban++ and English, at
> >the same time.  According to my understanding of Novamente's learning
> >and reasoning methods, this will be the optimal way of getting the
> >system to understand English.  At once, the system will get a
> >perceptual-motor grounding for the English sentences, plus an
> >understanding of the logical meaning of the sentences.  I can think of
> >no better way to hel

Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Pei Wang

On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:


So Pei's comments are in some sense wishes. To be charitable--
maybe I should say beliefs supported by his experience.
But they are not established facts. It remains a possibility,
supported by reasonable evidence,
that language learning may be an intractable additional step
on top of building a program achieving other aspects of intelligence.


Of course you are right. We have no facts about AGI until someone builds
one and convinces the others that it is indeed an AGI, which may take
longer than the former step. ;-)

As I mentioned before, I haven't done any actual experiment in
language learning yet, so my beliefs on this topic have relatively low
confidence compared to some of my other beliefs. I'm just not
convinced by the arguments about their impossibility. For example, I
don't think we know a system that is intelligent in every sense, but
cannot understand a human language, even after a reasonably long
training period.

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Richard Loosemore

Eliezer S. Yudkowsky wrote:

Pei Wang wrote:

On 11/2/06, Eric Baum <[EMAIL PROTECTED]> wrote:


Moreover, I argue that language is built on top of a heavy inductive
bias to develop a certain conceptual structure, which then renders the
names of concepts highly salient so that they can be readily
learned. (This explains how we can learn 10 words a day, which
children routinely do.) An AGI might in principle be built on top of 
some other

conceptual structure, and have great difficulty comprehending human
words-- mapping them onto its concepts, much less learning them.


I think any AGI will need the ability to (1) using mental entities
(concepts) to summarize percepts and actions, and (2) using concepts
to extend past experience to new situations (reasoning). In this
sense, the categorization/learning/reasoning (thinking) mechanisms of
different AGIs may be very similar to each other, while the contents
of their conceptual structures are very different, due to the
differences in their sensors and effectors, as well as environments.


Pei, I suspect that what Baum is talking about is - metaphorically 
speaking - the problem of an AI that runs on SVD talking to an AI that 
runs on SVM.  (Singular Value Decomposition vs. Support Vector 
Machines.)  Or the ability of an AI that runs on latent-structure Bayes 
nets to exchange concepts with an AI that runs on decision trees. 
Different AIs may carve up reality along different lines, so that even 
if they label their concepts, it may take considerable extra computing 
power for one of them to learn the other's concepts - it may not be 
"natural" to them.  They may not be working in the same space of easily 
learnable concepts.  Of course these examples are strictly metaphorical. 
 But the point is that human concepts may not correspond to anything 
that an AI can *natively* learn and *natively* process.


And when you think about running the process in reverse - trying to get 
a human to learn the AI's native language - then the problem is even 
worse.  We'd have to modify the AI's concept-learning mechanisms to only 
learn humanly-learnable concepts.  Because there's no way the humans can 
modify themselves, or run enough sequential serial operations, to 
understand the concepts that would be natural to an AI that used its 
computing power in the most efficient way.


A superintelligence, or a sufficiently self-modifying AI, should not be 
balked by English.  A superintelligence should carve up reality into 
sufficiently fine grains that it can learn any concept computable by our 
much smaller minds, unless P != NP and the concepts are genuinely 
encrypted.  And a self-modifying AI should be able to natively run 
whatever it likes.  This point, however, Baum may not agree with.




This is just speculation.

It is believable that different systems may have trouble if their 
experiences do not overlap (a Japanese friend of mine had great trouble 
with our conversations, even though her knowledge of the language per se 
was extremely good ... too many cultural references to British TV shows 
in typical Brit-speak).


But the idea that there might be an effect due to the design of the 
thinking mechanism is not based on any evidence that I can see.  For all 
we know, the concepts will converge if experiences are the same.


Your conclusions therefore do not follow.


Richard Loosemore







-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-03 Thread James Ratcliff
Jef,

Even given a hand-created, checked and correct, small but comprehensive knowledge representation of the sample world, it is STILL not a trivial effort to get sentences from the complicated form of English into some computer-processable format.  The cat example you gave is unfortunately not the norm.  An example from the texts I am working with is:

"A draft United Nations resolution calling for sanctions on Iran has been dealt a severe blow by China and Russia and, given the absence of any evidence of nuclear-weapons proliferation by Iran, the momentum for UN action against Iran has begun to fizzle."

This is much harder to parse and put into machine-readable format.  There are two major interconnecting issues here, the natural language processing and the knowledge representation, which unfortunately rely very heavily on each other and must be solved together.

James

Jef Allbright <[EMAIL PROTECTED]> wrote:

Russell Wallace wrote:
> Syntactic ambiguity isn't the problem.  The reason computers don't
> understand English is nothing to do with syntax, it's because they
> don't understand the world.
>
> It's easy to parse "The cat sat on the mat" into
>
>   sit
>   cat
>   on
>   mat
>   past
>
> But the computer still doesn't understand the sentence, because it
> doesn't know what cats, mats and the act of sitting _are_.  (The best
> test of such understanding is not language - it's having the
> computer draw an animation of the action.)

Russell, I agree, but it might be clearer if we point out that humans
don't understand the world either.  We just process these symbols within
a more encompassing context.

- Jef

Thank You
James Ratcliff
http://falazar.com


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread James Ratcliff
Not necessarily children's language, as they have their own problems and often use the wrong words and rules of grammar, but a simplified English, a reduced rule set.  Something like no compound sentences, for a start.  I believe most everything can be written without compound sentences, and that would greatly reduce the processing complexity.  Also anaphora resolution as a part of the language rules, so if you reference something in one place it will stay the same throughout the section.

It's not quite as natural, but could be understood simply enough by humans as well as computers.  One problem I have with all of this is the super-flowery writing style of cramming as many words and complex topics as possible into one sentence.

James

Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Richard Loosemore

James Ratcliff wrote:
Not necessarily children's language, as they have their own problems and 
often use the wrong words and rules of grammar, but a simplified 
English, a reduced rule set.
 Something like no compound sentences for a start.  I believe most 
everything can be written without compound sentences, and that would 
greatly reduce the processing complexity,
and anaphora resolution as a part of the language rules, so if you 
reference something in one place it will stay the same throughout the 
section.


Its not quite as natural, but could be understood simply enough by 
humans as well as computers.
One problem I have with all of this, is the super-flowery writing styles 
of cramming as many words and complex topics all into one sentence.


This is a question directed at this whole thread, about simplifying 
language to communicate with an AI system, so we can at least get 
something working, and then go from there


This rationale is the very same rationale that drove researchers into 
Blocks World programs.  Winograd and SHRDLU, etc.  It was a mistake 
then:  it is surely just as much of a mistake now.




Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Matt Mahoney
I think SHRDLU (Blocks World) would have been more interesting if the language 
model was learned rather than programmed.  There is an important lesson here, 
and Winograd knew it: this route is a dead end.  Adult English has a complexity 
of about 10^9 bits (my estimate).  SHRDLU has a complexity of less than 7 x 
10^5 bits.  (I measured the upper bound by compressing the source code from 
http://hci.stanford.edu/winograd/shrdlu/code/ with paq8f).  One lesson I hope 
we learned is that there is no shortcut around complexity.  We have tried that 
route for 50 years.  There is no "simple" algorithm for AGI.  OpenCyc 1.0 has a 
download size (zip) of 147 MB.
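
The measurement itself is easy to reproduce in a few lines; the sketch below uses the standard-library lzma compressor as a stand-in for paq8f (so the bound it gives is looser), and the path in the comment is only a placeholder:

    # Upper-bound a program's complexity, in bits, by compressing its source.
    # lzma is weaker than paq8f, so this overestimates somewhat.
    import lzma, os

    def complexity_upper_bound_bits(root):
        blob = bytearray()
        for dirpath, _, filenames in os.walk(root):
            for name in sorted(filenames):
                with open(os.path.join(dirpath, name), "rb") as f:
                    blob += f.read()
        return 8 * len(lzma.compress(bytes(blob), preset=9))

    # e.g. print(complexity_upper_bound_bits("shrdlu/code"))   # placeholder path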

It does not help that words in SHRDLU are grounded in an artificial world.  Its 
failure to scale hints that approaches such as AGI-Sim will have similar 
problems.  You cannot simulate complexity.  I learned this not from studying 
language, but from my dissertation work in a seemingly unrelated area: network 
intrusion detection.  In 1998 and 1999 MIT Lincoln Labs and DARPA developed a 
data set of simulated network traffic with various simulated attacks and ran 
contests to see which intrusion detection systems were best at detecting them.  
They spent probably millions of dollars trying to make the traffic seem 
realistic as possible, simulating hundreds of machines on a local network and 
thousands more on the Internet, generating fake email using word bigram models, 
web page downloads from public sites, etc, based on studies of real traffic.  
My approach was to use anomaly detection - model normal traffic and flag 
anything unusual as suspicious.  The problem turned out to be ridiculously 
easy: look at the first few dozen bytes of each network packet and flag any 
byte value you haven't seen before in that position.  It easily beat every 
system in the original contest.  If only it worked in real traffic.  The result 
of my studies was to basically discredit the data set.  What happened here can 
be explained in terms of algorithmic complexity.  The program that generated 
the artificial traffic was much smaller than the "program" that generates real 
traffic, so that inserting the attacks disproportionally increased the total 
complexity, making the traffic less predictable (or compressible).
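
A minimal sketch of that detector, assuming packets arrive as byte strings and that there is a separate training pass over traffic taken to be normal (both simplifications of the actual setup):

    # Remember which byte values have appeared at each of the first N positions
    # of a packet; flag any packet that shows a never-before-seen value.
    N = 48                                   # "first few dozen bytes" -- assumed cutoff
    seen = [set() for _ in range(N)]

    def train(packet):                       # packet: bytes assumed to be normal traffic
        for i, b in enumerate(packet[:N]):
            seen[i].add(b)

    def is_anomalous(packet):
        return any(b not in seen[i] for i, b in enumerate(packet[:N]))

    train(b"\x45\x00\x00\x3c\x1c\x46")                 # placeholder "normal" packet
    print(is_anomalous(b"\x45\x00\x00\x3c\x1c\x46"))   # False: every value already seen
    print(is_anomalous(b"\xff\x00\x00\x3c\x1c\x46"))   # True: 0xff is new at position 0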

In a similar way, SHRDLU performed well in its artificial, simple world.  But 
how would you measure its performance in a real world?  

If we are going to study AGI, we need a way to perform tests and measure 
results.  It is not just that we need to know what works and what doesn't.  The 
systems we build will be too complex to know what we have built.  How would you 
measure them?  The Turing test is the most widely accepted, but it is somewhat 
subjective and not really appropriate for an AGI with sensorimotor I/O.  I have 
proposed text compression.  It gives hard numbers, but it seems limited to 
measuring ungrounded language models.  What else would you use?  Suppose that 
in 10 years, NARS, Novamente, Cyc, and maybe several other
systems all claim to have solved the AGI problem.  How would you test
their claims?  How would you decide the winner?
 
-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-04 Thread John Scanlon



I'll keep this short, just to weigh in a vote - I 
completely agree with this.  AGI will be measured by what we recognize 
as intelligent behavior and the usefulness of that intelligence for 
tasks beyond the capabilities of ordinary software.  Normal metrics 
don't apply.
 
 
Russell Wallace wrote:

Ben Goertzel wrote:
> I of course don't think that SHRDLU vs. AGISim is a fair comparison.

Agreed.  SHRDLU didn't even try to solve the real problems - for the simple and 
sufficient reason that it was impossible to make a credible attempt at such on 
the hardware of the day.  AGISim (if I understand it correctly) does.  Oh, I'm 
sure the current implementation makes fatal compromises to fit on today's 
hardware - but the concept doesn't have an _inherent_ plateau the way SHRDLU 
did, so it leaves room for later upgrade.  It's headed in the right compass 
direction.

> And, deciding which AGI is smarter is not important either -- no more
> important than deciding whether Ben, Matt or Pei is smarter.  Who cares?

Agreed.  In practice the market will decide: which system ends up doing useful 
things in the real world, and therefore getting used?  Academic judgements of 
which is smarter are, well, academic.
 
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



Re: [agi] Natural versus formal AI interface languages

2006-11-04 Thread Matt Mahoney
Ben,
The test you described (Easter Egg Hunt) is a perfectly good example of the 
type of test I was looking for.  When you run the experiment you will no doubt 
repeat it many times, adjusting various parameters.  Then you will evaluate by 
how many eggs are found, how fast, and the extent to which it helps the system 
learn to play Hide and Seek (also a measurable quantity).

Two other good qualities are that the test is easy to describe and obviously 
relevant to intelligence.  For text compression, the relevance is not so 
obvious.

I look forward to seeing a paper on the outcome of the tests.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Friday, November 3, 2006 10:51:16 PM
Subject: Re: Re: Re: Re: [agi] Natural versus formal AI interface languages

> I am happy enough with the long-term goal of independent scientific
> and mathematical discovery...
>
> And, in the short term, I am happy enough with the goals of carrying
> out the (AGISim versions of) the standard tasks used by development
> psychologists to study childrens' cognitive behavior...
>
> I don't see a real value to precisely quantifying these goals, though...

To give an example of the kind of short-term goal that I think is
useful, though, consider the following.

We are in early 2007 (if all goes according to plan) going to teach
Novamente to carry out a game called "iterated Easter Egg hunt" --
basically, to carry out an Easter Egg hunt in a room full of other
agents ... and then do so over and over again, modeling what the other
agents do and adjusting its behavior accordingly.

Now, this task has a bit in common with the game Hide-and-Seek.  So,
you'd expect that a Novamente instance that had been taught iterated
Easter Egg Hunt, would also be good at hide-and-seek.  So, we want to
see that the time required for an NM system to learn hide-and-seek
will be less if the NM system has previously learned to play iterated
Easter Egg hunt...

This sort of goal is, I feel, good for infant-stage AGI education.
However, I wouldn't want to try to turn it into an "objective IQ
test."  Our goal is not to make the best possible system for playing
Easter Egg hunt or hide and seek or fetch or whatever

And, in terms of language learning, our initial goal will not be to
make the best possible system for conversing in baby-talk...

Rather, our goal will be to make a system that can adequately fulfill
these early-stage tasks, but in a way that we feel will be
indefinitely generalizable to more complex tasks.

This, I'm afraid, highlights a general issue with formal quantitative
intelligence measures as applied to immature AGI systems/minds.  Often
the best way to achieve some early-developmental-stage task is going
to be an overfitted, narrow-AI type of algorithm, which is not easily
extendable to address more complex tasks.

This is similar to my complaint about the Hutter Prize.  Yah, a
superhuman AGI will be an awesome text compressor.  But this doesn't
mean that the best way to achieve slightly better text compression
than current methods is going to be **at all** extensible in the
direction of AGI.

Matt, you have yet to convince me that seeking to optimize interim
quantitative milestones is a meaningful path to AGI.  I think it is
probably just a path to creating milestone-task-overfit narrow-AI
systems without any real AGI-related expansion potential...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-05 Thread Charles D Hixson

Richard Loosemore wrote:

...
This is a question directed at this whole thread, about simplifying 
language to communicate with an AI system, so we can at least get 
something working, and then go from there


This rationale is the very same rationale that drove researchers into 
Blocks World programs.  Winograd and SHRDLU, etc.  It was a mistake 
then:  it is surely just as much of a mistake now.

Richard Loosemore.
-
Not surely.  It's definitely a defensible position, but I don't see any 
evidence that it has even a 50% probability of being correct.


Also I'm not certain that SHRDLU and Blocks World were mistakes.  They 
didn't succeed in their goals, but they remain as important markers.  At 
each step we have limitations imposed by both our knowledge and our 
resources.  These limits aren't constant.  (P.S.:  I'd throw Eliza into 
this same category...even though the purpose behind Eliza was different.)


Think of the various approaches taken as being experiments with the user 
interface...since that's a large part of what they were.  They are, of 
course, also experiments with how far one can push a given technique 
before encountering a combinatorial explosion.  People don't seem very 
good at understanding that intuitively.  In neural nets this same 
problem re-appears as saturation, the point at which as you learn new 
things old things become fuzzier and less certain.  This may have some 
relevance to the way that people are continually re-writing their 
memories whenever they remember something.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-05 Thread Matt Mahoney
Another important lesson from SHRDLU, aside from discovering that the approach 
of hand coding knowledge doesn't work, was how long it took to discover this.  
It was not at all obvious from the initial success.  Cycorp still hasn't 
figured it out after over 20 years.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Charles D Hixson <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, November 5, 2006 4:46:12 PM
Subject: Re: [agi] Natural versus formal AI interface languages

Richard Loosemore wrote:
> ...
> This is a question directed at this whole thread, about simplifying 
> language to communicate with an AI system, so we can at least get 
> something working, and then go from there
>
> This rationale is the very same rationale that drove researchers into 
> Blocks World programs.  Winograd and SHRDLU, etc.  It was a mistake 
> then:  it is surely just as much of a mistake now.
> Richard Loosemore.
> -
Not surely.  It's definitely a defensible position, but I don't see any 
evidence that it has even a 50% probability of being correct.

Also I'm not certain that SHRDLU and Blocks World were mistakes.  They 
didn't succeed in their goals, but they remain as important markers.  At 
each step we have limitations imposed by both our knowledge and our 
resources.  These limits aren't constant.  (P.S.:  I'd throw Eliza into 
this same category...even though the purpose behind Eliza was different.)

Think of the various approaches taken as being experiments with the user 
interface...since that's a large part of what they were.  They are, of 
course, also experiments with how far one can push a given technique 
before encountering a combinatorial explosion.  People don't seem very 
good at understanding that intuitively.  In neural nets this same 
problem re-appears as saturation, the point at which as you learn new 
things old things become fuzzier and less certain.  This may have some 
relevance to the way that people are continually re-writing their 
memories whenever they remember something.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread James Ratcliff
Richard,

The Blocks World (http://hci.stanford.edu/~winograd/shrdlu/) was over 36 years ago, and was a GREAT demonstration of what can be done with natural language.  It handled a wide variety of items, albeit with a very limited environment.  Currently MIT is doing work with robotics that uses the same types of systems, where they can talk to a grasper robot and tell it to pick up or move the yellow thing, and stuff like that.  It is limited to its small environment, but that was also over 36 years ago.

Today, we should be able to take something like this and expand upwards.  The harder part of the equation for a complex system like this is actually the robotics end, and image recognition tasks.

In some form or another we are going to HAVE to have a natural language interface, either a translation program that can convert our English to a machine-understandable form, or a simplified form of English that is trivial for a person to quickly understand and write.  Humans use natural speech to communicate, and to have an effective AGI that we can interact with, it will have to have easy communication with us.  That has been a critical problem with all software since the beginning, a difficulty in the human-computer interface.

I go further and propose that as much knowledge information as possible should be stored in easily recognizable natural language as well, only devolving into more complex forms where the cases warrant it, such as complex motor-sensor data sets and some lower logic levels.

James Ratcliff
http://falazar.com


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread BillK

On 11/6/06, James Ratcliff wrote:

  In some form or another we are going to HAVE to have a natural language
interface, either a translation program that can convert our english to the
machine  understandable form, or a simplified form of english that is
trivial for a person to quickly understand and write.
  Humans use natural speech to communicate and to have an effective AGI that
we can interact with, it will have to have easy communication with us.  That
has been a critical problem with all software since the beginning, a
difficulty in the human computer interface.

I go further to propose that as much knowledge information should be stored
in easily recognizable natural language as well, only devolving into more
complex forms where the cases warrant it, such as complex motor-sensor data
sets, and some lower logic levels.



Anybody remember short wave radio?

The Voice of America does worldwide broadcasts in Special English.


Special English has a core vocabulary of 1500 words.  Most are simple
words that describe objects, actions or emotions.  Some words are more
difficult.  They are used for reporting world events and describing
discoveries in medicine and science.

Special English writers use short, simple sentences that contain only
one idea. They use active voice.  They do not use idioms.
--

There is also Basic English:

Basic English is a constructed language with a small number of words
created by Charles Kay Ogden and described in his book Basic English:
A General Introduction with Rules and Grammar (1930). The language is
based on a simplified version of English, in essence a subset of it.

Ogden said that it would take seven years to learn English, seven
months for Esperanto, and seven weeks for Basic English, comparable
with Ido. Thus Basic English is used by companies who need to make
complex books for international use, and by language schools that need
to give people some knowledge of English in a short time.

Also see:

Basic English is a selection of 850 English words, used in simple
structural patterns, which is both an international auxiliary language
and a self-contained first stage for the teaching of any form of wider
or Standard English. A subset, no unlearning.


BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread James Ratcliff
I don't believe that was the goal or lesson of the http://en.wikipedia.org/wiki/SHRDLU project.  It was mainly centered around a small test environment (the blocks world) and being able to create an interface that would allow the user to speak and be answered in a natural language.  And in that goal it seemed to succeed fairly admirably.

When we look at throwing the rest of the world into the simulation, we of course get problems, but that is understandable I believe.  But this is the type of goal that we would like to have (I think): the ability, in a much larger dynamic world environment, to be able to talk and interact with the AI and have it understand you and be able to answer you.

And as far as knowledge engineering goes (aka Cyc and others), it DOES work.  The problem is just the size and complexity.  I just finished a class here on knowledge engineering, and they worked on a project that allows you to do that and works very well on limited domains.  One of the main drawbacks to KE is the massive work-hours involved to get a decent knowledge base, and the limited scope of any of them.

But the blocks world did show some sophisticated behavior for what it was:

Person: PICK UP A BIG RED BLOCK.
Computer: OK.
Person: GRASP THE PYRAMID.
Computer: I DON'T UNDERSTAND WHICH PYRAMID YOU MEAN.
Person (changing their mind): FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
Computer: OK.
Person: WHAT DOES THE BOX CONTAIN?
Computer: THE BLUE PYRAMID AND THE BLUE BLOCK.
Person: WHAT IS THE PYRAMID SUPPORTED BY?
Computer: THE BOX.
Person: HOW MANY BLOCKS ARE NOT IN THE BOX?
Computer: FOUR OF THEM.
Person: IS AT LEAST ONE OF THEM NARROWER THAN THE ONE WHICH I TOLD YOU TO PICK UP?
Computer: YES, THE RED CUBE.

This includes reasoning, memory and handling of ambiguity.

James Ratcliff
http://falazar.com


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-06 Thread Matt Mahoney
- Original Message 
From: BillK <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Monday, November 6, 2006 10:08:09 AM
Subject: Re: [agi] Natural versus formal AI interface languages

>Ogden said that it would take seven years to learn English, seven
>months for Esperanto, and seven weeks for Basic English, comparable
>with Ido.

Basic English = 850 words = 10 words per day.
Esperanto = 900 root forms or 17,000 words 
(http://www.freelang.net/dictionary/esperanto.html) = 4 to 80 words per day.
English = 30,000 to 80,000 words = 12 to 30 words per day.
SHRDLU = 200 words? = 0.3 words per day for 2 years.
 
-- Matt Mahoney, [EMAIL PROTECTED]





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-07 Thread James Ratcliff
I actually just stumbled on something, from a totally different work I was doing, but possibly interesting:

http://simple.wikipedia.org/wiki/Main_Page

An entire Wikipedia, using simple English, that should be much, much easier to parse than its more complex brother.

James

BillK <[EMAIL PROTECTED]> wrote:
On 11/6/06, James Ratcliff wrote:
> In some form or another we are going to HAVE to have a natural language
> interface, either a translation program that can convert our English to the
> machine-understandable form, or a simplified form of English that is
> trivial for a person to quickly understand and write.
> Humans use natural speech to communicate, and to have an effective AGI that
> we can interact with, it will have to have easy communication with us.  That
> has been a critical problem with all software since the beginning, a
> difficulty in the human-computer interface.
>
> I go further to propose that as much knowledge information should be stored
> in easily recognizable natural language as well, only devolving into more
> complex forms where the cases warrant it, such as complex motor-sensor data
> sets, and some lower logic levels.

Anybody remember short wave radio?
The Voice of America does worldwide broadcasts in Special English.
Special English has a core vocabulary of 1500 words.  Most are simple
words that describe objects, actions or emotions.  Some words are more
difficult.  They are used for reporting world events and describing
discoveries in medicine and science.
Special English writers use short, simple sentences that contain only
one idea.  They use active voice.  They do not use idioms.
--
There is also Basic English:
Basic English is a constructed language with a small number of words
created by Charles Kay Ogden and described in his book Basic English:
A General Introduction with Rules and Grammar (1930).  The language is
based on a simplified version of English, in essence a subset of it.
Ogden said that it would take seven years to learn English, seven
months for Esperanto, and seven weeks for Basic English, comparable
with Ido.  Thus Basic English is used by companies who need to make
complex books for international use, and by language schools that need
to give people some knowledge of English in a short time.
Also see:
Basic English is a selection of 850 English words, used in simple
structural patterns, which is both an international auxiliary language
and a self-contained first stage for the teaching of any form of wider
or Standard English.  A subset, no unlearning.

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! http://www.falazar.com/projects/Torrents/tvtorrents_show.php

This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Eric Baum



James> Jef Allbright <[EMAIL PROTECTED]> wrote: Russell Wallace
James> wrote:
 
>> Syntactic ambiguity isn't the problem. The reason computers don't
>> understand English is nothing to do with syntax, it's because they
>> don't understand the world.

>> It's easy to parse "The cat sat on the mat" into
>>
>> sit cat
>>   on
>>     mat past
>>
>> But the computer still doesn't understand the sentence, because it
>> doesn't know what cats, mats and the act of sitting _are_. (The
>> best test of such understanding is not language - it's having the
>> computer draw an animation of the action.)

James> Russell, I agree, but it might be clearer if we point out that
James> humans don't understand the world either. We just process these
James> symbols within a more encompassing context.

James, I would like to know what you mean by "understand".
In my view, what humans do is the example we have of understanding,
the word should be defined so as to have a reasonably precise meaning,
and to include the observed phenomenon.

You apparently have something else in mind by understanding.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread James Ratcliff
"James" below should be "Jef", but I will respond as well.

Orig quotes:
> But the computer still doesn't understand the sentence, because it
> doesn't know what cats, mats and the act of sitting _are_. (The best
> test of such understanding is not language - it's having the
> computer draw an animation of the action.)

Russell, I agree, but it might be clearer if we point out that humans don't understand the world either.  We just process these symbols within a more encompassing context.
- Jef

Me, James:
"Understand" is probably a red-flag word, for computers and humans alike.  We have no good judge of what is understood, and I try not to use that term generally, as it devolves into vague psycho-talk and nothing concrete.

But basically, a computer can do one of two things to "show" that it has "understood" something:
1. Show its internal representation.  You said cat; I know that a cat is a mammal that is blah and blah, and does blah; some cats I know are blah.
2. Act upon the information.  If "Bring me the cat" is followed by the robot bringing the cat to you, it obviously "understands" what you mean.

I believe that a very rich frame system of memory will start a fairly good understanding of "what" something "means" and allow some basic "understanding".  At the basest level a "cat" can only mean a certain few things, maybe using the WordNet ontology for filtering that out.  Then, depending on context and usage, we can possibly narrow it down, and use the frames for some basic pattern matching to narrow it down to the one.  And maybe, if it can't be narrowed successfully, something else should happen: either model internally both or multiple objects / processes, or get outside intervention where available.

We should remember that there are almost always humans around, and they SHOULD be used, in my opinion.  Either they are standing by the robot, in which case they can be quizzed directly, or, if it is not an immediate decision to be made, ask them via email or a phone call or something, and try to learn the information given so that next time it will not have to ask.

EX: "Bring me the cat."  Confusion in the AI, seeing 4 cats in front of it.  AI: "Which cat do you want?"  Resolve ambiguity through the interface.

James Ratcliff

Eric Baum <[EMAIL PROTECTED]> wrote:
James> Jef Allbright <[EMAIL PROTECTED]> wrote: Russell Wallace
James> wrote:
>> Syntactic ambiguity isn't the problem. The reason computers don't
>> understand English is nothing to do with syntax, it's because they
>> don't understand the world.
>> It's easy to parse "The cat sat on the mat" into
>> sit cat
>>   on
>>     mat past
>> But the computer still doesn't understand the sentence, because it
>> doesn't know what cats, mats and the act of sitting _are_. (The
>> best test of such understanding is not language - it's having the
>> computer draw an animation of the action.)
James> Russell, I agree, but it might be clearer if we point out that
James> humans don't understand the world either. We just process these
James> symbols within a more encompassing context.

James, I would like to know what you mean by "understand".
In my view, what humans do is the example we have of understanding,
the word should be defined so as to have a reasonably precise meaning,
and to include the observed phenomenon.

You apparently have something else in mind by understanding.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
___
James Ratcliff - http://falazar.com
New Torrent Site, Has TV and Movie Downloads! http://www.falazar.com/projects/Torrents/tvtorrents_show.php


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Jef Allbright
Eric Baum wrote: 

> James> Jef Allbright <[EMAIL PROTECTED]> wrote: Russell Wallace
> James> wrote:
>  
> >> Syntactic ambiguity isn't the problem. The reason computers don't 
> >> understand English is nothing to do with syntax, it's because they 
> >> don't understand the world.



> >> But the computer still doesn't understand the sentence, because it
> >> doesn't know what cats, mats and the act of sitting _are_. (The best
> >> test of such understanding is not language - it's having the computer
> >> draw an animation of the action.)
> 
> James> Russell, I agree, but it might be clearer if we point out that
> James> humans don't understand the world either. We just process these
> James> symbols within a more encompassing context.
> 
> James, I would like to know what you mean by "understand".
> In my view, what humans do is the example we have of 
> understanding, the word should be defined so as to have a 
> reasonably precise meaning, and to include the observed phenomenon.
> 
> You apparently have something else in mind by understanding.

Eric, you may refer to me as "James" ;-), but as with the topic at hand,
it adds an unnecessary level of complexity and impedes understanding.

It is common to think of machines as not possessing the faculty of
understanding while humans do.  Similarly, machines not possessing
consciousness while humans do.  This way of thinking is adequately
effective for daily use, but it carries and propagates the implicit
assumption that "understanding" and "consciousness" are somehow
intrinsically distinct from other types of processing carried out by
physical systems.

It is simpler and more coherent to think in terms of a scale of
processing within increasingly complex context, such that one might say
that a vending machine understands the difference between certain coins,
an infant understands that a nipple is a source of goodness, and most
adults understand that cooperation is more productive than conflict.
Alternatively we can say that a vending machine responds effectively to
the insertion of proper coins, an infant responds effectively to the
presence of a nipple, and most adults respond effectively by choosing
cooperation over conflict.

But let's rather not say that a vending machine doesn't really
understand the difference between coins, an infant doesn't really
understand the whys and wherefores of nipples, but most adults really do
understand in all its significant implications why cooperation is more
productive than conflict.

Each of these examples is of a physical system responding with some
degree of effectiveness based on an internal model that represents with
some degree of fidelity its local environment.  It's an unnecessary
complication, and leads to endless discussions of qualia, consciousness,
free will and the like, to assume that at some magical unspecified point
there is a transition to "true understanding".

None of which is intended to deny that from a common-sense point of
view, humans understand things that machines don't.  But for computer
scientists working on AI, I think such conceptualizing is sloppy and
impedes effective discussion and progress.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Jef Allbright
Jef wrote:
 
> Each of these examples is of a physical system responding 
> with some degree of effectiveness based on an internal model 
> that represents with some degree of fidelity its local 
> environment.  Its an unnecessary complication, and leads to 
> endless discussions of qualia, consciousness, free will and 
> the like, to assume that at some magical unspecified point 
> there is a transition to "true understanding".

It occurred to me that my use of the term "fidelity" with respect to an
agent's internal model may have been misleading.

Rather than say the model represents its environment with some degree of
fidelity, I should have said it represents its environment with some
degree of effectiveness, since it's a model of what seems to work,
rather than a model of what seems to be.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Eric Baum

James and Jef, my apologies for misattributing the question.

There is a phenomenon colloquially called "understanding" that is 
displayed by people and at best rarely displayed within limited 
domains by extant computer programs. If you want to have any hope of
constructing an AGI, you are going to have to come to grips with what
it is and how it is achieved. As to what I believe the answer is, I
refer you to the top (new) paper at http://whatisthought.com/eric.html
entitled "A Working Hypothesis for General Intelligence"
(and to my book What is Thought? if you want more background.)

Eric Baum
http://whatisthought.com

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Jef Allbright
Eric -

Thanks for the pointer to your paper.  Upon reading it I quickly saw what I
think provoked your reaction to my observation about understanding.  We
were actually saying much the same thing there.  My point was that no
human understands the world, because our understanding, as with all
examples of intelligence that we know of, is domain-specific.  I used
the word context as synonymous with domain.  My point was not that
humans don't *understand* the world, but that humans don't understand
the *world*.  I tried to make that clear in my follow-up, but it appears
I lost your interest very early on.  In reading your paper, I see that
you seem to use the terms "world" and "domain" quite synonymously, but
I'm sure you can appreciate that "domain" connotes a limitation of scope
while "world" connotes expanded or ultimate scope. Our domain specific
knowledge is of the world, but one cannot derive the world from our
domain-specific knowledge since a great deal of information is lost in
the compression process, and that really speaks to the core of what it
means to "understand".

When I read in your paper "The claim is that the world has structure
that can be exploited to rapidly solve problems which arise, and that
underlying our thought processes are modules that accomplish this.",
that rang a familiar bell for me.  I can remember the intellectual
excitement I felt when I first came across this idea back in the 1990s,
probably from Gigerenzer, Kahneman & Tversky, Tooby & Cosmides or some
combination of their thinking on fast and frugal heuristics and bounded
rationality.  You might have deduced my bias toward the domain-specific
theory of (evolved) intelligence by my statement that the internal model
must represent what seems to work, rather than what seems to be, in the
environment.

As I see it, the present key challenge of artificial intelligence is to
develop a fast and frugal method of finding fast and frugal methods, in
other words to develop an efficient time-bound algorithm for recognizing
and compressing those regularities in "the world" faster than the
original blind methods of natural evolution.

- Jef 



 

> -Original Message-
> From: Eric Baum [mailto:[EMAIL PROTECTED] 
> Sent: Tuesday, November 07, 2006 1:44 PM
> To: agi@v2.listbox.com
> Subject: RE: [agi] Natural versus formal AI interface languages
> 
> 
> James and Jef, my appologies for misattributing the question.
> 
> There is a phenomenon colloquially called "understanding" 
> that is displayed by people and at best rarely displayed 
> within limitted domains by extant computer programs. If you 
> want to have any hope of constructing an AGI, you are going 
> to have to come to grips with what it is and how it is 
> achieved. As to what I believe the answer is, I refer you to 
> the top (new) paper at http://whatisthought.com/eric.html
> entitled "A Working Hypothesis for General Intelligence"
> (and to my book What is Thought? if you want more background.)
> 
> Eric Baum
> http://whatisthought.com
> 
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email 
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=303
> 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eliezer S. Yudkowsky

Eric Baum wrote:

(Why should producing a human-level AI be cheaper than decoding the
genome?)


Because the genome is encrypted even worse than natural language.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eric Baum

Eliezer> Eric Baum wrote:
>> (Why should producing a human-level AI be cheaper than decoding the
>> genome?)

Eliezer> Because the genome is encrypted even worse than natural
Eliezer> language.

(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in context), which didn't involve any
decryption at all.

(b) why do you think so? 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Matt Mahoney
I think that natural language and the human genome have about the same order of 
magnitude complexity.

The genome is 6 x 10^9 bits (2 bits per base pair) uncompressed, but there is a 
lot of noncoding DNA and some redundancy.  By "decoding", I assume you mean 
building a model and understanding the genome to the point where you could 
modify it and predict what will happen.

The complexity of natural language is probably 10^9 bits.  This is supported by:
- Turing's 1950 estimate, which he did not explain.
- Landauer's estimate of human long term memory capacity.
- The quantity of language processed by an average adult, times Shannon's 
estimate of the entropy of written English of 1 bit per character.
- Extrapolating the relationship between language model training set size and 
compression ratio in this graph: http://cs.fit.edu/~mmahoney/dissertation/
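A rough sanity check of the third estimate above, with the caveat that the exposure figures (years, hours per day, speech rate, characters per word) are illustrative assumptions rather than measurements; the point is only that plausible inputs land in the 10^9-bit ballpark.

# Back-of-envelope: bits of language an average adult has processed,
# at Shannon's ~1 bit per character.  All inputs are rough assumptions.
years          = 20      # years of substantial language exposure
hours_per_day  = 3       # hours of speech/reading processed per day
words_per_min  = 150     # typical speech rate
chars_per_word = 6       # average word length, counting the space
bits_per_char  = 1       # Shannon's estimate for written English

chars = years * 365 * hours_per_day * 60 * words_per_min * chars_per_word
print(f"{chars * bits_per_char:.1e} bits")   # ~1.2e9, i.e. on the order of 10^9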

I don't think the encryption of the genome is any worse.  Complex systems (that 
have high Kolmogorov complexity, are incrementally updatable, and do "useful" 
computation) tend to converge to the boundary between stability and chaos, 
where some perturbations decay while others grow.  A characteristic of such 
systems (as studied by Kaufmann) is that the number of stable states or 
attractors tends to the square root of the size.  The number of human genes is 
about the same as the size of the human vocabulary, about 30,000.  Neither 
system is "encrypted" in the mathematical sense.  Encryption cannot be an 
emergent property because it is at the extreme chaotic end of the spectrum.  
Changing one bit of the key or plaintext affects every bit of the ciphertext.

The difference is that it is easier (faster and more ethical) to experiment 
with language models than the human genome.

 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Eliezer S. Yudkowsky <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 8, 2006 3:23:10 PM
Subject: Re: [agi] Natural versus formal AI interface languages

Eric Baum wrote:
> (Why should producing a human-level AI be cheaper than decoding the
> genome?)

Because the genome is encrypted even worse than natural language.

-- 
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eliezer S. Yudkowsky

Eric Baum wrote:

Eliezer> Eric Baum wrote:


(Why should producing a human-level AI be cheaper than decoding the
genome?)


Eliezer> Because the genome is encrypted even worse than natural
Eliezer> language.

(a) By decoding the genome, I meant merely finding the sequence
(should have been clear in context), which didn't involve any
decryption at all.

(b) why do you think so? 


(a) Sorry, didn't pick up on that.  Possibly, more money has already 
been spent on failed AGI projects than on the human genome.


(b) Relative to an AI built by aliens, it's possible that the human 
proteome annotated by the corresponding selection pressures (= the 
decrypted genome), is easier to reverse-engineer than the causal graph 
of human language.  Human language, after all, takes place in the 
context of a complicated human mind.  But relative to humans, human 
language is certainly a lot easier for us to understand than the human 
proteome!


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-08 Thread John Scanlon
Fully decoding the human genome is almost impossible.  Not only is there the 
problem of protein folding, which I think even supercomputers can't fully 
solve, but the purpose for the structure of each protein depends on 
interaction with the incredibly complex molecular structures inside cells. 
Also, the genetic code for a human being is basically made of the same 
elements that the genetic code for the lowliest single-celled creature is 
made of, and yet it somehow describes the initial structure of a system of 
neural cells that then developes into a human brain through a process of 
embriological growth (which includes biological interaction from the 
mother -- why you can't just grow a human being from an embryo in a petri 
dish), and then a fairly long process of childhood development.


This is the way evolution created mind somewhat randomly over three billion 
(and a half?) years.  The human mind is the pinnacle of this evolution. 
With this mind along with collective intelligence, it shouldn't take another 
three billion years to engineer intelligence.  Evolution is slow -- human 
beings can engineer.



- Original Message - 
Eliezer S. Yudkowsky" wrote:



Eric Baum wrote:

(Why should producing a human-level AI be cheaper than decoding the
genome?)


Because the genome is encrypted even worse than natural language.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Eric Baum
Matt wrote:
Changing one bit of the key or plaintext affects every bit of the ciphertext.

That is simply not true of most encryptions. For example, Enigma. 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Eric Baum

John> Fully decoding the human genome is almost impossible.  Not only
John> is there the problem of protein folding, which I think even
John> supercomputers can't fully solve, but the purpose for the
John> structure of each protein depends on interaction with the
John> incredibly complex molecular structures inside cells. 

Yes, but you have all kinds of advantages in decoding the genome
that you don't have, for example, in decoding the human mind 
(although you might have in an AGI): such as the ability to perform 
ingenious knockout experiments, comparative genomics, etc.

John> Also, the
John> genetic code for a human being is basically made of the same
John> elements that the genetic code for the lowliest single-celled
John> creature is made of, and yet it somehow describes the initial
John> structure of a system of neural cells that then developes into a
John> human brain through a process of embriological growth (which
John> includes biological interaction from the mother -- why you can't
John> just grow a human being from an embryo in a petri dish), and
John> then a fairly long process of childhood development.

John> This is the way evolution created mind somewhat randomly over
John> three billion (and a half?) years.  The human mind is the
John> pinnacle of this evolution. With this mind along with collective
John> intelligence, it shouldn't take another three billion years to
John> engineer intelligence.  Evolution is slow -- human beings can
John> engineer.

Yes, but 
(a) evolution had vastly more computational power than we did-- it
had the ability to use this method to design the brain; and 
(b) plausible arguments (see What is Thought?) suggest that there
may be no better way to design a mind;
and 
(c) the supposition that evolution can't engineer is also unproven.
You believe evolution designed us, and we engineer, so in a sense you
believe evolution engineers. But I suggest, when we "engineer" what we
basically do is a search over alternatives strongly constrained by knowledge
evolution built in, and that the way evolution got to us was similarly
by building knowledge that strongly constrained its search,
recursively; so in fact it may make considerable sense to say that
evolution engineers in basically the same way we do. Why do you think
it looks so much like we are designed?

John> - Original Message - Eliezer S. Yudkowsky" wrote:

>> Eric Baum wrote: (Why should producing a human-level AI be cheaper
>> than decoding the genome?)
>> 
>> Because the genome is encrypted even worse than natural language.
>> 

John> - This list is sponsored by AGIRI:
John> http://www.agiri.org/email To unsubscribe or change your
John> options, please go to: http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Matt Mahoney
Eric Baum <[EMAIL PROTECTED]> wrote:
>Matt wrote:
>Changing one bit of the key or plaintext affects every bit of the ciphertext.

>That is simply not true of most encryptions. For example, Enigma. 

Enigma is laughably weak compared to modern encryption, such as AES, RSA, 
SHA-256, ECC, etc.  Enigma was broken with primitive mechanical computers and 
pencil and paper.  Modern ciphers are designed to withstand chosen plaintext 
attacks by adversaries with millions of years of supercomputer time.  They are 
designed so that the encryption function is computationally indistinguishable 
from a random oracle.

Encryption functions generally have low Kolmogorov complexity.  For example, 
RSA and Diffie-Hellman can be described with just a few mathematical equations. 
 RC4 is also extremely simple.  Although it has some weaknesses, it is far 
stronger than Enigma and still considered unbreakable if used properly.
http://en.wikipedia.org/wiki/RC4

(RC4 is a stream cipher, which means that changing one bit of the key affects 
the entire keystream, but changing one bit of plaintext only changes the 
corresponding bit of the ciphertext.  To make a stream cipher secure against 
undetected modification by an adversary, it is necessary to add a MAC (keyed 
hash).  The hash has the property that changing one bit of plaintext affects 
all the bits of the hash, and it is computationally infeasible to find another 
plaintext that will generate the same hash.)
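A toy sketch may make the stream-cipher property concrete.  The keystream below is just a seeded PRNG standing in for RC4 (it is not RC4 and is not secure); the point is only the XOR structure: flipping one plaintext bit flips exactly one ciphertext bit, while changing the key changes about half of them.

# Toy stream cipher: ciphertext = plaintext XOR keystream(key).
# The keystream is a seeded PRNG used purely as a stand-in, not a real cipher.
import random

def keystream(key, n):
    rng = random.Random(key)
    return bytes(rng.randrange(256) for _ in range(n))

def encrypt(key, plaintext):
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

def bit_diff(a, b):
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"the cat sat on the mat"
c1  = encrypt(1234, msg)

flipped = bytearray(msg)
flipped[0] ^= 0x01                                    # flip one plaintext bit
print(bit_diff(c1, encrypt(1234, bytes(flipped))))    # 1: only that bit changes
print(bit_diff(c1, encrypt(1235, msg)))               # ~half the bits change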

Encryption systems cannot be complex.  The reason that complex systems tend 
toward the boundary between stable and chaotic is that stable systems (with a 
single attractor) don't do interesting or useful computation, and chaotic 
systems are not incrementally updatable.  Encryption systems are chaotic.  If 
you change a single bit in the system, you are likely to break the security.  
There is no easy way to test security.  It has to be done through careful 
mathematical analysis and by publishing the algorithm so that lots of people 
can hack at it.  Even then, most systems are eventually broken.

To be rigorous, stability and chaos are only defined for analytical systems, 
not discrete systems.  The systems studied by Kauffman were discrete, such as 
state machines with random logic gates, or gene regulation systems.  They have 
properties analogous to stability or chaos as the complexity grows large.
http://pespmc1.vub.ac.be/BOOLNETW.html
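For readers who have not met Kauffman's models, here is a minimal sketch of the kind of discrete system meant: a random Boolean network with K=2 randomly chosen inputs per node and random truth tables.  It iterates the deterministic update from a sample of random initial states and counts the distinct cycles (attractors) reached; the sizes and seed are arbitrary, and no attempt is made to reproduce the square-root scaling result.

# Minimal random Boolean network (N nodes, K=2 inputs each, random truth tables).
# Counts the distinct attractors reached from a sample of random initial states.
import random

N, K, TRIALS = 12, 2, 200
random.seed(0)
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{(a, b): random.randrange(2) for a in (0, 1) for b in (0, 1)}
          for _ in range(N)]

def step(state):
    return tuple(tables[i][(state[inputs[i][0]], state[inputs[i][1]])]
                 for i in range(N))

attractors = set()
for _ in range(TRIALS):
    state = tuple(random.randrange(2) for _ in range(N))
    seen = set()
    while state not in seen:      # run until the trajectory revisits a state
        seen.add(state)
        state = step(state)
    cycle = {state}               # 'state' now lies on the attractor cycle
    nxt = step(state)
    while nxt != state:
        cycle.add(nxt)
        nxt = step(nxt)
    attractors.add(frozenset(cycle))

print(len(attractors), "attractors found from", TRIALS, "random starts")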

-- Matt Mahoney, [EMAIL PROTECTED]


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Matt Mahoney
Protein folding is hard.  We can't even plug in a simple formula like H2O and 
compute physical properties like density or melting point.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: John Scanlon <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 8, 2006 10:22:09 PM
Subject: Re: [agi] Natural versus formal AI interface languages

Fully decoding the human genome is almost impossible.  Not only is there the 
problem of protein folding, which I think even supercomputers can't fully 
solve, but the purpose for the structure of each protein depends on 
interaction with the incredibly complex molecular structures inside cells. 
Also, the genetic code for a human being is basically made of the same 
elements that the genetic code for the lowliest single-celled creature is 
made of, and yet it somehow describes the initial structure of a system of 
neural cells that then developes into a human brain through a process of 
embriological growth (which includes biological interaction from the 
mother -- why you can't just grow a human being from an embryo in a petri 
dish), and then a fairly long process of childhood development.

This is the way evolution created mind somewhat randomly over three billion 
(and a half?) years.  The human mind is the pinnacle of this evolution. 
With this mind along with collective intelligence, it shouldn't take another 
three billion years to engineer intelligence.  Evolution is slow -- human 
beings can engineer.


- Original Message - 
Eliezer S. Yudkowsky" wrote:

> Eric Baum wrote:
>> (Why should producing a human-level AI be cheaper than decoding the
>> genome?)
>
> Because the genome is encrypted even worse than natural language.
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Eric Baum
Eric Baum <[EMAIL PROTECTED]> wrote:
>Matt wrote:
>Changing one bit of the key or plaintext affects every bit of the ciphertext.

>That is simply not true of most encryptions. For example, Enigma.

Matt:
Enigma is laughably weak compared to modern encryption, such as AES, RSA, SHA-256, ECC, etc.  Enigma was broken with primitive mechanical computers and pencil and paper.

Enigma was broken without modern computers, *given access to the
machine.* I chose Enigma as an example, because to break language it
may be necessary to pay attention to the machine-- namely examining 
the genomics. But that is more work than you envisage ;^)

It is true that much modern encryption is based on simple algorithms.
However, some crypto-experts would advise more primitive approaches.
RSA is not known to be hard: even if P!=NP, someone may find a
number-theoretic trick tomorrow that factors. (Or maybe they already
have it, and choose not to publish.)
If you use a messy machine like a modern version of Enigma, that is
much less likely to get broken, even though you may not have the 
theoretical results.

Your response admits that for stream ciphers changing a bit of the
plaintext doesn't affect many bits of the ciphertext, which was what I
was mainly responding to. You may prefer other kinds of cipher, but 
your arguments about chaos are clearly not germane to concluding
language is easy to decode.

Incidentally, while no encryption scheme is provably hard to break
(even assuming P!=NP) more is known about grammars: they are provably
hard to decode given P!=NP.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-09 Thread Brian Atkins

Matt Mahoney wrote:

Protein folding is hard.  We can't even plug in a simple formula like H2O and 
compute physical properties like density or melting point.
 


This seems to be a rapidly improving area:

http://tech.groups.yahoo.com/group/transhumantech/message/36865
--
Brian Atkins
Singularity Institute for Artificial Intelligence
http://www.singinst.org/

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-10 Thread Matt Mahoney
The security of Enigma depended on the secrecy of the algorithm in addition to 
the key.  This violated Kerckhoffs's principle, the requirement that a system be 
secure against an adversary who has everything except the key.  This mistake 
has been repeated many times by amateur cryptographers who thought that keeping 
the algorithm secret improved security.  Such systems are invariably broken.  
Secure systems are built by publishing the algorithm so that people can try to 
break them before they are used for anything important.  It has to be done this 
way because there is no provably secure system (regardless of whether P = NP), 
except the one time pad, which is impractical because it lacks message 
integrity, and the key has to be as large as the plaintext and can't be reused.

Anyway, my point is that decoding the human genome or natural language is not 
as hard as breaking encryption.  It cannot be because these systems are 
incrementally updatable, unlike ciphers.  This allows you to use search 
strategies that run in polynomial time.  A key search requires exponential 
time, or else the cipher is broken.  Modeling language or the genome in O(n) 
time or even O(n^2) time with n = 10^9 is much faster than brute force 
cryptanalysis in O(2^n) time with n = 128.
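Spelling out that gap (ignoring the constants hidden in the O() notation):

# Orders of magnitude for the search costs mentioned above.
n = 10**9
print(f"O(n)   with n = 10^9 : {n:.0e} steps")
print(f"O(n^2) with n = 10^9 : {n**2:.0e} steps")
print(f"O(2^k) with k = 128  : {2**128:.1e} steps")
# Even the quadratic model (10^18 steps) is about twenty orders of magnitude
# cheaper than brute-force key search (~3.4e38 steps).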
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Eric Baum <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 9, 2006 12:18:34 PM
Subject: Re: [agi] Natural versus formal AI interface languages

Eric Baum <[EMAIL PROTECTED]> wrote:
>Matt wrote:
>Changing one bit of the key or plaintext affects every bit of the ciphertext.

>That is simply not true of most encryptions. For example, Enigma.

Matt:
Enigma is laughably weak compared to modern encryption, such as AES, RSA, SHA-256, ECC, etc.  Enigma was broken with primitive mechanical computers and pencil and paper.

Enigma was broken without modern computers, *given access to the
machine.* I chose Enigma as an example, because to break language it
may be necessary to pay attention to the machine-- namely examining 
the genomics. But that is more work than you envisage ;^)

It is true that much modern encryption is based on simple algorithms.
However, some crypto-experts would advise more primitive approaches.
RSA is not known to be hard, even if P!=NP, someone may find a
number-theoretic trick tomorrow that factors. (Or maybe they already
have it, and choose not to publish).
If you use a mess machine like a modern version of enigma, that is
much less likely to get broken, even though you may not have the 
theoretical results.

Your response admits that for stream ciphers changing a bit of the
plaintext doesn't affect many bits of the ciphertext, which was what I
was mainly responding to. You may prefer other kinds of cipher, but 
your arguments about chaos are clearly not germane to concluding
language is easy to decode.

Incidentally, while no encryption scheme is provably hard to break
(even assuming P!=NP) more is known about grammars: they are provably
hard to decode given P!=NP.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Eric Baum

Matt wrote:
Anyway, my point is that decoding the human genome or natural language is not
as hard as breaking encryption.  It cannot be because these systems are
incrementally updatable, unlike ciphers.  This allows you to use search
strategies that run in polynomial time.  A key search requires exponential
time, or else the cipher is broken.  Modeling language or the genome in O(n)
time or even O(n^2) time with n = 10^9 is much faster than brute force
cryptanalysis in O(2^n) time with n = 128.

I don't know what you mean by incrementally updateable,
but if you look up the literature on language learning, you will find
that learning various sorts of relatively simple grammars from
examples, or even if memory serves examples and queries, is NP-hard.
Try looking for Dana Angluin's papers back in the 80's.

If your claim is that evolution can not produce a 1-way function,
that's crazy.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Matt Mahoney
Eric, can you give an example of a one way function (such as a cryptographic 
hash or cipher) produced by evolution or by a genetic algorithm?  A one-way 
function f has the property that y = f(x) is easy to compute, but it is hard to 
find x given f and y.  Other examples might be modular exponentiation in large 
finite groups, or multiplication of prime numbers with thousands of digits.
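A minimal illustration of the modular-exponentiation example; the prime and exponent below are deliberately tiny and purely illustrative (real systems use moduli thousands of bits long), and the naive inversion loop is only meant to show that the backward direction has no shortcut comparable to the forward one.

# One-way function sketch: y = g^x mod p is cheap to compute (fast modular
# exponentiation), while recovering x from y (the discrete log) is believed hard.
p = 2**61 - 1          # a small Mersenne prime, illustrative only
g = 3                  # arbitrary base
x = 123_456_789        # the "secret" input

y = pow(g, x, p)       # forward direction: fast even for huge exponents

def naive_discrete_log(y, g, p, limit=10**6):
    """Try to invert by brute force; gives up after `limit` multiplications."""
    acc = 1
    for k in range(limit):
        if acc == y:
            return k
        acc = (acc * g) % p
    return None

print(y)
print(naive_discrete_log(y, g, p))   # None: brute force gets nowhere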

By "incrementally updatable", I mean that you can make a small change to a 
system and the result will be a small change in behavior.  For example, most 
DNA mutations have a small effect.  We try to design software systems with this 
property so we can modify them without breaking them.  However, as the system 
gets bigger, there is more interaction between components, until it reaches the 
point where every change introduces more bugs than it fixes and the code 
becomes unmaintainable.  This is what happens when the system crosses the 
boundary from stability to chaotic.  My argument for Kauffman's observation 
that complex systems sit on this boundary is that stable systems are less 
useful, but chaotic systems can't be developed as a long sequence of small 
steps.  We are able to produce cryptosystems only because they are relatively 
simple, and even then it is hard.
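A toy contrast makes "incrementally updatable" concrete: the same greedy bit-flip search is applied to a smooth objective (count of bits matching a hidden target) and to a hash-like one (closeness of a SHA-256 digest to a fixed target).  Sizes and step counts are arbitrary.

# Greedy bit-flip search on a smooth objective vs. a hash-like (chaotic) one.
import hashlib, random

random.seed(1)
N = 64
hidden = [random.randrange(2) for _ in range(N)]
target_digest = hashlib.sha256(b"target").digest()

def smooth_score(bits):        # a small change gives small, informative feedback
    return sum(b == h for b, h in zip(bits, hidden))                   # max 64

def chaotic_score(bits):       # every flip rescrambles the hash
    h = hashlib.sha256(bytes(bits)).digest()
    return 256 - sum(bin(a ^ b).count("1") for a, b in zip(h, target_digest))  # max 256

def hill_climb(score, steps=5000):
    bits = [random.randrange(2) for _ in range(N)]
    best = score(bits)
    for _ in range(steps):
        i = random.randrange(N)
        bits[i] ^= 1
        s = score(bits)
        if s > best:
            best = s
        else:
            bits[i] ^= 1       # undo flips that don't help
    return best

print(hill_climb(smooth_score), "/ 64")    # reaches 64: incremental search works
print(hill_climb(chaotic_score), "/ 256")  # stalls far below 256: no gradient to follow

The first search succeeds because each flip gives usable feedback; the second has nothing to climb, which is the sense in which ciphers resist incremental search.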

I don't dispute that learning some simple grammars is NP-hard.  However, I 
don't believe that natural language is one of these grammars.  It certainly is 
not "simple".  The human brain is less powerful than a Turing machine, so it 
has no special ability to solve NP-hard problems.  The fact that humans can 
learn natural language is proof enough that it can be done.
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Eric Baum <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Sunday, November 12, 2006 9:29:13 AM
Subject: Re: [agi] Natural versus formal AI interface languages


Matt wrote:
Anyway, my point is that decoding the human genome or natural language is not
as hard as breaking encryption.  It cannot be because these systems are
incrementally updatable, unlike ciphers.  This allows you to use search
strategies that run in polynomial time.  A key search requires exponential
time, or else the cipher is broken.  Modeling language or the genome in O(n)
time or even O(n^2) time with n = 10^9 is much faster than brute force
cryptanalysis in O(2^n) time with n = 128.

I don't know what you mean by incrementally updateable,
but if you look up the literature on language learning, you will find
that learning various sorts of relatively simple grammars from
examples, or even if memory serves examples and queries, is NP-hard.
Try looking for Dana Angluin's papers back in the 80's.

If your claim is that evolution can not produce a 1-way function,
that's crazy.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Richard Loosemore

Eric Baum wrote:

Matt wrote:
Anyway, my point is that decoding the human genome or natural language is not
as hard as breaking encryption.  It cannot be because these systems are
incrementally updatable, unlike ciphers.  This allows you to use search
strategies that run in polynomial time.  A key search requires exponential
time, or else the cipher is broken.  Modeling language or the genome in O(n)
time or even O(n^2) time with n = 10^9 is much faster than brute force
cryptanalysis in O(2^n) time with n = 128.

I don't know what you mean by incrementally updateable,
but if you look up the literature on language learning, you will find
that learning various sorts of relatively simple grammars from
examples, or even if memory serves examples and queries, is NP-hard.
Try looking for Dana Angluin's papers back in the 80's.


No, a thousand times no.  (Oh, why do we have to fight the same battles 
over and over again?)


These proofs depend on assumptions about what "learning" is, and those 
assumptions involve a type of learning that is stupider than stupid.


Any learning mechanism that had the ability to do modest analogy 
building across domains, and which had the benefit of primitives 
involving concepts like "on", "in", "through", "manipulate", "during", 
"before" (etc etc) would probably be able to do the grammer learning, 
and in any case, the proofs are completely incapable of representing the 
capabilities of such learning mechanisms.


Such ideas have been (to coin a phrase) debunked every which way from 
sunday. ;-)



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Richard Loosemore

Ben Goertzel wrote:

> I don't know what you mean by incrementally updateable,
> but if you look up the literature on language learning, you will find
> that learning various sorts of relatively simple grammars from
> examples, or even if memory serves examples and queries, is NP-hard.
> Try looking for Dana Angluin's papers back in the 80's.

No, a thousand times no.  (Oh, why do we have to fight the same battles
over and over again?)

These proofs depend on assumptions about what "learning" is, and those
assumptions involve a type of learning that is stupider than stupid.


I don't think the proofs depend on any special assumptions about the
nature of learning.


I beg to differ.  IIRC the sense of "learning" they require is induction 
over example sentences.  They exclude the use of real world knowledge, 
in spite of the fact that such knowledge (or at least the mechanisms 
involved in the development of real world knowledge) are posited to 
play a significant role in the learning of grammar in humans.  As such, 
these proofs say nothing whatsoever about the learning of NL grammars.


I agree they do have other limitations, of the sort you suggest below.

Richard Loosemore.



Rather, the points to be noted are:

1) these are theorems about the learning of general grammars in a
certain class, as n (some measure of grammar size) goes to infinity

2) NP-hard is about worst-case time complexity of learning grammars in
that class, of size n

So the reason these results are not cognitively interesting is:

1) real language learning is about learning specific grammars of
finite size, not parametrized classes of grammars as n goes to
infinity

2) even if you want to talk about learning over parametrized classes,
real learning is about average-case rather than worst-case complexity,
anyway (where the average is over some appropriate probability
distribution)

-- Ben G



Any learning mechanism that had the ability to do modest analogy
building across domains, and which had the benefit of primitives
involving concepts like "on", "in", "through", "manipulate", "during",
"before" (etc etc) would probably be able to do the grammer learning,
and in any case, the proofs are completely incapable of representing the
capabilities of such learning mechanisms.

Such ideas have been (to coin a phrase) debunked every which way from
sunday. ;-)


Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Richard Loosemore

Eric Baum wrote:

Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:


> I don't know what you mean by incrementally updateable, but if
> you look up the literature on language learning, you will find
> that learning various sorts of relatively simple grammars from
> examples, or even if memory serves examples and queries, is NP-hard.
> Try looking for Dana Angluin's papers back in the 80's.

No, a thousand times no.  (Oh, why do we have to fight the same
battles over and over again?)

These proofs depend on assumptions about what "learning" is, and
those assumptions involve a type of learning that is stupider than
stupid.


Ben> I don't think the proofs depend on any special assumptions about
Ben> the nature of learning.

Ben> Rather, the points to be noted are:

Ben> 1) these are theorems about the learning of general grammars in a
Ben> certain class, as n (some measure of grammar size) goes to
Ben> infinity

Ben> 2) NP-hard is about worst-case time complexity of learning
Ben> grammars in that class, of size n

These comments are of course true of any NP-hardness result.
They are reasons why the NP-hardness result does not *prove* (even
if P!=NP) that the problem is insuperable.

However, the way to bet is generally that the problem is actually
hard. Ch. 11 of WIT? gives some arguments why.

If you don't believe that, you shouldn't rely on encryption.
Encryption has all the above weaknesses in spades, and plus,
its not even proved secure given P!=NP, that requires additional
assumptions.

Also, in addition to the hardness results, there has been considerable
effort in modelling natural grammars by linguists, which has failed,
thus also providing evidence the problem is hard.


Eric,

You quoted Ben above and addressed part 2 of his response, without 
noticing that he later retracted part 1 ("I don't think the proofs 
depend on any special assumptions about the nature of learning.") and 
therefore, because of that retraction, made the part 2 points irrelevant 
to the argument we were discussing.


The result of all that is that your own comments, above, are also 
stranded out on that irrelevant subbranch, because I have already 
pointed out that all the efforts of the linguists and others who talk 
about "grammar learning" are *indeed* making special assumptions about 
the nature of language learning that are extremely unlikely to be valid. 
 The result:  you cannot make any sensible conclusions about the 
hardness of the grammar learning task.


Here is my previous response to Ben's points that you quote above, 
together with his reply:


Ben Goertzel wrote:
>> > I don't think the proofs depend on any special assumptions about
>> > the nature of learning.
>>
>> Richard Loosemore wrote:
>> I beg to differ.  IIRC the sense of "learning" they require is
>> induction over example sentences.  They exclude the use of
>> real world knowledge, in spite of the fact that such knowledge
>> (or at least the mechanisms involved in the development of real world
>> knowledge) are posited to play a significant role in
>> the learning of grammar in humans.  As such, these proofs
>> say nothing whatsoever about the learning of NL grammars.
>>
>> I agree they do have other limitations, of the sort you
>> suggest below.
>
> Ben Goertzel wrote:
> Ah, I see.  Yes, it is true that these theorems are about grammar
> learning in isolation, not taking into account interactions btw
> semantics, pragmatics and grammar, for example...
>
> ben


As I said before, since your arguments are based on these same 
assumptions, your claims about the learnability of grammars are 
completely spurious.


If you can show an analysis that includes the impact of real world 
knowledge on the learning mechanism, and prove that the grammar learning 
problem is still hard, you might be able to come to the conclusions you 
do, but I have never seen anyone show the remotest signs of being able 
to do that.



Richard Loosemore




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-17 Thread Richard Loosemore

Eric Baum wrote:

I don't think the proofs depend on any special assumptions about

the > nature of learning.

I beg to differ.  IIRC the sense of "learning" they require is
induction over example sentences.  They exclude the use of real
world knowledge, in spite of the fact that such knowledge (or at
least the mechanisms involved in the development of real world
knowledge) are posited to play a significant role in the learning
of grammar in humans.  As such, these proofs say nothing whatsoever
about the learning of NL grammars.



I fully agree the proofs don't take into account such stuff.
And I believe such stuff is critical. Thus
I've never claimed language learning was proved hard, I've just
suggested evolution could have encrypted it.

The point I began with was, if there are lots of different locally
optimal codings for thought, it may be hard to figure out which one is
programmed into the mind, and thus language learning could be a hard additional
problem to producing an AGI. The AGI has to understand what the word
"foobar" means, and thus it has to have (or build) a code module meaning
``foobar" that it can invoke with this word. If it has a different set
of modules, it might be sunk in communication.

My sense about grammars for natural language, is that there are lots
of different equally valid grammars that could be used to communicate.
For example, there are the grammars of English and of Swahili. One
isn't better than the other. And there is a wide variety of other
kinds of grammars that might be just as good, that aren't even used in
natural language, because evolution chose one convention at random.
Figuring out what that convention is, is hard; at least, linguists have
tried hard to do it and failed.
And this grammar stuff is pretty much on top of, the meanings of 
the words. It serves to disambiguate, for example for error correction
in understanding. But you could communicate pretty well in pidgin, 
without it, so long as you understand the meanings of the words.


The grammar learning results (as well as the experience of linguists,
who've tried very hard to build a model for natural grammar),
I think, are indicative that this problem is hard, and it seems that
this problem is superimposed on top of the real world knowledge aspect.


Eric,

Thank you, I think you have focused down on the exact nature of the claim.

My reply could start from a couple of different places in your above 
text (all equivalent), but the one that brings out the point best is this:


>And there is a wide variety of other
> kinds of grammars that might be just as good, that aren't even used in
> natural language, because evolution chose one convention at random.
                                                           ^^^^^^^^^

This is precisely where I think the false assumption is buried.  When I 
say that grammar learning can be dependent on real world knowledge, I 
mean specifically that there are certain conceptual primitives involved 
in the basic design of a concept-learning system.  We all share these 
primitives, and [my claim is that] our language learning mechanisms 
start from those things.  Because both I and a native Swahili speaker 
use languages whose grammars are founded on common conceptual 
primitives, our grammars are more alike than we imagine.


Not only that, but if myself and the Swahili speaker suddenly met and 
tried to discover each other's languages, we would be able to do so, 
eventually, because our conceptual primitives are the same and our 
learning mechanisms are so similar.


Finally, I would argue that most cognitive systems, if they are to be 
successful in negotiating this same 3-D universe, would do best to have 
much the same conceptual primitives that we do.  This is much harder to 
argue, but it could be done.


As a result of this, evolution would not by any means have been making 
random choices of languages to implement.  It remains to be seen just 
how constrained the choices are, but there is at least a prima facie 
case to be made (the one I just sketched) that evolution was extremely 
constrained in her choices.


In the face of these ideas, your argument that evolution essentially 
made a random choice from a quasi-infinite space of possibilities needs 
a great deal more to back it up.  The grammar-from-conceptual-primitives 
idea is so plausible that the burden is on you to give a powerful reason 
for rejecting it.


Correct me if I am wrong, but I see no argument from you on this 
specific point (maybe there is one in your book  but in that case, 
why say without qualification, as if it was obvious, that evolution made 
a random selection?).


Unless you can destroy the grammar-from-conceptual-primitives idea, 
surely these arguments about hardness of learning have to be rejected?





Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-17 Thread James Ratcliff
The primitive terms aren't random, just some of the structure is.

Standard English uses Subject-Verb-Object order, while other languages use
Verb-Subject-Object or some other ordering. As long as the convention is known
and used roughly consistently, the actual choice could well be random there
and not matter,
but a 'concept' of a dog in any language is roughly the same, based on what we
share when we see, hear, smell, and interact with the concept.

Everything is based on these primitives of experiencing the world,
so I am using English, but modeling my knowledge-base terms on these
experiences.

James
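
To make the point concrete, here is a small sketch (my own illustration, with
invented names, in Python): the same concept-level proposition can be
linearized under different word-order conventions without changing what it
says.

proposition = ("see", "dog", "man")   # predicate, subject, object at the concept level

def linearize(prop, order):
    """Render the same proposition under a given word-order convention."""
    pred, subj, obj = prop
    slots = {"V": pred + "s", "S": "the " + subj, "O": "the " + obj}
    return " ".join(slots[x] for x in order)

print(linearize(proposition, "SVO"))  # English-like:  "the dog sees the man"
print(linearize(proposition, "VSO"))  # verb-first:    "sees the dog the man"
print(linearize(proposition, "SOV"))  # Japanese-like: "the dog the man sees"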


Richard Loosemore <[EMAIL PROTECTED]> wrote: Eric Baum wrote:
>>>> I don't think the proofs depend on any special assumptions about
>>>> the nature of learning.
>>>
>>> I beg to differ.  IIRC the sense of "learning" they require is
>>> induction over example sentences.  They exclude the use of real
>>> world knowledge, in spite of the fact that such knowledge (or at
>>> least 

>>> knowledge>) are posited to play a significant role in the learning
>>> of grammar in humans.  As such, these proofs say nothing whatsoever
>>> about the learning of NL grammars.
>>>
> 
> I fully agree the proofs don't take into account such stuff.
> And I believe such stuff is critical. Thus
> I've never claimed language learning was proved hard, I've just
> suggested evolution could have encrypted it.
> 
> The point I began with was, if there are lots of different locally
> optimal codings for thought, it may be hard to figure out which one is
> programmed into the mind, and thus language learning could be a hard additional
> problem to producing an AGI. The AGI has to understand what the word
> "foobar" means, and thus it has to have (or build) a code module meaning
> ``foobar" that it can invoke with this word. If it has a different set
> of modules, it might be sunk in communication.
> 
> My sense about grammars for natural language, is that there are lots
> of different equally valid grammars that could be used to communicate.
> For example, there are the grammars of English and of Swahili. One
> isn't better than the other. And there is a wide variety of other
> kinds of grammars that might be just as good, that aren't even used in
> natural language, because evolution chose one convention at random.
> Figuring out what that convention is, is hard, at least Linguists have
> tried hard to do it and failed.
> And this grammar stuff is pretty much on top of, the meanings of 
> the words. It serves to disambiguate, for example for error correction
> in understanding. But you could communicate pretty well in pidgin, 
> without it, so long as you understand the meanings of the words.
> 
> The grammar learning results (as well as the experience of linguists,
> who've tried very hard to build a model for natural grammar) 
> I think, are indicative that this problem is hard, and it seems that
> this problem is superimposed above the real world knowledge aspect.

Eric,

Thank you, I think you have focussed down on the exact nature of the claim.

My reply could start from a couple of different places in your above 
text (all equivalent), but the one that brings out the point best is this:

 >And there is a wide variety of other
 > kinds of grammars that might be just as good, that aren't even used in
 > natural language, because evolution chose one convention at random.
   ^^

This is precisely where I think the false assumption is buried.  When I 
say that grammar learning can be dependent on real world knowledge, I 
mean specifically that there are certain conceptual primitives involved 
in the basic design of a concept-learning system.  We all share these 
primitives, and [my claim is that] our language learning mechanisms 
start from those things.  Because both I and a native Swahili speaker 
use languages whose grammars are founded on common conceptual 
primitives, our grammars are more alike than we imagine.

Not only that, but if myself and the Swahili speaker suddenly met and 
tried to discover each other's languages, we would be able to do so, 
eventually, because our conceptual primitives are the same and our 
learning mechanisms are so similar.

Finally, I would argue that most cognitive systems, if they are to be 
successful in negotiating this same 3-D universe, would do best to have 
much the same conceptual primitives that we do.  This is much harder to 
argue, but it could be done.

As a result of this, evolution would not by any means have been making 
random choices of languages to implement.  It remains to be seen just 
how constrained the choices are, but there is at least a prima facie 
case to be made (the one I just sketched) that evolution was extremely 
constrained in her choices.

In the face of these ideas, your argument that evolution essentially 
made a random choice from a quasi-infinite space of possibilities needs 
a great deal more to back

Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Eric Baum

Richard> Eric Baum wrote:
> I don't think the proofs depend on any special assumptions about
> the nature of learning.
 
 I beg to differ.  IIRC the sense of "learning" they require is
 induction over example sentences.  They exclude the use of real
 world knowledge, in spite of the fact that such knowledge (or at
 least  ) are posited to play a significant role in the learning
 of grammar in humans.  As such, these proofs say nothing
 whatsoever about the learning of NL grammars.
 
>> I fully agree the proofs don't take into account such stuff.  And I
>> believe such stuff is critical. Thus I've never claimed language
>> learning was proved hard, I've just suggested evolution could have
>> encrypted it.
>> 
>> The point I began with was, if there are lots of different locally
>> optimal codings for thought, it may be hard to figure out which one
>> is programmed into the mind, and thus language learning could be a
>> hard additional problem to producing an AGI. The AGI has to
>> understand what the word "foobar" means, and thus it has to have
>> (or build) a code module meaning ``foobar" that it can invoke with
>> this word. If it has a different set of modules, it might be sunk
>> in communication.
>> 
>> My sense about grammars for natural language, is that there are
>> lots of different equally valid grammars that could be used to
>> communicate.  For example, there are the grammars of English and of
>> Swahili. One isn't better than the other. And there is a wide
>> variety of other kinds of grammars that might be just as good, that
>> aren't even used in natural language, because evolution chose one
>> convention at random.  Figuring out what that convention is, is
>> hard, at least Linguists have tried hard to do it and failed.  And
>> this grammar stuff is pretty much on top of, the meanings of the
>> words. It serves to disambiguate, for example for error correction
>> in understanding. But you could communicate pretty well in pidgin,
>> without it, so long as you understand the meanings of the words.
>> 
>> The grammar learning results (as well as the experience of
>> linguists, who've tried very hard to build a model for natural
>> grammar) I think, are indicative that this problem is hard, and it
>> seems that this problem is superimposed above the real world
>> knowledge aspect.

Richard> Eric,

Richard> Thank you, I think you have focussed down on the exact nature
Richard> of the claim.

Richard> My reply could start from a couple of different places in
Richard> your above text (all equivalent), but the one that brings out
Richard> the point best is this:

>> And there is a wide variety of other kinds of grammars that might
>> be just as good, that aren't even used in natural language, because
>> evolution chose one convention at random.
Richard>
Richard> ^^

Richard> This is precisely where I think the false assumption is
Richard> buried.  When I say that grammar learning can be dependent on
Richard> real world knowledge, I mean specifically that there are
Richard> certain conceptual primitives involved in the basic design of
Richard> a concept-learning system.  We all share these primitives,
Richard> and [my claim is that] our language learning mechanisms start
Richard> from those things.  Because both I and a native Swahili
Richard> speaker use languages whose grammars are founded on common
Richard> conceptual primitives, our grammars are more alike than we
Richard> imagine.

Richard> Not only that, but if myself and the Swahili speaker suddenly
Richard> met and tried to discover each other's languages, we would be
Richard> able to do so, eventually, because our conceptual primitives
Richard> are the same and our learning mechanisms are so similar.

Richard> Finally, I would argue that most cognitive systems, if they
Richard> are to be successful in negotiating this same 3-D universe,
Richard> would do best to have much the same conceptual primitives
Richard> that we do.  This is much harder to argue, but it could be
Richard> done.

Richard> As a result of this, evolution would not by any means have
Richard> been making random choices of languages to implement.  It
Richard> remains to be seen just how constrained the choices are, but
Richard> there is at least a prima facie case to be made (the one I
Richard> just sketched) that evolution was extremely constrained in
Richard> her choices.

Richard> In the face of these ideas, your argument that evolution
Richard> essentially made a random choice from a quasi-infinite space
Richard> of possibilities needs a great deal more to back it up.  The
Richard> grammar-from-conceptual-primitives idea is so plausible that
Richard> the burden is on you to give a powerful reason for rejecting
Richard> it.

Richard> Correct me if I am wrong, but I see no argument from you on
Richard> this specific point (maybe there is one in your book  but
Richard> in that case, why say without qualification, as if it wa

Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Richard Loosemore

[snip]...

Richard> This is precisely where I think the false assumption is
Richard> buried.  When I say that grammar learning can be dependent on
Richard> real world knowledge, I mean specifically that there are
Richard> certain conceptual primitives involved in the basic design of
Richard> a concept-learning system.  We all share these primitives,
Richard> and [my claim is that] our language learning mechanisms start
Richard> from those things.  Because both I and a native Swahili
Richard> speaker use languages whose grammars are founded on common
Richard> conceptual primitives, our grammars are more alike than we
Richard> imagine.

Richard> Not only that, but if myself and the Swahili speaker suddenly
Richard> met and tried to discover each other's languages, we would be
Richard> able to do so, eventually, because our conceptual primitives
Richard> are the same and our learning mechanisms are so similar.

Richard> Finally, I would argue that most cognitive systems, if they
Richard> are to be successful in negotiating this same 3-D universe,
Richard> would do best to have much the same conceptual primitives
Richard> that we do.  This is much harder to argue, but it could be
Richard> done.

Richard> As a result of this, evolution would not by any means have
Richard> been making random choices of languages to implement.  It
Richard> remains to be seen just how constrained the choices are, but
Richard> there is at least a prima facie case to be made (the one I
Richard> just sketched) that evolution was extremely constrained in
Richard> her choices.

Richard> In the face of these ideas, your argument that evolution
Richard> essentially made a random choice from a quasi-infinite space
Richard> of possibilities needs a great deal more to back it up.  The
Richard> grammar-from-conceptual-primitives idea is so plausible that
Richard> the burden is on you to give a powerful reason for rejecting
Richard> it.

Richard> Correct me if I am wrong, but I see no argument from you on
Richard> this specific point (maybe there is one in your book  but
Richard> in that case, why say without qualification, as if it was
Richard> obvious, that evolution made a random selection?).

Richard> Unless you can destroy the grammar-from-conceptual-primitives
Richard> idea, surely these arguments about hardness of learning have
Richard> to be rejected?


The argument, in very brief, is the following. Evolution found a
very compact program that does the right thing. (This is my
hypothesis, not claimed proved but lots of reasons to believe it
given in WIT?.) Finding such programs is NP-hard.


Hold it right there.  As far as I can see, you just asserted the result 
that is under dispute, right there at the beginning of your argument!


Finding a language-understanding mechanism is NP-hard?

That prompts two questions:

1) Making statements about NP-hardness requires a problem to be 
formalized in such a way as to do the math.  But in order to do that 
formalization you have to make assumptions, and the only assumptions I 
have ever seen reported in this context are close relatives of the ones 
that are under dispute (that grammar induction is context free, 
essentially), and if you have made those assumptions, you have assumed 
what you were trying to demonstrate!


In other words, if the only way we can get a handle on the way a grammar 
induction mechanism works is to make (outrageously implausible) 
assumptions about context-free nature of that mechanism [see my previous 
comments quote above], how can anyone get a handle on the even more 
complex process of designing a grammar induction mechanism (the design
process that evolution went through)?


I'll be blunt:  I simply do not believe that you have formalized the 
grammar-mechanism *design* process in such a way as to make a precise 
statement about its NP-hardness, I think you just asserted that it is 
NP-hard.


2)  My second question is:  what would it matter anyway, even if the 
design process were NP-hard, unless you specify the exact sense in which 
it is NP-hard?


The reason I say that, is that NP-hardness by itself tells us absolutely 
nothing.  NP-hardness tells us about how algorithms scale with changes 
of input size  so if you give me a succession of "different-sized" 
language understanding mechanisms, and if I were to know that building 
these LUMs was NP-hard, I would know something about how the building 
process would *scale* as the size of the LUM increased.  It would say 
nothing about the hardness of any given problem unless you specified the 
exact formula and the scaling variables involved.


I am sure you know what this is about, but just in case, I will
illustrate the point.


Suppose that the computational effort that evolution needs to build 
"different sized" language understanding mechanisms scales as:


2.5 * (N/7 + 1)^^6 planet-years

... where "different sized" is captured by the value N, which is the 
number of conceptual primitives used in

Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Richard Loosemore

Ben Goertzel wrote:

Richard,

I know it's peripheral to your main argument, but in this example ...


Suppose that the computational effort that evolution needs to build
"different sized" language understanding mechanisms scales as:

2.5 * (N/7 + 1)^^6 planet-years

... where "different sized" is captured by the value N, which is the
number of conceptual primitives used in the language understanding
mechanism, and a "planet-year" is one planet worth of human DNA randomly
working on the problem for one year.  (I am plucking this out of the
air, of course, but that doesn't matter.)

Here are the resource requirements for this polynomial resource function:

N   R

1   2.23E+000
7   6.40E+001
10  2.05E+002
50  2.92E+005
100 1.28E+007
300 7.12E+009

(N = Number of conceptual primitives)
(R = resource requirement in planet-years)

I am assuming that the appropriate measure of size of problem is number
of conceptual primitives that are involved in the language understanding
mechanism (a measure picked at random, and as far as I can see, as
likely a measure as any, but if you think something else should be the
N, be my guest).

If there were 300 conceptual primitives in the human LUM, resource
requirement would be 7 billion planet-years.  That would be bad.

But if there are only 7 conceptual primitives, it would take 64 years.
Pathetically small and of no consequence.

The function is polynomial, so in a sense you could say this is an
NP-hard problem.
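
For concreteness, here is a minimal Python sketch (mine, not from the original
post) that reproduces this illustrative resource function. The 2.5 prefactor
follows the formula as stated; the tabulated values above appear to correspond
to (N/7 + 1)^6 without that prefactor, which does not affect the scaling point.

def planet_years(n, prefactor=2.5):
    """Hypothetical resource requirement (in planet-years) for building a
    language understanding mechanism with n conceptual primitives."""
    return prefactor * (n / 7 + 1) ** 6

for n in (1, 7, 10, 50, 100, 300):
    print("N=%3d  R=%.2E planet-years" % (n, planet_years(n)))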


I don't think you're using the term "NP-hard" correctly.

http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP

"
The class P consists of all those decision problems that can be solved
on a deterministic sequential machine in an amount of time that is
polynomial in the size of the input; the class NP consists of all
those decision problems whose positive solutions can be **verified**
in polynomial time given the right information.
"

[This page also reviews, and agrees with, many of your complaints
regarding the intuitive interpretation of P as easy and NP as hard]

http://en.wikipedia.org/wiki/NP-hard

"
In computational complexity theory, NP-hard (Non-deterministic
Polynomial-time hard) refers to the class of decision problems H such
that for every decision problem L in NP there exists a polynomial-time
many-one reduction to H, written L ≤p H. If H itself is in NP, then H is
called NP-complete.
"


I'd certainly welcome clarification, and I may have gotten this wrong... 
but I'm not quite sure where you are directing my attention here.


Are you targeting the fact that NP-Hard is defined with respect to 
decision problems, or to the reduction aspect?


My understanding of NP-hard is that it does strictly only apply to 
decision problems ... but what I was doing was trying to interpret the 
loose sense in which Eric himself was using NP-Hard, so if I have 
stretched the definition a little, I would claim I was inheriting 
something that was already stretched.


But maybe that was not what you meant.  I stand ready to be corrected, 
if it turns out I have goofed.




Richard Loosemore.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Eric Baum

>> 
>> 
>> The argument, in very brief, is the following. Evolution found a
>> very compact program that does the right thing. (This is my
>> hypothesis, not claimed proved but lots of reasons to believe it
>> given in WIT?.) Finding such programs is NP-hard.

Richard> Hold it right there.  As far as I can see, you just asserted
Richard> the result that is under dispute, right there at the
Richard> beginning of your argument!

First, above I was discussing finding an understanding system,
not necessarily a language understanding system-- say a monkey,
not a person. Then I went on to talk about additional problems
coming when you want language.

Richard> Finding a language-understanding mechanism is NP-hard?

Richard> That prompts two questions:

Richard> 1) Making statements about NP-hardness requires a problem to
Richard> be formalized in such a way as to do the math.  But in order
Richard> to do that formalization you have to make assumptions, and
Richard> the only assumptions I have ever seen reported in this
Richard> context are close relatives of the ones that are under
Richard> dispute (that grammar induction is context free,
Richard> essentially), and if you have made those assumptions, you
Richard> have assumed what you were trying to demonstrate!

At this point, I suggest again you read What is Thought?
In emails, I am cutting a lot of corners and not giving caveats
and whatnot, to give 3 paragraph summaries.

But if you don't want to take the time, I'll cut to the chase.
I am not claiming to have proved that building a human is NP-hard.

I am suggesting a theory of thought, for which I think there is 
a lot of evidence and basis. Computational learning theory has more
or less established that generalization (never mind thought) follows
from finding constrained-enough hypothesis. And in every case that has
been studied of sufficient richness to be really interesting, it turns
out that the problems you have to solve are NP-hard. So naturally, in
my extrapolation to thought, I expect that the problems you will have
to solve here are NP-hard as well. This isn't exactly rigorous,
but it's the way to bet. Your protests are mostly wishful thinking.

It's also true that proving something is NP-hard doesn't prove it's 
insoluble, or even hard in the average case, or hard in any particular
case, or any of that. Hell, it might even be true that P=NP.
But there are a lot of strong reasons to think that all of this is
also wishful thinking, and that NP-hard problems are really hard.
I'll say again, if you don't believe that, you shouldn't be using 
cryptography, because cryptography as practiced not only relies on
P!=NP, but much much stronger assumptions, like it's very hard to
factor *random* products of primes, and factoring isn't even NP-hard.

It's clear from what you are writing here that you are not familiar
with computational learning theory, or computational complexity
theory. I strongly suggest you read WIT?. I think you will learn 
a lot. Chapter 4 is a review of the results in computational learning
theory that is meant to be accessible. Chapter 11 is a review of 
Complexity Theory, also meant to be accessible. The book is, I think,
written from a point of view that you will find amenable-- I am very
much focussed on semantics, I see the world a lot like you do,
except I am bringing the conclusions of computational learning theory,
complexity theory, and linguistics to bear. I am not asserting these
conclusions-- I question each one and discuss data pro and con.
Where they seem very well founded and relevant, I discuss their 
implications. I also am not arguing rigorously. I am extracting
from these disciplines the intuition behind rigorous results,
and extending it to give what I argue is a compelling view of 
what cognition is.



Richard> In other words, if the only way we can get a handle on the
Richard> way a grammar induction mechanism works is to make
Richard> (outrageously implausible) assumptions about context-free
Richard> nature of that mechanism [see my previous comments quote
Richard> above], how can anyone get a handle on the even more complex
Richard> process of desiging a grammar induction mechanism (the design
Richard> prcess that evolution went through)?

Richard> I'll be blunt: I simply do not believe that you have
Richard> formalized the grammar-mechanism *design* process in such a
Richard> way as to make a precise statement about its NP-hardness, I
Richard> think you just asserted that it is NP-hard.

Richard> 2) My second question is: what would it matter anyway, even
Richard> if the design process were NP-hard, unless you specify the
Richard> exact sense in which it is NP-hard?

Richard> The reason I say that, is that NP-hardness by itself tells us
Richard> absolutely nothing.  NP-hardness tells us about how
Richard> algorithms scale with changes of input size  so if you
Richard> give me a succession of "different-sized" language
Richard> understanding mechanisms, and if I 

Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Pei Wang

On 11/24/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


I have seen this kind of computational complexity talk so often, and it
is just (if you'll forgive an expression of frustration here) just
driving me nuts.  It is ludicrous:  these concepts are being bandied
about as if they make the argument wonderfully rigorous and high-quality
 but they mean nothing without some explicit specification of
assumptions.


I have a similar feeling.

In many cases, the notion of "computational complexity" has been
misused when applied to thinking processes.

By definition, this notion is about an algorithm applied to a problem
class, and maps the size of a problem instance to the time the
algorithm spends in the solution.

When talking about a human problem-solving process, such as "designing
a NLP mechanism for an AGI", the notion cannot be used in its exact
sense, because:

(1) We are not trying to solve a "problem class", but a "problem instance".

Even if someone successfully designed a NLP interface for an AGI, it
doesn't mean that he/she has an algorithm that can design such
interfaces for all kinds of AGIs.

When problems are solved in a case-by-case manner using various ad hoc
methods, these solutions cannot be analyzed as following the same
algorithm, with a fixed complexity function.

(2) Human thinking processes usually do not follow problem-specific algorithms.

As I argued before, I don't have an algorithm for playing chess. You
cannot say I have one but simply don't know it myself, since at different
times I move differently in the same position. If you say that my
algorithm is "time-dependent" or "context-sensitive", then it is
effectively the same as saying I have no chess-specific algorithm.
Anyway, the time spent on the solution is not a fixed function of the
"problem instance" alone.

Furthermore, thinking processes are usually open-ended. If I add an NLP
interface for NARS in the future, it will surely be an incrementally
improving process, so it will be hard, if possible at all, to say how much
time it takes, since it is probably never finally finished.

In summary, like many other math notions, to use "computational
complexity" outside math and computer science doesn't always make
sense. Of course, the notion can be used metaphorically, or on a
"formalization" of the original problem (which turns the problem into
a computation), but such a usage has little "rigorous and
high-quality" nature with respect to the real problem.

The above conclusion doesn't mean that these problems cannot be solved
in AI, but that the traditional theory of computation is largely
irrelevant in designing and analyzing their solutions.

For detailed arguments and explanations, read
http://nars.wang.googlepages.com/wang.computation.pdf and
http://www.springer.com/west/home/computer/artificial?SGWID=4-147-22-173659733-0

Pei

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Richard Loosemore

Ben Goertzel wrote:

It's just that "problem X is NP-hard" means roughly "Any problem Y in
NP is polynomial-time reducible to problem X", and your example did
not seem to exemplify this...

All your example seemed to exemplify was a problem that was solvable
in polynomial time (class P, not class NP-hard)

However, this is irrelevant to your main conceptual point, which as I
understood it was that theorems regarding the scaling behavior of the
worst-case complexity of problems as problem size n goes to infinity
are pragmatically irrelevant...

[I'm not sure I fully agree with your conceptual point, but that's
another issue.  I used to agree but when I encountered Immerman's
descriptive complexity theory, I started wavering.  Immerman showed
e.g. that

-- P, the class of problems solvable in polynomial time, corresponds
to languages recognizable by first-order logic plus a recursion
operator

-- NP, the class of problems whose solutions are checkable in
polynomial time, corresponds to languages recognized by existential
second order logic (second order logic with second-order existential
but not universal quantification)

This is interesting and suggests that these complexity classes could
possibly have some fundamental cognitive meaning, even though such a
meaning is not obvious from their standard definitions...]


Your point is well taken:  I did fudge the issue by giving an example 
that was a specific, polynomial instance and made the mistake of calling 
it NP-Hard.  My goal (as you correctly point out) was to try to make it 
clear that NP-Hardness statements are not about how hard a given 
language mechanism is to build, but about the scaling behavior.


I don't really want to get too sidetracked, but even if Immerman's 
analysis were correct, would this make a difference to the way that Eric 
was using NP-Hard, though?  In other words, this would still not 
undermine my point that a statement like "the building of a language 
/learning mechanism by evolution is NP-Hard" does not actually tell us 
anything about how difficult the particular process that led to human 
language really was?  It sounds like Immerman is putting the 
significance of complexity classes on firmer ground, but not changing 
the nature of what they are saying.




Richard Loosemore



-- Ben



On 11/24/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:
> Richard,
>
> I know it's peripheral to your main argument, but in this example ...
>
>> Suppose that the computational effort that evolution needs to build
>> "different sized" language understanding mechanisms scales as:
>>
>> 2.5 * (N/7 + 1)^^6 planet-years
>>
>> ... where "different sized" is captured by the value N, which is the
>> number of conceptual primitives used in the language understanding
>> mechanism, and a "planet-year" is one planet worth of human DNA 
randomly

>> working on the problem for one year.  (I am plucking this out of the
>> air, of course, but that doesn't matter.)
>>
>> Here are the resource requirements for this polynomial resource 
function:

>>
>> N   R
>>
>> 1   2.23E+000
>> 7   6.40E+001
>> 10  2.05E+002
>> 50  2.92E+005
>> 100 1.28E+007
>> 300 7.12E+009
>>
>> (N = Number of conceptual primitives)
>> (R = resource requirement in planet-years)
>>
>> I am assuming that the appropriate measure of size of problem is 
number
>> of conceptual primitives that are involved in the language 
understanding

>> mechanism (a measure picked at random, and as far as I can see, as
>> likely a measure as any, but if you think something else should be the
>> N, be my guest).
>>
>> If there were 300 conceptual primitives in the human LUM, resource
>> requirement would be 7 billion planet-years.  That would be bad.
>>
>> But if there are only 7 conceptual primitives, it would take 64 years.
>> Pathetically small and of no consequence.
>>
>> The function is polynomial, so in a sense you could say this is an
>> NP-hard problem.
>
> I don't think you're using the term "NP-hard" correctly.
>
> http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP
>
> "
> The class P consists of all those decision problems that can be solved
> on a deterministic sequential machine in an amount of time that is
> polynomial in the size of the input; the class NP consists of all
> those decision problems whose positive solutions can be **verified**
> in polynomial time given the right information.
> "
>
> [This page also reviews, and agrees with, many of your complaints
> regarding the intuitive interpretation of P as easy and NP as hard]
>
> http://en.wikipedia.org/wiki/NP-hard
>
> "
> In computational complexity theory, NP-hard (Non-deterministic
> Polynomial-time hard) refers to the class of decision problems H such
> that for every decision problem L in NP there exists a polynomial-time
> many-one reduction to H, written L ≤p H. If H itself is in NP, then H is
> called NP-

Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Richard Loosemore

Eric Baum wrote:


The argument, in very brief, is the following. Evolution found a
very compact program that does the right thing. (This is my
hypothesis, not claimed proved but lots of reasons to believe it
given in WIT?.) Finding such programs is NP-hard.


Richard> Hold it right there.  As far as I can see, you just asserted
Richard> the result that is under dispute, right there at the
Richard> beginning of your argument!

First, above I was discussing finding an understanding system,
not necessarily a language understanding system-- say a monkey,
not a person. Then I went on to talk about additional problems
coming when you want language.

Richard> Finding a language-understanding mechanism is NP-hard?

Richard> That prompts two questions:

Richard> 1) Making statements about NP-hardness requires a problem to
Richard> be formalized in such a way as to do the math.  But in order
Richard> to do that formalization you have to make assumptions, and
Richard> the only assumptions I have ever seen reported in this
Richard> context are close relatives of the ones that are under
Richard> dispute (that grammar induction is context free,
Richard> essentially), and if you have made those assumptions, you
Richard> have assumed what you were trying to demonstrate!

At this point, I suggest again you read What is Thought?
In emails, I am skipping a lot of corners and not giving caveats
and whatnot, to give 3 paragraph summaries.

But if you don't want to take the time, I'll cut to the chase.
I am not claiming to have proved that building a human is NP-hard.


Eric,

I am having serious difficulty here:  I made a very specific point, 
originally, and you made a reply to that - but your replies are 
wandering off into other topics.   For example:  I neither thought nor 
implied that your claim was "to have proved that building a human is 
NP-hard," so I am puzzled why you should say this.



I am suggesting a theory of thought, for which I think there is 
a lot of evidence and basis. Computational learning theory has more

or less established that generalization (never mind thought) follows
from finding constrained-enough hypothesis. And in every case that has
been studied of sufficient richness to be really interesting, it turns
out that the problems you have to solve are NP-hard. So naturally, in
my extrapolation to thought, I expect that the problems you will have
to solve here are NP-hard as well. This isn't exactly rigorous,
but it's the way to bet. Your protests are mostly wishful thinking.

It's also true that proving something is NP-hard doesn't prove it's 
insoluble, or even hard in the average case, or hard in any particular

case, or any of that. Hell, it might even be true that P=NP.
But there are a lot of strong reasons to think that all of this is
also wishful thinking, and that NP-hard problems are really hard.
I'll say again, if you don't believe that, you shouldn't be using 
cryptography, because cryptography as practiced not only relies on

P!=NP, but much much stronger assumptions, like it's very hard to
factor *random* products of primes, and factoring isn't even NP-hard.

It's clear from what you are writing here that you are not familiar
with computational learning theory, or computational complexity
theory. I strongly suggest you read WIT?. I think you will learn 
a lot. 


[Please don't be tempted to make comments about my general level of 
expertise in this or that field.  You are mistaken in this, but I am not 
going to argue about it].


My understanding of these areas is reasonably deep, though not complete, 
and I have made specific points that (if you will read Pei Wang's 
comment) are being reinforced by others more expert than myself.


As for COLT, you have said something that I completely disagree with: 
"Computational learning theory has more or less established that 
generalization (never mind thought) follows from finding 
constrained-enough hypothesis."  That would be true if you accepted the 
narrow characterization of "generalization" that is used in COLT, but it 
is most emphatically not true if (as I do) you consider this narrow 
reading to be only a trivial version of the mechanism to be found in the 
human cognitive system.


This is an absolutely crucial point.  The COLT people have decided to 
use the word "generalization" to describe a formal process that THEY 
define, and the reason they define it their way is that their narrow 
definition makes it amenable to mathematical proofs (e.g. proofs that 
THEIR version is an NP-Hard problem).  Frankly, I don't care if they can 
prove that their version is NP-Hard, because nothing follows from it.


Other people do not take that narrow view, but instead consider 
"generalization" to be a much more complicated process (probably a 
cluster of several processes) defined on a system that is not nearly as 
simple in structure as the systems that COLT people use ... and for 
all these reasons, there is no way to get a handle on this lar

Re: [agi] Natural versus formal AI interface languages

2006-11-28 Thread Philip Goetz

On 11/9/06, Eric Baum <[EMAIL PROTECTED]> wrote:

It is true that much modern encryption is based on simple algorithms.
However, some crypto-experts would advise more primitive approaches.
RSA is not known to be hard, even if P!=NP, someone may find a
number-theoretic trick tomorrow that factors. (Or maybe they already
have it, and choose not to publish).
If you use a mess machine like a modern version of enigma, that is
much less likely to get broken, even though you may not have the
theoretical results.


DES is essentially a big messy bit-scrambler; like Enigma, but with
bits instead of letters.  The relative security of the two approaches
is debated by cryptologists.  On one hand, RSA could be broken by a
computational trick (or a quantum computer).  On the other hand, DES
is so messy that it's very hard to be sure there isn't a foothold for
an attack, or even a deliberate backdoor, in it.
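
To make the RSA point concrete, here is a toy Python sketch (my own, with
textbook-sized numbers; real moduli are hundreds of digits long): anyone who
can factor the modulus can recompute the private exponent, which is why a
fast factoring trick would break RSA regardless of whether P equals NP.

def trial_factor(n):
    """Find a nontrivial odd factor of n by brute force (hopeless at real key sizes)."""
    f = 3
    while f * f <= n:
        if n % f == 0:
            return f
        f += 2
    raise ValueError("no small odd factor found")

n, e = 3233, 17            # toy public key: n = 61 * 53
p = trial_factor(n)        # the step that is easy only for toy numbers
q = n // p
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)        # recovered private exponent (Python 3.8+)

m = 65                     # a message
c = pow(m, e, n)           # encrypt with the public key
assert pow(c, d, n) == m   # decrypting with the recovered key gives the message back
print("p=%d q=%d d=%d decrypted=%d" % (p, q, d, pow(c, d, n)))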

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

For comparison, here are some versions of

"I saw the man with the telescope"

in Lojban++ ...

[ http://www.goertzel.org/papers/lojbanplusplus.pdf ]

1)
mi pu see le man sepi'o le telescope
"I saw the man, using the telescope as a tool"

2)
mi pu see le man pe le telescope
"I saw the man who was with the telescope, and not some other man"

3)
mi pu see le man ne le telescope
"I saw the man, and he happened to be with the telescope"

4)
mi pu saw le man sepi'o le telescope
"I carried out a sawing action on the man, using the telescope as a tool"

Each of these can be very simply and unambiguously translated into
predicate logic, using the Lojban++ cmavo ("function words") as
semantic primitives.

Some notes on Lojban++ as used in these very simple examples:

-- "pu" is an article indicating past tense.
-- "mi" means me/I
-- sepi'o means basically "the following item is used as a tool in the
predicate under discussion"
-- "le" is sort of like "the"
-- "pe" is association
-- "ne" is incidental association
-- in example 4, the parser must figure out that the action rather
than object meaning of "saw" is intended because two arguments are
provided (mi, and "le man")

Anyway, I consider the creation of a language that is suitable for
human-computer communication about everyday or scientific phenomena,
and that is minimally ambiguous syntactically and semantically, to be
a solved problem.  It was already basically solved by Lojban, but
Lojban suffers from a shortage-of-vocabulary issue which Lojban++
remedies.

There is a need for someone to write a Lojban++ parser and semantic
mapper, but this is a straightforward though definitely not trivial
task.

As discussed before, I feel the use of Lojban++ may be valuable in
order to help with the early stages of teaching an AGI.  I disagree
that "if an AGI system is smart, it can just learn English."  Human
babies take a long time to learn English or other natural languages,
and they have the benefit of some as yet unknown amount of inbuilt
wiring ("inductive bias") to help them.  There is nothing wrong with
taking explicit steps to make it easier to transform a powerful
learning system into an intelligent, communicative mind...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

Hi,


Which brings up a question -- is it better to use a language based on
term or predicate logic, or one that imitates (is isomorphic to) natural
languages?  A formal language imitating a natural language would have the
same kinds of structures that almost all natural languages have:  nouns,
verbs, adjectives, prepositions, etc.  There must be a reason natural
languages almost always follow the pattern of something carrying out some
action, in some way, and if transitive, to or on something else.  On the
other hand, a logical language allows direct  translation into formal logic,
which can be used to derive all sorts of implications (not sure of the
terminology here) mechanically.


I think the Lojban strategy -- of parsing into formal logic -- is the
best approach, because the NL categories that you mention are wrapped
up with all sorts of irritating semantic ambiguities...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

Eliezer wrote:

"Natural" language isn't.  Humans have one specific idiosyncratic
built-in grammar, and we might have serious trouble learning to
communicate in anything else - especially if the language was being used
by a mind quite unlike our own.


Well, some humans have learned to communicate in Lojban quite
effectively.  It's slow and sometimes painful and sometimes
delightful, but definitely possible, and there is no NL syntax
involved...


Even a "programming language" is still
something that humans made, and how many people do you know who can
*seriously*, not-jokingly, think in syntactical C++ the way they can
think in English?


One (and it's not me)

ben g

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-10-31 Thread Ben Goertzel

I know people can learn Lojban, just like they can learn CycL or LISP.  Let's 
not repeat these mistakes.  This is not training, it is programming a knowledge 
base.  This is narrow AI.

-- Matt Mahoney, [EMAIL PROTECTED]


You seem not to understand the purpose of using Lojban to help teach an AI.

Of course it is not a substitute for teaching an AI a natural language.

It is simply a tool to help beef up the understanding of certain types
of AI systems to the point where they are ready to robustly understand
natural language ...  Just because humans don't learn this way doesn't
mean some kinds of AI's shouldn't.  And, just because Cyc is
associated with a poor theory of AI education, doesn't mean that all
logic-based AI systems are.  (Similarly, just because backprop NN's
are associated with a poor theory of AI education, doesn't mean all NN
systems necessarily are.)

Here is how I intend to use Lojban++ in teaching Novamente.  When
Novamente is controlling a humanoid agent in the AGISim simulation
world, the human teacher talks to it about what it is doing.  I would
like the human teacher to talk to it in both Lojban++ and English, at
the same time.  According to my understanding of Novamente's learning
and reasoning methods, this will be the optimal way of getting the
system to understand English.  At once, the system will get a
perceptual-motor grounding for the English sentences, plus an
understanding of the logical meaning of the sentences.  I can think of
no better way to help a system understand English.  Yes, this is not
the way humans do it. But so what?  Novamente does not have a human
brain, it has a different sort of infrastructure with different
strengths and weaknesses.

If it results in general intelligence, it is not "narrow AI".   The
goal of this teaching methodology is to give Novamente a general
conceptual understanding, using which it can flexibly generalize its
understanding to progressively more and more complex situations.

This is not what we are doing yet, mainly because we lack a Lojban++
parser still (just a matter of a few man-months of effort, but we have
other priorities), but it is in the queue and we will get there in
time, as resources permit...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Ben Goertzel

Yes, teaching an AI in Esperanto would make more sense than teaching
it in English ... but, would not serve the same purpose as teaching it
in Lojban++ and a natural language in parallel...

In fact, an ideal educational programme would probably be to use, in parallel

-- an Esperanto-based, rather than English-based, version of  Lojban++
-- Esperanto

However, I hasten to emphasize that this whole discussion is (IMO)
largely peripheral to AGI.

The main point is to get the learning algorithms and knowledge
representation mechanisms right.  (Or if the learning algorithm learns
its own KR's, that's fine too...).  Once one has what seems like a
workable learning/representation framework, THEN one starts talking
about the right educational programme.  Discussing education in the
absence of an understanding of internal learning algorithms is perhaps
confusing...

Before developing Novamente in detail, I would not have liked the idea
of using Lojban++ to help teach an AGI, for much the same reasons that
you are now complaining.

But now, given the specifics of the Novamente system, it turns out
that this approach may actually make teaching the system considerably
easier -- and make the system more rapidly approach the point where it
can rapidly learn natural language on its own.

To use Eric Baum's language, it may be that by interacting with the
system in Lojban++, we human teachers can supply the baby Novamente
with much of the "inductive bias" that humans are born with, and that
helps us humans to learn natural languages so relatively easily

I guess that's a good way to put it.  Not that learning Lojban++ is a
substitute for learning English, rather that the knowledge gained via
interaction in Lojban++ may be a substitute for human babies'
language-focused and spacetime-focused inductive bias.

Of course, Lojban++ can be used in this way **only** with AGI systems
that combine
-- a robust reinforcement learning capability
-- an explicitly logic-based knowledge representation

But Novamente does combine these two factors.

I don't expect to convince you that this approach is a good one, but
perhaps I have made my motivations clearer, at any rate.  I am
appreciating this conversation, as it is pushing me to verbally
articulate my views more clearly than I had done before.

-- Ben G



On 11/2/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:

- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, October 31, 2006 9:26:15 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages

>Here is how I intend to use Lojban++ in teaching Novamente.  When
>Novamente is controlling a humanoid agent in the AGISim simulation
>world, the human teacher talks to it about what it is doing.  I would
>like the human teacher to talk to it in both Lojban++ and English, at
>the same time.  According to my understanding of Novamente's learning
>and reasoning methods, this will be the optimal way of getting the
>system to understand English.  At once, the system will get a
>perceptual-motor grounding for the English sentences, plus an
>understanding of the logical meaning of the sentences.  I can think of
>no better way to help a system understand English.  Yes, this is not
>the way humans do it. But so what?  Novamente does not have a human
>brain, it has a different sort of infrastructure with different
>strengths and weaknesses.

What about using "baby English" instead of an artificial language?  By this I 
mean simple English at the level of a 2 or 3 year old child.  Baby English has many of 
the properties that make artificial languages desirable, such as a small vocabulary, 
simple syntax and lack of ambiguity.  Adult English is ambiguous because adults can use 
vast knowledge and context to resolve ambiguity in complex sentences.  Children lack 
these abilities.

I don't believe it is possible to map between natural and structured language 
without solving the natural language modeling problem first.  I don't believe 
that having structured knowledge or a structured language available makes the 
problem any easier.  It is just something else to learn.  Humans learn natural 
language without having to learn structured languages, grammar rules, knowledge 
representation, etc.  I realize that Novamente is different from the human 
brain.  My argument is based on the structure of natural language, which is 
vastly different from artificial languages used for knowledge representation.  
To wit:

- Artificial languages are designed to be processed (translated or compiled) in 
the order: lexical tokenization, syntactic parsing, semantic extraction.  This 
does not work for natural language.  The correct order is the order in which 
children learn: lexical, semantics, syntax.  Thus we have successful language 
models that extract semantics without syntax 

Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Ben Goertzel

Hi,


I think an interesting goal would be to teach an AGI to write software.  If I 
understand your explanation, this is the same problem.


Yeah, it's the same problem.

It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a programming language
syntax based on Lojban, using his Speagram program interpreter
framework.

The nice thing about Lojban is that it does have the flexibility to be
used as a pragmatic programming language (tho no one has done this
yet), **or** to be used to describe everyday situations in the manner
of a natural language

> How could such an AGI be built?   What would be its architecture?
What learning algorithm?  What training data?  What computational
cost?

Well, I think Novamente is one architecture that can achieve this
But I do not know what the computational cost will be, as Novamente is
too complicated to support detailed theoretical calculations of its
computational cost in realistic situations.  I have my estimates of
the computational cost, but validating them will have to wait till the
project progresses further...

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Lukasz Kaiser

Hi.


It's a very small step from Lojban to a programming language, and in
fact Luke Kaiser and I have talked about making a programming language
syntax based on Lojban, using his Speagram program interpreter
framework.

The nice thing about Lojban is that it does have the flexibility to be
used as a pragmatic programming language (tho no one has done this yet),
**or** to be used to describe everyday situations in the manner
of a natural language


Yes, in my opinion this **OR** should really be underlined. And I think
this is a very big problem -- you can talk about programming *or* talk
in everyday manner, but hardly both at the same time.

I could recently feel the pain as a friend of mine worked on using
Speagram in Wengo (an open source VoIP client) for language-based
control of different commands and actions. The problem is that, even
if you manage to get through parsing, context, disambiguation, add
some meaningful interaction etc., you end up with a set of commands
that is very hard for a non-programmer to extend. So basically you can
activate a few pre-programmed commands in a quite-natural language
*and* you can add new commands in a natural-looking programming
language. But, even though this is internally the same language, there
is no way to say that you can program in a way that feels natural.

It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
representation that is underlying it all. Put simply, first-order logic and
the usual programming styles are different from everyday communication.
Switching to Lojban might remove the remaining syntax errors, but
I don't see how it can help with this bigger problem. Ben, do you think
using Lojban can really substantially help or are you counting on Agi-Sim
world and Novamente architecture in general, and want to use Lojban
just to simplify language analysis?

- lk

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Ben Goertzel

It does not help that words in SHRDLU are grounded in an artificial world.  Its 
failure to scale hints that approaches such as AGI-Sim will have similar 
problems.  You cannot simulate complexity.


I of course don't think that SHRDLU vs. AGISim is a fair comparison.

Among other counterarguments: the idea is that AGI systems trained in
AGISim may then be able to use their learning to operate in the
physical world, controlling robots similar to their AGISim simulated
robots...

With this in mind, we have plans to eventually integrate the Pyro
robotics control toolkit with AGISim (likely extending Pyro in the
process), so that the same code can be used to control physical robots
as AGISim simulated robots...

Now, you can argue that this just won't work, because (you might say)
there is nothing in common between learning
perception-cognition-action in a simulated world like AGISim, and
learning the same thing in the physical world.  You might argue that
the relative lack of richness in perceptual stimuli and motoric
control makes a tremendous qualitative difference.  OK, I admit I
cannot rigorously prove this sort of argument false  Nor can you
prove it true.   As with anything else in AGI, we must to some extent
go on intuition until someone develops a real mathematical theory of
pragmatic AGI, or someone finally creates a working AGI based on their
intuition.

But at least, you must admit there is a plausible argument to be made
that effective AGI operation in a somewhat realistic simulation world
can transfer to similar operation in the physical world.  We are not
talking about SHRDLU here.  We are talking about a system that
perceives simulated visual stimuli and has to recognize objects as
patterns in these stimuli; that acts in the world by sending movement
commands to joints; etc.  Problems posed to the system need to be
recognized by the system in terms of these sensory and motoric
primitives, analogously to what happens with a system embedded in the
physical world via a physical body.


In a similar way, SHRDLU performed well in its artificial, simple world.  But 
how would you measure its performance in a real world?


I believe I have addressed this by noting that AGI performance is
intended to be portable from AGISim into the physical world.

Of course, with any simulated environment there is always the risk of
creating an AGI or AI system that is overfit to that simulated
environment.  However, being aware of that risk, I don't feel it is going
to be that difficult to avoid.


If we are going to study AGI, we need a way to perform tests and measure 
results.  It is not just that we need to know what works and what doesn't.  The 
systems we build will be too complex to know what we have built.  How would you 
measure them?  The Turing test is the most widely accepted, but it is somewhat 
subjective and not really appropriate for an AGI with sensorimotor I/O.  I have 
proposed text compression.  It gives hard numbers, but it seems limited to 
measuring ungrounded language models.  What else would you use?  Suppose that 
in 10 years, NARS, Novamente, Cyc, and maybe several other
systems all claim to have solved the AGI problem.  How would you test
their claims?  How would you decide the winner?


I do not agree that having precise quantitative measures of system
intelligence is critical, or even important to AGI.

And, deciding which AGI is smarter is not important either -- no more
important than deciding whether Ben, Matt or Pei is smarter.  Who
cares?  Different systems may have different strengths and weaknesses,
so that "who is smarter" often explicitly comes down to a subjective
value judgment...  We may ask who is likely to be better at carrying
out some particular problem-solving task; we may say that A is
generically smarter than B if A is better than B at carrying out
*every* problem-solving task (Pareto optimality, sorta), but this is
not a very useful notion in practice.

Once we have an AGI that can hold an English conversation that appears
to trained human scientists to be intelligent and creative, and that
makes original discoveries in science or mathematics, then the
question of whether it is "intelligent" or not will cease to be very
interesting.  That is our mid-term goal with Novamente.  I don't see
why quantitative measures of intelligence are necessary or even useful
along the path to getting there.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Matt Mahoney
- Original Message 
From: Ben Goertzel <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Friday, November 3, 2006 9:28:24 PM
Subject: Re: Re: [agi] Natural versus formal AI interface languages

>I do not agree that having precise quantitative measures of system
>intelligence is critical, or even important to AGI.

The reason I ask is not just to compare different systems (which you can't 
really do if they serve different purposes), but also to measure progress.  
When I experiment with language models, I often try many variations, tune 
parameters, etc., so I need a quick test to see if what I did worked.  I can do 
that very quickly using text compression.  I can test tens or hundreds of 
slightly different models per day and make very precise measurements.  Of 
course it is also useful that I can tell if my model works better or worse than 
somebody else's model that uses a completely different method.
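
To give a concrete sense of the workflow, here is a minimal sketch of such a
test loop, using a toy order-2 (bigram) character model with add-one
smoothing; the benchmark file name is just a placeholder, not the actual
setup.  The number it prints is the size, in bits per character, that an
ideal arithmetic coder driven by the model would achieve:

import math
from collections import defaultdict

def bits_per_char(test: str, train: str) -> float:
    """Cross-entropy of a bigram character model, in bits per character."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for a, b in zip(train, train[1:]):                 # train the model
        counts[a][b] += 1
        totals[a] += 1
    vocab = len(set(train)) or 1
    bits = 0.0
    for a, b in zip(test, test[1:]):                   # score held-out text
        p = (counts[a][b] + 1) / (totals[a] + vocab)   # add-one smoothing
        bits -= math.log2(p)
    return bits / max(len(test) - 1, 1)

corpus = open("benchmark.txt", encoding="utf-8").read()   # placeholder file
split = int(0.9 * len(corpus))
print(f"{bits_per_char(corpus[split:], corpus[:split]):.3f} bits/char")

Swap in a different model, rerun, and you have a number to compare in
seconds; a lower number means the model assigns more probability to the data.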

There does not seem to be much cooperation on this list toward the goal of 
achieving AGI.  Everyone has their own ideas.  That's OK.  The purpose of 
having a metric is not to make it a race, but to help us communicate what works 
and what doesn't so we can work together while still pursuing our own ideas.  
Papers on language modeling do this by comparing different algorithms and 
reporting the results by word perplexity.  So you don't have to re-experiment 
with various n-gram backoff models, LSA, statistical parsers, etc.  You already 
know a lot about what works and what doesn't.
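
For concreteness, word perplexity and compressed size are two views of the
same quantity, the model's cross-entropy on held-out text (standard
definitions, not tied to any particular paper):

  H = -\frac{1}{N} \sum_{i=1}^{N} \log_2 P(w_i \mid w_1 \ldots w_{i-1}),
  \qquad PP = 2^{H}

so a model that compresses the test text to H bits per word has perplexity
2^H, and ranking models by perplexity is the same as ranking them by
compressed size.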

Another reason for measurements is that it makes your goals concrete.  How do 
you define "general intelligence"?  Turing gave us a well defined goal, but 
there are some shortcomings.  The Turing test is subjective, time consuming, 
isn't appropriate for robotics, and really isn't a good goal if it means 
deliberately degrading performance in order to appear human.  So I am looking 
for "better" tests.  I don't believe the approach of "let's just build it and 
see what it does" is going to produce anything useful.

 
-- Matt Mahoney, [EMAIL PROTECTED]




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Russell Wallace
On 11/4/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
I of course don't think that SHRDLU vs. AGISim is a fair comparison.

Agreed. SHRDLU didn't even try to solve the real problems - for the simple and sufficient reason that it was impossible to make a credible attempt at such on the hardware of the day. AGISim (if I understand it correctly) does. Oh, I'm sure the current implementation makes fatal compromises to fit on today's hardware - but the concept doesn't have an _inherent_ plateau the way SHRDLU did, so it leaves room for later upgrade. It's headed in the right compass direction.

And, deciding which AGI is smarter is not important either -- no more important than deciding whether Ben, Matt or Pei is smarter.  Who cares?

Agreed. In practice the market will decide: which system ends up doing useful things in the real world, and therefore getting used? Academic judgements of which is smarter are, well, academic.


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: RE: [agi] Natural versus formal AI interface languages

2006-11-07 Thread Ben Goertzel

Jef wrote:

As I see it, the present key challenge of artificial intelligence is to
develop a fast and frugal method of finding fast and frugal methods,


However, this in itself is not possible.  There can be a fast method
of finding fast and frugal methods, or a frugal method of finding fast
and frugal methods, but not a fast and frugal method of finding fast
and frugal methods ... not in general ...


in
other words to develop an efficient time-bound algorithm for recognizing
and compressing those regularities in "the world" faster than the
original blind methods of natural evolution.


This paragraph introduces the key restriction -- "the world", i.e. the
particular class of environments in which the AI is biased to operate.

It is possible to have a fast and frugal method of finding {fast and
frugal methods for operating in environments in class X} ...

[However, there can be no fast and frugal method for producing such a
method based solely on knowledge of the environment X ;-)  ]

One of my current sub-projects is trying to precisely formulate
conditions on the environment under which it is the case that
Novamente's particular combination of AI algorithms is "fast and
frugal at finding fast and frugal methods for solving
environment-relevant problems"    I believe I know how to do so,
but proving my intuitions rigorously will be a bunch of work which I
don't have time for at the moment ... but the task will go on my
(long) queue...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Eric Baum

Ben> Jef wrote:
>> As I see it, the present key challenge of artificial intelligence
>> is to develop a fast and frugal method of finding fast and frugal
>> methods,

Ben> However, this in itself is not possible.  There can be a fast
Ben> method of finding fast and frugal methods, or a frugal method of
Ben> finding fast and frugal methods, but not a fast and frugal method
Ben> of finding fast and frugal methods ... not in general ...

>> in other words to develop an efficient time-bound algorithm for
>> recognizing and compressing those regularities in "the world"
>> faster than the original blind methods of natural evolution.

Ben> This paragraph introduces the key restriction -- "the world",
Ben> i.e. the particular class of environments in which the AI is
Ben> biased to operate.

As I and Jef and you appear to agree, extant Intelligence works 
because it exploits structure *of our world*;
there is and can be (unless P=NP or some such radical and 
unlikely possibility) no such thing as a "General" Intelligence 
that works in all worlds.

Ben> It is possible to have a fast and frugal method of finding {fast
Ben> and frugal methods for operating in environments in class X} ...

Ben> [However, there can be no fast and frugal method for producing
Ben> such a method based solely on knowledge of the environment X ;-)
Ben> ]

I am unsure what you mean by this. Maybe what you are saying is that it's not
going to be possible by writing down a simple algorithm and running it
for a week on a PC. This I agree with.

The challenge is to find a methodology
for producing fast enough and frugal enough code, where that
methodology is practicable. For example, as a rough upper bound,
it would be practicable if it required 10,000 programmer-years and 
1,000,000 PC-years (i.e. a $3Bn budget).
(Why should producing a human-level AI be cheaper than decoding the
genome?) And of course, it has to scale, in the sense that you have to
be able to prove with < $10^7 (preferably < $10^6 ) that the
methodology works (as was the case more or less with the genome.)
This, it seems to me, requires a first project much more limited
than understanding most of English, yet of significant practical 
benefit. I'm wondering if someone has a good proposal.
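
As a rough sanity check on the $3Bn figure (the per-unit costs here are
illustrative assumptions, not part of the original estimate):

  10{,}000 \text{ programmer-years} \times \sim\$250\text{K/yr} \approx \$2.5\text{Bn},
  \qquad
  1{,}000{,}000 \text{ PC-years} \times \sim\$500\text{/yr} \approx \$0.5\text{Bn}

which together come to roughly $3Bn.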


Ben> One of my current sub-projects is trying to precisely formulate
Ben> conditions on the environment under which it is the case that
Ben> Novamente's particular combination of AI algorithms is "fast and
Ben> frugal at finding fast and frugal methods for solving
Ben> environment-relevant problems"   I believe I know how to do
Ben> so, but proving my intuitions rigorously will be a bunch of work
Ben> which I don't have time for at the moment ... but the task will
Ben> go on my (long) queue...

Ben> -- Ben

Ben> - This list is sponsored by AGIRI: http://www.agiri.org/email
Ben> To unsubscribe or change your options, please go to:
Ben> http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


RE: RE: [agi] Natural versus formal AI interface languages

2006-11-08 Thread Jef Allbright
Eric Baum wrote: 

> As I and Jef and you appear to agree, extant Intelligence 
> works because it exploits structure *of our world*; there is 
> and can be (unless P=NP or some such radical and unlikely 
> possibility) no such thing as as "General" Intelligence that 
> works in all worlds.

I'm going to risk being misunderstood again over a subtle point of
clarification:

I think we are in practical agreement on the point quoted above, but I
think that a more coherent view would avoid the binary distinction and
instead place general intelligence at the end of a scale where, with
diminishing exploitation of regularities in the environment,
computational requirements become increasingly intractable.

- Jef

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Ben Goertzel

> I don't know what you mean by incrementally updateable,
> but if you look up the literature on language learning, you will find
> that learning various sorts of relatively simple grammars from
> examples, or even if memory serves examples and queries, is NP-hard.
> Try looking for Dana Angluin's papers back in the 80's.

No, a thousand times no.  (Oh, why do we have to fight the same battles
over and over again?)

These proofs depend on assumptions about what "learning" is, and those
assumptions involve a type of learning that is stupider than stupid.


I don't think the proofs depend on any special assumptions about the
nature of learning.

Rather, the points to be noted are:

1) these are theorems about the learning of general grammars in a
certain class, as n (some measure of grammar size) goes to infinity

2) NP-hard is about worst-case time complexity of learning grammars in
that class, of size n

So the reason these results are not cognitively interesting is:

1) real language learning is about learning specific grammars of
finite size, not parametrized classes of grammars as n goes to
infinity

2) even if you want to talk about learning over parametrized classes,
real learning is about average-case rather than worst-case complexity,
anyway (where the average is over some appropriate probability
distribution)
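
To spell out the distinction in point 2 (these are the standard definitions,
nothing specific to the grammar-learning theorems): for inputs x of size n
with running time T(x),

  T_{worst}(n) = \max_{|x| = n} T(x), \qquad
  T_{avg}(n) = \sum_{|x| = n} p(x) \, T(x)

where p is the "appropriate probability distribution" over instances.  An
algorithm can be exponential in the worst case and still cheap on the
instances that p actually generates with any significant probability.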

-- Ben G



Any learning mechanism that had the ability to do modest analogy
building across domains, and which had the benefit of primitives
involving concepts like "on", "in", "through", "manipulate", "during",
"before" (etc etc) would probably be able to do the grammer learning,
and in any case, the proofs are completely incapable of representing the
capabilities of such learning mechanisms.

Such ideas have been (to coin a phrase) debunked every which way from
sunday. ;-)


Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-12 Thread Ben Goertzel

> I don't think the proofs depend on any special assumptions about the
> nature of learning.

I beg to differ.  IIRC the sense of "learning" they require is induction
over example sentences.  They exclude the use of real world knowledge,
in spite of the fact that such knowledge (or at least ) are posited to
play a significant role in the learning of grammar in humans.  As such,
these proofs say nothing whatsoever about the learning of NL grammars.

I agree they do have other limitations, of the sort you suggest below.


Ah, I see...  Yes, it is true that these theorems are about grammar
learning in isolation, not taking into account interactions btw
semantics, pragmatics and grammar, for example...

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Eric Baum

Sorry for my delay in responding... too busy to keep up with most
of this, just got some downtime and scanning various messages:

>> > I don't know what you mean by incrementally updateable, > but if
>> you look up the literature on language learning, you will find >
>> that learning various sorts of relatively simple grammars from >
>> examples, or even if memory serves examples and queries, is
>> NP-hard.  > Try looking for Dana Angluin's papers back in the 80's.
>> 
>> No, a thousand times no.  (Oh, why do we have to fight the same
>> battles over and over again?)
>> 
>> These proofs depend on assumptions about what "learning" is, and
>> those assumptions involve a type of learning that is stupider than
>> stupid.

Ben> I don't think the proofs depend on any special assumptions about
Ben> the nature of learning.

Ben> Rather, the points to be noted are:

Ben> 1) these are theorems about the learning of general grammars in a
Ben> certain class, as n (some measure of grammar size) goes to
Ben> infinity

Ben> 2) NP-hard is about worst-case time complexity of learning
Ben> grammars in that class, of size n

These comments are of course true of any NP-hardness result.
They are reasons why the NP-hardness result does not *prove* (even
if P!=NP) that the problem is insuperable.

However, the way to bet is generally that the problem is actually
hard. Ch. 11 of WIT? gives some arguments why.

If you don't believe that, you shouldn't rely on encryption.
Encryption has all the above weaknesses in spades, and what's more,
it's not even proved secure given P!=NP; that requires additional
assumptions.

Also, in addition to the hardness results, there has been considerable
effort by linguists to model natural grammars, which has failed,
thus also providing evidence that the problem is hard.


Ben> So the reason these results are not cognitively interesting is:

Ben> 1) real language learning is about learning specific grammars of
Ben> finite size, not parametrized classes of grammars as n goes to
Ben> infinity

Ben> 2) even if you want to talk about learning over parametrized
Ben> classes, real learning is about average-case rather than
Ben> worst-case complexity, anyway (where the average is over some
Ben> appropriate probability distribution)

Ben> -- Ben G


>> Any learning mechanism that had the ability to do modest analogy
>> building across domains, and which had the benefit of primitives
>> involving concepts like "on", "in", "through", "manipulate",
>> "during", "before" (etc etc) would probably be able to do the
>> grammar learning, and in any case, the proofs are completely
>> incapable of representing the capabilities of such learning
>> mechanisms.
>> 
>> Such ideas have been (to coin a phrase) debunked every which way
>> from sunday. ;-)
>> 
>> 
>> Richard Loosemore

Ben> - This list is sponsored by AGIRI: http://www.agiri.org/email
Ben> To unsubscribe or change your options, please go to:
Ben> http://v2.listbox.com/member/?list_id=303

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-16 Thread Eric Baum

>> > I don't think the proofs depend on any special assumptions about
>> the > nature of learning.
>> 
>> I beg to differ.  IIRC the sense of "learning" they require is
>> induction over example sentences.  They exclude the use of real
>> world knowledge, in spite of the fact that such knowledge (or at
>> least > knowledge>) are posited to play a significant role in the learning
>> of grammar in humans.  As such, these proofs say nothing whatsoever
>> about the learning of NL grammars.
>> 

I fully agree the proofs don't take into account such stuff.
And I believe such stuff is critical. Thus
I've never claimed language learning was proved hard; I've just
suggested evolution could have encrypted it.

The point I began with was, if there are lots of different locally
optimal codings for thought, it may be hard to figure out which one is 
programmed
into the mind, and thus language learning could be a hard additional
problem on top of producing an AGI. The AGI has to understand what the word
"foobar" means, and thus it has to have (or build) a code module meaning
"foobar" that it can invoke with this word. If it has a different set
of modules, it might be sunk in communication.

My sense about grammars for natural language is that there are lots
of different equally valid grammars that could be used to communicate.
For example, there are the grammars of English and of Swahili. One
isn't better than the other. And there is a wide variety of other
kinds of grammars that might be just as good, that aren't even used in
natural language, because evolution chose one convention at random.
Figuring out what that convention is, is hard, at least Linguists have
tried hard to do it and failed.
And this grammar stuff is pretty much on top of the meanings of 
the words. It serves to disambiguate, for example for error correction
in understanding. But you could communicate pretty well in pidgin, 
without it, so long as you understand the meanings of the words.

The grammar-learning results (as well as the experience of linguists,
who've tried very hard to build a model for natural grammar)
are, I think, indicative that this problem is hard, and it seems that
this problem is superimposed on top of the real-world knowledge aspect.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Ben Goertzel

Richard,

I know it's peripheral to your main argument, but in this example ...


Suppose that the computational effort that evolution needs to build
"different sized" language understanding mechanisms scales as:

2.5 * (N/7 + 1)^^6 planet-years

... where "different sized" is captured by the value N, which is the
number of conceptual primitives used in the language understanding
mechanism, and a "planet-year" is one planet worth of human DNA randomly
working on the problem for one year.  (I am plucking this out of the
air, of course, but that doesn't matter.)

Here are the resource requirements for this polynomial resource function:

N   R

1   2.23E+000
7   6.40E+001
10  2.05E+002
50  2.92E+005
100 1.28E+007
300 7.12E+009

(N = Number of conceptual primitives)
(R = resource requirement in planet-years)

I am assuming that the appropriate measure of size of problem is number
of conceptual primitives that are involved in the language understanding
mechanism (a measure picked at random, and as far as I can see, as
likely a measure as any, but if you think something else should be the
N, be my guest).

If there were 300 conceptual primitives in the human LUM, resource
requirement would be 7 billion planet-years.  That would be bad.

But if there are only 7 conceptual primitives, it would take 64 years.
Pathetically small and of no consequence.

The function is polynomial, so in a sense you could say this is an
NP-hard problem.


I don't think you're using the term "NP-hard" correctly.

http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP

"
The class P consists of all those decision problems that can be solved
on a deterministic sequential machine in an amount of time that is
polynomial in the size of the input; the class NP consists of all
those decision problems whose positive solutions can be **verified**
in polynomial time given the right information.
"

[This page also reviews, and agrees with, many of your complaints
regarding the intuitive interpretation of P as easy and NP as hard]

http://en.wikipedia.org/wiki/NP-hard

"
In computational complexity theory, NP-hard (Non-deterministic
Polynomial-time hard) refers to the class of decision problems H such
that for every decision problem L in NP there exists a polynomial-time
many-one reduction to H, written L ≤p H. If H itself is in NP, then H is
called NP-complete.
"

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: [agi] Natural versus formal AI interface languages

2006-11-24 Thread Ben Goertzel

It's just that "problem X is NP-hard" means roughly "Any problem Y in
NP is polynomial-time reducible to problem X", and your example did
not seem to exemplify this...

All your example seemed to exemplify was a problem that was solvable
in polynomial time (class P, not class NP-hard)

However, this is irrelevant to your main conceptual point, which as I
understood it was that theorems regarding the scaling behavior of the
worst-case complexity of problems as problem size n goes to infinity
are pragmatically irrelevant...

[I'm not sure I fully agree with your conceptual point, but that's
another issue.  I used to agree but when I encountered Immerman's
descriptive complexity theory, I started wavering.  Immerman showed
e.g. that

-- P, the class of problems solvable in polynomial time, corresponds
to languages recognizable by first-order logic plus a recursion
operator

-- NP, the class of problems whose solutions are checkable in
polynomial time, corresponds to languages recognized by existential
second order logic (second order logic with second-order existential
but not universal quantification)

This is interesting and suggests that these complexity classes could
possibly have some fundamental cognitive meaning, even though such a
meaning is not obvious from their standard definitions...]
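
For reference, the two correspondences are Fagin's theorem and the
Immerman-Vardi theorem, usually stated as

  NP = \exists SO, \qquad  P = FO(LFP) \text{ on ordered finite structures}

where \exists SO is existential second-order logic and FO(LFP) is
first-order logic with a least-fixed-point (recursion) operator.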

-- Ben



On 11/24/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:
> Richard,
>
> I know it's peripheral to your main argument, but in this example ...
>
>> Suppose that the computational effort that evolution needs to build
>> "different sized" language understanding mechanisms scales as:
>>
>> 2.5 * (N/7 + 1)^^6 planet-years
>>
>> ... where "different sized" is captured by the value N, which is the
>> number of conceptual primitives used in the language understanding
>> mechanism, and a "planet-year" is one planet worth of human DNA randomly
>> working on the problem for one year.  (I am plucking this out of the
>> air, of course, but that doesn't matter.)
>>
>> Here are the resource requirements for this polynomial resource function:
>>
>> N   R
>>
>> 1   2.23E+000
>> 7   6.40E+001
>> 10  2.05E+002
>> 50  2.92E+005
>> 100 1.28E+007
>> 300 7.12E+009
>>
>> (N = Number of conceptual primitives)
>> (R = resource requirement in planet-years)
>>
>> I am assuming that the appropriate measure of size of problem is number
>> of conceptual primitives that are involved in the language understanding
>> mechanism (a measure picked at random, and as far as I can see, as
>> likely a measure as any, but if you think something else should be the
>> N, be my guest).
>>
>> If there were 300 conceptual primitives in the human LUM, resource
>> requirement would be 7 billion planet-years.  That would be bad.
>>
>> But if there are only 7 conceptual primitives, it would take 64 years.
>> Pathetically small and of no consequence.
>>
>> The function is polynomial, so in a sense you could say this is an
>> NP-hard problem.
>
> I don't think you're using the term "NP-hard" correctly.
>
> http://en.wikipedia.org/wiki/Complexity_classes_P_and_NP
>
> "
> The class P consists of all those decision problems that can be solved
> on a deterministic sequential machine in an amount of time that is
> polynomial in the size of the input; the class NP consists of all
> those decision problems whose positive solutions can be **verified**
> in polynomial time given the right information.
> "
>
> [This page also reviews, and agrees with, many of your complaints
> regarding the intuitive interpretation of P as easy and NP as hard]
>
> http://en.wikipedia.org/wiki/NP-hard
>
> "
> In computational complexity theory, NP-hard (Non-deterministic
> Polynomial-time hard) refers to the class of decision problems H such
> that for every decision problem L in NP there exists a polynomial-time
> many-one reduction to H, written L ≤p H. If H itself is in NP, then H is
> called NP-complete.
> "

I'd certainly welcome clarification, and I may have gotten this wrong...
but I'm not quite sure where you are directing my attention here.

Are you targeting the fact that NP-Hard is defined with respect to
decision problems, or to the reduction aspect?

My understanding of NP-hard is that it does strictly only apply to
decision problems ... but what I was doing was trying to interpret the
loose sense in which Eric himself was using NP-Hard, so if I have
stretched the definition a little, I would claim I was inheriting
something that was already stretched.

But maybe that was not what you meant.  I stand ready to be corrected,
if it turns out I have goofed.



Richard Loosemore.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

Re: Re: [agi] Natural versus formal AI interface languages

2006-11-25 Thread Ben Goertzel

Hi Richard,


I don't really want to get too sidetracked, but even if Immerman's
analysis were correct, would this make a difference to the way that Eric
was using NP-Hard, though?


No, Immerman's perspective on complexity classes doesn't really affect
your objections...

Firstly, the descriptive logic depiction of complexity classes is
**still** about what happens as n gets large.

So it doesn't affect one of the key objections that both you and Pei
have to using concepts from computational complexity theory to analyze
AGI: which is that AGI systems don't have to deal with general classes
of problems of problem size tending to infinity, they have to deal
with **particular** problems of bounded size.  For instance, if an AGI
is good at learning **human** language, it may not matter how its
language learning capability scales when dealing with languages
falling into the same grammar category as human language whose
grammars have sizes tending to infinity.  If an AGI is good at solving
path-finding problems in real life, it may not matter how its
worst-case path-finding capability scales when dealing with paths
between n cities as n tends to infinity  Etc.  In fact there are
decent qualitative arguments that most of the algorithms used by human
cognition (insofar as it makes sense to say that human cognition uses
"algorithms", which is another issue, as Pei has noted) are
**exponential time** in terms of their scaling as problem size
approaches infinity ... but the point is that they are tuned to give
tractable performance for the problem-instances that humans generally
encounter in real life...

Secondly, Immerman's analysis doesn't affect the fact that the
formalization of "language learning" referred to by Eric Baum is only
tenuously related to the actual cognitive phenomenon of human language
learning.

On the other hand, Immerman's analysis does **suggest** (not
demonstrate) that there could be some cognitive meaningfulness to the
classes P and NP.

For instance, if someone were to show that the learning of languages
in the same general category as human natural languages ("natural-like
languages")...

-- can be naturally represented using existential second-order logic

but

-- cannot be naturally represented using first-order logic with recursion

this would be interesting, and would match up naturally with the
observation that "natural-like language" learning is NP but not P.

On the other hand, this kind of analysis would only be really
cognitively meaningful in the context of an explanation of how this
formalization of language learning is related to actual cognitive
language learning...  I happen to think that such an explanation
**could** be formulated; but no one has really done so, so far.  That
is, no one has given a formalization encompassing the embodied, social
semantics and pragmatics of language learning (as discussed e.g. in
Tomasello's excellent recent book "Constructing a Language"); and in
the absence of such a formalization, formal discussions of "grammar
learning" are not convincingly connected to real cognitive language
learning.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303


Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-02 Thread Ben Goertzel

Luke wrote:

It seems to be like this: when you start programming, even though the
syntax is still natural, the language gets really awkward and does not
resemble the way you would express the same thing naturally. For me it
just shows that the real problem is somewhere deeper, in the semantic
representation that is underlying it all. Simply put, first-order logic and the
usual programming styles are different from everyday communication.
Switching to Lojban might remove the remaining syntax errors, but
I don't see how it can help with this bigger problem. Ben, do you think
using Lojban can really substantially help, or are you counting on the AGISim
world and the Novamente architecture in general, and want to use Lojban
just to simplify language analysis?


Above all I am counting on the Novamente architecture in general.

However, I do think the Lojban language, properly extended, has a lot of power.

Following up on the excellent point you made: I do think that a mode
of communication combining aspects of programming with aspects of
commonsense natural language communication can be achieved -- and that
this will be a fascinating thing.

However, I think this can be achieved only AFTER one has a reasonably
intelligent proto-AGI system that can take semantically
slightly-imprecise statements and automatically map them into fully
formalized programming-type statements.

Lojban has no syntactic ambiguity but it does allow semantic ambiguity
as well as extreme semantic precision.

Using Lojban for programming would involve using its capability for
extreme semantic precision; using it for commonsense communication
involves using its capability for judiciously controlled semantic
ambiguity.  Using both these capabilities together in a creative way
will be easier with a more powerful AI back end...

E.g., you'd like to be able to outline the obvious parts of your code
in a somewhat ambiguous way (but still, using Lojban, much less
ambiguously than would be the case in English), and have the AI figure
out the details.  But then, the tricky parts of the code would be
spelled out in detail using full programming-language-like precision.

Of course, it may be that once the AGI is smart enough to be used in
this way, it's only a short time after that until the AGI writes all
its own code and we become obsolete as coders anyway ;-)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: Re: [agi] Natural versus formal AI interface languages

2006-11-03 Thread Ben Goertzel

Another reason for measurements is that it makes your goals concrete.  How do you define "general 
intelligence"?  Turing gave us a well defined goal, but there are some shortcomings.  The Turing test is 
subjective, time consuming, isn't appropriate for robotics, and really isn't a good goal if it means 
deliberately degrading performance in order to appear human.  So I am looking for "better" tests.  
I don't believe the approach of "let's just build it and see what it does" is going to produce 
anything useful.



I am happy enough with the long-term goal of independent scientific
and mathematical discovery...

And, in the short term, I am happy enough with the goals of carrying
out (AGISim versions of) the standard tasks used by developmental
psychologists to study children's cognitive behavior...

I don't see a real value to precisely quantifying these goals, though...

Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303

