Re: [agi] Books

2007-06-15 Thread YKY (Yan King Yin)

On 6/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

I'll try to answer this and Mike Tintner's question at the same time. The
typical GOFAI engine over the past decades has had a layer structure
something like this:

Problem-specific assertions
Inference engine/database
Lisp

on top of the machine and OS. Now it turns out that this is plenty to build a
system that can configure VAX computers or do NLP at the level of "Why did
you put the red block on the blue one?" or "What is the capital of the
largest country in North America?"

The problem is that this leaves your symbols as atomic tokens in a
logic-like environment, whose meaning is determined entirely from above, i.e.
solely by virtue of their placement in expressions (or equivalently, links to
other symbols in a semantic network).

These formulations of a top layer were largely built on introspection, as was
logic (and the Turing machine!). So chances are that a reasonable top layer
could be built like that -- but the underpinnings are something a lot more
capable than token-expression pattern matching. There's a big gap between the
top layer(s) as found in AI programs and the bottom layers as found in
existing programming systems. This is what I call Formalist Float in the
book.

It's not that any existing level is wrong, but there aren't enough of them, so
that the higher ones aren't being built on the right primitives in current
systems. Word-level concepts in the mind are much more elastic and plastic
than logic tokens.

You can build a factory where everything is made top-down, constructed with
full attention to all its details. But if you try to build a farm that way,
you'll do a huge amount of work and not get much -- your crops and livestock
have to grow for themselves (and it's still a huge amount of work!).

I think that the intermediate levels in the brain are built of robotic body
controllers, mechanisms with a flavor much like cybernetics, simply because
that's what evolution had to work with. That's my working assumption in my
experiments, anyway.

Hi Josh,

You haven't explained how your layered approach works, but I think you
correctly exposed the problem of representation with logic-tokens.  My
solution to this is not exactly layers; I see it as a difference
between organic and inorganic knowledge bases.  In Cyc, for example, the
facts you enter into the KB remain exactly as you entered them, with the
logic-tokens you chose to enter them with.  This is what I call inorganic.  In an
organic KB the facts are *assimilated* into the KB via truth
maintenance (an old-fashioned term), belief revision, cognitive dissonance
resolution, etc.  I think that mechanism is at the core of AGI.
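To make the "organic" idea concrete, here is a minimal, purely illustrative sketch -- not Cyc's mechanism and only a toy stand-in for real truth maintenance or belief revision -- in which asserting a new fact retracts whatever it directly contradicts:

```python
# Toy "organic" KB: asserting a new fact triggers a crude form of belief
# revision by retracting any stored fact that directly contradicts it.
# Facts are (subject, relation, value) triples; the conflict test is
# deliberately naive.  This is an illustration only, not a real TMS.

class OrganicKB:
    def __init__(self):
        self.facts = set()

    def _contradicts(self, f1, f2):
        # Same subject and relation but a different value.
        return f1[:2] == f2[:2] and f1[2] != f2[2]

    def assimilate(self, fact):
        retracted = {f for f in self.facts if self._contradicts(f, fact)}
        self.facts -= retracted        # revise away the losers...
        self.facts.add(fact)           # ...then accept the newcomer
        return retracted

kb = OrganicKB()
kb.assimilate(("kermit", "color", "blue"))
print(kb.assimilate(("kermit", "color", "green")))
# -> {('kermit', 'color', 'blue')}  (the old belief is revised away)
```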

YKY


Re: [agi] Books

2007-06-14 Thread J Storrs Hall, PhD
On Thursday 14 June 2007 02:12:29 am Joshua Fox wrote:
 I don't want to join any herd -- perhaps I just want to figure out why there
 is no AGI herd yet; as much a sociological question as a scientific one.

It's probably worth pointing out to this group that for the first 25 years of 
its history, mainstream AI *was* an AGI herd. The universal assumption 
behind research was that they were going to build a full-fledged, 
language-speaking, cognitively complete, intelligent mind.

Through the 60s AI got princely funding from ARPA (this in an era when 
computers cost, as a rule of thumb, a dollar per byte of memory). The 
funding, and AI with it, took a left turn during the 70s, towards 
applications, so that by the 80s the field was mostly expert systems and 
later neural nets. But the Golden Age had culminated in systems like Shrdlu 
and AM/Eurisko. If they had kept going on the original track, doing basic, 
general-intelligence-oriented research, they would have AGI now, I think.

As far as I know, nobody even tried to put Shrdlu and Eurisko together to see 
what they would get until Novamente, which, I'm just guessing, is essentially 
that with a dollop of Copycat thrown in. (Ben?)

Josh



Re: [agi] Books

2007-06-11 Thread Joshua Fox

Josh,

Your point about layering makes perfect sense.

I just ordered your book, but, impatient as I am, could I ask a question
about this, though I've asked a similar question before: Why have the elite
of intelligent and open-minded leading AI researchers not attempted a
multi-layered approach?

Joshua


2007/6/10, J Storrs Hall, PhD [EMAIL PROTECTED]:


Here's a big one: Levels of abstraction.
I assume many of you are using a GUI mail client to read this. You're
interacting with it in terms of windows, panels, boxes, buttons, menus,
dragging and dropping.
The GUI was written in terms of a toolkit that implements those concepts on
top of an ontology involving events, queues, processes, locks, mutexes, and
so forth. The program using the toolkit uses other libraries that are about
rfc822-format messages, mime extensions, POP mailboxes, and the like.
Typically, the programs and many of the libraries are written in programming
languages which offer a model providing concepts like objects, methods, and
functions. These in turn are based on lower-level languages where records,
pointers, and memory allocation are the order of the day. In order to write
code in any of this you have to understand, at least implicitly, the syntax
of the language and use the translator that reads your code and compiles it
into some internal form, using (most likely) an automatically generated
shift-reduce parser. At some stage further down, the result will be assembly
language for the machine you're running on, and then binary machine language.
(And note that I somehow managed to leave out the entire level of the OS and
hardware drivers and interrupt-level programming.)
There's just as big a stack of abstractions standing between the machine
language and the transistors, in the machine architecture.

Most AI (including a lot of what gets talked about here) is the equivalent of
trying to implement the mail-reader directly in machine code (or transistors,
for connectionists). Why people can't get the notion that the brain is going
to be at least as ontologically deep as a desktop GUI is beyond me, but it's
pretty much universal.

Josh





Re: [agi] Books

2007-06-11 Thread J Storrs Hall, PhD
I'll try to answer this and Mike Tintner's question at the same time. The 
typical GOFAI engine over the past decades has had a layer structure 
something like this:

Problem-specific assertions
Inference engine/database
Lisp

on top of the machine and OS. Now it turns out that this is plenty to build a 
system that can configure VAX computers or do NLP at the level of "Why did 
you put the red block on the blue one?" or "What is the capital of the 
largest country in North America?"

The problem is that this leaves your symbols as atomic tokens in a 
logic-like environment, whose meaning is determined entirely from above, i.e. 
solely by virtue of their placement in expressions (or equivalently, links to 
other symbols in a semantic network).
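
A side note to make "meaning determined entirely from above" concrete: in the toy sketch below (an illustration, not any particular system), a symbol is just an opaque string, and everything the program "knows" about it is the set of expressions it appears in.

```python
# Symbols as atomic tokens: "frog" has no internal structure; everything the
# system "knows" about it is the set of expressions (links) it appears in.
# Purely illustrative.

semantic_net = {
    ("isa", "kermit", "frog"),
    ("color", "frog", "green"),
    ("isa", "frog", "amphibian"),
}

def meaning(symbol):
    """All the system can say about a symbol: the expressions it sits in."""
    return {expr for expr in semantic_net if symbol in expr}

print(meaning("frog"))
# Rename every "frog" to "g0037" throughout and the system behaves
# identically -- the token itself carries no meaning from below.
```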

These formulations of a top layer were largely built on introspection, as was 
logic (and the Turing machine!). So chances are that a reasonable top layer 
could be built like that -- but the underpinnings are something a lot more 
capable than token-expression pattern matching. There's a big gap between the 
top layer(s) as found in AI programs and the bottom layers as found in 
existing programming systems. This is what I call Formalist Float in the 
book. 

It's not that any existing level is wrong, but there aren't enough of them, so 
that the higher ones aren't being built on the right primitives in current 
systems. Word-level concepts in the mind are much more elastic and plastic 
than logic tokens.

You can build a factory where everything is made top-down, constructed with 
full attention to all its details. But if you try to build a farm that way, 
you'll do a huge amount of work and not get much -- your crops and livestock 
have to grow for themselves (and it's still a huge amount of work!).

I think that the intermediate levels in the brain are built of robotic body 
controllers, mechanisms with a flavor much like cybernetics, simply because 
that's what evolution had to work with. That's my working assumption in my 
experiments, anyway.

Josh


On Monday 11 June 2007 04:41:13 am Joshua Fox wrote:
 Josh,
 
 Your point about layering makes perfect sense.
 
 I just ordered your book, but, impatient as I am, could I ask a question
 about this, though I've asked a similar question before: Why have the elite
 of intelligent and open-minded leading AI researchers not attempted a
 multi-layered approach?
 
 Joshua



Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread James Ratcliff
Interesting points, but I believe you can get around a lot of the problems with 
two additional factors: 
a. using large quantities of quality text (i.e., novels, newspapers, or similar), and 
b. using an interactive built-in 'checker' system -- assisted learning where the 
AI could consult with humans in a simple way.

Using something like this, you could check 
"The moon is a dog" and see that it has a really low probability, and if 
something else was possibly untrue, it could ask a few humans and poll for the 
answer: "Is the moon a dog?"

This should allow a large amount of basic information to be gathered quickly, 
and at a fairly high quality.
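
A rough sketch of what such a checker might look like (purely illustrative -- the corpus, the support measure, and the cutoff are all made up for the example): score a claim by how often its content words co-occur in quality text, and turn poorly supported claims into questions for humans.

```python
# Illustrative checker: estimate support for a claim from a small corpus of
# quality text, and turn poorly supported claims into questions for humans.
# The corpus, support measure, and cutoff are all invented for the example.

corpus = [
    "the moon orbits the earth",
    "the moon is a rocky satellite",
    "the full moon rose over the hill",
    "a dog is a loyal animal",
]

def support(subject, predicate):
    """Fraction of sentences mentioning the subject that also mention the predicate."""
    with_subject = [s for s in corpus if subject in s.split()]
    if not with_subject:
        return 0.0
    also_pred = [s for s in with_subject if predicate in s.split()]
    return len(also_pred) / len(with_subject)

ask_humans = []
for subject, predicate in [("moon", "dog"), ("moon", "satellite")]:
    if support(subject, predicate) < 0.1:      # arbitrary cutoff
        ask_humans.append(f"Is the {subject} a {predicate}?")

print(ask_humans)
# -> ['Is the moon a dog?']  (low support, so poll a few people)
```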

James

Matt Mahoney [EMAIL PROTECTED] wrote: 
--- Charles D Hixson  wrote:

 Mark Waser wrote:
   The problem of logical reasoning in natural language is a pattern 
  recognition
   problem (like natural language recognition in general).  For example:
 
   - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
   - Cities have tall buildings.  New York is a city.  Therefore New 
  York has
   tall buildings.
   - Summers are hot.  July is in the summer.  Therefore July is hot.
 
   After many examples, you learn the pattern and you can solve novel 
  logic
   problems of the same form.  Repeat for many different patterns.
   
  Your built-in assumptions make you think that.  There are NO readily 
  obvious patterns in the examples you gave, except one obvious example of 
  standard logical inference.  Note:
 
  * In the first clause, the only repeating words are green and
Kermit.  Maybe I'd let you argue the plural of frog.
  * In the second clause, the only repeating words are tall
buildings and New York.  I'm not inclined to give you the plural
of city.  There is also the minor confusion that tall buildings
and New York are multiple words.
  * In the third clause, the only repeating words are hot and July. 
Okay, you can argue summers.
  * Across sentences, I see a regularity between the first and the
third of As are B.  C is A.  Therefore, C is B.
 
  Looks far more to me like you picked out one particular example of 
  logical inference and called it pattern matching. 
   
  I don't believe that your theory works for more than a few very small, 
  toy examples.  Further, even if it did work, there are so many 
  patterns that approaching it this way would be computationally 
  intractable without a lot of other smarts.
   
  
 It's worse than that.  Frogs are green. is a generically true 
 statement, that isn't true in most particular cases.  E.g., some frogs 
 are yellow, red, and black without any trace of green on them that I've 
 noticed.  Most frogs may be predominately green (e.g., leopard frogs are 
 basically green, but with black spots.
 
 Worse, although Kermit is identified as a frog, Kermit is actually a 
 cartoon character.  As such, Kermit can be run over by a tank without 
 being permanently damaged.  This is not true of actual frogs.
 
 OTOH, there *IS* a pattern matching going on.  It's just not evident at 
 the level of structure (or rather only partially evident).
 
 Were I to rephrase the sentences more exactly they would go something 
 like this:
 Kermit is a representation of a frog.
 Frogs are typically thought of as being green.
 Therefore, Kermit will be displayed as largely greenish in overall hue, 
 to enhance the representation.
 
 Note that one *could* use similar logic to deduce that Miss Piggy is 
 more than 10 times as tall as Kermit.  This would be incorrect.   Thus, 
 what is being discussed here is not mandatory characteristics, but 
 representational features selected to harmonize an image with both it's 
 setting and internal symbolisms.  As such, only artistically selected 
 features are chosen to highlight, and other features are either 
 suppressed, or overridden by other artistic choices.  What is being 
 created is a dreamscape rather than a realistic image.
 
 On to the second example.  Here again one is building a dreamscape, 
 selecting harmonious imagery.  Note that it's quite possible to build a 
 dreamscape city where there are not tall buildings...or only one.  
 (Think of the Emerald City of Oz.  Or for that matter of the Sunset 
 District of San Francisco.  Facing in many directions you can't see a 
 single building more than two stories tall.)  But it's also quite 
 realistic to imagine tall buildings.  By specifying tall buildings, one 
 filters out a different set of harmonious city images.
 
 What these patterns do is enable one to filter out harmonious images, 
 etc. from the databank of past experiences.

These are all valid criticisms.  They explain why logical reasoning in natural
language is an unsolved problem.  Obviously simple string matching won't work.
 The system must also recognize sentence structure, word associations,

Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Mike Dougherty

On 6/11/07, James Ratcliff [EMAIL PROTECTED] wrote:

Interesting points, but I believe you can get around a lot of the problems
with two additional factors:
a. using large quantities of quality text (i.e., novels, newspapers, or similar), and
b. using an interactive built-in 'checker' system -- assisted learning where
the AI could consult with humans in a simple way.


I would hope that a candidate AGI would have the capability of
emailing anyone who has ever talked with it.  ex: After a few
minutes' chat, the AI asks the human for their email in case it
has any follow-up questions - the same way any human interviewer
might.  If 10 humans are asked the same question, the statistically
oddball response can probably be ignored (or reduced in weight) to
clarify the answer.
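
One illustrative way to fold the polled answers back in (a sketch assuming the replies are short free-text strings; nothing here is from an existing system): keep the majority response and drop or down-weight the statistically oddball ones.

```python
# Sketch of aggregating polled human answers: keep the majority response and
# drop replies that are statistical oddballs.  Purely illustrative.

from collections import Counter

def aggregate(answers, min_share=0.2):
    """Return (consensus, kept counts); drop replies rarer than min_share."""
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    kept = {ans: n for ans, n in counts.items() if n / total >= min_share}
    consensus = max(kept, key=kept.get) if kept else None
    return consensus, kept

replies = ["no", "no", "No", "no", "yes", "no", "no", "no", "no", "obviously not"]
print(aggregate(replies))
# -> ('no', {'no': 8})  -- the lone "yes" and the free-text reply are dropped
```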



Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread James Ratcliff
Correct, but I don't believe that systems (like Cyc) are doing this type of 
active learning now, and it would help to gather quality information and 
fact-check it.

Cyc does have some interesting projects where it takes a proposed statement 
and, when an engineer is working with it, will go out and do a text-match 
search in Google to check the validity of the statement -- so it would do 
something like a Google search for "the moon is a dog", returning 1/4 bill, so 
very unlikely.

This goes one step towards my thoughts, but of course the Internet as a whole 
is not a trusted source of quality information, so one would need to use a more 
refined base.
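
A sketch of that hit-count sanity check (web_hits below is a hypothetical stub standing in for whatever search API is available, and the counts are invented for the example):

```python
# Hit-count sanity check.  web_hits() is a hypothetical stub standing in for
# whatever search API is available; the counts below are invented.

def web_hits(query):
    fake_counts = {
        '"the moon"': 250_000_000,
        '"the moon is a dog"': 3,
        '"the moon is a satellite"': 120_000,
    }
    return fake_counts.get(query, 0)

def looks_plausible(statement, subject, min_ratio=1e-6):
    # A statement is suspicious if it is vanishingly rare relative to
    # how often its subject is mentioned at all.
    ratio = web_hits(f'"{statement}"') / max(web_hits(f'"{subject}"'), 1)
    return ratio >= min_ratio

print(looks_plausible("the moon is a dog", "the moon"))        # -> False
print(looks_plausible("the moon is a satellite", "the moon"))  # -> True
```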

Also, OpenMind Common Sense (site down) is a very interesting project which does 
some information gathering using humans who log into the system and check and 
input information.  It produced some interesting results, though on a limited 
basis.


James


Mike Dougherty [EMAIL PROTECTED] wrote: On 6/11/07, James Ratcliff  wrote:
 Interesting points, but I believe you can get around a lot of the problems
 with two additional factors,
 a. using either large quantities of quality text, (ie novels, newspapers) or
 similar texts like newspapers.
 b. using an interactive built-in 'checker' system, assisted learning where
 the AI could consult with humans in a simple way.

I would hope that a candidate AGI would have the capability of
emailing anyone who has ever talked with it.  ex:  After a few
minutes' chat, the AI asks the human for their email in case it
has any follow up questions - the same way any human interviewer
might.  If 10 humans are asked the same question, the statistically
oddball response can probably be ignored (or reduced in weight) to
clarify the answer.




___
James Ratcliff - http://falazar.com
Looking for something...
 


Re: [agi] Books

2007-06-11 Thread Joshua Fox

Josh,

Thanks for that answer on the layering of mind.



It's not that any existing level is wrong, but there aren't enough of them, so
that the higher ones aren't being built on the right primitives in current
systems. Word-level concepts in the mind are much more elastic and plastic
than logic tokens.


Could I ask also that you take a stab at a psychological/sociological
question: Why have the leading minds of AI (considering for this
purpose only the true creative thinkers with status in the community,
however small a fraction that may be) not taken a sufficiently multi-layered,
grounded approach up to now? Isn't the need for grounding and deep-layering
obvious to the most open-minded and intelligent of researchers?

Joshua


Re: [agi] Books

2007-06-11 Thread J Storrs Hall, PhD
On Monday 11 June 2007 02:06:35 pm Joshua Fox wrote:
...
 Could I ask also that you take a stab at a psychological/sociological
 question: Why have the leading minds of AI (considering for this
 purpose only the true creative thinkers with status in the community,
 however small a fraction that may be) not taken a sufficiently multi-layered,
 grounded approach up to now? Isn't the need for grounding and deep-layering
 obvious to the most open-minded and intelligent of researchers?

Well, for one thing, the depth of the problem wasn't understood, and to a 
large extent one of the major contributions of the 50-year history of AI is 
to plumb it and give us a perspective. Today AI, along with cogsci and 
neuroscience, has given us a much better handle, I would venture to claim, on 
the scope of the problem.

Second, it's not clear that the Newell & Simon types won't ultimately be 
right. Our bodies are built around a flexible backbone, though no sane 
engineer would design an upright biped that way. We're built that way because 
we evolved from fish. There are probably plenty of backbones in the mind, 
which we can find more efficient replacements for once we've figured out how 
the whole business really works. But there will have to be a decade or two of 
experience with *working* AGI before it gets optimized to that point.

Third, it's not clear that the top minds don't understand the problem 
perfectly well, but they have to work with what they've got, to advance the 
field to the point where they have something better. One could very 
reasonably characterize the last couple of decades' disregard of the goal of 
integrated AI, and the thrust of AI into COLT and modal logic and so forth, 
as an attempt to build up the infrastructure -- and in fact it has been a 
very productive period, as far as the kinds of algorithms that are now 
available to build an AGI on are concerned.

1960s -- feet of clay
1970s -- legs of iron
1980s -- loins of bronze
1990s -- breast of brass
2000s -- head of silver

Are we ready for the crown of gold yet? I like to think we're getting 
close :-)

Josh



Re: Reasoning in natural language (was Re: [agi] Books)

2007-06-11 Thread Matt Mahoney

--- James Ratcliff [EMAIL PROTECTED] wrote:

 Interesting points, but I believe you can get around a lot of the problems
 with two additional factors, 
 a. using either large quantities of quality text, (ie novels, newspapers) or
 similar texts like newspapers.
 b. using an interactive built-in 'checker' system, assisted learning where
 the AI could consult with humans in a simple way.

But that is not the problem I am trying to get around.  A system that learns
to solve logical word problems should be trainable on text like:

- A greeb is a floogle.  All floogles are blorg.  Therefore...

simply because it is something the human brain can do.


 
 Using something like this, you could check 
 "The moon is a dog" and see that it has a really low probability, and if
 something else was possibly untrue, it could ask a few humans, and poll for
 the answer
 Is the moon a dog?
 
 This should allow for a large amount of basic information to be quickly
 gathered, and of a fairly high quality.
 
 James
 
 Matt Mahoney [EMAIL PROTECTED] wrote: 
 --- Charles D Hixson  wrote:
 
  Mark Waser wrote:
The problem of logical reasoning in natural language is a pattern 
   recognition
problem (like natural language recognition in general).  For example:
  
- Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
- Cities have tall buildings.  New York is a city.  Therefore New 
   York has
tall buildings.
- Summers are hot.  July is in the summer.  Therefore July is hot.
  
After many examples, you learn the pattern and you can solve novel 
   logic
problems of the same form.  Repeat for many different patterns.

   Your built-in assumptions make you think that.  There are NO readily 
   obvious patterns in the examples you gave, except one obvious example of 
   standard logical inference.  Note:
  
   * In the first clause, the only repeating words are green and
 Kermit.  Maybe I'd let you argue the plural of frog.
   * In the second clause, the only repeating words are tall
 buildings and New York.  I'm not inclined to give you the plural
 of city.  There is also the minor confusion that tall buildings
 and New York are multiple words.
   * In the third clause, the only repeating words are hot and July. 
 Okay, you can argue summers.
   * Across sentences, I see a regularity between the first and the
 third of As are B.  C is A.  Therefore, C is B.
  
   Looks far more to me like you picked out one particular example of 
   logical inference and called it pattern matching. 

   I don't believe that your theory works for more than a few very small, 
   toy examples.  Further, even if it did work, there are so many 
   patterns that approaching it this way would be computationally 
   intractable without a lot of other smarts.

   
  It's worse than that.  Frogs are green. is a generically true 
  statement, that isn't true in most particular cases.  E.g., some frogs 
  are yellow, red, and black without any trace of green on them that I've 
  noticed.  Most frogs may be predominately green (e.g., leopard frogs are 
  basically green, but with black spots.
  
  Worse, although Kermit is identified as a frog, Kermit is actually a 
  cartoon character.  As such, Kermit can be run over by a tank without 
  being permanently damaged.  This is not true of actual frogs.
  
  OTOH, there *IS* a pattern matching going on.  It's just not evident at 
  the level of structure (or rather only partially evident).
  
  Were I to rephrase the sentences more exactly they would go something 
  like this:
  Kermit is a representation of a frog.
  Frogs are typically thought of as being green.
  Therefore, Kermit will be displayed as largely greenish in overall hue, 
  to enhance the representation.
  
  Note that one *could* use similar logic to deduce that Miss Piggy is 
  more than 10 times as tall as Kermit.  This would be incorrect.   Thus, 
  what is being discussed here is not mandatory characteristics, but 
  representational features selected to harmonize an image with both it's 
  setting and internal symbolisms.  As such, only artistically selected 
  features are chosen to highlight, and other features are either 
  suppressed, or overridden by other artistic choices.  What is being 
  created is a dreamscape rather than a realistic image.
  
  On to the second example.  Here again one is building a dreamscape, 
  selecting harmonious imagery.  Note that it's quite possible to build a 
  dreamscape city where there are not tall buildings...or only one.  
  (Think of the Emerald City of Oz.  Or for that matter of the Sunset 
  District of San Francisco.  Facing in many directions you can't see a 
  single building more than two stories tall.)  But it's also quite 
  realistic to imagine tall buildings.  By specifying tall buildings, one 
  filters 

Re: [agi] Books

2007-06-10 Thread Mike Tintner
Josh: If you want to understand why existing approaches to AI haven't 
worked, try

Beyond AI by yours truly

Any major point or points worth raising here? 





Re: [agi] Books

2007-06-10 Thread Mark Waser
Josh: If you want to understand why existing approaches to AI haven't 
worked, try Beyond AI by yours truly

Any major point or points worth raising here?


Yo, troll,

   If you're really interested, then go get the book and stop wasting 
bandwidth.


   If you had any clue about AGI, you'd realize that any decent explanation 
is going to *have* to require a decent amount of text (since AI researchers 
haven't been totally clueless and running down alleys that could be 
disproved with a mere hundred words) and that the best (and most efficient) 
way to transfer that explanation is for you to just go get the book.


   (And Amazon e-mailed me yesterday that they had just/finally shipped my 
copy -- so it *is* available now) 





Re: [agi] Books

2007-06-10 Thread J Storrs Hall, PhD
Here's a big one: Levels of abstraction.
I assume many of you are using a GUI mail client to read this. You're 
interacting with it in terms of windows, panels, boxes, buttons, menus, 
dragging and dropping.
The GUI was written in terms of a toolkit that implements those concepts on 
top of an ontology involving events, queues, processes, locks, mutexes, and 
so forth. The program using the toolkit uses other libraries that are about 
rfc822-format messages, mime extensions, POP mailboxes, and the like.
Typically, the programs and many of the libraries are written in programming 
languages which offer a model providing concepts like objects, methods, and 
functions. These in turn are based on lower-level languages where records, 
pointers, and memory allocation are the order of the day. In order to write 
code in any of this you have to understand, at least implicitly, the syntax 
of the language and use the translator that reads your code and compiles it 
into some internal form, using (most likely) an automatically generated 
shift-reduce parser. At some stage further down, the result will be assembly 
language for the machine you're running on, and then binary machine language.
(And note that I somehow managed to leave out the entire level of the OS and 
hardware drivers and interrupt-level programming).
There's just as big a stack of abstractions standing between the machine 
language and the transistors, in the machine architecture. 

Most AI (including a lot of what gets talked about here) is the equivalent of 
trying to implement the mail-reader directly in machine code (or transistors, 
for connectionists). Why people can't get the notion that the brain is going 
to be at least as ontologically deep as a desktop GUI is beyond me, but it's 
pretty much universal.

Josh

On Sunday 10 June 2007 05:49:36 am Mike Tintner wrote:
 Josh: If you want to understand why existing approaches to AI haven't 
 worked, try
 Beyond AI by yours truly
 
 Any major point or points worth raising here? 



Re: [agi] Books

2007-06-10 Thread Mike Tintner

Josh:
Most AI (including a lot of what gets talked about here) is the equivalent of
trying to implement the mail-reader directly in machine code (or transistors,
for connectionists). Why people can't get the notion that the brain is going
to be at least as ontologically deep as a desktop GUI is beyond me, but it's
pretty much universal.



Josh,

Are you talking about levels of instruction (about how to handle the data) 
or levels of representation - that are ignored by AI? (As you may remember, 
I'm interested in the latter, and believe the brain processes info 
simultaneously on at least 3 levels of abstractness/concreteness.)


And what for you is the worst example of AI ignoring these levels of 
abstraction? 





Re: [agi] Books

2007-06-09 Thread Mark Waser
 The problem of logical reasoning in natural language is a pattern recognition
 problem (like natural language recognition in general).  For example:

 - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
 - Cities have tall buildings.  New York is a city.  Therefore New York has
 tall buildings.
 - Summers are hot.  July is in the summer.  Therefore July is hot.

 After many examples, you learn the pattern and you can solve novel logic
 problems of the same form.  Repeat for many different patterns.

Your built-in assumptions make you think that.  There are NO readily obvious 
patterns in the examples you gave, except one obvious example of standard logical 
inference.  Note:
  a. In the first clause, the only repeating words are "green" and "Kermit".  
Maybe I'd let you argue the plural of "frog".
  b. In the second clause, the only repeating words are "tall buildings" and "New 
York".  I'm not inclined to give you the plural of "city".  There is also the 
minor confusion that "tall buildings" and "New York" are multiple words.
  c. In the third clause, the only repeating words are "hot" and "July".  Okay, 
you can argue "summers".
  d. Across sentences, I see a regularity between the first and the third of 
"As are B.  C is A.  Therefore, C is B."
Looks far more to me like you picked out one particular example of logical 
inference and called it pattern matching.  

I don't believe that your theory works for more than a few very small, toy 
examples.  Further, even if it did work, there are so many patterns that 
approaching it this way would be computationally intractable without a lot of 
other smarts.


Re: [agi] Books

2007-06-09 Thread Charles D Hixson

Mark Waser wrote:
 The problem of logical reasoning in natural language is a pattern 
recognition

 problem (like natural language recognition in general).  For example:

 - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
 - Cities have tall buildings.  New York is a city.  Therefore New 
York has

 tall buildings.
 - Summers are hot.  July is in the summer.  Therefore July is hot.

 After many examples, you learn the pattern and you can solve novel 
logic

 problems of the same form.  Repeat for many different patterns.
 
Your built-in assumptions make you think that.  There are NO readily 
obvious patterns in the examples you gave, except one obvious example of 
standard logical inference.  Note:


* In the first clause, the only repeating words are green and
  Kermit.  Maybe I'd let you argue the plural of frog.
* In the second clause, the only repeating words are tall
  buildings and New York.  I'm not inclined to give you the plural
  of city.  There is also the minor confusion that tall buildings
  and New York are multiple words.
* In the third clause, the only repeating words are hot and July. 
  Okay, you can argue summers.

* Across sentences, I see a regularity between the first and the
  third of As are B.  C is A.  Therefore, C is B.

Looks far more to me like you picked out one particular example of 
logical inference and called it pattern matching. 
 
I don't believe that your theory works for more than a few very small, 
toy examples.  Further, even if it did work, there are so many 
patterns that approaching it this way would be computationally 
intractable without a lot of other smarts.
 

It's worse than that.  "Frogs are green" is a generically true 
statement that isn't true in most particular cases.  E.g., some frogs 
are yellow, red, and black without any trace of green on them that I've 
noticed.  Most frogs may be predominantly green (e.g., leopard frogs are 
basically green, but with black spots).


Worse, although Kermit is identified as a frog, Kermit is actually a 
cartoon character.  As such, Kermit can be run over by a tank without 
being permanently damaged.  This is not true of actual frogs.


OTOH, there *IS* a pattern matching going on.  It's just not evident at 
the level of structure (or rather only partially evident).


Were I to rephrase the sentences more exactly they would go something 
like this:

Kermit is a representation of a frog.
Frogs are typically thought of as being green.
Therefore, Kermit will be displayed as largely greenish in overall hue, 
to enhance the representation.


Note that one *could* use similar logic to deduce that Miss Piggy is 
more than 10 times as tall as Kermit.  This would be incorrect.   Thus, 
what is being discussed here is not mandatory characteristics, but 
representational features selected to harmonize an image with both its 
setting and internal symbolisms.  As such, only artistically selected 
features are chosen to highlight, and other features are either 
suppressed, or overridden by other artistic choices.  What is being 
created is a dreamscape rather than a realistic image.


On to the second example.  Here again one is building a dreamscape, 
selecting harmonious imagery.  Note that it's quite possible to build a 
dreamscape city where there are no tall buildings...or only one.  
(Think of the Emerald City of Oz.  Or for that matter of the Sunset 
District of San Francisco.  Facing in many directions you can't see a 
single building more than two stories tall.)  But it's also quite 
realistic to imagine tall buildings.  By specifying tall buildings, one 
filters out a different set of harmonious city images.


What these patterns do is enable one to filter out harmonious images, 
etc. from the databank of past experiences.




Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

I've ended up with the following list. What do you think?

*  Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity
and Its Applications, Springer Verlag 1997
   * Marcus Hutter, Universal Artificial Intelligence: Sequential
Decisions Based On Algorithmic Probability, Springer Verlag 2004
   * Vladimir Vapnik, Statistical Learning Theory, Wiley-Interscience 1998
   * Pedro Larrañaga, José A. Lozano (Editors), Estimation of
Distribution Algorithms: A New Tool for Evolutionary Computation,
Springer 2001
   * Ben Goertzel, Cassio Pennachin (Editors), Artificial General
Intelligence (Cognitive Technologies), Springer 2007
   * Pei Wang, Rigid Flexibility: The Logic of Intelligence, Springer 2006
   * Ben Goertzel, Matt Ikle', Izabela Goertzel, Ari Heljakka
Probabilistic Logic Networks, in preparation
   * Juyang Weng et al., SAIL and Dav Developmental Robot Projects:
the Developmental Approach to Machine Intelligence, publication list
   * Ralf Herbrich, Learning Kernel Classifiers: Theory and
Algorithms, MIT Press 2001
   * Eric Baum, What is Thought?, MIT Press 2004
   * Marvin Minsky, The Emotion Machine: Commonsense Thinking,
Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster 2006
   * Ben Goertzel, The Hidden Pattern: A Patternist Philosophy of
Mind, Brown Walker Press 2006
   * Ronald Brachman, Hector Levesque, Knowledge Representation and
Reasoning, Morgan Kaufmann 2004
   * Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought,
MIT Press 2004
   * Wayne D. Gray (Editor), Integrated Models of Cognitive
Systems, Oxford University Press 2007
   * Logica Universalis, Birkhäuser Basel, January 2007


Reasoning in natural language (was Re: [agi] Books)

2007-06-09 Thread Matt Mahoney

--- Charles D Hixson [EMAIL PROTECTED] wrote:

 Mark Waser wrote:
   The problem of logical reasoning in natural language is a pattern 
  recognition
   problem (like natural language recognition in general).  For example:
 
   - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
   - Cities have tall buildings.  New York is a city.  Therefore New 
  York has
   tall buildings.
   - Summers are hot.  July is in the summer.  Therefore July is hot.
 
   After many examples, you learn the pattern and you can solve novel 
  logic
   problems of the same form.  Repeat for many different patterns.
   
  Your built-in assumptions make you think that.  There are NO readily 
  obvious patterns in the examples you gave, except one obvious example of 
  standard logical inference.  Note:
 
  * In the first clause, the only repeating words are green and
Kermit.  Maybe I'd let you argue the plural of frog.
  * In the second clause, the only repeating words are tall
buildings and New York.  I'm not inclined to give you the plural
of city.  There is also the minor confusion that tall buildings
and New York are multiple words.
  * In the third clause, the only repeating words are hot and July. 
Okay, you can argue summers.
  * Across sentences, I see a regularity between the first and the
third of As are B.  C is A.  Therefore, C is B.
 
  Looks far more to me like you picked out one particular example of 
  logical inference and called it pattern matching. 
   
  I don't believe that your theory works for more than a few very small, 
  toy examples.  Further, even if it did work, there are so many 
  patterns that approaching it this way would be computationally 
  intractable without a lot of other smarts.
   
  
 It's worse than that.  Frogs are green. is a generically true 
 statement, that isn't true in most particular cases.  E.g., some frogs 
 are yellow, red, and black without any trace of green on them that I've 
 noticed.  Most frogs may be predominately green (e.g., leopard frogs are 
 basically green, but with black spots.
 
 Worse, although Kermit is identified as a frog, Kermit is actually a 
 cartoon character.  As such, Kermit can be run over by a tank without 
 being permanently damaged.  This is not true of actual frogs.
 
 OTOH, there *IS* a pattern matching going on.  It's just not evident at 
 the level of structure (or rather only partially evident).
 
 Were I to rephrase the sentences more exactly they would go something 
 like this:
 Kermit is a representation of a frog.
 Frogs are typically thought of as being green.
 Therefore, Kermit will be displayed as largely greenish in overall hue, 
 to enhance the representation.
 
 Note that one *could* use similar logic to deduce that Miss Piggy is 
 more than 10 times as tall as Kermit.  This would be incorrect.   Thus, 
 what is being discussed here is not mandatory characteristics, but 
 representational features selected to harmonize an image with both it's 
 setting and internal symbolisms.  As such, only artistically selected 
 features are chosen to highlight, and other features are either 
 suppressed, or overridden by other artistic choices.  What is being 
 created is a dreamscape rather than a realistic image.
 
 On to the second example.  Here again one is building a dreamscape, 
 selecting harmonious imagery.  Note that it's quite possible to build a 
 dreamscape city where there are not tall buildings...or only one.  
 (Think of the Emerald City of Oz.  Or for that matter of the Sunset 
 District of San Francisco.  Facing in many directions you can't see a 
 single building more than two stories tall.)  But it's also quite 
 realistic to imagine tall buildings.  By specifying tall buildings, one 
 filters out a different set of harmonious city images.
 
 What these patterns do is enable one to filter out harmonious images, 
 etc. from the databank of past experiences.

These are all valid criticisms.  They explain why logical reasoning in natural
language is an unsolved problem.  Obviously simple string matching won't work.
 The system must also recognize sentence structure, word associations,
different word forms, etc.  Doing this requires a lot of knowledge about
language and about the world.  After those patterns are learned (and there are
hundreds of thousands of them), then it will be possible to learn the more
complex patterns associated with reasoning.

The other criticism is that the statements are not precisely true.  (July is
cold in Australia).  But the logic is still valid.  It should be possible to
train a purely logical system on examples using obviously false statements,
like:

- The moon is a dog.  All dogs are made of green cheese.  Therefore the moon
is made of green cheese.

The reasoning is correct, but confusing to many people.  This fact argues (to
me anyway) that logical 

Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/9/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

I've ended up with the following list. What do you think?



I would like to add Locus Solum by Girard to this list, and then it
seems to collapse into a black hole... Don't care?


*  Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity
and Its Applications, Springer Verlag 1997
* Marcus Hutter, Universal Artificial Intelligence: Sequential
Decisions Based On Algorithmic Probability, Springer Verlag 2004
* Vladimir Vapnik, Statistical Learning Theory, Wiley-Interscience 1998
* Pedro Larrañaga, José A. Lozano (Editors), Estimation of
Distribution Algorithms: A New Tool for Evolutionary Computation,
Springer 2001
* Ben Goertzel, Cassio Pennachin (Editors), Artificial General
Intelligence (Cognitive Technologies), Springer 2007
* Pei Wang, Rigid Flexibility: The Logic of Intelligence, Springer 2006
* Ben Goertzel, Matt Ikle', Izabela Goertzel, Ari Heljakka
Probabilistic Logic Networks, in preparation
* Juyang Weng et al., SAIL and Dav Developmental Robot Projects:
the Developmental Approach to Machine Intelligence, publication list
* Ralf Herbrich, Learning Kernel Classifiers: Theory and
Algorithms, MIT Press 2001
* Eric Baum, What is Thought?, MIT Press 2004
* Marvin Minsky, The Emotion Machine: Commonsense Thinking,
Artificial Intelligence, and the Future of the Human Mind, Simon & Schuster 2006
* Ben Goertzel, The Hidden Pattern: A Patternist Philosophy of
Mind, Brown Walker Press 2006
* Ronald Brachman, Hector Levesque, Knowledge Representation and
Reasoning, Morgan Kaufmann 2004
* Peter Gärdenfors, Conceptual Spaces: The Geometry of Thought,
MIT Press 2004
* Wayne D. Gray (Editor), Integrated Models of Cognitive
Systems, Oxford University Press 2007
* Logica Universalis, Birkhäuser Basel, January 2007




Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/9/07, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:


I'm not aware of any book on pattern recognition with a view on AGI, except
The Pattern Recognition Basis of Artificial Intelligence by Don Tveter
(1998):
http://www.dontveter.com/basisofai/basisofai.html

You may look at The Cambridge Handbook of Thinking and Reasoning first,
especially the chapters on similarity and analogy.


Thanks, it's interesting.



Re: [agi] Books

2007-06-08 Thread YKY (Yan King Yin)

On 6/7/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:

Reasoning about Uncertainty (Paperback)
by Joseph Y. Halpern

BTW, the .chm version of this book can be easily obtained on the net, as can
many others you listed...
I also recommend J. Pearl's two books (Probabilistic Reasoning and Causality).


Pattern Recognition, Third Edition (Hardcover)
by Sergios Theodoridis (Author), Konstantinos Koutroumbas (Author)

I have this one too, but the question is, how to apply pattern recognition
in a logic-based setting?


Knowledge Representation and Reasoning (The Morgan Kaufmann Series in
Artificial Intelligence)
by Ronald Brachman (Author), Hector Levesque (Author)

A very good intro for anyone interested in logic-based AI.  Two of the main
points are:  don't reinvent the wheel of KR;   the tradeoff between KR
expressiveness and efficiency of inference.


Learning Kernel Classifiers: Theory and Algorithms (Adaptive
Computation and Machine Learning) (Hardcover)
by Ralf Herbrich (Author)

I don't know how kernel methods can be applied in a logic-based setting.
The math level of this one is also quite beyond me.


Conceptual Spaces: The Geometry of Thought (Bradford Books) (Paperback)
by Peter Gärdenfors (Author)


I forgot what this book was about; I will check it out again.  Did you know
that Gärdenfors is very influential in logic-based belief revision
theory -- the AGM postulates (the G is him)?

I'm not aware of any book on pattern recognition with a view on AGI, except
*The Pattern Recognition Basis of Artificial Intelligence* by Don Tveter
(1998):
http://www.dontveter.com/basisofai/basisofai.html

You may look at *The Cambridge Handbook of Thinking and Reasoning* first,
especially the chapters on similarity and analogy.

YKY


Re: [agi] Books

2007-06-08 Thread Matt Mahoney

--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:
 On 6/7/07, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
  Pattern Recognition, Third Edition (Hardcover)
  by Sergios Theodoridis (Author), Konstantinos Koutroumbas (Author)

 I have this one too, but the question is, how to apply pattern recognition
 in a logic-based setting?

The problem of logical reasoning in natural language is a pattern recognition
problem (like natural language recognition in general).  For example:

- Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
- Cities have tall buildings.  New York is a city.  Therefore New York has
tall buildings.
- Summers are hot.  July is in the summer.  Therefore July is hot.

After many examples, you learn the pattern and you can solve novel logic
problems of the same form.  Repeat for many different patterns.
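
For what it's worth, the bare-bones, string-level version of "learn the pattern, then apply it" can be sketched in a few lines. This is only a caricature -- the template below is hand-coded rather than learned, and it covers just the first of the three example forms -- but it shows the kind of pattern intended:

```python
# Caricature of pattern-based "reasoning": the template "Xs are Y. Z is a X."
# licenses the conclusion "Z is Y".  The template is hand-coded here; the
# claim above is that such templates could be learned from many examples.

import re

PATTERN = re.compile(r"^(?P<x>\w+)s are (?P<y>\w+)\. (?P<z>\w+) is a (?P<x2>\w+)\.$")

def conclude(premises):
    m = PATTERN.match(premises)
    if m and m.group("x").lower() == m.group("x2").lower():
        return f"Therefore {m.group('z')} is {m.group('y')}."
    return None

print(conclude("Frogs are green. Kermit is a frog."))
# -> Therefore Kermit is green.
print(conclude("Greebs are blorg. Fred is a greeb."))
# -> Therefore Fred is blorg.  (nonsense words work just as well, which is the point)
```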



-- Matt Mahoney, [EMAIL PROTECTED]
