Re: [agi] AGI Consortium

2007-06-09 Thread Mark Waser
>> Think:  if you have contributed something, it'd be in your best interest to 
>> give accurate estimates rather than exaggerate or depreciate them

Why wouldn't it be to my advantage to exaggerate my contributions?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a7e

Re: [agi] Books

2007-06-09 Thread Mark Waser
>> The problem of logical reasoning in natural language is a pattern recognition
>> problem (like natural language recognition in general).  For example:

>> - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
>> - Cities have tall buildings.  New York is a city.  Therefore New York has
>> tall buildings.
>> - Summers are hot.  July is in the summer.  Therefore July is hot.

>> After many examples, you learn the pattern and you can solve novel logic
>> problems of the same form.  Repeat for many different patterns.

Your built-in assumptions make you think that.  There are NO readily obvious 
patterns in the examples you gave except one obvious example of standard logical 
inference.  Note:
  a.. In the first clause, the only repeating words are green and Kermit.  
Maybe I'd let you argue the plural of frog.
  b.. In the second clause, the only repeating words are tall buildings and New 
York.  I'm not inclined to give you the plural of city.  There is also the 
minor confusion that tall buildings and New York are multiple words.
  c.. In the third clause, the only repeating words are hot and July.  Okay, 
you can argue summers.
  d.. Across sentences, I see a regularity between the first and the third of 
"As are B.  C is A.  Therefore, C is B."
It looks far more to me like you picked out one particular example of logical 
inference and called it pattern matching.  

I don't believe that your theory works for more than a few very small, toy 
examples.  Further, even if it did work, there are so many patterns that 
approaching it this way would be computationally intractable without a lot of 
other smarts.
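
For concreteness, the single template conceded above ("As are B.  C is A.  Therefore, C is B.") can be written as a toy string matcher. This is a hypothetical sketch, not anyone's actual system; note that it already fails on the "tall buildings" example, for exactly the multi-word reasons listed above.

```python
import re

# Toy matcher for the one conceded template:
# "As are B.  C is a/in the A.  Therefore C is B."
TEMPLATE = re.compile(
    r"(?P<a>\w+)s are (?P<b>\w+)\. "
    r"(?P<c>\w+) is (?:a |in the )?(?P<a2>\w+)\. "
    r"Therefore,? (?P<c2>\w+) is (?P<b2>\w+)\."
)

def matches_template(text):
    m = TEMPLATE.match(text)
    if not m:
        return False
    # The inference only goes through if the repeated slots line up.
    return (m.group("a").lower() == m.group("a2").lower()
            and m.group("b") == m.group("b2")
            and m.group("c") == m.group("c2"))

print(matches_template("Frogs are green. Kermit is a frog. Therefore Kermit is green."))   # True
print(matches_template("Summers are hot. July is in the summer. Therefore July is hot."))  # True
# Multi-word terms and "have"/"has" break the template, as noted above:
print(matches_template("Cities have tall buildings. New York is a city. "
                       "Therefore New York has tall buildings."))                          # False
```

As the critique says, each new surface form would need its own hand-crafted template, which is where the intractability worry comes from.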


Re: [agi] Books

2007-06-09 Thread Charles D Hixson

Mark Waser wrote:
>> The problem of logical reasoning in natural language is a pattern 
recognition

>> problem (like natural language recognition in general).  For example:

>> - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
>> - Cities have tall buildings.  New York is a city.  Therefore New 
York has

>> tall buildings.
>> - Summers are hot.  July is in the summer.  Therefore July is hot.

>> After many examples, you learn the pattern and you can solve novel 
logic

>> problems of the same form.  Repeat for many different patterns.
 
Your built-in assumptions make you think that.  There are NO readily 
obvious patterns in the examples you gave except one obvious example of 
standard logical inference.  Note:


* In the first clause, the only repeating words are green and
  Kermit.  Maybe I'd let you argue the plural of frog.
* In the second clause, the only repeating words are tall
  buildings and New York.  I'm not inclined to give you the plural
  of city.  There is also the minor confusion that tall buildings
  and New York are multiple words.
* In the third clause, the only repeating words are hot and July. 
  Okay, you can argue summers.

* Across sentences, I see a regularity between the first and the
  third of "As are B.  C is A.  Therefore, C is B."

Looks far more to me like you picked out one particular example of 
logical inference and called it pattern matching. 
 
I don't believe that your theory works for more than a few very small, 
toy examples.  Further, even if it did work, there are so many 
patterns that approaching it this way would be computationally 
intractable without a lot of other smarts.
 

It's worse than that.  "Frogs are green." is a generically true 
statement that isn't true in most particular cases.  E.g., some frogs 
are yellow, red, and black without any trace of green on them that I've 
noticed.  Most frogs may be predominantly green (e.g., leopard frogs are 
basically green, but with black spots).


Worse, although Kermit is identified as a frog, Kermit is actually a 
cartoon character.  As such, Kermit can be run over by a tank without 
being permanently damaged.  This is not true of actual frogs.


OTOH, there *IS* a pattern matching going on.  It's just not evident at 
the level of structure (or rather only partially evident).


Were I to rephrase the sentences more exactly they would go something 
like this:

Kermit is a representation of a frog.
Frogs are typically thought of as being green.
Therefore, Kermit will be displayed as largely greenish in overall hue, 
to enhance the representation.


Note that one *could* use similar "logic" to deduce that Miss Piggy is 
more than 10 times as tall as Kermit.  This would be incorrect.  Thus, 
what is being discussed here is not mandatory characteristics, but 
representational features selected to harmonize an image with both its 
setting and internal symbolisms.  As such, only artistically selected 
features are highlighted, and other features are either suppressed or 
overridden by other artistic choices.  What is being created is a 
"dreamscape" rather than a realistic image.


On to the second example.  Here again one is building a dreamscape, 
selecting harmonious imagery.  Note that it's quite possible to build a 
dreamscape city where there are not tall buildings...or only one.  
(Think of the Emerald City of Oz.  Or for that matter of the Sunset 
District of San Francisco.  Facing in many directions you can't see a 
single building more than two stories tall.)  But it's also quite 
realistic to imagine tall buildings.  By specifying tall buildings, one 
filters out a different set of harmonious city images.


What these patterns do is enable one to filter out harmonious images, 
etc. from the databank of past experiences.




Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

I've ended up with the following list. What do you think?

   * Ming Li and Paul Vitanyi, "An Introduction to Kolmogorov Complexity
and Its Applications", Springer Verlag 1997
   * Marcus Hutter, "Universal Artificial Intelligence: Sequential
Decisions Based On Algorithmic Probability", Springer Verlag 2004
   * Vladimir Vapnik, "Statistical Learning Theory", Wiley-Interscience 1998
   * Pedro Larrañaga, José A. Lozano (Editors), "Estimation of
Distribution Algorithms: A New Tool for Evolutionary Computation",
Springer 2001
   * Ben Goertzel, Cassio Pennachin (Editors), "Artificial General
Intelligence (Cognitive Technologies)", Springer 2007
   * Pei Wang, "Rigid Flexibility: The Logic of Intelligence", Springer 2006
   * Ben Goertzel, Matt Ikle', Izabela Goertzel, Ari Heljakka
"Probabilistic Logic Networks", in preparation
   * Juyang Weng et al., "SAIL and Dav Developmental Robot Projects:
the Developmental Approach to Machine Intelligence", publication list
   * Ralf Herbrich, "Learning Kernel Classifiers: Theory and
Algorithms", MIT Press 2001
   * Eric Baum, "What is Thought?", MIT Press 2004
   * Marvin Minsky, "The Emotion Machine: Commonsense Thinking,
Artificial Intelligence, and the Future of the Human Mind", Simon &
Schuster 2006
   * Ben Goertzel, "The Hidden Pattern: A Patternist Philosophy of
Mind", Brown Walker Press 2006
   * Ronald Brachman, Hector Levesque, "Knowledge Representation and
Reasoning", Morgan Kaufmann 2004
   * Peter Gärdenfors, "Conceptual Spaces: The Geometry of Thought",
MIT Press 2004
   * Wayne D. Gray (Editor), "Integrated Models of Cognitive
Systems", Oxford University Press 2007
   * "Logica Universalis", Birkhäuser Basel, January 2007


Reasoning in natural language (was Re: [agi] Books)

2007-06-09 Thread Matt Mahoney

--- Charles D Hixson <[EMAIL PROTECTED]> wrote:

> Mark Waser wrote:
> > >> The problem of logical reasoning in natural language is a pattern 
> > recognition
> > >> problem (like natural language recognition in general).  For example:
> >
> > >> - Frogs are green.  Kermit is a frog.  Therefore Kermit is green.
> > >> - Cities have tall buildings.  New York is a city.  Therefore New 
> > York has
> > >> tall buildings.
> > >> - Summers are hot.  July is in the summer.  Therefore July is hot.
> >
> > >> After many examples, you learn the pattern and you can solve novel 
> > logic
> > >> problems of the same form.  Repeat for many different patterns.
> >  
> > Your built-in assumptions make you think that.  There are NO readily 
> > obvious patterns in the examples you gave except one obvious example of 
> > standard logical inference.  Note:
> >
> > * In the first clause, the only repeating words are green and
> >   Kermit.  Maybe I'd let you argue the plural of frog.
> > * In the second clause, the only repeating words are tall
> >   buildings and New York.  I'm not inclined to give you the plural
> >   of city.  There is also the minor confusion that tall buildings
> >   and New York are multiple words.
> > * In the third clause, the only repeating words are hot and July. 
> >   Okay, you can argue summers.
> > * Across sentences, I see a regularity between the first and the
> >   third of "As are B.  C is A.  Therefore, C is B."
> >
> > Looks far more to me like you picked out one particular example of 
> > logical inference and called it pattern matching. 
> >  
> > I don't believe that your theory works for more than a few very small, 
> > toy examples.  Further, even if it did work, there are so many 
> > patterns that approaching it this way would be computationally 
> > intractable without a lot of other smarts.
> >  
> > 
> It's worse than that.  "Frogs are green." is a generically true 
> statement that isn't true in most particular cases.  E.g., some frogs 
> are yellow, red, and black without any trace of green on them that I've 
> noticed.  Most frogs may be predominantly green (e.g., leopard frogs are 
> basically green, but with black spots).
> 
> Worse, although Kermit is identified as a frog, Kermit is actually a 
> cartoon character.  As such, Kermit can be run over by a tank without 
> being permanently damaged.  This is not true of actual frogs.
> 
> OTOH, there *IS* a pattern matching going on.  It's just not evident at 
> the level of structure (or rather only partially evident).
> 
> Were I to rephrase the sentences more exactly they would go something 
> like this:
> Kermit is a representation of a frog.
> Frogs are typically thought of as being green.
> Therefore, Kermit will be displayed as largely greenish in overall hue, 
> to enhance the representation.
> 
> Note that one *could* use similar "logic" to deduce that Miss Piggy is 
> more than 10 times as tall as Kermit.  This would be incorrect.  Thus, 
> what is being discussed here is not mandatory characteristics, but 
> representational features selected to harmonize an image with both its 
> setting and internal symbolisms.  As such, only artistically selected 
> features are highlighted, and other features are either suppressed or 
> overridden by other artistic choices.  What is being created is a 
> "dreamscape" rather than a realistic image.
> 
> On to the second example.  Here again one is building a dreamscape, 
> selecting harmonious imagery.  Note that it's quite possible to build a 
> dreamscape city where there are not tall buildings...or only one.  
> (Think of the Emerald City of Oz.  Or for that matter of the Sunset 
> District of San Francisco.  Facing in many directions you can't see a 
> single building more than two stories tall.)  But it's also quite 
> realistic to imagine tall buildings.  By specifying tall buildings, one 
> filters out a different set of harmonious city images.
> 
> What these patterns do is enable one to filter out harmonious images, 
> etc. from the databank of past experiences.

These are all valid criticisms.  They explain why logical reasoning in natural
language is an unsolved problem.  Obviously simple string matching won't work.
 The system must also recognize sentence structure, word associations,
different word forms, etc.  Doing this requires a lot of knowledge about
language and about the world.  After those patterns are learned (and there are
hundreds of thousands of them), then it will be possible to learn the more
complex patterns associated with reasoning.
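
One of the "other smarts" implied here, matching across different word forms, can be hinted at with a crude normalizer. This is a toy sketch of my own, not a proposal from the thread; a real system would need actual morphology, parsing, and word-association knowledge.

```python
def normalize(word):
    """Crudely map surface forms to a shared token, e.g. 'Frogs' -> 'frog'."""
    w = word.lower().strip(".,")
    # Naive plural stripping; real morphology is far messier than this.
    if w.endswith("s") and len(w) > 3:
        w = w[:-1]
    return w

print(normalize("Frogs"))    # frog
print(normalize("Summers"))  # summer
print(normalize("hot"))      # hot
```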

The other criticism is that the statements are not precisely true.  (July is
cold in Australia).  But the logic is still valid.  It should be possible to
train a purely logical system on examples using obviously false statements,
like:

- The moon is a dog.  All dogs are made of green cheese.  There
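
The proposal above (the message is truncated in the archive) is to train on obviously false premises so that a learner must attend to logical form rather than truth. That separation can be sketched abstractly; this is a hypothetical illustration, not an implementation from the thread.

```python
# Represent each statement as filled slots and check only the form
# "All A are B. C is A. Therefore C is B." Validity is then independent
# of whether any premise is actually true.
def valid_form(premise1, premise2, conclusion):
    a1, b1 = premise1    # All A are B
    c2, a2 = premise2    # C is A
    c3, b3 = conclusion  # C is B
    return a1 == a2 and c2 == c3 and b1 == b3

# Obviously false premises, but the inference form is valid:
print(valid_form(("dog", "green cheese"), ("moon", "dog"),
                 ("moon", "green cheese")))  # True
# Plausible premises, but the conclusion is about the wrong subject:
print(valid_form(("frog", "green"), ("Kermit", "frog"),
                 ("July", "green")))         # False
```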

Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/9/07, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:

I've ended up with the following list. What do you think?



I would like to add "Locus Solum" by Girard to this list, and then it
seems to collapse into a black hole... Don't care?


*  Ming Li and Paul Vitanyi, "An Introduction to Kolmogorov Complexity
and Its Applications", Springer Verlag 1997
* Marcus Hutter, "Universal Artificial Intelligence: Sequential
Decisions Based On Algorithmic Probability", Springer Verlag 2004
* Vladimir Vapnik, "Statistical Learning Theory", Wiley-Interscience 1998
* Pedro Larrañaga, José A. Lozano (Editors), "Estimation of
Distribution Algorithms: A New Tool for Evolutionary Computation",
Springer 2001
* Ben Goertzel, Cassio Pennachin (Editors), "Artificial General
Intelligence (Cognitive Technologies)", Springer 2007
* Pei Wang, "Rigid Flexibility: The Logic of Intelligence", Springer 2006
* Ben Goertzel, Matt Ikle', Izabela Goertzel, Ari Heljakka
"Probabilistic Logic Networks", in preparation
* Juyang Weng et al., "SAIL and Dav Developmental Robot Projects:
the Developmental Approach to Machine Intelligence", publication list
* Ralf Herbrich, "Learning Kernel Classifiers: Theory and
Algorithms", MIT Press 2001
* Eric Baum, "What is Thought?", MIT Press 2004
* Marvin Minsky, "The Emotion Machine: Commonsense Thinking,
Artificial Intelligence, and the Future of the Human Mind", Simon &
Schuster 2006
* Ben Goertzel, "The Hidden Pattern: A Patternist Philosophy of
Mind", Brown Walker Press 2006
* Ronald Brachman, Hector Levesque, "Knowledge Representation and
Reasoning", Morgan Kaufmann 2004
* Peter Gärdenfors, "Conceptual Spaces: The Geometry of Thought",
MIT Press 2004
* Wayne D. Gray (Editor), "Integrated Models of Cognitive
Systems", Oxford University Press 2007
* "Logica Universalis", Birkhäuser Basel, January 2007




Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/9/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote:


I'm not aware of any book on pattern recognition with a view on AGI, except
The Pattern Recognition Basis of Artificial Intelligence by Don Tveter
(1998):
http://www.dontveter.com/basisofai/basisofai.html

You may look at The Cambridge Handbook of Thinking and Reasoning first,
especially the chapters on "similarity" and "analogy".


Thanks, it's interesting.



Re: [agi] Books

2007-06-09 Thread J Storrs Hall, PhD
For what purpose? 

If you're looking for a doorstop, Wolfram's A New Kind of Science would be 
superior to any one on your list. If you want something to read to your 
grandchildren, Winnie the Pooh by A A Milne is infinitely better. If you work 
with real matter in a lab, the CRC Handbook is indispensable. For just plain 
fun, The Code of the Woosters by P. G. Wodehouse. 

If you want to understand why existing approaches to AI haven't worked, try 
Beyond AI by yours truly. 

Josh

On Saturday 09 June 2007 05:03:16 pm Lukasz Stafiniak wrote:
> I've ended up with the following list. What do you think?
> 
> *  Ming Li and Paul Vitanyi, "An Introduction to Kolmogorov Complexity
> and Its Applications", Springer Verlag 1997
> * Marcus Hutter, "Universal Artificial Intelligence: Sequential
> Decisions Based On Algorithmic Probability", Springer Verlag 2004
> * Vladimir Vapnik, "Statistical Learning Theory", Wiley-Interscience 
1998
> * Pedro Larrañaga, José A. Lozano (Editors), "Estimation of
> Distribution Algorithms: A New Tool for Evolutionary Computation",
> Springer 2001
> * Ben Goertzel, Cassio Pennachin (Editors), "Artificial General
> Intelligence (Cognitive Technologies)", Springer 2007
> * Pei Wang, "Rigid Flexibility: The Logic of Intelligence", Springer 
2006
> * Ben Goertzel, Matt Ikle', Izabela Goertzel, Ari Heljakka
> "Probabilistic Logic Networks", in preparation
> * Juyang Weng et al., "SAIL and Dav Developmental Robot Projects:
> the Developmental Approach to Machine Intelligence", publication list
> * Ralf Herbrich, "Learning Kernel Classifiers: Theory and
> Algorithms", MIT Press 2001
> * Eric Baum, "What is Thought?", MIT Press 2004
> * Marvin Minsky, "The Emotion Machine: Commonsense Thinking,
> Artificial Intelligence, and the Future of the Human Mind", Simon &
> Schuster 2006
> * Ben Goertzel, "The Hidden Pattern: A Patternist Philosophy of
> Mind", Brown Walker Press 2006
> * Ronald Brachman, Hector Levesque, "Knowledge Representation and
> Reasoning", Morgan Kaufmann 2004
> * Peter Gärdenfors, "Conceptual Spaces: The Geometry of Thought",
> MIT Press 2004
> * Wayne D. Gray (Editor), "Integrated Models of Cognitive
> Systems", Oxford University Press 2007
> * "Logica Universalis", Birkhäuser Basel, January 2007
> 



Re: [agi] Books

2007-06-09 Thread Lukasz Stafiniak

On 6/10/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:


If you want to understand why existing approaches to AI haven't worked, try
Beyond AI by yours truly.


OK.



Re: [agi] AGI Consortium

2007-06-09 Thread Joel Pitt

On 6/9/07, Mark Waser <[EMAIL PROTECTED]> wrote:

...Same goes for most software developed by this method–almost
all the great open source apps are me-too knockoffs of innovative
proprietary programs, and those that are original were almost always created
under the watchful eye of a passionate, insightful overseer or organization.


Obviously the author hasn't bothered looking at many open source projects.

There are swaths of innovative usable open source projects. The thing
is, they are often not noticed, because innovative does not
necessarily mean useful to everyone who owns a computer.

I'm also more convinced that the opposite is true: open source
innovation leads to commercial knock-offs.  E.g., iTunes is a piece of
crap in comparison to Amarok.

J
