Re: [agi] would anyone want to use a commonsense KB?

2008-03-05 Thread YKY (Yan King Yin)
On 3/5/08, david cash [EMAIL PROTECTED] wrote:
 In my opinion, instead of cherry-picking desirable and undesirable traits
for an unconscious AGI entity that we, of course, wish to have consciousness
and cognitive abilities like reasoning, deductive and inductive logic
comprehension skills, emotional traits, compassion, ethics, street smarts,
and the like, we should follow this protocol: model AGI after the most
successful living (and non-living) humans.  That way, we will be able to
pretty much predict how they will react in various situations; for example,
we would not want to make a George W. Bush AGI.  We would, however, make the
Bill Clinton, David M. Cash, Angelina Jolie, and so on.  Why reinvent the
wheel when we have perfect examples of successful intelligence in front of
us every day and in our history books?  There will have to be some ground
rules though: no major world religious figures, no rapists or child
molesters, etc.  I will begin work on compiling a list of likely candidates
for the new project, which is already underway in Finland with several of my
colleagues.  By the way, my name is David M. Cash, otherwise known on the
Internet as DC or DMC.  On a final note, I feel that each AGI should have an
independent personality, not some Borg-like slave mind that is easily
manipulated or controlled.  Furthermore, in creating this industry-standard
framework for the AGI mind, I propose we use only Commercial Open Source
Software or regular Open Source Software.  This project is too important to
be left in the hands of just one or a few corporations (e.g. Microsoft,
Cisco, Sun), who would inevitably taint the project with corporate greed and
shareholder interests, which would hamper and ultimately destroy the proper
development of the new Atlantis Project.  So let's call it Project Atlantis
from now on, if that is OK with everyone...

Calm down -- no one is a rapist here (AFAIK).  I just used that as an
example.

Secondly, a commercial AGI scenario may not be that bad; it may even be more
stable than a scenario where AGI is free for all.  The latter is not very
likely anyway, since an AGI needs to run on hardware, and hardware is not free.

YKY

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-03-05 Thread YKY (Yan King Yin)
On 3/4/08, Mark Waser [EMAIL PROTECTED] wrote:


  But the question is whether the internal knowledge representation of
the AGI needs to allow ambiguities, or should we use an ambiguity-free
representation.  It seems that the latter choice is better.

 An excellent point.  But what if the representation is natural language
with pointers to the specific intended meaning of any words that are
possibly ambiguous?  That would seem to be the best of both worlds.

Yes, that's the same strategy I had in mind.  I'm not sure if there are
better ways.

The problem here is that the decompression algorithm seems to be very
complex.  The algorithm to compute the combination of two concepts A and B
goes like this:

1.  generate random sentences containing A and B
2.  test these sentences to see if they make sense (to "make sense" means
to be supported by, or be consistent with, other facts/rules).

Such an algorithm may be very time-consuming -- this may explain why
*reading* NL is a slow task for humans -- we need to find abductive
explanations for the texts.
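To make the two steps concrete, here is a minimal generate-and-test sketch in Python; the knowledge base, the relation inventory, and the "makes sense" check are all invented placeholders, not a real implementation:

```python
# Toy knowledge base of accepted facts (hypothetical triples).
KB = {
    ("pie", "has_filling", "apple"),
    ("knob", "attached_to", "door"),
}

# Hypothetical inventory of relations to try between two concepts.
RELATIONS = ["has_filling", "attached_to", "works_in", "made_of"]

def candidate_sentences(a, b):
    """Step 1: generate candidate combinations of concepts a and b."""
    for rel in RELATIONS:
        yield (a, rel, b)
        yield (b, rel, a)

def makes_sense(triple, kb):
    """Step 2: stand-in for 'supported by, or consistent with, other facts'."""
    return triple in kb

def combine(a, b, kb=KB):
    """Return the interpretations of the compound 'a b' that survive testing."""
    return [t for t in candidate_sentences(a, b) if makes_sense(t, kb)]

print(combine("apple", "pie"))   # [('pie', 'has_filling', 'apple')]
```

Even in this toy form the cost is visible: the candidate space grows with the relation inventory, and every candidate needs a consistency check against the whole KB.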

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-05 Thread YKY (Yan King Yin)
On 3/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Rather, I think the right goal is to create an AGI that, in each
 context, can be as ambiguous as it wants/needs to be in its
 representation of a given piece of information.

 Ambiguity allows compactness, and can be very valuable in this regard.

What exactly does "ambiguity" mean?  If it means that sentences in the KR have
multiple meanings, then all KRs are necessarily ambiguous.  This is because
the information content of the world exceeds the information content of the
representation.

The question is whether we should use a NL-like KR.  If we use NL as the
KR, every thought would involve abductive interpretation, which is very
time-consuming.  An alternative is to decompress NL texts and then compress
the information again using *another* format.  This new format may be easier
to decompress later.  (NL is difficult to decompress).

Maybe the human brain's KR is also different from NL...

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread Mark Waser
 But the question is whether the internal knowledge representation of the AGI 
 needs to allow ambiguities, or should we use an ambiguity-free 
 representation.  It seems that the latter choice is better. 

An excellent point.  But what if the representation is natural language with 
pointers to the specific intended meaning of any words that are possibly 
ambiguous?  That would seem to be the best of both worlds.
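One way to picture this is natural-language text where each ambiguous word carries a pointer into a sense inventory.  A rough sketch, with a made-up inventory in the spirit of WordNet synset IDs (all identifiers here are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sense inventory; a real system might point into WordNet.
SENSES = {
    "bank.n.01": "sloping land beside a river",
    "bank.n.02": "financial institution",
}

@dataclass
class Token:
    surface: str                   # the word as written
    sense: Optional[str] = None    # pointer to the intended meaning, if ambiguous

# "I deposited cash at the bank", with the one ambiguous word disambiguated.
sentence = [
    Token("I"), Token("deposited"), Token("cash"),
    Token("at"), Token("the"), Token("bank", "bank.n.02"),
]

def gloss(tok):
    """Expand a token to its intended meaning, falling back to the surface form."""
    return SENSES.get(tok.sense, tok.surface)

print(" ".join(gloss(t) for t in sentence))
# I deposited cash at the financial institution
```

The text stays readable as ordinary NL, while the pointers remove the ambiguity whenever precision is needed.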
  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Monday, March 03, 2008 5:03 PM
  Subject: Re: [agi] would anyone want to use a commonsense KB?


  On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote: 

   Good example, but how about: language is open-ended, period, and capable of 
infinite rather than myriad interpretations -- and that open-endedness is the 
whole point of it?

   Simple example, much like yours: "handle".  You can attach words for objects 
ad infinitum to form different sentences -- 

   handle an egg / spear / pen / snake / stream of water, etc. -- 

   the hand shape referred to will keep changing, basically because your hand 
is capable of an infinity of shapes and ways of handling an infinity of 
different objects.

   And the next sentence after that first one may require that the reader 
know exactly which shape the hand took.

   But if you avoid natural language and its open-endedness, then you are 
surely avoiding AGI.  It's that capacity for open-ended concepts that is 
central to a true AGI (like a human or animal).  It enables us to keep coming 
up with new ways to deal with new kinds of problems and situations -- new ways 
to "handle" any problem.  (And it also enables us to keep recognizing new 
kinds of objects that might classify as a "knife" -- as well as new ways of 
handling them -- which could be useful, for example, when in danger.)


  Sure, AGI needs to handle NL in an open-ended way.  But the question is 
whether the internal knowledge representation of the AGI needs to allow 
ambiguities, or whether we should use an ambiguity-free representation.  It 
seems that the latter choice is better.  Otherwise, the knowledge stored in 
episodic memory would be open to interpretation and may lead to errors in 
recall, and similar problems.

  YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread Bob Mottram
On 04/03/2008, Mark Waser [EMAIL PROTECTED] wrote:

   But the question is whether the internal knowledge representation of
 the AGI needs to allow ambiguities, or should we use an ambiguity-free
 representation.  It seems that the latter choice is better.

 An excellent point.  But what if the representation is natural language
 with pointers to the specific intended meaning of any words that are
 possibly ambiguous?  That would seem to be the best of both worlds.



This is fine provided that the AGI lives inside a chess-like, ambiguity-free
world, which could be a simulation or maybe some abstract data-mining
environment.



Re: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread Mike Tintner
Dead right (in an ambiguous way :) )

Basically, an AGI without open-ended concepts will never live in the real world. 
I should add that I don't believe early, true AGIs *will* be anywhere near 
capable of natural language. All they will need is one or more systems of 
open-ended concepts.

Emotions are one such system. A drive/urge of hunger for food is an open-ended 
concept that allows an animal to seek/eat any of a whole range of foods 
(unless you're pregnant and it's 1 a.m. and only one particular form of 
chocolate will do - but even then it could be any of many brands).

A body can itself be regarded as a system of open-ended concepts -- means for 
effecting concepts of how to seek goals.  Hands and other limbs, and indeed a 
torso, offer a potentially infinite range of ways to effect commands to handle 
objects. They offer a roboticist many degrees of freedom -- true mobility.

(Do you think about the body, natural or robotic, from this POV, Bob?)
  Bob M: Mark Waser [EMAIL PROTECTED] wrote:
 But the question is whether the internal knowledge representation of the 
AGI needs to allow ambiguities, or should we use an ambiguity-free 
representation.  It seems that the latter choice is better. 

An excellent point.  But what if the representation is natural language 
with pointers to the specific intended meaning of any words that are possibly 
ambiguous?  That would seem to be the best of both worlds.


  This is fine provided that the AGI lives inside a chess-like, ambiguity-free 
world, which could be a simulation or maybe some abstract data-mining 
environment.



Re: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread david cash
In my opinion, instead of cherry-picking desirable and undesirable traits
for an unconscious AGI entity that we, of course, wish to have consciousness
and cognitive abilities like reasoning, deductive and inductive logic
comprehension skills, emotional traits, compassion, ethics, street smarts,
and the like, we should follow this protocol: model AGI after the most
successful living (and non-living) humans.  That way, we will be able to
pretty much predict how they will react in various situations; for example,
we would not want to make a George W. Bush AGI.  We would, however, make the
Bill Clinton, David M. Cash, Angelina Jolie, and so on.  Why reinvent the
wheel when we have perfect examples of successful intelligence in front of
us every day and in our history books?  There will have to be some ground
rules though: no major world religious figures, no rapists or child
molesters, etc.  I will begin work on compiling a list of likely candidates
for the new project, which is already underway in Finland with several of my
colleagues.  By the way, my name is David M. Cash, otherwise known on the
Internet as DC or DMC.  On a final note, I feel that each AGI should have an
independent personality, not some Borg-like slave mind that is easily
manipulated or controlled.  Furthermore, in creating this industry-standard
framework for the AGI mind, I propose we use only Commercial Open Source
Software or regular Open Source Software.  This project is too important to
be left in the hands of just one or a few corporations (e.g. Microsoft,
Cisco, Sun), who would inevitably taint the project with corporate greed and
shareholder interests, which would hamper and ultimately destroy the proper
development of the new Atlantis Project.  So let's call it Project Atlantis
from now on, if that is OK with everyone...

On Tue, Mar 4, 2008 at 7:58 AM, Bob Mottram [EMAIL PROTECTED] wrote:

 On 04/03/2008, Mark Waser [EMAIL PROTECTED] wrote:

But the question is whether the internal knowledge representation of
  the AGI needs to allow ambiguities, or should we use an ambiguity-free
  representation.  It seems that the latter choice is better.
 
  An excellent point.  But what if the representation is natural language
  with pointers to the specific intended meaning of any words that are
  possibly ambiguous?  That would seem to be the best of both worlds.
 


 This is fine provided that the AGI lives inside a chess-like, ambiguity-free
 world, which could be a simulation or maybe some abstract data-mining
 environment.



RE: [agi] would anyone want to use a commonsense KB?

2008-03-04 Thread David Clark
I have been an admirer of your robotic work for many years, but I just
couldn't help asking you a question.

 

I have created a few robots using sonar and other sensors, and I totally
agree that the real world is very messy.  However, most of my thinking happens
at a much higher level than what my senses provide.  The messy world is turned
into my symbolic world through a complex maze of systems, but in the end my
conscious brain, at least, uses the symbols created, not messy reality.

 

I agree that the idea that the whole AGI puzzle can be solved exclusively
using predicate logic doesn't fit with what we both know about sensor data in
the real world.  On the other hand, I don't see why a huge chunk (and I would
argue the more difficult part) of the intelligence of an AGI can't be done
using words, numbers and pictures (idealized pictures, I believe, are more
valuable than real ones) in a vast series of models.

 

I can't see why AGI researchers would have to take an either/or approach
between dealing with the real world AND intelligently correlating the symbols
created from it.  Having both seems like an obviously better approach; neither
by itself seems even remotely plausible to me.  If both areas can't be covered
by a single researcher or group, then I see no reason why each group can't do
what it does best and then develop an interface so that the final AGI has the
benefit of both.

 

I am not proposing that all AGI designs are equal or useful and I am not
proposing an AGI should be created using an amalgamation of all types of
AGI.  I am specifically referring to only symbolic intelligence (in some
form) and the systems that would turn messy real world data into symbols
that could be used by the symbolic system.

 

I have great respect for people who are working on turning pictures and
vision into useful symbolic objects.  I believe this, speech recognition,
etc. are greatly needed and would be very helpful to an AGI, but I don't
believe these problems need to be solved as a first step to AGI, or that they
are entirely necessary to at least get close to human-level intelligence.
Is a blind man who is also a paraplegic necessarily considered less
intelligent than an able-bodied person?

 

David Clark

 

From: Bob Mottram [mailto:[EMAIL PROTECTED] 
Sent: March-04-08 8:58 AM
To: agi@v2.listbox.com
Subject: Re: [agi] would anyone want to use a commonsense KB?

 

On 04/03/2008, Mark Waser [EMAIL PROTECTED] wrote:

 But the question is whether the internal knowledge representation of the
AGI needs to allow ambiguities, or should we use an ambiguity-free
representation.  It seems that the latter choice is better. 

 

An excellent point.  But what if the representation is natural language with
pointers to the specific intended meaning of any words that are possibly
ambiguous?  That would seem to be the best of both worlds.



This is fine provided that the AGI lives inside a chess-like, ambiguity-free
world, which could be a simulation or maybe some abstract data-mining
environment.

 



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread YKY (Yan King Yin)
On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote:

  I think Ben's text mining approach has one big flaw:  it can
only reason about existing knowledge, but cannot generate new ideas using
words / concepts

 There is a substantial amount of literature that claims that *humans*
can't generate new ideas de novo either -- and that they can only build up
new ideas from existing pieces.

That's fine, but the way our language builds up new ideas seems to be very
complex, and it makes natural language a bad knowledge representation for
AGI.

For example:
An apple pie is a pie with apple filling.
A door knob is a knob attached to a door.
A street prostitute is a prostitute working in the streets.

So the meaning of "A B" depends on the *interactions* of A and B, and this
violates the principle of compositionality, under which the meaning of "A B"
would be combined from A and B in some *fixed* way.

An even more complex example:
spread the jam with a knife
draw a circle with a knife
cut the cake with a knife
rape the girl with a knife
stop the train with a knife (with unclear meaning)

So the simple concept "do X with a knife" can be interpreted in myriad ways
-- it generates new ideas in complex ways.
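The point about compositionality can be made concrete: a fixed combination rule would compute the meaning of "A B" from the meanings of A and B alone, but each compound above needs its own linking relation.  A toy illustration (the relation names and the table are invented, not from any real lexicon):

```python
# Each noun-noun compound needs its own, learned linking relation, so no
# single fixed rule maps (meaning(A), meaning(B)) -> meaning("A B").
COMPOUND_RELATIONS = {
    ("apple", "pie"): "FILLING_OF",           # a pie with apple filling
    ("door", "knob"): "ATTACHED_TO",          # a knob attached to a door
    ("street", "prostitute"): "WORKPLACE_OF", # a prostitute working in the streets
}

def interpret(a, b):
    """Look up the stored interaction between a and b; fail if none is known."""
    rel = COMPOUND_RELATIONS.get((a, b))
    if rel is None:
        raise KeyError(f"no stored interaction for '{a} {b}'")
    return f"{b} [{rel} {a}]"

print(interpret("apple", "pie"))   # pie [FILLING_OF apple]
```

A genuinely compositional rule would never need the lookup table; the table is exactly the extra, interaction-specific knowledge the examples above demand.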

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Mike Tintner
YKY: the way our language builds up new ideas seems to be very complex, and it 
makes natural language a bad knowledge representation for AGI.
An even more complex example:
spread the jam with a knife
draw a circle with a knife
cut the cake with a knife
rape the girl with a knife
stop the train with a knife (with unclear meaning)
So the simple concept "do X with a knife" can be interpreted in myriad ways -- 
it generates new ideas in complex ways.

YKY,

Good example, but how about: language is open-ended, period, and capable of 
infinite rather than myriad interpretations -- and that open-endedness is the 
whole point of it?

Simple example, much like yours: "handle".  You can attach words for objects ad 
infinitum to form different sentences -- 

handle an egg / spear / pen / snake / stream of water, etc. -- 

the hand shape referred to will keep changing, basically because your hand is 
capable of an infinity of shapes and ways of handling an infinity of different 
objects.

And the next sentence after that first one may require that the reader know 
exactly which shape the hand took.

But if you avoid natural language and its open-endedness, then you are surely 
avoiding AGI.  It's that capacity for open-ended concepts that is central to a 
true AGI (like a human or animal).  It enables us to keep coming up with new 
ways to deal with new kinds of problems and situations -- new ways to "handle" 
any problem.  (And it also enables us to keep recognizing new kinds of objects 
that might classify as a "knife" -- as well as new ways of handling them -- 
which could be useful, for example, when in danger.)

  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Monday, March 03, 2008 7:14 PM
  Subject: Re: [agi] would anyone want to use a commonsense KB?




  On 2/28/08, Mark Waser [EMAIL PROTECTED] wrote: 
   
I think Ben's text mining approach has one big flaw:  it can only reason 
about existing knowledge, but cannot generate new ideas using words / concepts

   There is a substantial amount of literature that claims that *humans* can't 
generate new ideas de novo either -- and that they can only build up new 
ideas from existing pieces.


  That's fine, but the way our language builds up new ideas seems to be very 
complex, and it makes natural language a bad knowledge representation for AGI.

  For example:
  An apple pie is a pie with apple filling.
  A door knob is a knob attached to a door.
  A street prostitute is a prostitute working in the streets.

  So the meaning of "A B" depends on the *interactions* of A and B, and this 
violates the principle of compositionality, under which the meaning of "A B" 
would be combined from A and B in some *fixed* way.

  An even more complex example:
  spread the jam with a knife
  draw a circle with a knife
  cut the cake with a knife
  rape the girl with a knife
  stop the train with a knife (with unclear meaning)

  So the simple concept "do X with a knife" can be interpreted in myriad ways 
-- it generates new ideas in complex ways.

  YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread YKY (Yan King Yin)
On 3/4/08, Mike Tintner [EMAIL PROTECTED] wrote:

 Good example, but how about: language is open-ended, period, and capable of
infinite rather than myriad interpretations -- and that open-endedness is
the whole point of it?

 Simple example, much like yours: "handle".  You can attach words for
objects ad infinitum to form different sentences --

 handle an egg / spear / pen / snake / stream of water, etc. --

 the hand shape referred to will keep changing, basically because your
hand is capable of an infinity of shapes and ways of handling an infinity of
different objects.

 And the next sentence after that first one may require that the
reader know exactly which shape the hand took.

 But if you avoid natural language and its open-endedness, then you are
surely avoiding AGI.  It's that capacity for open-ended concepts that is
central to a true AGI (like a human or animal).  It enables us to keep coming
up with new ways to deal with new kinds of problems and situations -- new
ways to "handle" any problem.  (And it also enables us to keep recognizing
new kinds of objects that might classify as a "knife" -- as well as new ways
of handling them -- which could be useful, for example, when in danger.)

Sure, AGI needs to handle NL in an open-ended way.  But the question is
whether the internal knowledge representation of the AGI needs to allow
ambiguities, or whether we should use an ambiguity-free representation.  It
seems that the latter choice is better.  Otherwise, the knowledge stored in
episodic memory would be open to interpretation and may lead to errors in
recall, and similar problems.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Ben Goertzel
 Sure, AGI needs to handle NL in an open-ended way.  But the question is
 whether the internal knowledge representation of the AGI needs to allow
 ambiguities, or whether we should use an ambiguity-free representation.  It
 seems that the latter choice is better.  Otherwise, the knowledge stored in
 episodic memory would be open to interpretation and may lead to errors in
 recall, and similar problems.

Rather, I think the right goal is to create an AGI that, in each
context, can be as ambiguous as it wants/needs to be in its
representation of a given piece of information.

Ambiguity allows compactness, and can be very valuable in this regard.

Guidance on this issue is provided by the Lojban language.  Lojban
allows extremely precise expression, but also allows ambiguity as
desired.  What one finds when speaking Lojban is that sometimes one
chooses ambiguity because it lets one make one's utterances shorter.  I
think the same thing holds in terms of an AGI's memory.  An AGI with
finite memory resources must sometimes choose to represent relatively
unimportant information ambiguously rather than precisely so as to
conserve memory.

For instance, storing the information

A is associated with B

is highly ambiguous, but takes little memory.  Storing logical
information regarding the precise relationship between A and B may
take one or more orders of magnitude more memory.
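The trade-off can be sketched with two toy records; the field layout for the "precise" version is invented purely for illustration:

```python
# An ambiguous association: just the fact that A and B are related somehow.
ambiguous = ("A", "B")   # 2 fields

# A precise logical record of the same relationship takes far more structure
# (all field names here are hypothetical).
precise = {
    "subject": "A",
    "object": "B",
    "relation": "causes",
    "context": "experiment_17",
    "quantifier": "forall",
    "strength": 0.93,
    "source": "observation",
}

print(len(ambiguous), "fields vs", len(precise), "fields")  # 2 fields vs 7 fields
```

An AGI under memory pressure could keep only the bare pair for unimportant facts and pay for the full record only where precision matters.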

-- Ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-29 Thread Charles D Hixson

Ben Goertzel wrote:

  yet I still feel you dismiss the text-mining approach too glibly...

 No, but text mining requires a language model that learns while mining.  You
 can't mine the text first.



Agreed ... and this gets into subtle points.  Which aspects of the
language model
need to be adapted while mining, and which can remain fixed?  Answering this
question the right way may make all the difference in terms of the viability of
the approach...

ben
  
Given the history of the evolution of language... ALL aspects of the 
language model need to be adaptive, but some need to be more easily 
adapted than others.  E.g., adding words needs to be something that's 
easy to do.  Combining words and eliding pieces should be more difficult (but 
that's how languages transition from forms without verb endings to forms 
with verb endings).


E.g., the "-ed" past-tense suffix of verbs is derived from the word "did" 
(as in "derive did" instead of "derived" in the previous sentence).


If you go looking, you find transitions where the order of subject, verb 
and object flips, and many other permutations.  If you don't find a 
permutation, this doesn't mean it never happened and will never happen, 
but rather that most of the evidence is missing, so many rare events 
aren't recorded.  There probably actually *are* some transitions that 
have zero probability, but we don't know what they are.  So just make 
some transitions extremely improbable.  (Who would have predicted 
l33tspeak ahead of time?)




Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread YKY (Yan King Yin)
My latest thinking tends to agree with Matt that language and common sense
are best learnt together.  (Learning language before common sense
is impossible / senseless.)

I think Ben's text mining approach has one big flaw:  it can only reason
about existing knowledge, but cannot generate new ideas using words /
concepts.  I want to stress that AGI needs to be able to think at
the WORD/CONCEPT level.  In order to do this, we need some rules that
*rewrite* sentences made up of words, such that the AGI can reason from one
sentence to another.  Such rewrite rules are very numerous and can be very
complex -- for example rules for auxiliary words and prepositions, etc.  I'm
not even sure that such rules can be expressed in FOL easily -- let alone
learn them!
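For illustration, here is what one such sentence-level rewrite rule might look like in Python; the patterns and the two-fact example are invented, and a real system would need vastly many such rules:

```python
import re

# One hypothetical rewrite rule: from "X is a Y" and "Y can Z", derive "X can Z".
ISA = re.compile(r"(\w+) is a (\w+)")
CAN = re.compile(r"(\w+) can (\w+)")

def rewrite(facts):
    """Apply the rule once, returning the original facts plus derived ones."""
    derived = set(facts)
    for f1 in facts:
        m1 = ISA.fullmatch(f1)
        if not m1:
            continue
        for f2 in facts:
            m2 = CAN.fullmatch(f2)
            if m2 and m2.group(1) == m1.group(2):
                derived.add(f"{m1.group(1)} can {m2.group(2)}")
    return derived

print(rewrite({"Tweety is a bird", "bird can fly"}))
# the result includes "Tweety can fly"
```

Even this toy rule only matches one rigid surface pattern; handling auxiliaries, prepositions, and agreement at the string level is exactly what makes the rule set explode.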

The embodiment approach provides an environment for learning qualitative
physics, but it's still different from the common sense domain where
knowledge is often verbally expressed.  In fact, it's not the environment
that matters, it's the knowledge representation (whether it's expressive
enough) and the learning algorithm (how sophisticated it is).

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread Ben Goertzel
Hi,

 I think Ben's text mining approach has one big flaw:  it can only reason
 about existing knowledge, but cannot generate new ideas using words /
 concepts.

Text mining is not an AGI approach, it's merely a possible way of getting
knowledge into an AGI.

Whether the AGI can generate new ideas is independent of whether it
gets knowledge via text mining or via some other means...

 I want to stress that AGI needs to be able to think at the
 WORD/CONCEPT level.  In order to do this, we need some rules that *rewrite*
 sentences made up of words, such that the AGI can reason from one sentence
 to another.  Such rewrite rules are very numerous and can be very complex --
 for example rules for auxiliary words and prepositions, etc.  I'm not even
 sure that such rules can be expressed in FOL easily -- let alone learn them!

This seems off somehow -- I don't think reasoning should be implemented
on the level of linguistic surface forms.

 The embodiment approach provides an environment for learning qualitative
 physics, but it's still different from the common sense domain where
 knowledge is often verbally expressed.

I don't get your point...

Most of common sense is about the world in which we live, as embodied
social organisms...  Embodiment buys you a lot more than qualitative
physics.  It buys you richly shared social experience, among other things.

 In fact, it's not the environment
 that matters, it's the knowledge representation (whether it's expressive
 enough) and the learning algorithm (how sophisticated it is).

I think that all three of these things matter a lot, along with the
overall cognitive
architecture.

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread Mark Waser
 I think Ben's text mining approach has one big flaw:  it can only reason 
 about existing knowledge, but cannot generate new ideas using words / 
 concepts

There is a substantial amount of literature that claims that *humans* can't 
generate new ideas de novo either -- and that they can only build up new 
ideas from existing pieces.

 Such rewrite rules are very numerous and can be very complex -- for example 
 rules for auxiliary words and prepositions, etc

The epicycles that the sun performs as it moves around the Earth are also very 
numerous and complex -- until you decide that maybe you should view it as the 
Earth moving around the sun instead.  Read some Pinker -- the rules of language 
tell us *a lot* about the tough-to-discern foundations of human cognition.
  - Original Message - 
  From: YKY (Yan King Yin) 
  To: agi@v2.listbox.com 
  Sent: Thursday, February 28, 2008 4:37 AM
  Subject: Re: [agi] would anyone want to use a commonsense KB?



  My latest thinking tends to agree with Matt that language and common sense 
are best learnt together.  (Learning language before common sense is 
impossible / senseless).

  I think Ben's text mining approach has one big flaw:  it can only reason 
about existing knowledge, but cannot generate new ideas using words / concepts. 
 I want to stress that AGI needs to be able to think at the WORD/CONCEPT level. 
 In order to do this, we need some rules that *rewrite* sentences made up of 
words, such that the AGI can reason from one sentence to another.  Such rewrite 
rules are very numerous and can be very complex -- for example rules for 
auxiliary words and prepositions, etc.  I'm not even sure that such rules can 
be expressed in FOL easily -- let alone learned!

  The embodiment approach provides an environment for learning qualitative 
physics, but it's still different from the common sense domain where knowledge 
is often verbally expressed.  In fact, it's not the environment that matters, 
it's the knowledge representation (whether it's expressive enough) and the 
learning algorithm (how sophisticated it is).

  YKY




Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread YKY (Yan King Yin)
On 2/27/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 YKY

 I thought you were   talking about the extraction of information that
 is explicitly stated in online text.

 Of course, inference is a separate process (though it may also play a
 role in direct information extraction).

 I don't think the rules of inference per se need to be learned.  In
 our book on PLN we outline a complete set of probabilistic logic
 inference rules, for example.

 What needs to be learned via experience is how to appropriately bias
 inference control -- how to sensibly prune the inference tree.

 So, one needs an inference engine that can adaptively learn better and
 better inference control as it carries out inferences.  We designed
 and partially implemented this feature in the NCE but never completed
 the work due to other priorities ... but I hope this can get done in
 NM or OpenCog sometime in late 2008..

I'm not talking about inference control here -- I assume that inference
control is done in a proper way, and there will still be a problem.  You
seem to assume that all knowledge = what is explicitly stated in online
texts.  So you deny that there is a large body of implicit knowledge other
than inference control rules (which are few in comparison).

I think that if your AGI doesn't have the implicit knowledge, it'd only be
able to perform simple inferences about statistical events -- for
example, calculating the probability of (lung cancer | smoking).

The kind of reasoning I'm interested in is more sophisticated.  For example,
I may ask the AGI to open a file and print the 100th line (in Java or C++,
say).  The AGI should be able to use a loop to read and discard the first 99
lines.  We need a step like "read 99 lines -> use a loop", but such a step
must be based on even simpler *concepts* of repetition and using loops.
What I'm saying is that your AGI does NOT have such rules and would be
incapable of thinking about such things.
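YKY's file example is concrete enough to sketch.  Here is a minimal version of the program in question -- in Python rather than the Java or C++ he mentions, purely for brevity; the helper name is mine:

```python
# Open a file and print its 100th line, discarding the first 99 in a loop --
# the task YKY asks the AGI to reason its way to.

def hundredth_line(path):
    """Return the 100th line of a file (without newline), or None if shorter."""
    with open(path) as f:
        for _ in range(99):           # read and discard the first 99 lines
            if f.readline() == "":
                return None           # file ended before line 100
        line = f.readline()
        return line.rstrip("\n") if line else None
```

The point of the email is not the code itself but that writing it requires the concept "skip N items -> use a loop", which the program only makes explicit.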

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
 I'm not talking about inference control here -- I assume that inference
 control is done in a proper way, and there will still be a problem.  You
 seem to assume that all knowledge = what is explicitly stated in online
 texts.  So you deny that there is a large body of implicit knowledge other
 than inference control rules (which are few in comparison).

 I think that if your AGI doesn't have the implicit knowledge, it'd only be
 able to perform simple inferences about statistical events -- for example,
 calculating the probability of (lung cancer | smoking).

For instance, suppose you ask an AI if chocolate makes a person more
alert.

It might read one article saying that coffee makes people more alert,
and another article saying that chocolate contains theobromine, and another
article saying that theobromine is related to caffeine, and another article
saying that coffee contains caffeine ... and then put the pieces together to
answer YES

This kind of reasoning
may sound simple but getting it to work systematically on the large
scale based on text mining has not been done...

And it does seem w/in the grasp of current tech without any breakthroughs...
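A toy sketch of the kind of chaining Ben describes, with invented facts and predicate names standing in for relations mined from the four articles -- no real text-mining system is involved:

```python
# Each tuple stands for a relation mined from one article.  Chaining them
# answers whether chocolate makes a person more alert.  Facts and predicate
# names are invented for illustration.

facts = {
    ("coffee", "makes_more", "alert"),
    ("coffee", "contains", "caffeine"),
    ("chocolate", "contains", "theobromine"),
    ("theobromine", "related_to", "caffeine"),
}

def makes_alert(food):
    # Direct statement mined from text?
    if (food, "makes_more", "alert") in facts:
        return True
    # Otherwise: food contains X, X is related to some Y (or X == Y), and
    # some food known to cause alertness contains Y.
    contains = {x for f, r, x in facts if f == food and r == "contains"}
    related = contains | {y for x in contains
                          for a, r, y in facts
                          if a == x and r == "related_to"}
    for g, r, _ in facts:
        if r == "makes_more" and (g, "makes_more", "alert") in facts:
            if any((g, "contains", y) in facts for y in related):
                return True
    return False
```

On these four facts the chain coffee/caffeine/theobromine/chocolate goes through; the hard part Ben points at is doing this reliably at web scale, not the chaining itself.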

 The kind of reasoning I'm interested in is more sophisticated.  For example,
 I may ask the AGI to open a file and print the 100th line (in Java or C++,
 say).  The AGI should be able to use a loop to read and discard the first 99
  lines.  We need a step like "read 99 lines -> use a loop", but such a step
  must be based on even simpler *concepts* of repetition and using loops.
 What I'm saying is that your AGI does NOT have such rules and would be
 incapable of thinking about such things.

Being incapable of thinking about such things is way too strong a statement --
that has to do with the AI's learning/reasoning algorithms rather than about the
knowledge it has.

I think there would be a viable path to AGI via

1)
Filling a KB up w/ commonsense knowledge via text mining and simple inference,
as I described above

2)
Building an NL conversation system utilizing the KB created in 1

3)
Teaching the AGI the implicit knowledge you suggest via conversing with it

As noted I prefer to introduce embodiment into the mix, though, for a variety
of reasons...

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Matt Mahoney
--- Ben Goertzel [EMAIL PROTECTED] wrote:
 For instance, suppose you ask an AI if chocolate makes a person more
 alert.
 
 It might read one article saying that coffee makes people more alert,
 and another article saying that chocolate contains theobromine, and another
 article saying that theobromine is related to caffeine, and another article
 saying that coffee contains caffeine ... and then put the pieces together to
 answer YES
 
 This kind of reasoning
 may sound simple but getting it to work systematically on the large
 scale based on text mining has not been done...

 And it does seem w/in the grasp of current tech without any breakthroughs...

It could be done with a simple chain of word associations mined from a text
corpus: alert -> coffee -> caffeine -> theobromine -> chocolate.

But that is not the problem.  The problem is that the reasoning would be
faulty, even with a more sophisticated analysis.  By a similar analysis you
could reason:

- coffee makes you alert.
- coffee contains water.
- water (H2O) is related to hydrogen sulfide (H2S).
- rotten eggs produce hydrogen sulfide.
- therefore rotten eggs make you alert.

Long chains of logical reasoning are not very useful outside of mathematics.
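Matt's point can be made concrete: in a bare word-association graph, the same search that finds the helpful chocolate chain also finds the faulty rotten-eggs chain, so association alone cannot separate good inferences from noise.  A toy sketch, with edges invented for illustration:

```python
# Breadth-first search over a word-association graph.  Both the "good" and
# the "faulty" chain are found by exactly the same procedure.
from collections import deque

edges = {
    "alert": ["coffee"],
    "coffee": ["caffeine", "water"],
    "caffeine": ["theobromine"],
    "theobromine": ["chocolate"],
    "water": ["hydrogen sulfide"],
    "hydrogen sulfide": ["rotten eggs"],
}

def chain(start, goal):
    """Shortest association chain from start to goal, or None."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        for nxt in edges.get(path[-1], []):
            if nxt in seen:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None
```

Here chain("alert", "chocolate") and chain("alert", "rotten eggs") both succeed, which is exactly the problem.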

 I think there would be a viable path to AGI via
 
 1)
 Filling a KB up w/ commonsense knowledge via text mining and simple
 inference,
 as I described above
 
 2)
 Building an NL conversation system utilizing the KB created in 1
 
 3)
 Teaching the AGI the implicit knowledge you suggest via conversing with it

I think adding common sense knowledge before language is the wrong approach. 
It didn't work for Cyc.

Natural language evolves to the easiest form for humans to learn, because if a
language feature is hard to learn, people will stop using it because they
aren't understood.  We would be wise to study language learning in humans and
model the process.  The fact is that children learn language in spite of a
lack of common sense.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
  It could be done with a simple chain of word associations mined from a text
  corpus: alert -> coffee -> caffeine -> theobromine -> chocolate.

That approach yields way, way, way too much noise.  Try it.

  But that is not the problem.  The problem is that the reasoning would be
  faulty, even with a more sophisticated analysis.  By a similar analysis you
  could reason:

  - coffee makes you alert.
  - coffee contains water.
  - water (H2O) is related to hydrogen sulfide (H2S).
  - rotten eggs produce hydrogen sulfide.
  - therefore rotten eggs make you alert.

There is a produce predicate in here which throws off the chain of
reasoning wildly.

And, nearly every food contains water, so the application of Bayes
rule within this inference chain of yours will yield a conclusion with
essentially zero confidence.  Since fewer foods contain caffeine or
theobromine, the inference trail I suggested will not have this
problem.

In short, I claim your similar analysis is only similar at a very
crude level of analysis, and is not similar when you look at the
actual probabilistic inference steps involved.

  Long chains of logical reasoning are not very useful outside of mathematics.

But the inference chain I gave as an example is NOT very long. The
problem is actually that outside of math, chains of inference (long or
short) require contextualization...
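Ben's Bayes-rule point is easy to illustrate numerically: the evidential weight of "food F contains X, and some alertness-causing food contains X" depends on how rare X is.  A back-of-the-envelope sketch -- all numbers are invented:

```python
# How much more likely an alerting food is to contain a given ingredient
# than an arbitrary food is.  A ratio near 1 carries essentially no evidence.

def likelihood_ratio(p_ingredient_given_alerting, p_ingredient_overall):
    return p_ingredient_given_alerting / p_ingredient_overall

# Nearly every food contains water, so the "contains water" link is useless:
water_lr = likelihood_ratio(1.00, 0.99)      # ~1.01: essentially zero evidence

# Few foods contain caffeine or theobromine, so that link is informative:
caffeine_lr = likelihood_ratio(0.60, 0.02)   # ~30: strong evidence
```

This is why the rotten-eggs chain collapses under probabilistic scrutiny while the theobromine chain survives.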

   I think there would be a viable path to AGI via
  
   1)
   Filling a KB up w/ commonsense knowledge via text mining and simple
   inference,
   as I described above
  
   2)
   Building an NL conversation system utilizing the KB created in 1
  
   3)
   Teaching the AGI the implicit knowledge you suggest via conversing with 
 it

  I think adding common sense knowledge before language is the wrong approach.
  It didn't work for Cyc.

I agree it's not the best approach.

I also think, though, that one unsuccessful attempt should not be taken to damn
the whole approach.

The failure of explicit knowledge encoding by humans does not straightforwardly
imply the failure of knowledge extraction via text mining (as approaches to AGI).

  Natural language evolves to the easiest form for humans to learn, because if a
  language feature is hard to learn, people will stop using it because they
  aren't understood.  We would be wise to study language learning in humans and
  model the process.  The fact is that children learn language in spite of a
  lack of common sense.

Actually, they seem to acquire language and common sense together.

Yet wild children and apes learn common sense but never acquire language
beyond the proto-language level.

But I agree, study of human dev psych is one thing that has inclined
me toward the
embodied approach ...

yet I still feel you dismiss the text-mining approach too glibly...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

   It could be done with a simple chain of word associations mined from a text
   corpus: alert -> coffee -> caffeine -> theobromine -> chocolate.
 
 That approach yields way, way, way too much noise.  Try it.

I agree that it does to the point of uselessness.

 
   But that is not the problem.  The problem is that the reasoning would be
   faulty, even with a more sophisticated analysis.  By a similar analysis you
   could reason:

   - coffee makes you alert.
   - coffee contains water.
   - water (H2O) is related to hydrogen sulfide (H2S).
   - rotten eggs produce hydrogen sulfide.
   - therefore rotten eggs make you alert.
 
 There is a produce predicate in here which throws off the chain of
 reasoning wildly.
 
 And, nearly every food contains water, so the application of Bayes
 rule within this inference chain of yours will yield a conclusion with
 essentially zero confidence.  Since fewer foods contain caffeine or
 theobromine, the inference trail I suggested will not have this
 problem.

But you cannot conclude from
- coffee contains caffeine
- coffee makes you alert
that caffeine makes you alert either.

   I think adding common sense knowledge before language is the wrong
 approach.
   It didn't work for Cyc.
 
 I agree it's not the best approach.
 
 I also think, though, that one unsuccessful attempt should not be taken to
 damn
 the whole approach.

Well, there are many examples of AI failures where we add knowledge first,
then language (BASEBALL, Eliza, SHRDLU, hundreds of expert systems, etc.).  We
do this because abstract knowledge is easy to represent efficiently on a
computer.  But the brain doesn't work this way.  A language model has to work
like the brain, because language is adapted to it.  We don't do it that way
because it requires such a huge amount of computation.

 yet I still feel you dismiss the text-mining approach too glibly...

No, but text mining requires a language model that learns while mining.  You
can't mine the text first.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
   yet I still feel you dismiss the text-mining approach too glibly...

  No, but text mining requires a language model that learns while mining.  You
  can't mine the text first.

Agreed ... and this gets into subtle points.  Which aspects of the
language model
need to be adapted while mining, and which can remain fixed?  Answering this
question the right way may make all the difference in terms of the viability of
the approach...

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread YKY (Yan King Yin)
On 2/25/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi,

 There is no good overview of SMT so far as I know, just some technical
 papers... but SAT solvers are not that deep and are well reviewed in
 this book...

 http://www.sls-book.net/

But that's *propositional* satisfiability, the results may not extend to
first-order SAT -- I've no idea.

Secondly, learning an entire KB from a text corpus is much, much harder
than SAT.  Even learning a single hypothesis from examples with
background knowledge (i.e. the problem of inductive logic programming) is
harder than SAT.  Now you're talking about inducing the entire KB, possibly
with theory revision -- this is VERY impractical.

I guess I'd focus on learning simple rules, one at a time, from NL
instructions.  IMO this is one of the most feasible ways of acquiring the
AGI KB.  But it also involves the AGI itself in the acquisition process, not
just a passive collection of facts like MindPixel...

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread Ben Goertzel
Obviously, extracting knowledge from the Web using a simplistic SAT
approach is infeasible

However, I don't think it follows from this that extracting rich
knowledge from the Web is infeasible

It would require a complex system involving at least

1)
An NLP engine that maps each sentence into a menu of probabilistically
weighted logical interpretations of the sentence (including links into
other sentences built using anaphor resolution heuristics).  This
involves a dozen conceptually distinct components and is not at all
trivial to design, build or tune.

2)
Use of probabilistic inference rules to create implication links
between the different interpretations of the different sentences

3)
Use of an optimization algorithm (which could be a clever use of SAT
or SMT, or something else) to utilize the links formed in step 2, to
select the right interpretation(s) for each sentence


The job of the optimization algorithm is hard but not THAT hard
because the choice of the interpretation of one sentence is only
tightly linked to the choice of interpretation of a relatively small
set of other sentences (ones that are closely related syntactically,
semantically, or in terms of proximity in the same document, etc.).

I don't know any way to tell how well this would work, except to try.
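The three-stage pipeline above can be rendered as a skeleton, to make the data flow explicit.  Every function here is a stub -- the real components (the parser, the PLN-style inference, the SAT/SMT-based selection) are the hard open problems, and all names are mine:

```python
# Skeleton of the extraction pipeline: sentences -> weighted interpretations
# -> implication links -> globally consistent selection.  Stubs only.

def interpret(sentence):
    """Stage 1: map a sentence to probabilistically weighted logical
    interpretations, e.g. [(logical_form, probability), ...]."""
    raise NotImplementedError

def link(interpretations):
    """Stage 2: build implication links between interpretations of
    different sentences via probabilistic inference rules."""
    raise NotImplementedError

def select(interpretations, links):
    """Stage 3: global optimization (SAT/SMT or otherwise) choosing one
    interpretation per sentence, consistent with the links."""
    raise NotImplementedError

def extract_knowledge(corpus):
    interps = {s: interpret(s) for s in corpus}
    return select(interps, link(interps))
```

The structure also makes Ben's tractability remark visible: select only has to reconcile each sentence with the small set of sentences it is linked to, not with the whole corpus.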

My own approach, cast in these terms, would be to

-- use virtual-world grounding to help with the probabilistic
weighting in step 1 and the link building in step 2

-- use other heuristics besides SAT/SMT in step 3 ... but, using these
techniques within NM/OpenCog is also a possibility down the road, I've
been studying the possibility...


-- Ben







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread YKY (Yan King Yin)
On 2/26/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Obviously, extracting knowledge from the Web using a simplistic SAT
 approach is infeasible

 However, I don't think it follows from this that extracting rich
 knowledge from the Web is infeasible

 It would require a complex system involving at least

 1)
 An NLP engine that maps each sentence into a menu of probabilistically
 weighted logical interpretations of the sentence (including links into
 other sentences built using anaphor resolution heuristics).  This
 involves a dozen conceptually distinct components and is not at all
 trivial to design, build or tune.

 2)
 Use of probabilistic inference rules to create implication links
 between the different interpretations of the different sentences

 3)
 Use of an optimization algorithm (which could be a clever use of SAT
 or SMT, or something else) to utilize the links formed in step 2, to
 select the right interpretation(s) for each sentence

Gosh, I think you've missed something of critical importance...

The problem you stated above is about choosing the correct interpretation of
a bunch of sentences.  The problem we should tackle instead is learning the
rules that make up the KB.

To see the difference, let's consider this example:

Suppose I solve a problem (eg a programming exercise), and to illustrate my
train of thoughts I clearly write down all the steps.  So I have, in
English, a bunch of sentences A,B,C,...,Z where Z is the final conclusion
sentence.

Now the AGI can translate sentences A-Z into logical form.  You claim that
this problem is hard because of multiple interpretations.  But I think
that's relatively unimportant compared to the real problem we face.  So
let's assume that we successfully -- correctly -- translate the NL sentences
into logic.

Now let's imagine that the AGI is doing the exercise, not me.  Then it
should have a train of inference that goes from A to B to C ... and so
on... to Z.  But, the AGI would NOT be able to make such a train of
thoughts.  All it has is just a bunch of *static* sentences from A-Z.

What is missing?  What would allow the AGI to actually conduct the inference
from A-Z?

The missing ingredient is a bunch of rules.  These are the invisible glue
that links the thoughts between the lines.  This is the knowledge that I
think should be learned, and would be very difficult to learn.

You know what I'm talking about??

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread Ben Goertzel
YKY

I thought you were   talking about the extraction of information that
is explicitly stated in online text.

Of course, inference is a separate process (though it may also play a
role in direct information extraction).

I don't think the rules of inference per se need to be learned.  In
our book on PLN we outline a complete set of probabilistic logic
inference rules, for example.

What needs to be learned via experience is how to appropriately bias
inference control -- how to sensibly prune the inference tree.

So, one needs an inference engine that can adaptively learn better and
better inference control as it carries out inferences.  We designed
and partially implemented this feature in the NCE but never completed
the work due to other priorities ... but I hope this can get done in
NM or OpenCog sometime in late 2008..

-- Ben




RE: [agi] would anyone want to use a commonsense KB?

2008-02-25 Thread Ed Porter
Ben, 

Thanks for the info.  

If you knew of anything relatively simple that was on-line, that would be
preferred.  But, if not, I guess I could try to Google for something myself.
(Or if I wait long enough there will probably be a simple Wikipedia
explanation.)

Ed Porter

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Sunday, February 24, 2008 9:13 PM
To: agi@v2.listbox.com
Subject: Re: [agi] would anyone want to use a commonsense KB?

Hi,

There is no good overview of SMT so far as I know, just some technical
papers... but SAT solvers are not that deep and are well reviewed in
this book...

http://www.sls-book.net/

-- Ben

On Sun, Feb 24, 2008 at 4:38 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben or anyone,

  Do you know of an explanation or reference that is a "for Dummies"
  explanation of how SAT (or SMT) handles computations in spaces with, say,
  100,000 variables and/or 10^300 states in practically computable time.

  I assume it is by focusing only on that part of the space through which
  relevant and/or relatively short inferences paths pass, or something like
  that.

  Ed Porter


  -Original Message-
  From: Ben Goertzel [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, February 20, 2008 5:54 PM
  To: agi@v2.listbox.com

 Subject: Re: [agi] would anyone want to use a commonsense KB?



  And I seriously doubt that a general SMT solver +
prob. theory is going to beat a custom probabilistic logic solver.

  My feeling is that an SMT solver plus appropriate subsets of prob theory
  can be a very powerful component of a general probabilistic inference
  framework...

  I can back this up with some details but that would get too thorny
  for this list...

  ben









Re: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Jim Bromer


YKY (Yan King Yin) [EMAIL PROTECTED] wrote: 
 Comparing the problem at hand with SAT may not be very accurate.  First, we 
need to formulate the problem more clearly -- what exactly are we trying to do. 
 Then we can estimate whether it's feasible with available computing power.
   
 Also, modern complexity theory has moved beyond P and NP -- to things that I'm 
still struggling to learn.  For example, there are complexity classes beyond 
NP, in the polynomial hierarchy.  Also there is the analytical hierarchy which 
is different from the polynomial one.  Very often I see logical problems being 
described in the analytical hierarchy.  For example, abduction and induction 
are both higher in the analytical hierarchy than SAT.
   
 I guess our problem would involve abductive and inductive learning, so it 
would be strictly harder than SAT.  No doubt that we'd employ heuristics, so 
the worst-case complexity is not a show-stopper.  But still, there is the 
possibility that the problem would be too hard using realistic computing power.
   
 YKY

AGI might even turn out to be impossible, but I feel that there is a greater 
chance that eventually a program that is built to handle more complexity, in 
just the right way, will succeed.

I had pretty much rejected the possibility that more logical or rational 
systems would result in a significant leap in the field until I became 
interested in the feasibility of a general polytime SAT solver.  Now I see it 
as a logical next step in the trial-and-error method of research.  In other 
words, something has been missing in the field's logical-rational methods.  The 
lack of a more efficient general solver actually represents an imbalance in the 
use of logical methods in computing.  Most of us did not think of it in just this 
way, but if a reasonable polytime general solver is feasible, it means that 
we can significantly boost computing power through software.  Even if this 
doesn't produce a significant leap in AI, it might produce the overdue next step.

Jim Bromer

   



RE: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Ed Porter
Ben or anyone,

Do you know of an explanation or reference that is a for Dummies explanation
of how SAT (or SMT) solvers handle computations in spaces with 100,000
variables and/or 10^300 states in practically computable time?

I assume it is by focusing only on that part of the space through which
relevant and/or relatively short inference paths pass, or something like
that.

Ed Porter

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, February 20, 2008 5:54 PM
To: agi@v2.listbox.com
Subject: Re: [agi] would anyone want to use a commonsense KB?

 And I seriously doubt that a general SMT solver +
  prob. theory is going to beat a custom probabilistic logic solver.

My feeling is that an SMT solver plus appropriate subsets of prob theory
can be a very powerful component of a general probabilistic inference
framework...

I can back this up with some details but that would get too thorny
for this list...

ben




Re: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Anna Salamon

Pp

Sent from my phone.





Re: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Ben Goertzel
Hi,

There is no good overview of SMT so far as I know, just some technical
papers... but SAT solvers are not that deep and are well reviewed in
this book...

http://www.sls-book.net/

-- Ben
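Since the question was how SAT solvers cope with huge search spaces, here is a minimal, illustrative DPLL-style sketch (my own toy example, not from the book or this thread). Unit propagation assigns every variable that is forced by a one-literal clause before branching, which is one of the pruning mechanisms that keeps real solvers from enumerating all 2^N assignments:

```python
# Toy DPLL-style SAT solver.  Clauses are lists of nonzero ints:
# positive = variable is true, negative = variable is false.

def unit_propagate(clauses, assignment):
    """Assign every variable forced by a clause with one unassigned literal.
    Returns None on a conflict (some clause falsified)."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(assignment.get(abs(l)) == (l > 0) for l in clause):
                continue                      # clause already satisfied
            free = [l for l in clause if abs(l) not in assignment]
            if not free:
                return None                   # conflict
            if len(free) == 1:                # unit clause: value is forced
                assignment[abs(free[0])] = free[0] > 0
                changed = True
    return assignment

def dpll(clauses, assignment=None):
    assignment = dict(assignment or {})
    if unit_propagate(clauses, assignment) is None:
        return None
    unassigned = {abs(l) for c in clauses for l in c} - set(assignment)
    if not unassigned:
        return assignment                     # all clauses satisfied
    var = min(unassigned)
    for value in (True, False):               # branch, backtrack on failure
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None
```

Real solvers add clause learning, watched literals, and branching heuristics on top of this skeleton; that is where the practical power on 100,000-variable instances comes from.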

On Sun, Feb 24, 2008 at 4:38 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben or anyone,

  Do you know of an explanation or reference that is a for Dummies explanation
  of how SAT (or SMT) solvers handle computations in spaces with 100,000
  variables and/or 10^300 states in practically computable time?

  I assume it is by focusing only on that part of the space through which
  relevant and/or relatively short inference paths pass, or something like
  that.

  Ed Porter


  -Original Message-
  From: Ben Goertzel [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, February 20, 2008 5:54 PM
  To: agi@v2.listbox.com

 Subject: Re: [agi] would anyone want to use a commonsense KB?



  And I seriously doubt that a general SMT solver +
prob. theory is going to beat a custom probabilistic logic solver.

  My feeling is that an SMT solver plus appropriate subsets of prob theory
  can be a very powerful component of a general probabilistic inference
  framework...

  I can back this up with some details but that would get too thorny
  for this list...

  ben






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-21 Thread Bob Mottram
On 20/02/2008, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 So, looking at the moon, what color would you say it was?


As Edwin Land showed, colour perception does not depend only upon the
wavelength of light; it is a subjective property actively constructed
by the brain.

http://en.wikipedia.org/wiki/Color_constancy

http://youtube.com/watch?v=ZiTg4kRt13w



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread YKY (Yan King Yin)
On 2/19/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 If we need a KB orders of magnitude larger to make that approach work,
 doesn't that mean we should use another approach?

But do you agree that a KB orders of magnitude larger is required for all
AGI, regardless of *how* the knowledge is acquired?  The debate is on how to
acquire the knowledge.


 Like, er, embodied learning or NL information extraction / conversation
...
 which have the potential to allow rules to be learned implicitly from
 experience rather than explicitly via human hard-coding...

Let me list all the ways of AGI knowledge acquisition:

A)  manual encoding in logical form
B)  manual teaching in NL and pictures
C)  learning in virtual reality (eg Second Life)
D)  embodied learning (eg computer vision)
E)  inductive learning / extraction from existing texts

I originally proposed A, but I'm also considering B, as Bob suggests.

C is not very viable as of now.  The physics in Second Life is simply not
*rich* enough.  SL is mainly a space for humans to socialize, so the physics
will not get much richer in the near future -- is anyone interested in
emulating cigarette smoke in SL?

D is hard.  I think I know how to do it, but it'd require $$$.

E is also hard, but you seem to be *unaware* of its difficulty.  In fact,
the problem with E is the same as that with AIXI -- the theory is elegant,
but the actual learning would take forever.  Can you explain, in broad
terms, how the AGI is to know that water runs downhill instead of up, and
that the moon is not blue, but a greyish color?


 It just doesn't seem a pragmatically feasible approach, setting aside
 all my doubts about the AI viability of it (i.e., I'm not so sure that
even
 if you spent a billion dollars on hand-coding of rules, this would be
 all that helpful for AGI, in the absence of a learning engine radically
 different in nature from typical logical reasoning engines...)


The KB can be used in conjunction with any learning algorithm you have.  In
fact, all A-E can be used together to build the KB.  You seem to think that
method E is so superior that you don't need other sources of knowledge, but
the problem is the efficiency of the learning algorithm.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Bob Mottram
On 20/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 E is also hard, but you seem to be *unaware* of its difficulty.  In fact,
 the problem with E is the same as that with AIXI -- the theory is elegant,
 but the actual learning would take forever.  Can you explain, in broad
 terms, how the AGI is to know that water runs downhill instead of up, and
 that the moon is not blue, but a greyish color?


You might be able to extract the knowledge that the moon is grey from text
analysis, but ultimately this knowledge comes from D.  Also, there is
nothing intrinsically grey about the moon.  It's only grey in the context of
a certain kind of physical system observing it.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
 C is not very viable as of now.  The physics in Second Life is simply not
 *rich* enough.  SL is mainly a space for humans to socialize, so the physics
 will not get much richer in the near future -- is anyone interested in
 emulating cigarette smoke in SL?

Second Life will soon be integrating the Havok 4 physics engine.

I agree that game-world physics is not yet very realistic, but it's improving
fast, due to strong economics in the MMOG industry.

 E is also hard, but you seem to be *unaware* of its difficulty.  In fact,
  the problem with E is the same as that with AIXI -- the theory is elegant,
 but the actual learning would take forever.  Can you explain, in broad
 terms, how the AGI is to know that water runs downhill instead of up, and
 that the moon is not blue, but a greyish color?

Water does not always run downhill, sometimes it runs uphill.

To learn commonsense information from text requires parsing the text
and mapping the parse-trees into semantic relationships, which are then
reasoned on by a logical reasoning engine.  There is nothing easy about this,
and there is a hard problem of semantic disambiguation of relationships.
Whether the disambiguation problem can be solved via statistical/inferential
integration of masses of extracted relationships, remains to be seen.
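A deliberately crude sketch of the first step in that pipeline (the pattern and sentences are invented; a real system would use a full syntactic parser and a semantic mapping stage, not a regex over surface word order):

```python
import re

# Hypothetical illustration: map simple "X verb Y" declarative sentences to
# (subject, relation, object) triples.  The regex stands in for a parser.
PATTERN = re.compile(r"^(\w+(?: \w+)?) (runs|is|orbits) (.+?)\.?$")

def extract_triples(sentences):
    """Map simple declarative sentences to (subject, relation, object)."""
    triples = []
    for s in sentences:
        m = PATTERN.match(s.strip())
        if m:
            triples.append((m.group(1), m.group(2), m.group(3)))
    return triples
```

Even this toy makes the disambiguation problem visible: the triples say nothing about whether "runs" is literal motion or metaphor, which is exactly what the downstream reasoning has to settle.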

Virtual embodiment coupled with NL conversation is the approach I
currently favor, but I think that large-scale NL information extraction can
also play an important helper role.  And I think that as robotics tech
develops, it can play a big role too.

I think we can take all approaches at once within an integrative framework
like Novamente or OpenCog, but if I have to pick a single focus it will
be virtual embodiment, with the other aspects as helpers...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Mark Waser

Water does not always run downhill, sometimes it runs uphill.


But never without a reason.








Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Matt Mahoney
--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Let me list all the ways of AGI knowledge acquisition:
 
 A)  manual encoding in logical form
 B)  manual teaching in NL and pictures
 C)  learning in virtual reality (eg Second Life)
 D)  embodied learning (eg computer vision)
 E)  inductive learning / extraction from existing texts

In the distributed query/message posting service I have proposed in
http://www.mattmahoney.net/agi.html all of these methods could be used.  The
system creates a marketplace that rewards intelligence and cooperation with
computing resources -- storage and bandwidth.  Personally, I believe the
system will favor extracting information from existing data on the internet
and from interacting with humans in natural language, plus some manual
encoding of knowledge.

The requirement that peers communicate in natural language is not onerous.  It
is not necessary for each peer to fully solve the problem.  Your calculator
uses natural language in that it uses and understands symbols like "3" and
"+".  Likewise, experts in 1959 baseball statistics (the latest year available
at the time) have been written to understand queries like "how many games did
the Yankees win in July?" [1].

Nor is it hard for each peer to route messages to the right experts.  The
system would work (albeit very inefficiently) if peers simply broadcast
messages to every other peer that it knows about.  A better, but still simple
strategy would be to match terms to previously stored messages and forward to
the peers that sent them.  Peers will reward peers by prioritizing the
messages of those that broadcast more selectively, thus increasing their own
reputations.  There will be an economic pressure to increase intelligence with
better language models, i.e. a thesaurus or parsing to improve message
understanding.
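That "match terms and forward" rule can be sketched in a few lines (the Peer class and its method names are illustrative, not part of the proposal):

```python
from collections import defaultdict

# Hypothetical sketch: a peer remembers which senders used which terms,
# then forwards a query to the peers whose past messages share the most terms.
class Peer:
    def __init__(self, name):
        self.name = name
        self.seen = defaultdict(set)   # term -> peers that used it before

    def remember(self, sender, message):
        for term in message.lower().split():
            self.seen[term].add(sender)

    def route(self, query, top_k=2):
        """Rank known peers by term overlap with the query."""
        scores = defaultdict(int)
        for term in query.lower().split():
            for peer in self.seen[term]:
                scores[peer] += 1
        return sorted(scores, key=lambda p: (-scores[p], p))[:top_k]
```

The "better language models" step in the post would then replace raw term matching with a thesaurus or parse-based similarity, without changing the routing skeleton.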

Although I described the protocol using text, it could be extended to speech
and images.  For example, a peer would be rewarded if it could match a picture
of a baseball player to an expert on baseball.

I guess the question YKY would have is, how can I make money from this?  Well,
nobody would have control over it, but it does present opportunities for
profit in a market that doesn't yet exist.  I think it is worthwhile to
investigate it and at least have a head start.

References

1. Bert F. Green Jr., Alice K. Wolf, Carol Chomsky, and Kenneth Laughery,
Baseball: An Automatic Question Answerer, Proceedings of the Western Joint
Computer Conference, 19:219-224, 1961.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Looking at the moon won't help --

of course it helps -- it tells you that something odd is going on with the
expression, as opposed to, say, yellow sun ...

 it might be the case that it described a
 particular appearance that only had a slight resemblance to other blue things
 (as in red hair), for example. There are some rare conditions (high
 stratospheric dust) which can make the moon look actually blue.

 In fact blue moon is generally taken to mean, metaphorically, something very
 rare (or even impossible) or the second full moon in a given month (which
 happens about every two-and-a-half years on the average).

 ask someone is of course what human kids do a lot of. An AI could do this,
 or look it up in Wikipedia, or the like. All of which are heuristics to
 reduce the ambiguity/generality in the information stream.
 The question is do enough heuristics make an autogenous AI or is there
 something more fundamental to its structure?


 On Wednesday 20 February 2008 12:27:59 pm, Ben Goertzel wrote:

  The trick to understanding once in a blue moon is to either
 
  -- look at the moon
 
  or
 
  -- ask someone
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
Looking at the moon won't help -- it might be the case that it described a 
particular appearance that only had a slight resemblance to other blue things 
(as in red hair), for example. There are some rare conditions (high 
stratospheric dust) which can make the moon look actually blue.

In fact "blue moon" is generally taken to mean, metaphorically, something very 
rare (or even impossible), or the second full moon in a given month (which 
happens about every two-and-a-half years on the average).

"Ask someone" is of course what human kids do a lot of. An AI could do this, 
or look it up in Wikipedia, or the like. All of these are heuristics to 
reduce the ambiguity/generality in the information stream.
The question is do enough heuristics make an autogenous AI or is there 
something more fundamental to its structure?


On Wednesday 20 February 2008 12:27:59 pm, Ben Goertzel wrote:

 The trick to understanding once in a blue moon is to either
 
 -- look at the moon
 
 or
 
 -- ask someone
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
So, looking at the moon, what color would you say it was?

Here's what text mining might give you (Google hits):

blue moon 11,500,000
red moon 1,670,000
silver moon 1,320,000
yellow moon 712,000
white moon 254,000
golden moon 163,000
orange moon 122,000
green moon 105,000
gray moon 9,460

To me, the moon varies from a deep orange to brilliant white depending on 
atmospheric conditions and time of night... none of which would help me 
understand the text references.
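The same trap can be reproduced in miniature with a hand-made corpus (the text below is invented): raw bigram counts reflect idiom frequency, not the physical color of the moon.

```python
import re
from collections import Counter

# Invented mini-corpus: the idiomatic "blue moon" dominates the one
# literal color description, just as in the Google hit counts above.
corpus = (
    "once in a blue moon we meet. blue moon, you saw me standing alone. "
    "a blue moon is the second full moon in a month. "
    "the gray moon rose over the hills."
)

# Toy text-mining pass: count the word immediately preceding "moon".
counts = Counter(m.group(1) for m in re.finditer(r"(\w+) moon", corpus))
```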



On Wednesday 20 February 2008 02:02:52 pm, Ben Goertzel wrote:
 On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
  Looking at the moon won't help --
 
 of course it helps, it tells you that something odd is with the expression,
 as opposed to say yellow sun ...
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
There seems to be an assumption in this thread that NLP analysis
of text is restricted to simple statistical extraction of word-sequences...

This is not the case...

If there were to be a hope for AGI based on text analysis, it would have
to be based on systems that parse linguistic expressions into logical
relationships, and combine these logical relationships via reasoning.

Assessing metaphoric versus literal mentions would be part of that reasoning.

Critiquing NLP-based AGI based on Google is a lot like critiquing robotics-
based AGI based on the Roomba.

Google is a good product implemented very scalably, but in its linguistic
sophistication, it is nowhere near the best research systems out there.
Let alone what would be possible with further research.

I stress that this is not my favored approach to AGI, but I think these
discussions based on Google are unfairly dismissing NLP-based
AGI by using Google as a straw man.

I note also that a web-surfing AGI could resolve the color of the moon
quite easily by analyzing online pictures -- though this isn't pure
text mining, it's in the same spirit...

ben



On Feb 20, 2008 2:30 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 So, looking at the moon, what color would you say it was?

 Here's what text mining might give you (Google hits):

 blue moon 11,500,000
 red moon 1,670,000
 silver moon 1,320,000
 yellow moon 712,000
 white moon 254,000
 golden moon 163,000
 orange moon 122,000
 green moon 105,000
 gray moon 9,460

 To me, the moon varies from a deep orange to brilliant white depending on
 atmospheric conditions and time of night... none of which would help me
 understand the text references.



 On Wednesday 20 February 2008 02:02:52 pm, Ben Goertzel wrote:
  On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
   Looking at the moon won't help --
 
  of course it helps, it tells you that something odd is with the expression,
  as opposed to say yellow sun ...
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
On Wednesday 20 February 2008 02:58:54 pm, Ben Goertzel wrote:
 I note also that a web-surfing AGI could resolve the color of the moon
 quite easily by analyzing online pictures -- though this isn't pure
 text mining, it's in the same spirit...

Uh -- I just typed moon into Google and at the top of the page it gives 
three pictures. Two are thin sliver crescents. The third, of a full moon, is 
distinctly blue.

 There seems to be an assumption in this thread that NLP analysis
 of text is restricted to simple statistical extraction of word-sequences...

I certainly make no such assumption. I offered the stats to point out the kind 
of traps that lie in wait for the hapless text-miner.

As I am sure you are fully aware, you can't parse English without a knowledge 
of the meanings involved. (The council opposed the demonstrators because 
they (feared/advocated) violence.) So how are you going to learn meanings 
before you can parse, or how are you going to parse before you learn 
meanings? They have to be interleaved in a non-trivial way. 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
  As I am sure you are fully aware, you can't parse English without a knowledge
  of the meanings involved. (The council opposed the demonstrators because
  they (feared/advocated) violence.) So how are you going to learn meanings
  before you can parse, or how are you going to parse before you learn
  meanings? They have to be interleaved in a non-trivial way.

True indeed!

Feeding all the ambiguous interpretations of a load of sentences into a
probabilistic logic network, and letting them get resolved by reference to
each other, is a sort of search for the most likely solution of a huge
system of simultaneous equations ... i.e. one needs to let each of a huge
set of ambiguities be resolved by the other ones...

This is not an easy problem, but it's not on the face of it unsolvable...

But I think the solution will be easier with info from direct experience
to nudge the process in the right direction...
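A toy version of that mutual-resolution idea, with invented numbers: two ambiguous items, each with two candidate senses, are coupled by a compatibility table, and repeated renormalized updates let each distribution resolve the other (a relaxation-labeling-style iteration, not Ben's actual algorithm):

```python
def relax(p_a, p_b, compat, rounds=20):
    """Jacobi-style multiplicative updates: each sense's probability is
    reweighted by its compatibility-weighted support from the other item."""
    for _ in range(rounds):
        new_a = [p_a[i] * sum(compat[i][j] * p_b[j] for j in range(2))
                 for i in range(2)]
        new_b = [p_b[j] * sum(compat[i][j] * p_a[i] for i in range(2))
                 for j in range(2)]
        p_a = [x / sum(new_a) for x in new_a]
        p_b = [x / sum(new_b) for x in new_b]
    return p_a, p_b

# Invented example: sense 2 of each item is mutually compatible with sense 2
# of the other (diagonal-dominant table), and item B slightly prefers sense 2,
# so both distributions converge toward their second sense.
compat = [[1.0, 0.1],
          [0.1, 1.0]]
p_a, p_b = relax([0.5, 0.5], [0.4, 0.6], compat)
```

Scaling this from two items to a document's worth of ambiguities is where the "huge system of simultaneous equations" difficulty shows up.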

Ben





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Matt Mahoney

--- Ben Goertzel [EMAIL PROTECTED] wrote:

   As I am sure you are fully aware, you can't parse English without a
 knowledge
   of the meanings involved. (The council opposed the demonstrators because
   they (feared/advocated) violence.) So how are you going to learn
 meanings
   before you can parse, or how are you going to parse before you learn
   meanings? They have to be interleaved in a non-trivial way.
 
 True indeed!
 
 Feeding all the ambiguous interpretations of a load of sentences into
 a probabilistic
 logic network, and letting them get resolved by reference to each
 other, is a sort of
 search for the most likely solution of a huge system of simultaneous
 equations ...
 i.e. one needs to let each, of a huge set of ambiguities, be resolved
 by the other ones...
 
 This is not an easy problem, but it's not on the face of it unsolvable...
 
 But I think the solution will be easier with info from direct
 experience to nudge the
 process in the right direction...

Children solve the problem by learning semantics before grammar.  Statistical
language models do the same thing.  Models like LSA and vector spaces (used
for search) do not depend on word order.
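The word-order independence Matt mentions is easy to demonstrate: under a bag-of-words vector model, a sentence and a reordered variant are indistinguishable (a minimal sketch, not any particular LSA implementation).

```python
import math
from collections import Counter

def bow_cosine(a, b):
    """Cosine similarity between bag-of-words term-count vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0
```

"The dog bit the man" and "the man bit the dog" get a similarity of 1.0 here, which is precisely the semantics-before-grammar trade-off: good for topical matching, blind to who did what to whom.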



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
Yes, of course, but no human except an expert in lunar astronomy would have
a definitive answer to the question either

The issue at hand is really how a text-analysis-based AGI would distinguish
literal from metaphoric text, and how it would understand the context in which
a statement is implicitly intended by the speaker/writer.

These are hard problems, which are being worked on by many individuals
in the computational linguistics community.

I tend to think that introducing (real or virtual) embodiment will make the
solution of these problems easier...

-- Ben

On Wed, Feb 20, 2008 at 3:45 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Ben Goertzel [EMAIL PROTECTED] wrote:

   I note also that a web-surfing AGI could resolve the color of the moon
   quite easily by analyzing online pictures -- though this isn't pure
   text mining, it's in the same spirit...

  Not really.  You can get a better answer to what color is the moon? if you
  google what color is the moon?.  Better, but not definitive.  Even the
  photos are not in agreement.  Some photos show a mix of orange and blue.  If
  you stood on the moon, it would look black next to your feet, but white in
  contrast to the even darker sky.  During tonight's eclipse, it should look
  reddish brown.



  -- Matt Mahoney, [EMAIL PROTECTED]





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Matt Mahoney
--- Ben Goertzel [EMAIL PROTECTED] wrote:

 I note also that a web-surfing AGI could resolve the color of the moon
 quite easily by analyzing online pictures -- though this isn't pure
 text mining, it's in the same spirit...

Not really.  You can get a better answer to "what color is the moon?" if you
google "what color is the moon?".  Better, but not definitive.  Even the
photos are not in agreement.  Some photos show a mix of orange and blue.  If
you stood on the moon, it would look black next to your feet, but white in
contrast to the even darker sky.  During tonight's eclipse, it should look
reddish brown.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
OK, imagine a lifetime's experience is a billion symbol-occurrences. Imagine 
you have a heuristic that takes the problem down from NP-complete (which it 
almost certainly is) to a linear system, so there is an N^3 algorithm for 
solving it. We're talking on the order of 1e27 ops.

Now using HEPP = 1e16 ops/sec x 30 years = 1e9 secs, you get a total crunch for the 
human of 1e25 ops. That's close enough to call even, I think.  Learning order 
is easily worth a couple of orders of magnitude in problem complexity.

Let's build a big cluster...
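Hall's arithmetic can be checked directly; note that both the 1e16 ops/sec figure (HEPP) and the billion symbol-occurrences are his stated assumptions, not measured quantities:

```python
# Machine side: N symbol-occurrences, solved by an O(N^3) algorithm.
N = 1e9                              # lifetime symbol-occurrences (assumption)
machine_ops = N ** 3                 # 1e27 ops

# Human side: HEPP ~ 1e16 ops/sec sustained over ~30 years.
HEPP = 1e16                          # human-equivalent processing power (assumption)
seconds = 30 * 365.25 * 24 * 3600    # ~9.5e8 s, i.e. order 1e9
human_ops = HEPP * seconds           # ~1e25 ops

print(f"machine: {machine_ops:.0e}  human: {human_ops:.1e}")
# The totals differ by roughly two orders of magnitude --
# the gap "learning order" is claimed to be worth.
```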

On Wednesday 20 February 2008 03:51:28 pm, Ben Goertzel wrote:
 Feeding all the ambiguous interpretations of a load of sentences into
 a probabilistic
 logic network, and letting them get resolved by reference to each
 other, is a sort of
 search for the most likely solution of a huge system of simultaneous
 equations ...
 i.e. one needs to let each, of a huge set of ambiguities, be resolved
 by the other ones...
 
 This is not an easy problem, but it's not on the face of it unsolvable...
 
 But I think the solution will be easier with info from direct
 experience to nudge the
 process in the right direction...
 
 Ben
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 OK, imagine a lifetime's experience is a billion symbol-occurences. Imagine
  you have a heuristic that takes the problem down from NP-complete (which it
  almost certainly is) to a linear system, so there is an N^3 algorithm for
  solving it. We're talking order 1e27 ops.

That's kind of specious, since modern SAT and SMT solvers can solve many
realistic instances of NP-complete problems for large n, surprisingly quickly...

and without linearizing anything...

Worst-case complexity doesn't mean much...

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread YKY (Yan King Yin)
On 2/21/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Feeding all the ambiguous interpretations of a load of sentences into
 a probabilistic
 logic network, and letting them get resolved by reference to each
 other, is a sort of
 search for the most likely solution of a huge system of simultaneous
 equations ...
 i.e. one needs to let each, of a huge set of ambiguities, be resolved
 by the other ones...

 This is not an easy problem, but it's not on the face of it unsolvable...

 But I think the solution will be easier with info from direct
 experience to nudge the
 process in the right direction...

Excellent analogy, I'd like to work on such a system =)

Note that it's not just ambiguities that need to be resolved;  some facts
need to be explained away: for example, water may flow uphill because of some
abnormal condition.  The system needs to be able to find explanations for
things...

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
A PROBABILISTIC logic network is a lot more like a numerical problem than a 
SAT problem.

On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote:
 On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  OK, imagine a lifetime's experience is a billion symbol-occurences. 
Imagine
   you have a heuristic that takes the problem down from NP-complete (which 
it
   almost certainly is) to a linear system, so there is an N^3 algorithm for
   solving it. We're talking order 1e27 ops.
 
 That's kind of specious, since modern SAT and SMT solvers can solve many
 realistic instances of NP-complete problems for large n, surprisingly 
quickly...
 
 and without linearizing anything...
 
 Worst-case complexity doesn't mean much...
 
 ben
 
 




Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
Not necessarily, because

--- one can encode a subset of the rules of probability as a theory in
SMT, and use an SMT solver

-- one can use probabilities to guide the search within an SAT or SMT solver...

ben
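The second idea -- probabilities guiding the search inside a SAT solver -- can be illustrated with a toy DPLL procedure whose branching order follows externally supplied marginal estimates. This is a hypothetical sketch, not the interface of any real SAT/SMT solver:

```python
def simplify(clauses, assignment):
    """Apply an assignment: drop satisfied clauses, prune false literals.
    Returns None if some clause is falsified (conflict)."""
    out = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            v, want = abs(lit), lit > 0
            if v in assignment:
                if assignment[v] == want:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return None          # clause falsified
        out.append(kept)
    return out

def dpll(clauses, prob, assignment=None):
    """Toy DPLL over DIMACS-style signed-int clauses.  prob[v] is an
    estimate of P(v = True), used only to order branching -- it guides
    the search without affecting completeness."""
    assignment = dict(assignment or {})
    clauses = simplify(clauses, assignment)
    if clauses is None:
        return None              # conflict
    if not clauses:
        return assignment        # all clauses satisfied
    # Branch on the unassigned variable with the most extreme marginal,
    # trying its likelier value first.
    var = max({abs(l) for c in clauses for l in c},
              key=lambda v: abs(prob.get(v, 0.5) - 0.5))
    first = prob.get(var, 0.5) >= 0.5
    for value in (first, not first):
        result = dpll(clauses, prob, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 or x2) and (not x1 or x3), with priors suggesting x1, x3 likely true
model = dpll([[1, 2], [-1, 3]], prob={1: 0.9, 3: 0.8})
print(model)   # {1: True, 3: True}
```

When the probability estimates are good, the first branch tried is usually the right one, so backtracking is rare; when they are bad, the solver still terminates with a correct answer.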

On Wed, Feb 20, 2008 at 5:00 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 A PROBABILISTIC logic network is a lot more like a numerical problem than a
  SAT problem.



  On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote:
   On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
  wrote:
OK, imagine a lifetime's experience is a billion symbol-occurences.
  Imagine
 you have a heuristic that takes the problem down from NP-complete (which
  it
 almost certainly is) to a linear system, so there is an N^3 algorithm 
 for
 solving it. We're talking order 1e27 ops.
  
   That's kind of specious, since modern SAT and SMT solvers can solve many
   realistic instances of NP-complete problems for large n, surprisingly
  quickly...
  
   and without linearizing anything...
  
   Worst-case complexity doesn't mean much...
  
   ben
  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Jim Bromer


Ben Goertzel [EMAIL PROTECTED] wrote: On Wed, Feb 20, 2008 at 4:27 PM, J 
Storrs Hall, PhD  wrote:
 OK, imagine a lifetime's experience is a billion symbol-occurences. Imagine
  you have a heuristic that takes the problem down from NP-complete (which it
  almost certainly is) to a linear system, so there is an N^3 algorithm for
  solving it. We're talking order 1e27 ops.

That's kind of specious, since modern SAT and SMT solvers can solve many
realistic instances of NP-complete problems for large n, surprisingly quickly...
and without linearizing anything...
Worst-case complexity doesn't mean much...
ben
I think you may both be off the track here.  Although many difficult problems 
for large n can be solved with contemporary solvers (and approximations can be 
used in place of those that cannot be), many of the kinds of problems that 
need to be solved cannot be approximated.

I don't think that J Storrs Hall's example of a lifetime's experience (a billion 
symbols) is truly relevant. No one actually believes that the human mind is 
capable of analyzing reality into a monotonic (purely hierarchical) logic, and there 
is not much reason to believe that an effective AGI would have to be.  But 
there is a lot of reason to believe that dramatically increasing the capacity of 
logic solvers would be useful.

To get back to Ben's statement: is the computer chip industry happy with 
contemporary SAT solvers, or would a general solver capable of beating 
n^4 time be of some use to them?  If it would be useful, then there is a reason 
to believe that it might be useful to AGI.

Jim Bromer

   



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
 To get back to Ben's statement: Is the computer chip industry happy with
 contemporary SAT solvers

Well they are using them, but of course there is loads of room for improvement!!

or would a general solver that is capable of
 beating n^4 time be of some use to them?  If it would be useful, then there
 is a reason to believe that it might be useful to AGI.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
 And I seriously doubt that a general SMT solver +
  prob. theory is going to beat a custom probabilistic logic solver.

My feeling is that an SMT solver plus appropriate subsets of prob theory
can be a very powerful component of a general probabilistic inference
framework...

I can back this up with some details but that would get too thorny
for this list...

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread J Storrs Hall, PhD
It's probably not worth taking this much further, since we're 
talking in analogies and metaphors. However, it's my intuition that the 
connectivity in a probabilistic formulation is going to produce a much denser 
graph (less sparse matrix) than what you find in the SAT problems that the 
solvers do so well on. And I seriously doubt that a general SMT solver + 
prob. theory is going to beat a custom probabilistic logic solver.


On Wednesday 20 February 2008 05:31:59 pm, Ben Goertzel wrote:
 Not necessarily, because
 
 --- one can encode a subset of the rules of probability as a theory in
 SMT, and use an SMT solver
 
 -- one can use probabilities to guide the search within an SAT or SMT 
solver...
 
 ben
 



Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread YKY (Yan King Yin)
On 2/19/08, Stephen Reed [EMAIL PROTECTED] wrote:
 Pei: Resolution-based FOL on a huge KB is intractable.

 Agreed.

 However Cycorp spent a great deal of programming effort (i.e. many
man-years) finding deep inference paths for common queries.  The strategies
were:


 - prune the rule set according to the context
 - substitute procedural code for modus ponens in common query paths (e.g.
   isa-links inferred via graph traversal)
 - structure the inference engine as a nested set of iterators so that easy
   answers are returned immediately, and harder-to-find answers trickle out
   later
 - establish a battery of inference engine controls (e.g. time bounds, speed
   vs. completeness - whether to employ expensive inference strategies for
   greater coverage of answers) and have the inference engine automatically
   apply the optimal control configuration for queries
 - determine rule utility via machine learning and apply prioritized
   inference modules within the given time constraints
 My last in-house talk at Cycorp, in the summer of 2006, described a notion
of mine that Cyc's deductive inference engine behaves as an interpreter, and
that for a certain set of queries, a dramatic speed improvement (e.g. four
orders of magnitude) could be achieved by compiling the query, and possibly
preprocessing incoming facts to suit expected queries.   The queries that
interested me were those embedded in an intelligent application, and which
could be viewed as a query template with parameters.  The compilation
process I described would explore the parameter space with programmer-chosen
query examples.  Then the resulting proof trees would be compiled into
executable code - avoiding entirely the time consuming candidate rule search
and their application when the query executes.  My notion for Cyc's
deductive inference engine optimization is analogous to SQL query
optimization technology.

 I expect to use this technique in the Texai project at the point when I
need a deductive inference engine.

Thanks a lot for the info.  These are very important speed-up strategies.  I
have not yet studied this aspect in detail.

Can you explain what you mean by deep inference?

I think resolution theorem proving provides a way to answer yes/no queries
in a KB.  I take it as a starting point, and try to think of ways to speed
it up and to expand its abilities (answering what/where/when/who/how
queries).

YKY
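Reed's nested-iterator strategy, where easy answers return immediately and harder ones trickle out later, maps naturally onto generators. A hypothetical sketch (the KB-as-dict stands in for a real depth-bounded backchainer):

```python
def answers_at_depth(kb, query, depth):
    """Stand-in for a depth-bounded backchainer: yields the answers
    that become reachable at exactly `depth` inference steps."""
    yield from kb.get(depth, [])

def anytime_answers(kb, query, max_depth):
    # Nested iterators / iterative deepening: cheap answers stream out
    # immediately; deeper answers appear only if the caller keeps pulling.
    seen = set()
    for depth in range(1, max_depth + 1):
        for ans in answers_at_depth(kb, query, depth):
            if ans not in seen:
                seen.add(ans)
                yield ans

kb = {1: ["Fido isa Dog"], 3: ["Fido isa Mammal"], 6: ["Fido isa Agent"]}
stream = anytime_answers(kb, "isa(Fido, ?x)", max_depth=6)
print(next(stream))   # the easy answer, available at once
print(list(stream))   # deeper answers trickle out afterwards
```

Because the caller pulls answers lazily, a time bound amounts to simply abandoning the generator -- whatever was already yielded is kept.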



Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread YKY (Yan King Yin)
On 2/19/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Why would this approach succeed where Cyc failed?  Cyc paid people to
build
 the knowledge base.  Then when they couldn't sell it, the tried giving it
 away.  Still, nobody used it.

 For an AGI to be useful, people have to be able to communicate with it in
 natural language.  It is easy to manipulate formulas like if P then
Q.  It
 is much harder to explain how this knowledge is represented and learned in
a
 language model.  Cyc did not solve this problem, and we see the result.

I think Cyc failed mainly because their KB is not large enough to make
useful inferences.  We need a huge KB indeed.  Automatic rule generalization
can make the KB smaller.  Translating NL into the KB form is one way to
collect facts/rules more easily.  But I still think the knowledge
representation should be logic, rather than natural language.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread YKY (Yan King Yin)
On 2/19/08, Pei Wang [EMAIL PROTECTED] wrote:
 A purely resolution-based inference engine is mathematically elegant,
 but completely impractical, because after all the knowledge are
 transformed into the clause form required by resolution, most of the
 semantic information in the knowledge structure is gone, and the
 result is equivalent to the original knowledge in truth-value only.
 It is hard to control the direction of the inference without semantic
 information.

I wonder how you can preserve structural information in NARS?

If I say eating sweets will cause cavities, it will be translated into
clause form as
  ~eat_sweets V cavities
so the direction of causation is lost.  If this directional info is
needed, we can attach additional information to the clause.  Truth maintenance
systems do something like that.

YKY
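The proposal to attach directional information to a clause can be made concrete: store the resolution-ready clause together with metadata recording what CNF conversion discards. One possible encoding (illustrative, not from any particular system):

```python
# A rule is stored both as a resolution-ready clause and as metadata
# preserving what CNF conversion throws away.
rule = {
    # clause form of eat_sweets -> cavities: (~eat_sweets v cavities)
    "clause": [("eat_sweets", False), ("cavities", True)],
    # direction of causation, absent from the clause itself
    "causal": ("eat_sweets", "cavities"),
}

def literals(rule):
    # What the resolution engine sees: just the clause.
    return rule["clause"]

def cause_of(rule, effect):
    # What the annotation recovers: the cause, given an effect.
    c, e = rule["causal"]
    return c if e == effect else None

print(cause_of(rule, "cavities"))   # eat_sweets
```

The resolution engine works on `literals(rule)` unchanged, while causal or abductive reasoning consults the side annotation.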



Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread Pei Wang
On Feb 19, 2008 8:49 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 On 2/19/08, Pei Wang [EMAIL PROTECTED] wrote:
  A purely resolution-based inference engine is mathematically elegant,
  but completely impractical, because after all the knowledge are
   transformed into the clause form required by resolution, most of the
  semantic information in the knowledge structure is gone, and the
  result is equivalent to the original knowledge in truth-value only.
   It is hard to control the direction of the inference without semantic
  information.


 I wonder how you can preserve structural information in NARS?

By supporting various compound terms/statements.

 If I say eating sweets will cause cavities it will be translated in clause
 form as
 ~ eat_sweets V cavities
 so the direction of causation is lost.  If this directional info is needed,
 we attach additional information to the clause.

How?

 Truth maintenance systems do something like that.

No. That is for something else --- update management.

BTW, classical resolution can be used to answer wh questions using
unification.

Pei


 YKY




  



Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread Lukasz Stafiniak
On Feb 19, 2008 2:41 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 I think resolution theorem proving provides a way to answer yes/no queries
 in a KB.  I take it as a starting point, and try to think of ways to speed
 it up and to expand its abilities (answering what/where/when/who/how
 queries).

Oh my, resolution answers wh-questions as well as decision questions
in FOL: you just record the answer substitution.  (BTW, Prolog is a form
of positive resolution.)  We need to be more technical here.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread Stephen Reed
According to the in-house Cycorp jargon, deep inference begins at approximately 
four backchain steps in a deductive inference.  As most here know, there is an 
exponential fanout in the number of separate inference paths with each 
backchain step, given a large candidate rule set and a large set of facts.

One or two backchain steps can usually be accomplished in seconds by Cyc.  Deep 
inference, let's say six backchain steps, may require hours if it completes at 
all.   Cyc is designed to make the best use of the time allocated for answering 
a query, and the iterative deepening strategy was proposed as a query 
configuration alternative.   It is possible that Cycorp uses it now - I no 
longer have access to its source code.
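The exponential fanout can be quantified: with roughly b candidate rules applicable at each backchain step, d steps explore on the order of b^d inference paths (b = 50 below is an illustrative assumption, not a Cyc figure):

```python
b = 50   # candidate rules applicable per backchain step (assumption)
for d in (1, 2, 4, 6):
    paths = b ** d
    print(f"depth {d}: ~{paths:.0e} inference paths")
# Depth 2 stays in the thousands (seconds of work); depth 6 reaches
# ~1.6e10 paths -- consistent with deep inference taking hours,
# if it completes at all.
```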
 
Although I agree that a resolution-based refutation proof may be more efficient 
for answering a yes/no query, I think, as did Cycorp, that effort is better 
spent on a single inference engine that operates in a constructive manner, like 
an ordinary SQL query engine, but with heuristic rule application included.

-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, February 19, 2008 7:41:06 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?

[snip]Thanks a lot for the info.  These are very important speed-up strategies. 
 I have not yet studied this aspect in detail.
  
 Can you explain what you mean by deep inference?
  
 I think resolution theorem proving provides a way to answer yes/no queries in 
a KB.  I take it as a starting point, and try to think of ways to speed it up 
and to expand its abilities (answering what/where/when/who/how queries).
   
 YKY


Re: [agi] would anyone want to use a commonsense KB?

2008-02-19 Thread Bob Mottram
On 19/02/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
 If we need a KB orders of magnitude larger to make that approach work,
 doesn't that mean we should use another approach?

Yes.


 Like, er, embodied learning or NL information extraction / conversation ...
 which have the potential to allow rules to be learned implicitly from
 experience rather than explicitly via human hard-coding...


A compromise scenario might be to go back to the very early days of AI
and have the goal of an AGI learning to read books intended for young
children containing pictures and relatively simple sentences.  This
would involve a combination of both automatic knowledge extraction and
explicit teaching by a supervisor.  The key to success would be
establishing some system which allowed teacher and learner to interact
in a manner which facilitates efficient and flexible knowledge
transfer.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
On Feb 17, 2008 9:42 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 So far I've been using resolution-based FOL, so there's only 1 inference
 rule and this is not a big issue.  If you're using nonstandard inference
 rules, perhaps even approximate ones, I can see that this distinction is
 important.

Resolution-based FOL on a huge KB is intractable.

Pei



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
All of these rules have exceptions or implicit conditions. If you
treat them as default rules, you run into the multiple extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.

Pei
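The multiple extension problem can be seen in the classic Nixon-diamond setup: two individually plausible default rules yield two mutually incompatible belief sets, and nothing domain-independent selects between them. A toy enumeration (not drawn from the cited paper):

```python
from itertools import permutations

facts = {"quaker", "republican"}
# Default rules: (precondition, conclusion, blocked_by)
defaults = [
    ("quaker", "pacifist", "not_pacifist"),
    ("republican", "not_pacifist", "pacifist"),
]

def extension(order):
    """Apply defaults in the given order, skipping any default whose
    conclusion has already been blocked by an earlier one."""
    beliefs = set(facts)
    for pre, concl, blocker in order:
        if pre in beliefs and blocker not in beliefs:
            beliefs.add(concl)
    return frozenset(beliefs)

# Each application order yields an extension; binary logic gives no
# domain-independent reason to prefer one over the other.
extensions = {extension(order) for order in permutations(defaults)}
print(len(extensions))   # 2 -- one with 'pacifist', one with 'not_pacifist'
```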

On Feb 17, 2008 10:04 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 Yesterday I didn't give a clear explanation of what I mean by rules, so
 here is a better try:

 1.  If I see a turkey inside the microwave, I immediately draw the
 conclusion that it's NOT empty.
 2.  However, if I see some ketchup on the inside walls of the microwave, I'd
 say it's dirty but it's empty.
 3.  If I see the rotating plate inside the microwave, I'd still say it's
 empty 'cause the plate is part of the microwave.
 etc etc

 So the AGI may have a rule that sounds like:
 if X is an object inside the microwave, and X satisfies some criteria,
 then the microwave is NOT empty.

 But it would be a very dumb AGI if it has this rule specifically for
 microwave ovens, and then some other rules for washing machines, bottles,
 book shelves, and other containers.  It would be necessary for the AGI to
 have a general rule for emptiness for all containers.  So I'd say a washing
 machine with a sock inside is not empty, but if it's just some lint then
 it's empty.

 Such a general rule for emptiness is certainly not available on the net, at
 least not explicitly expressed.  One solution is to manually encode them
 (perhaps with some machine assistance), which is the approach of Cyc.
 Another solution is to induce them from existing texts on the web -- Ben's
 suggestion.

 If given a large enough corpus and a long enough learning period, Ben's
 solution may work.  The key issue is how to speed up the inductive learning
 of rules.



 YKY
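The general emptiness rule sketched above might be rendered as a single predicate over all containers, with part-of and negligibility tests supplying the "criteria". The predicates and example facts below are invented for illustration:

```python
def is_empty(container, contents, part_of, negligible):
    """A container is empty iff every object inside it is either a
    part of the container itself (the microwave's rotating plate) or
    negligible (lint in the washing machine)."""
    return all(
        (obj, container) in part_of or obj in negligible
        for obj in contents
    )

part_of = {("rotating_plate", "microwave")}
negligible = {"lint", "ketchup_smear"}

print(is_empty("microwave", {"rotating_plate"}, part_of, negligible))  # True
print(is_empty("microwave", {"turkey"}, part_of, negligible))          # False
print(is_empty("washer", {"sock"}, part_of, negligible))               # False
print(is_empty("washer", {"lint"}, part_of, negligible))               # True
```

One rule covers microwaves, washing machines, bottles, and shelves alike; what varies per container is only the part-of and negligibility knowledge it consults.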
  



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Bob Mottram
On 18/02/2008, YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 Well, the idea is to ask lots of people to contribute to the KB, and pay
 them with virtual credits.  (I expect such people to have a little knowledge
 in logic or Prolog, so they can enter complex rules.  Also, they can be
 assisted by inductive learning algorithms.)  The income of the KB will be
 given back to them.  I'll take a bit of administrative fees =)



In principle this sounds ok, but this is almost exactly the same as the
Mindpixel business model.  Once an element of payment is involved (usually
with some kind of shares in future profits) participants tend to expect that
they're going to be able to realise that value within a relatively short
time, like a few years.  Inevitably when expectations aren't met things get
sticky.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mark Waser

All of these rules have exception or implicit condition. If you
treat them as default rules, you run into multiple extension
problem, which has no domain-independent solution in binary logic ---
read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
details.


Pei,

   Do you have a PDF version?  Thanks!



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
Just put one at http://nars.wang.googlepages.com/wang.reference_classes.pdf

On Feb 18, 2008 9:01 AM, Mark Waser [EMAIL PROTECTED] wrote:
  All of these rules have exception or implicit condition. If you
  treat them as default rules, you run into multiple extension
  problem, which has no domain-independent solution in binary logic ---
  read http://www.cogsci.indiana.edu/pub/wang.reference_classes.ps for
  details.

 Pei,

 Do you have a PDF version?  Thanks!




Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mike Tintner
I believe I offered the beginning of a v. useful way to conceive of this 
whole area in an earlier post.


The key concept is inventory of the world.

First of all, what is actually being talked about here is only a 
VERBAL/SYMBOLIC KB.


One of the grand illusions of a literate culture is that words/symbols 
refer to everything. The reality is that we have a v. limited verbal 
inventory of the world. Words do not describe most parts of your body, for 
example, only certain key divisions. Check over your hand for a start and 
see how many bits you can name - minute bit by bit.  When it comes to the 
movements of objects, our vocabulary is breathtakingly limited.


In fact, our verbal/symbolic inventory of the world (as provided for by our 
existing cultural vocabulary - for all its millions of words) is, I suggest, 
only a tiny fraction of our COMMON SENSE KB/ inventory of the world - i.e. 
that knowledge we hold purely in sensory image form - and indeed in 
common-sense form (since as Tye points out, we never actually 
experience/operate one sense in isolation - even though we have the 
intellectual illusion that we do).


When we learn to respect the extent of our true common sense knowledge of 
the world as distinct from our formal, verbal knowledge of the world, we 
will realise another major reason why CYC-like projects are doomed. They 
have nothing to do with common sense. Of course they will never be able to 
work out, pace Minsky, whether you can whistle and eat at the same time, or 
whether you can push or pull an object with a string. This is true common 
sense knowledge.






Re: [agi] would anyone want to use a commonsense KB?.. p.s.

2008-02-18 Thread Mike Tintner
I should add to the idea of our common sense knowledge inventory of the 
world - because my talk of objects and movements may make it all sound v. 
physical and external. That common sense inventory also includes a vast 
amount of non-verbal knowledge, paradoxically, about how we think and 
communicate with and understand others. The paradoxical part is that a lot of 
this will be common sense about how we use words themselves. Hence it is that 
experts have immense difficulties describing how they think about problems. 
They don't have the words for how they use their words. 





Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread YKY (Yan King Yin)
On 2/18/08, Mike Tintner [EMAIL PROTECTED] wrote:
 I believe I offered the beginning of a v. useful way to conceive of this
 whole area in an earlier post. [...]

I can give labels to every tiny sub-section of my hand, thus increasing the
resolution of the symbolic description.  If I give labels to each of the
very small visual features of my hand, then the distinction between visual
representation and symbolic representation disappears.  Therefore, I think
symbolic KBs like Cyc's are not doomed -- the symbolic KB can merge with
perceptual grounding along a continuum.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Mike Tintner
This raises another v. interesting dimension of KBs and why they are limited: 
the social dimension. You might, purely for argument's sake, be able to name a 
vast number of unnamed parts of the world. But you would then have to secure 
social agreement for them to become practically useful. Not realistic - if you 
were, say, to add even scores, let alone thousands, of names for each bit of 
your hand.

And even when there is a set of agreed words - and this is a problem that 
absolutely plagues all of us on this board - there may still not be an agreed 
terminology. For example, we are having massive problems as a community, along 
with our society, with what words like intelligence, AGI, symbol, image, 
image schema, etc. mean. We may agree broadly on the words that are 
relevant in a given area, but we have no agreed terminology as to which of 
those words should be used when, and what they mean. And actually, now that I 
think of it, the more carefully intellectuals define their words, the MORE 
disagreements and misunderstandings you often get. Words like free and 
determined for philosophers and scientists (and all of us here) are absolute 
minefields.


  MT: I believe I offered the beginning of a v. useful way to conceive of this
   whole area in an earlier post. [...]


  YKY: I can give labels to every tiny sub-section of my hand, thus increasing
the resolution of the symbolic description. [...]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Stephen Reed
Pei: Resolution-based FOL on a huge KB is intractable.

Agreed.

However Cycorp spent a great deal of programming effort (i.e. many man-years) 
finding deep inference paths for common queries.  The strategies were:

- prune the rule set according to the context
- substitute procedural code for modus ponens in common query paths (e.g. 
  isa-links inferred via graph traversal)
- structure the inference engine as a nested set of iterators, so that easy 
  answers are returned immediately and harder-to-find answers trickle out 
  later
- establish a battery of inference engine controls (e.g. time bounds, speed 
  vs. completeness - whether to employ expensive inference strategies for 
  greater coverage of answers) and have the inference engine automatically 
  apply the optimal control configuration for queries
- determine rule utility via machine learning and apply prioritized 
  inference modules within the given time constraints
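The nested-iterators strategy - cheap answers streaming out before expensive search begins - can be sketched with Python generators. This is an illustrative toy over assumed data structures (a fact set plus an isa-link graph), not Cycorp's code:

```python
# Toy version of an inference engine structured as nested iterators:
# each strategy is a generator, and cheap strategies are consumed first,
# so easy answers stream out before expensive search begins.

def lookup_strategy(kb, query):
    """Cheapest: direct fact lookup."""
    if query in kb["facts"]:
        yield query

def graph_strategy(kb, query):
    """Mid-cost: answer isa queries by graph traversal instead of modus ponens."""
    pred, x, y = query
    if pred != "isa":
        return
    frontier, seen = [x], set()
    while frontier:
        node = frontier.pop()
        if node == y:
            yield query
            return
        if node in seen:
            continue
        seen.add(node)
        frontier.extend(kb["isa_links"].get(node, ()))

def answers(kb, query):
    """Chain strategies cheapest-first; callers can stop at the first answer."""
    for strategy in (lookup_strategy, graph_strategy):
        yield from strategy(kb, query)

kb = {"facts": {("likes", "tom", "fish")},
     "isa_links": {"tom": ["cat"], "cat": ["mammal"], "mammal": ["animal"]}}
assert next(answers(kb, ("likes", "tom", "fish"))) == ("likes", "tom", "fish")
assert next(answers(kb, ("isa", "tom", "animal"))) == ("isa", "tom", "animal")
```

Because `answers` is lazy, a caller that needs only one answer pays only for the cheapest strategy that produces it.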
My last in-house talk at Cycorp, in the summer of 2006, described a notion of 
mine that Cyc's deductive inference engine behaves as an interpreter, and that 
for a certain set of queries a dramatic speed improvement (e.g. four orders of 
magnitude) could be achieved by compiling the query, and possibly preprocessing 
incoming facts to suit expected queries.  The queries that interested me were 
those embedded in an intelligent application, which could be viewed as a query 
template with parameters.  The compilation process I described would explore 
the parameter space with programmer-chosen query examples.  The resulting 
proof trees would then be compiled into executable code - avoiding entirely 
the time-consuming candidate-rule search and application when the query 
executes.  My notion for Cyc's deductive inference engine optimization is 
analogous to SQL query optimization technology.

I expect to use this technique in the Texai project at the point when I need a 
deductive inference engine.
 
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Monday, February 18, 2008 6:17:59 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?

 On Feb 17, 2008 9:42 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

  So far I've been using resolution-based FOL, so there's only 1 inference
  rule and this is not a big issue.  If you're using nonstandard inference
  rules, perhaps even approximate ones, I can see that this distinction is
  important.

Resolution-based FOL on a huge KB is intractable.

Pei



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
Steve,

I also agree with what you said, and what Cyc uses is no longer pure
resolution-based FOL.

A purely resolution-based inference engine is mathematically elegant
but completely impractical: after all the knowledge is transformed
into the clause form required by resolution, most of the semantic
information in the knowledge structure is gone, and the result is
equivalent to the original knowledge in truth-value only.  It is hard
to control the direction of the inference without that semantic
information.
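Pei's point can be made concrete with a toy example (illustrative only; the helper names are mine): an implication and its clause form are truth-equivalent, but the clause no longer records which side was premise and which was conclusion - exactly the control information an inference engine would want.

```python
# Toy illustration: clause form preserves truth conditions but drops structure.
# "P -> Q" and the clause {~P, Q} are truth-equivalent, yet the clause no
# longer records which side was premise and which was conclusion.

def implication_to_clause(premise, conclusion):
    """Convert P -> Q into its clause form {~P, Q}."""
    return frozenset({("not", premise), conclusion})

def resolve(clause_a, clause_b):
    """One binary resolution step: cancel a complementary literal pair."""
    for lit in clause_a:
        comp = lit[1] if isinstance(lit, tuple) else ("not", lit)
        if comp in clause_b:
            return (clause_a - {lit}) | (clause_b - {comp})
    return None  # no complementary literals

rule = implication_to_clause("P", "Q")   # {~P, Q}
fact = frozenset({"P"})
derived = resolve(rule, fact)
assert derived == frozenset({"Q"})
# Modus ponens falls out of resolution, but nothing in `rule` says whether
# to use it forward (from P, conclude Q) or backward (to prove Q, try P):
# that control information was lost in the conversion.
```

Truth-value equivalence survives the conversion (the assertion above), but the premise/conclusion asymmetry Pei describes is gone.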

Pei




Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Matt Mahoney

--- YKY (Yan King Yin) [EMAIL PROTECTED] wrote:

 On 2/18/08, Matt Mahoney [EMAIL PROTECTED] wrote:
  Heh... I think you could give away read-only access and charge people to
  update it.  Information has negative value, you know.
 
 Well, the idea is to ask lots of people to contribute to the KB, and pay
 them with virtual credits.  (I expect such people to have a little knowledge
 in logic or Prolog, so they can enter complex rules.  Also, they can be
 assisted by inductive learning algorithms.)  The income of the KB will be
 given back to them.  I'll take a bit of administrative fees =)

Why would this approach succeed where Cyc failed?  Cyc paid people to build
the knowledge base.  Then, when they couldn't sell it, they tried giving it
away.  Still, nobody used it.

For an AGI to be useful, people have to be able to communicate with it in
natural language.  It is easy to manipulate formulas like if P then Q.  It
is much harder to explain how this knowledge is represented and learned in a
language model.  Cyc did not solve this problem, and we see the result.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Stephen Reed
Pei, 

Another issue with a KB inference engine, as contrasted with a FOL theorem 
prover, is that the former seeks answers to queries, while the latter often 
seeks to disprove the negation of the theorem by finding a contradiction.  
Cycorp therefore could not reuse much of the research from the automatic 
theorem proving community.  And on the other hand, the database community 
commonly did not investigate deep inference.

As the Semantic Web community continues to develop new deductive inference 
engines tuned to inference (i.e. query answering) over large RDF KBs, I expect 
to see open-source forward-chaining and backward-chaining inference engines 
that can be optimized in the same way that I described for Cyc. 
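A minimal sketch of the kind of forward-chaining engine over RDF-style triples Steve anticipates - a naive fixpoint loop with illustrative rule and predicate names, not any specific engine's API:

```python
# Toy forward-chaining (fixpoint) inference over RDF-style triples.
# Rules are (body_patterns, head_pattern); "?x"-style strings are variables.

def match(pattern, triple, binding):
    """Extend `binding` so `pattern` matches `triple`, or return None."""
    b = dict(binding)
    for p, t in zip(pattern, triple):
        if p.startswith("?"):
            if b.get(p, t) != t:
                return None
            b[p] = t
        elif p != t:
            return None
    return b

def forward_chain(facts, rules):
    """Apply all rules until no new triple is derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            bindings = [{}]
            for pat in body:  # join the body patterns over current facts
                bindings = [b2 for b in bindings for f in facts
                            if (b2 := match(pat, f, b)) is not None]
            for b in bindings:
                new = tuple(b.get(term, term) for term in head)
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

facts = {("cat", "subClassOf", "mammal"), ("mammal", "subClassOf", "animal"),
         ("tom", "isa", "cat")}
rules = [
    # transitivity of subClassOf
    ([("?a", "subClassOf", "?b"), ("?b", "subClassOf", "?c")],
     ("?a", "subClassOf", "?c")),
    # type propagation along subClassOf
    ([("?x", "isa", "?a"), ("?a", "subClassOf", "?b")],
     ("?x", "isa", "?b")),
]
closed = forward_chain(facts, rules)
assert ("tom", "isa", "animal") in closed
```

A production engine would index triples by predicate and use semi-naive evaluation rather than rescanning all facts each pass, but the fixpoint structure is the same.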
 
-Steve


Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860


Re: [agi] would anyone want to use a commonsense KB?

2008-02-18 Thread Pei Wang
On Feb 18, 2008 12:37 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 Pei,

 Another issue with a KB inference engine as contrasted with a FOL theorem
 prover is that the former seeks answers to queries, and the latter often
 seeks to disprove the negation of the theorem by finding a contradiction.
 Cycorp therefore could not reuse much of the research from the automatic
 theorem proving community.   And on the other hand the database community
 commonly did not investigate deep inference.

The automatic theorem proving community does that because resolution by
itself is not deductively complete, while resolution by refutation is
complete: it derives the empty clause from any unsatisfiable clause set.
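The distinction shows up on the smallest possible example (a toy sketch using naive binary resolution; not from the thread): forward resolution from an empty premise set derives nothing, so it can never produce the valid clause P v ~P, while refuting that clause's negation succeeds in one step.

```python
# Resolution is refutation-complete, not deductively complete: from no
# premises, forward resolution derives nothing, so the valid clause
# {P, ~P} is underivable; refuting its negation takes a single step.

def resolve(a, b):
    """All binary resolvents of clauses a and b (literals: 'P' or '~P')."""
    out = []
    for lit in a:
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        if comp in b:
            out.append((a - {lit}) | (b - {comp}))
    return out

def saturate(clauses):
    """Exhaustively apply binary resolution; return the closed clause set."""
    clauses = {frozenset(c) for c in clauses}
    while True:
        new = {frozenset(r) for a in clauses for b in clauses
               for r in resolve(a, b)} - clauses
        if not new:
            return clauses
        clauses |= new

# Forward resolution from an empty premise set proves nothing:
assert saturate(set()) == set()

# Refutation: negating P v ~P yields the unit clauses {~P} and {P};
# they resolve to the empty clause, i.e. a contradiction.
refuted = saturate({frozenset({"~P"}), frozenset({"P"})})
assert frozenset() in refuted
```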

Pei


Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Russell Wallace
On Feb 17, 2008 9:56 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 I'm planning to collect commonsense knowledge into a large KB, in the form
 of first-order logic, probably very close to CycL.

Before you embark on such a project, it might be worth first looking
closely at the question of why Cyc hasn't been useful, so that you
don't end up making the same mistakes. There's a school of thought, to
which I subscribe, that it's because Cyc's knowledge base isn't
grounded. Are you instead taking the view that Cyc's fundamental
approach is correct, and it just needs a somewhat different choice of
logical axioms or whatnot?



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Vladimir Nesov
On Feb 17, 2008 4:11 PM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 9:56 AM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 
  I'm planning to collect commonsense knowledge into a large KB, in the form
  of first-order logic, probably very close to CycL.

 Before you embark on such a project, it might be worth first looking
 closely at the question of why Cyc hasn't been useful, so that you
 don't end up making the same mistakes. There's a school of thought, to
 which I subscribe, that it's because Cyc's knowledge base isn't
 grounded. Are you instead taking the view that Cyc's fundamental
 approach is correct, and it just needs a somewhat different choice of
 logical axioms or whatnot?


It might be considered 'grounded' in some sense, but the problem is
that it isn't used to derive other statements that are grounded in the
same sense. The semantics for which the Cyc database is grounded (human
knowledge of word usage) is different from the semantics used for
inference, so with respect to the inference semantics it's ungrounded.
But it might be possible to build an inference engine that uses the Cyc
database in a grounded way. Another question is whether the Cyc database
would be useful for such an inference engine.

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Lukasz Stafiniak
On Feb 17, 2008 2:11 PM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 9:56 AM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 
  I'm planning to collect commonsense knowledge into a large KB, in the form
  of first-order logic, probably very close to CycL.

 Before you embark on such a project, it might be worth first looking
 closely at the question of why Cyc hasn't been useful, so that you
 don't end up making the same mistakes.

This is perhaps a good opportunity to poll you on why you think the Cyc
KB hasn't been useful / successful. I'm interested in grounded opinions
(Stephen?) - not about Cyc as an AGI, but about the Cyc KB as what it was
supposed to be (e.g. a universal backbone so that expert systems didn't
fall off the knowledge cliff).



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Russell Wallace
On Feb 17, 2008 1:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 It might be considered 'grounded' in some sense

Well, the intent of my statement is this: Maybe somewhere in the Cyc
knowledge base there's the assertion Eat(Cats, Mice) or equivalent,
but if you show Cyc a picture of a cat, a picture of a mouse, then two
candidate successor pictures, one of a mouse stuffed into a cat's
mouth and the other of a cat and a mouse talking philosophy over a
pint of beer, and ask Cyc which is the more likely successor state, it
won't have a clue. That's what I mean by grounded, and it's the reason
Cyc isn't useful.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Pei Wang
As Lukasz just pointed out, there are two topics:
1. Cyc as an AGI project
2. Cyc as a knowledge base useful for AGI systems.

The grounding problem you raised may be an issue for 1 (even that
depends on how intelligence is understood, and Lenat would argue
otherwise), but it is much less an issue for 2, because that function,
if it exists in the system, is usually not carried out mainly by the
knowledge base.

There have been many criticisms of Cyc on 1, and I agree with Lukasz
that it may be more fruitful to discuss 2, which is also more relevant
to YKY's original question.

My own opinion is: Cyc is too close to first-order predicate logic,
which is a good formal language/logic for mathematical knowledge, but
not for commonsense knowledge, and minor revisions are not enough. A
more detailed discussion is in
http://nars.wang.googlepages.com/wang.cognitive_mathematical.pdf

Pei

On Feb 17, 2008 10:02 AM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 1:49 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  It might be considered 'grounded' in some sense

 Well, the intent of my statement is this: Maybe somewhere in the Cyc
 knowledge base there's the assertion Eat(Cats, Mice) or equivalent,
 but if you show Cyc a picture of a cat, a picture of a mouse, then two
 candidate successor pictures, one of a mouse stuffed into a cat's
 mouth and the other of a cat and a mouse talking philosophy over a
 pint of beer, and ask Cyc which is the more likely successor state, it
 won't have a clue. That's what I mean by grounded, and it's the reason
 Cyc isn't useful.






Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/17/08, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 2:11 PM, Russell Wallace [EMAIL PROTECTED]
wrote:
  Before you embark on such a project, it might be worth first looking
  closely at the question of why Cyc hasn't been useful, so that you
  don't end up making the same mistakes.

 This is perhaps a good opportunity to poll you on why you think the Cyc
 KB hasn't been useful / successful. I'm interested in grounded
 opinions (Stephen?), and not about Cyc as an AGI but about the Cyc KB as
 what it was supposed to be (e.g. a universal backbone so that expert
 systems didn't fall off the knowledge cliff).
Yes, I'd like to hear others' opinions on Cyc.  Personally I don't think it's
the perceptual grounding issue -- grounding can be added incrementally
later.  I think Cyc (the KB) is on the right track, but it doesn't have
enough rules.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Pei Wang
In that sense, is a Wikipedia article grounded, if it doesn't
contain a photo? Can it be useful for us?

I mean, symbol grounding is indeed an important issue, but it
doesn't show up everywhere. A Cyc-like KB can be useful if it uses a
proper formal language, one which allows its concepts to be related to
each other, as well as to other items outside the KB, such as
sensorimotor mechanisms.

It would be nice to have a public KB in which the concepts are already
linked to images and operations, but since sensorimotor capabilities tend
to be highly system-dependent, I don't expect that in the near future. On
the other hand, a commonly accepted formal language is much more
plausible to achieve, even though it wouldn't provide all the necessary
knowledge for an AGI.

What do you have in mind as a grounded KB?

Pei

On Feb 17, 2008 11:30 AM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 3:34 PM, Pei Wang [EMAIL PROTECTED] wrote:
  As Lukasz just pointed out, there are two topics:
  1. Cyc as an AGI project
  2. Cyc as a knowledge base useful for AGI systems.

 Well, I'm talking about Cyc (and similar systems) as useful for
 anything at all (other than experience to tell us what doesn't work
 and why not). But if it's proposed that such a system might be a
 useful knowledge base for something, then the something will have to
 have solved the grounding problem, right? And what I'm saying is, I
 wouldn't start off building a Cyc-like knowledge base and assume the
 grounding problem will be solved later. I'd start off with the
 grounding problem.






Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
 Yes, I'd like to hear others' opinion on Cyc.  Personally I don't think it's
 the perceptual grounding issue -- grounding can be added incrementally
 later.  I think Cyc (the KB) is on the right track, but it doesn't have
 enough rules.

I do think it's possible a Cyc approach could work if one had a few
billion rules in there -- but so what?  (Work meaning: together with a logic
engine, serve as the seed for an AGI that really learns and understands)

It's clear that the mere millions of rules in their KB now are VASTLY
inadequate in terms of scale...

Similarly, AIXItl or related approaches
could work for AGI if one had an insanely powerful computer -- but so what?

AGI approaches that could work, in principle if certain wildly infeasible
conditions were met, are not hard to come by ... ;=)

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
1)
While in my own AI projects I am currently gravitating toward an approach
involving virtual-worlds grounding, as a general rule I don't think it's obvious
that sensorimotor grounding is needed for AGI.  Certainly it's very useful, but
there is no strong argument that it's required.  The human path to AGI is not
the only one.

2)
I think that, potentially, building a KB could be part of an approach to
solving the grounding problem.  Encode some simple knowledge, instruct
the system in how to ground it in its sensorimotor experience ... then encode
some more (slightly more complex) knowledge ... etc.   I'm not saying this is
the best way but it seems a viable approach.  Thus, even if you want to take
a grounding-focused approach, it doesn't follow that fully solving the grounding
problem must precede the creation and utilization of a KB.  Rather, there could
be a solution to the grounding problem that couples a KB with other aspects.


In the NM approach, we could proceed with or without a KB, and with or
without sensorimotor grounding; and I believe NARS has that same property...

My feeling is that sensorimotor grounding is an Extremely Nice to Have
whereas a KB is just a Sort of Nice to Have, but I don't have a rigorous
demonstration of that

-- Ben G


On Feb 17, 2008 11:30 AM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 3:34 PM, Pei Wang [EMAIL PROTECTED] wrote:
  As Lukasz just pointed out, there are two topics:
  1. Cyc as an AGI project
  2. Cyc as a knowledge base useful for AGI systems.

 Well, I'm talking about Cyc (and similar systems) as useful for
 anything at all (other than experience to tell us what doesn't work
 and why not). But if it's proposed that such a system might be a
 useful knowledge base for something, then the something will have to
 have solved the grounding problem, right? And what I'm saying is, I
 wouldn't start off building a Cyc-like knowledge base and assume the
 grounding problem will be solved later. I'd start off with the
 grounding problem.






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Russell Wallace
On Feb 17, 2008 4:48 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 1)
 While in my own AI projects I am currently gravitating toward an approach
 involving virtual-worlds grounding

And I think that's a very good idea.

 as a general rule I don't think it's obvious
 that sensorimotor grounding is needed for AGI.

Well I wouldn't say it's obvious - it took me a good while to figure
it out, after all :) Just true. Then again, AI is a hard problem; very
few true things about it are obvious.

 The human path to AGI is not
 the only one.

Oh indeed - as I said before, I'm not expecting anything like
human-equivalent AGI in the foreseeable future. But I still think
grounding is central for making useful AI programs. It's an example of
the heuristic that applies to software in general: Internal
computation is easy. Interfaces are most of the difficulty and most of
the value. I don't _want_ to believe that, mind you. Internal
computation is much more fun. But reality's rubbed my nose in itself
on that one too many times for me to ignore.

 2)
 I think that, potentially, building a KB could be part of an approach to
 solving the grounding problem.  Encode some simple knowledge, instruct
 the system in how to ground it in its sensorimotor experience ... then encode
 some more (slightly more complex) knowledge ... etc.   I'm not saying this is
 the best way but it seems a viable approach.  Thus, even if you want to take
 a grounding-focused approach, it doesn't follow that fully solving the
 grounding problem must precede the creation and utilization of a KB.
 Rather, there could be a solution to the grounding problem that couples
 a KB with other aspects.

I agree, that might be a viable approach. But the key phrase is
Encode some simple knowledge, instruct the system in how to ground it
in its sensorimotor experience - i.e. you're _not_ spending a decade
writing a million assertions and _then_ looking for the first time at
the grounding problem. Instead grounding is addressed, if not as step
1, then at least as step 1.001.

 My feeling is that sensorimotor grounding is an Extremely Nice to Have
 whereas a KB is just a Sort of Nice to Have, but I don't have a rigorous
 demonstration of that

Heck, I don't have a rigorous demonstration of any nontrivial fact
about any program longer than ten lines, except that any working
program provides a rigorous existence proof that the methods it used
_can_ solve the problem it solves.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
 I agree, that might be a viable approach. But the key phrase is
 Encode some simple knowledge, instruct the system in how to ground it
 in its sensorimotor experience - i.e. you're _not_ spending a decade
 writing a million assertions and _then_ looking for the first time at
 the grounding problem. Instead grounding is addressed, if not as step
 1, then at least as step 1.001.

Well, I find that grounding-based AGI is the kind I can think about most
easily, since that's how human intelligence works...

But I'm less confident that it's the only possible kind of AGI...

I've got to wonder if the masses of text on the Internet could, in themselves,
display a sufficient richness of patterns to obviate the need for grounding
in another domain like a physical or virtual world, or mathematics.

In other words, maybe what you think needs to be gotten from grounding
in a nonlinguistic domain, could somehow be gotten indirectly via grounding
in masses of text?

I am not confident this is feasible, nor that it isn't ... and it's
not the approach
I'm following ... but I'm uncomfortable dismissing it out of hand...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Russell Wallace
On Feb 17, 2008 5:27 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 I've got to wonder if the masses of text on the Internet could, in themselves,
 display a sufficient richness of patterns to obviate the need for grounding
 in another domain like a physical or virtual world, or mathematics.

 In other words, maybe what you think needs to be gotten from grounding
 in a nonlinguistic domain, could somehow be gotten indirectly via grounding
 in masses of text?

 I am not confident this is feasible, nor that it isn't ... and it's
 not the approach
 I'm following ... but I'm uncomfortable dismissing it out of hand...

*nods* I'm comfortable dismissing it out of hand, for several reasons,
not least of which is that we humans do not and cannot do anything
remotely resembling what you're proposing.

It's been described as the equivalent of trying to learn Chinese
equipped only with a Chinese-Chinese dictionary - something already
hopelessly impossible for humans.

It's actually much worse than that, because we'd start off knowing our
own native language, and that Chinese is spoken by humans who have
mostly the same concepts as we have.

It's actually much worse even than a newborn baby trying to learn
Chinese as his first language from a Chinese-Chinese dictionary
without ever hearing any form of speech, because the baby starts off
with a lot of cognitive machinery about language, the real world and
connections between the two, that an AI program doesn't.

At the end of the day, the Internet just doesn't contain most of the
needed information. Consider the question of whether it's possible to
learn about water flowing downhill, from Internet text alone. From
Google (example not original to me, though I forget who first ran this
test):

Results 1 - 10 of about 864 for water flowing downhill
Results 1 - 10 of about 2,130 for water flowing uphill

The prosecution rests :)



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/17/08, Pei Wang [EMAIL PROTECTED] wrote:
 There is no similar plan for OpenNARS. When the time comes, it
 probably will get its knowledge, in a mixed manner, (1) from various
 existing sources of formatted knowledge, including Cyc, (2) from the
 Internet, using information retrieval/extraction, data mining, etc.,
 (3) through a natural language interface, (4) through a sensorimotor
 interface, (5) by human tutoring. The last approach will require
 manually coded knowledge (commonsense or not), but in a much smaller
 scale. See http://nars.wang.googlepages.com/wang.roadmap.pdf

Thanks, I'll read it and give you some feedback later.  But I'm interested
in how AGI can be achieved collaboratively, and sharing KBs is one
possibility, and may be a very important one.  Sure, you can go it alone, but
that may not be the best choice.


 I raised this issue before: by logical rules, do you mean inference
 rules (like Derive conclusion C from premises A and B), or
 implication statements (like If A and B are true, then C is true)?
 These two are very often confused with each other, and that confusion
 has serious consequences. AGI needs plenty of the latter, but just a
 relatively small number of the former.
Sorry... I can't see the distinction.  Maybe you mean causation vs
implication?  For example, eating sweets may cause cavities, but it is not
an implication because P(cavities|sweets) != 1?

What I mean by rule is any formula that has variables in it.

The kind of rules I have in mind... let me give an example.  One day I
opened the microwave and saw a dish of raw fish inside.  I abductively
conclude that my mom has put a frozen fish inside to defrost it but was too
lazy to wait till it finished to take it out.  In order to do this reasoning
I need the following facts:
1.  the microwave is normally empty when not in use
2.  humans can move things around
3.  defrosting takes time
4.  waiting for the fish to defrost is boring
5.  putting the fish inside and forgetting to press the cook button is
unlikely because the 2 actions occur close together in time
6.  forgetfulness usually requires a substantial time interval
7.  etc etc...

Obviously the current Cyc KB does not have these facts.  That's why I say more
facts are needed.

Secondly, I suspect that some implicit rules are needed for an inference
engine to string these facts together to form a linear proof -- if you get
my drift.  But I find it hard to explain...

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Pei Wang
On Feb 17, 2008 12:56 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

  I raised this issue before: by logical rules, do you mean inference
  rules (like Derive conclusion C from premises A and B), or
  implication statements (like If A and B are true, then C is true)?
   These two are very often confused with each other, and that confusion
  has serious consequences. AGI needs plenty of the latter, but just a
  relatively small number of the former.

 Sorry... I can't see the distinction.  Maybe you mean causation vs
 implication?  For example, eating sweets may cause cavities, but it is not
 an implication because P(cavities|sweets) != 1?

The best example of this difference is Carroll's Paradox --- see
http://www.ditext.com/carroll/tortoise.html

 What I mean by rule is any formula that has variables in it.

Both of them can have variables in them.

 The kind of rules I have in mind... let me give an example.  One day I
 opened the microwave and saw a dish of raw fish inside.  I abductively
 conclude that my mom has put a frozen fish inside to defrost it but was too
 lazy to wait till it finished to take it out.  In order to do this reasoning
 I need the following facts:
 1.  the microwave is normally empty when not in use
 2.  humans can move things around
 3.  defrosting takes time
 4.  waiting for the fish to defrost is boring
 5.  putting the fish inside and forgeting to press the cook button is
 unlikely because the 2 actions occur closely
 6.  forgetfulness usually require a substantial time interval
 7.  etc etc...

Then by rules you mean implication statements, not inference rules.

 Obviously the current Cyc KB do not have these facts.  That's why I say more
 facts are needed.

Sure. No KB can be complete in this sense. However, I'm not sure you
can do better than Cyc. If you just want to add more knowledge, why
not build on top of Cyc?

 Secondly, I suspect that some implicit rules are needed for an inference
 engine to string these facts together to form a linear proof -- if you get
 my drift.  But I find it hard to explain...

Those would be control rules, which are part of the control mechanism.

With commonsense knowledge you cannot really have a proof that
settles the truth-value of a conclusion once and for all. You can only
have arguments, which are much less conclusive.

Pei



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
  In other words, maybe what you think needs to be gotten from grounding
  in a nonlinguistic domain, could somehow be gotten indirectly via grounding
  in masses of text?
 
  I am not confident this is feasible, nor that it isn't ... and it's
  not the approach
  I'm following ... but I'm uncomfortable dismissing it out of hand...

 *nods* I'm comfortable dismissing it out of hand, for several reasons,
 not least of which is that we humans do not and cannot do anything
 remotely resembling what you're proposing.

I don't assume that all successful AGI's must be humanlike...

 At the end of the day, the Internet just doesn't contain most of the
 needed information. Consider the question of whether it's possible to
 learn about water flowing downhill, from Internet text alone. From
 Google (example not original to me, though I forget who first ran this
 test):

 Results 1 - 10 of about 864 for water flowing downhill
 Results 1 - 10 of about 2,130 for water flowing uphill

 The prosecution rests :)

Google is not an AGI, so I have no idea why you think this proves
anything about AGI ...

I strongly suspect there is enough information in the
text online for an AGI to learn that water flows downhill in most
circumstances, without having explicit grounding...

-- Ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Russell Wallace
On Feb 17, 2008 6:32 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 I don't assume that all successful AGI's must be humanlike...

Neither do I - on the contrary, I think a humanlike AGI isn't going to
happen, in the same way that we never did achieve birdlike flight.

But the only reason we have for believing ill-posed problems (i.e.
nearly all the problems presented by the real world) to be solvable at
all is that humans (in some cases) provide an existence proof. Where a
problem is ill-posed, and humans can't come close to solving it, and
we can't point to a specific human limit that would enable us to solve
it if overcome, then the reasonable default conclusion is that it's
not solvable.

 Google is not an AGI, so I have no idea why you think this proves
 anything about AGI ...

It doesn't. It does, however, prove something about the contents of
the Web, and constitutes a reason...

 I strongly suspect there is enough information in the
 text online for an AGI to learn that water flows downhill in most
 circumstances, without having explicit grounding...

...for disagreeing with you on this point.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Russell Wallace
On Feb 17, 2008 7:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
  water flows downhill site:wikipedia.org  -- 4 results

  water flows uphill site:wikipedia.org  -- 2 results

 BUT, both of the latter 2 results are within user talk pages, not regular
 wikipedia entries... whereas all of the former 4 results are on regular
 wikipedia entries

 I'm not saying one can use wikipedia as the knowledge base for an
 AGI, it's clearly not big enough, but I think this certainly defuses your
 example a bit

 obviously, defusing more complex examples would require more work --
 many useful pieces of info exist only implicitly among various Web pages
 rather than on a single Web page...

Of course no matter what example I give, you can defuse it with an
appropriate amount of work. But the key word in that isn't defuse,
it's you. Yes, _you_, being a general intelligence with the
requisite real-world knowledge, can know what's relevant and what
isn't, and therefore ignore the documents that aren't relevant, keep
searching until you find one that is, and regard it as the answer. For
an AI program to do that, it would have to start off with precisely
the sort of real-world knowledge that I've been talking about.



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Pei Wang [EMAIL PROTECTED] wrote:
   I raised this issue before: by logical rules, do you mean inference
   rules (like Derive conclusion C from premises A and B), or
   implication statements (like If A and B are true, then C is true)?
These two are very often confused with each other, and that confusion
   has serious consequences. AGI needs plenty of the latter, but just a
   relatively small number of the former.
 
  Sorry... I can't see the distinction.  Maybe you mean causation vs
  implication?  For example, eating sweets may cause cavities, but it is not
  an implication because P(cavities|sweets) != 1?

 The best example of this difference is Carroll's Paradox --- see
 http://www.ditext.com/carroll/tortoise.html

I'm reading this:
http://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles
which is easier to understand, but I still don't get it.

Achilles grants to the tortoise that the second kind of reader may exist,
but I think this second kind of reader is absurd.

If A and B are true, then a sane person MUST admit that Z is true.  I don't
see why not?

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Pei Wang
Only statements contained in a KB as content have truth-values, or
need acceptance. An inference rule is part of the system: it is simply
applied, and does not need acceptance within the system. An inference
rule has no truth-value.

If it is still unclear, try this:
http://www.mathacademy.com/pr/prime/articles/carroll/index.asp

Of course, the two are related (see
http://en.wikipedia.org/wiki/Deduction_theorem), but if you have the
two confused when designing an inference system, you'll run into
trouble.

Pei

On Feb 17, 2008 2:53 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:

 On 2/18/08, Pei Wang [EMAIL PROTECTED] wrote:
I raised this issue before: by logical rules, do you mean inference
 rules (like Derive conclusion C from premises A and B), or
implication statements (like If A and B are true, then C is true)?
 These two are very often confused with each other, and that confusion
 has serious consequences. AGI needs plenty of the latter, but just a
relatively small number of the former.
  
   Sorry... I can't see the distinction.  Maybe you mean causation vs
 implication?  For example, eating sweets may cause cavities, but it is not
    an implication because P(cavities|sweets) != 1?
 
  The best example of this difference is Carroll's Paradox --- see
   http://www.ditext.com/carroll/tortoise.html


 I'm reading this:
 http://en.wikipedia.org/wiki/What_the_Tortoise_Said_to_Achilles
 which is easier to understand, but I still don't get it.

 Achilles grants to the tortoise that the second kind of reader may exist,
 but I think this second kind of reader is absurd.

 If A and B are true, then a sane person MUST admit that Z is true.  I don't
 see why not?



 YKY

  



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Richard Loosemore

Mark Waser wrote:
I've got to wonder if the masses of text on the Internet could, in 
themselves,
display a sufficient richness of patterns to obviate the need for 
grounding

in another domain like a physical or virtual world, or mathematics.


A system is grounded if its internal representations are internally 
consistent and map accurately and completely to the experiences possible 
in a (physical or virtual) domain.


Expert systems are not grounded because they do not map completely.  
There is always some additional factor that they do not experience or 
account for.


Most typical proto-AGI systems pretend to ground because they use 
English words that are grounded for the observer but which are not 
grounded for the system because they have a meaning which is enforced 
upon the system without being understood by the system.


A system which can only experience text still could be grounded in the 
physical world provided that there is enough text to describe the 
physical world well enough for the system to be grounded.  Couldn't any 
of us be said to still be grounded in the physical world even if we were 
removed from it except for a text interface?


The real trick is to get a system to a state where it is entirely internally 
consistent (in terms of definitions, etc., not predictions) and large 
enough to be useful.


I applaud your attempt to bring some sense to this discussion.

It won't work, of course.  There is just no way to stop people having 
meaningless discussions about grounding, in which the thing they mean 
by the word has only a distant relationship to the real meaning.


Pity, because the real thing is indeed worth discussing.



Richard Loosemore



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread YKY (Yan King Yin)
On 2/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 I strongly suspect there is enough information in the
 text online for an AGI to learn that water flows downhill in most
 circumstances, without having explicit grounding...

I strongly suspect the contrary =)   for the simple reason that adults don't
talk about things that all 3-year-olds know -- such knowledge is often
implicit in conversations / writings.

That's why I believe that some manual method of teaching is *necessary*,
hence we need commonsense collection from web users.

YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Stephen Reed
Yes, I would be very glad to incorporate any content that I can then republish 
using a Wikipedia-compatible license, e.g. the GNU Free Documentation License.  Any 
weaker license, such as Apache or BSD, would be OK too.

-Steve
 
Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, February 17, 2008 3:56:40 AM
Subject: [agi] would anyone want to use a commonsense KB?

  
 I'm planning to collect commonsense knowledge into a large KB, in the form of 
first-order logic, probably very close to CycL.  Would current AGI projects 
(OpenNARS, OpenCog, Texai, etc) find it useful?  Or would you prefer to collect 
commonsense on your own?
   
 It seems that the Cyc KB is focused on building an ontology (hence a lot of 
is_a relations), and there's not enough emphasis on other logical rules.  I 
anticipate that AGIs will need plenty of such rules.
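To illustrate the distinction YKY draws between an is_a ontology and general logical rules, here is a minimal sketch (in Python, not CycL) of a toy forward chainer over triples. All predicate and constant names are invented for the example; a real KB would need variables, quantifiers, and far richer rule syntax.

```python
# Toy illustration of the difference between ontology-style is_a facts
# and a "commonsense" inference rule, via naive forward chaining.
facts = {("is_a", "tabby", "cat"), ("is_a", "cat", "mammal")}

def step(facts):
    new = set(facts)
    for (p, a, b) in facts:
        if p != "is_a":
            continue
        # ontology rule: is_a is transitive
        for (q, c, d) in facts:
            if q == "is_a" and c == b:
                new.add(("is_a", a, d))
        # commonsense rule: anything that is_a mammal drinks water
        if b == "mammal":
            new.add(("drinks", a, "water"))
    return new

# iterate to a fixed point
while True:
    nxt = step(facts)
    if nxt == facts:
        break
    facts = nxt

print(("is_a", "tabby", "mammal") in facts)   # True (from the ontology)
print(("drinks", "tabby", "water") in facts)  # True (from the extra rule)
```

The second derived fact is the kind of non-taxonomic conclusion that pure is_a hierarchies cannot reach, which is YKY's point about needing more rules.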
   
 YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Stephen Reed
Briefly, I think that Cyc has indeed solved the brittleness problem observed 
in 1980s-style narrow-domain expert systems.  During the Halo project, Cyc 
merely had to be extended in a principled fashion to answer a battery of word 
questions in the chemistry domain.  In my opinion the chief drawback of the 
Cycorp approach to commonsense knowledge is its overwhelming emphasis on what 
Cyc knows, as contrasted with what skills Cyc has learned and can demonstrate.  
My own work is the construction of a bootstrap English dialog system for the 
purpose of linguistic knowledge *and* skill acquisition.  Also, by using a 
robotics-style hierarchical control system, I hope to later connect high-level 
symbolic concepts with low-level perceptions, as somewhat illustrated by 
current driverless cars.
 
-Steve 

Stephen L. Reed 
Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, February 17, 2008 10:51:12 AM
Subject: Re: [agi] would anyone want to use a commonsense KB?

 
 On 2/17/08, Lukasz Stafiniak [EMAIL PROTECTED] wrote: 
 On Feb 17, 2008 2:11 PM, Russell Wallace [EMAIL PROTECTED] wrote:
   Before you embark on such a project, it might be worth first looking
  closely at the question of why Cyc hasn't been useful, so that you
  don't end up making the same mistakes.
  
 This is perhaps a good opportunity to poll you on why you think the Cyc
 KB hasn't been useful / successful.  I'm interested in grounded
 opinions (Stephen?), not about Cyc as an AGI but about the Cyc KB as
  what it was supposed to be (e.g. a universal backbone so that expert
 systems didn't fall off the knowledge cliff).

 Yes, I'd like to hear others' opinions on Cyc.  Personally I don't think it's 
the perceptual grounding issue -- grounding can be added incrementally later.  
I think Cyc (the KB) is on the right track, but it doesn't have enough rules.
   
 YKY



Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Matt Mahoney
IMHO Cyc was doomed by the lack of a natural language interface.  It cannot
map between Eats(cats, mice) and "cats eat mice", or recognize their
equivalence.  In Cyc, "cats" and "Eats" are just labels used to help human
programmers enter facts.  Without a natural language interface, it is very
expensive to verify and update the knowledge base.  More importantly, there is
no human interface.

It's not that Cycorp isn't aware of the problem.  Last year some people at
Cycorp were interested in entering the Hutter text compression contest, but
they wanted us to change the rules to not count the size of the database (we
declined).  Text compression or prediction is AI-complete, but it would
require a natural language model to predict the next word in "cats eat".
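The next-word-prediction task Matt describes is, at its simplest, statistical language modeling. Here is a minimal bigram sketch; the corpus is made up for the example, and real models need vastly more data plus smoothing for unseen pairs.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; real language models train on far more text.
corpus = "cats eat mice . cats eat fish . dogs eat bones . cats eat mice".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def predict(word):
    # most frequent successor of `word` in the corpus
    return bigrams[word].most_common(1)[0][0]

print(predict("eat"))  # mice -- it follows "eat" 2 of 4 times
```

Even this toy shows why the approach is statistics-heavy rather than rule-heavy, which bears on Matt's point that Cyc skipped the statistical stages of language learning.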

The example rule I gave seems trivial to solve, but anyone who has worked with
NLP knows it is not, of course.  I believe the fundamental design error was to
insert knowledge at the wrong end.  Children learn lexical rules first
(segmenting continuous speech at 7-10 months), then semantics (starting at 12
months), then grammar (2-3 years), then logical rules.  Structured rules take
a lot less computing power to implement than language statistics, so Cyc's
designers had to skip the earlier steps, especially in the 1980s when Cyc was
launched.  As
a result, Cyc has no theory that explains how people learn and apply language
and facts or how they communicate.


-- Matt Mahoney, [EMAIL PROTECTED]


