Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Richard,

  Unfortunately I cannot bring myself to believe this will help anyone new
  to the area.

  The main reason is that this is only a miscellaneous list of topics,
  with nothing to indicate a comprehensive theory or a unifying structure.
   I do not ask for a complete unified theory, of course, but something
  more than just a collection of techniques is needed if this is to be a
  textbook.



I have my own comprehensive theory and unifying structure for AGI...

Pei has his...

You have yours...

Stan Franklin has his...

Etc.

These have been published with varying levels of detail in various
places ... I'll be publishing more of mine this year, in the PLN book, and
then in the OpenCog documentation and plans ... but many of the
conceptual aspects of my approach were already mentioned in
The Hidden Pattern.

My goal in Instead of an AGI Textbook is **not** to present anyone's
unifying theory (not even my own) but rather to give pointers to
**what information a student should learn, in order to digest the various
unifying theories being proposed**.

To put it another way: Aside from a strong undergrad background in CS
and good programming skills, what would I like someone to know about
in order for them to work on Novamente or OpenCog or
some other vaguely similar AI project?

Not everything in my suggested TOC is actually used in Novamente or OpenCog...
but even the stuff that isn't is interesting to know about if you're going to
work on these things, just to have a general awareness of the various
approaches that have been taken to these problems...

  A second reason for being skeptical is that there is virtually no
  cognitive psychology in this list - just a smattering of odd topics.

Yes, that's a fair point -- that's a shortcoming of the draft TOC as I
posted it.

Please feel free to add some additional, relevant cog psych topics
to the page ;-)

-- Ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
On Tue, Mar 25, 2008 at 9:39 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Richard,


Unfortunately I cannot bring myself to believe this will help anyone new
to the area.
  
The main reason is that this is only a miscellaneous list of topics,
with nothing to indicate a comprehensive theory or a unifying structure.

Actually it's not a haphazardly assembled miscellaneous list of topics ... it
was assembled with a purpose and structure in mind...

Specifically, I was thinking of OpenCog, and what it would be good for someone
to know in order to have a relatively full grasp of the OpenCog design.

As such, the topic list may contain stuff that is not relevant to your
AGI design,
and also may miss stuff that is critical to your AGI design...

But the non textbook is NOT intended as a presentation of OpenCog or any
other specific AGI theory or framework.  Rather, it is indeed,
largely, a grab bag
of relevant prerequisite information ... along with some information on specific
AGI approaches...

One problem I've found is that the traditional undergrad CS or AI education
does not actually give all the prerequisites for really grasping AGI theories
... often topics are touched on in a particularly non-AGI-ish way ... for
instance, neural nets are touched on but complex dynamics in NNs are skipped
... Bayes nets are touched on but issues involving combining probability with
more complex logic operations are skipped ... neurons are discussed but
theories of holistic brain function are skipped ... etc.  The most
AGI-relevant stuff always seems to get skipped for lack of time...!

ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
  I'll try to find the time to provide my list --- at this moment, it
  will be more like a reading list than a textbook TOC.

That would be great -- however I may integrate your reading
list into my TOC ... as I really think there is value in a structured
and categorized reading list rather than just a list...

I know every researcher will have their own foci, but I'm going
to try to unify different researchers' suggestions into a single
TOC with a sensible organization, because I would like to cut
through the confusion faced by students starting out in this
field of research...

ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Yeah, the AGIRI wiki has been there for years ... the hard thing is
getting people
to contribute to it (and I myself rarely find the time...)

But if others don't chip in, I'll complete my little non-textbook
myself sometime w/in
the next month ...

-- Ben

On Tue, Mar 25, 2008 at 10:52 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Ok - that was silly of me.  After visiting the link (which was after I sent
 the email), I noticed that it WAS a Wiki.

 My apologies.

 ~Aki




 On Tue, Mar 25, 2008 at 9:47 PM, Aki Iskandar [EMAIL PROTECTED] wrote:

  Thanks Ben.  AGI is a daunting field to say the least.  Many scientific
 domains are involved in various degrees.  I am very happy to see  something
 like this, because knowing where to start is not so obvious for the
 beginner.  I actually recently purchased Artificial Intelligence: A Modern
 Approach - but only because I did not know where else to start.  I have the
 programming down - but, like most others, I don't know *what* to program.
 
  I really hope that others will contribute to your TOC.  In fact, I am
 willing to put up and host an AGI Wiki if this community would find it of
 use.  I'd need a few weeks - because I don't have the time right now - but
 it is a worthwhile endeavor, and I'm happy to do it.
 
  ~Aki
 
 
 
 
 
  On Tue, Mar 25, 2008 at 6:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
 
 
 
   Hi all,
  
   A lot of students email me asking me what to read to get up to speed on
 AGI.
  
   So I started a wiki page called Instead of an AGI Textbook,
  
  
 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
  
   Unfortunately I did not yet find time to do much but outline a table
   of contents there.
  
   So I'm hoping some of you can chip in and fill in some relevant
   hyperlinks on the pages
   I've created ;-)
  
   For those of you too lazy to click the above link, here is the
   introductory note I put on the wiki page:
  
  
   
  
   I've often lamented the fact that there is no advanced undergrad level
   textbook for AGI, analogous to what Russell and Norvig is for Narrow
   AI.
  
   Unfortunately, I don't have time to write such a textbook, and no one
   else with the requisite knowledge and ability seems to have the time
   and inclination either.
  
   So, instead of a textbook, I thought it would make sense to outline
   here what the table of contents of such a textbook might look like,
   and to fill in each section within each chapter in this TOC with a few
   links to available online resources dealing with the topic of the
   section.
  
   However, all I found time to do today (March 25, 2008) is make the
   TOC. Maybe later I will fill in the links on each section's page, or
    maybe by the time I get around to it some other folks will have done it.
  
   While nowhere near as good as a textbook, I do think this can be a
   valuable resource for those wanting to get up to speed on AGI concepts
   and not knowing where to turn to get started. There are some available
   AGI bibliographies, but a structured bibliography like this can
   probably be more useful than an unstructured and heterogeneous one.
  
   Naturally my initial TOC represents some of my own biases, but I trust
   that by having others help edit it, these biases will ultimately come
   out in the wash.
  
   Just to be clear: the idea here is not to present solely AGI material.
   Rather the idea is to present material that I think students would do
   well to know, if they want to work on AGI. This includes some AGI,
   some narrow AI, some psychology, some neuroscience, some mathematics,
   etc.
  
   ***
  
  
   -- Ben
  
  
   --
   Ben Goertzel, PhD
   CEO, Novamente LLC and Biomind LLC
   Director of Research, SIAI
   [EMAIL PROTECTED]
  
   If men cease to believe that they will one day become gods then they
   will surely become worms.
   -- Henry Miller
  
  
 
 
 
  --
  Aki R. Iskandar
  [EMAIL PROTECTED]



 --
 Aki R. Iskandar
 [EMAIL PROTECTED]
  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
  I actually recently purchased Artificial Intelligence: A Modern
 Approach - but only because I did not know where else to start.

It's a very good book ... if you view it as providing insight into various
component technologies of potential use for AGI ... rather than as saying
very much directly about AGI...

I have the
 programming down - but, like most others, I don't know *what* to program.

Well I hope to solve that problem in May -- via releasing the initial version
of OpenCog, plus a load of wiki pages indicating stuff that, IMO, if
implemented, tuned and tested, would allow OpenCog to be turned into a
powerful AGI system ;-)

-- Ben






Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
On Tue, Mar 25, 2008 at 11:07 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Thanks Ben.  That is really exciting stuff / news.  I'm looking forward to
 OpenCog.

 BTW - is OpenCog mainly in C++ (like Novamente) ?  Or is it translations (to
 Java, or other languages) of concepts so that others can code  and add to it
 more readily and quickly?

Yes, the OpenCog core system is C++, though there are some peripheral
code libraries (e.g. the RelEx natural language preprocessor) which are in
Java...

ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
This kind of diagram would certainly be meaningful, but, it would be a
lot of work to put together, even more so than a traditional TOC ...

On Tue, Mar 25, 2008 at 11:02 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Hi Pei -

 What about having a tree like diagram that branches out into either:
 - the different paths / approaches to AGI (for instance: NARS, Novamente,
 and Richard's, etc.), with suggested readings at those leaves
  - area of study, with suggested readings at those leaves

 Or possibly, a Mind Map diagram that shows AGI in the middle, with the
 approaches stemming from it, and then either sub fields, or a reading list
 and / or collection of links (though the links may become outdated, dead).

 Point is, would a diagram help map the field - which caters to the
 differing approaches, and which helps those wanting to chart a course to
 their own learning/study?

 Thanks,
 ~Aki




  On Tue, Mar 25, 2008 at 9:22 PM, Pei Wang [EMAIL PROTECTED] wrote:
  Ben,
 
  It is a good start!
 
  Of course everyone else will disagree --- like what Richard did and
  I'm going to do. ;-)
 
  I'll try to find the time to provide my list --- at this moment, it
  will be more like a reading list than a textbook TOC. In the future,
  it will be integrated into the E-book I'm working on
  (http://nars.wang.googlepages.com/gti-summary).
 
  Compared to yours, mine will contain less math and algorithms, but
  more psychology and philosophy.
 
  I'd like to see what Richard and others want to propose. We shouldn't
  try to merge them into one wiki page, but several.
 
  Pei
 
 
 
 
 
  On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
   Hi all,
  
A lot of students email me asking me what to read to get up to speed on
 AGI.
  
So I started a wiki page called Instead of an AGI Textbook,
  
  
 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
  
Unfortunately I did not yet find time to do much but outline a table
of contents there.
  
So I'm hoping some of you can chip in and fill in some relevant
hyperlinks on the pages
I've created ;-)
  
For those of you too lazy to click the above link, here is the
introductory note I put on the wiki page:
  
  

  
I've often lamented the fact that there is no advanced undergrad level
textbook for AGI, analogous to what Russell and Norvig is for Narrow
AI.
  
Unfortunately, I don't have time to write such a textbook, and no one
else with the requisite knowledge and ability seems to have the time
and inclination either.
  
So, instead of a textbook, I thought it would make sense to outline
here what the table of contents of such a textbook might look like,
and to fill in each section within each chapter in this TOC with a few
links to available online resources dealing with the topic of the
section.
  
However, all I found time to do today (March 25, 2008) is make the
TOC. Maybe later I will fill in the links on each section's page, or
 maybe by the time I get around to it some other folks will have done it.
  
While nowhere near as good as a textbook, I do think this can be a
valuable resource for those wanting to get up to speed on AGI concepts
and not knowing where to turn to get started. There are some available
AGI bibliographies, but a structured bibliography like this can
probably be more useful than an unstructured and heterogeneous one.
  
Naturally my initial TOC represents some of my own biases, but I trust
that by having others help edit it, these biases will ultimately come
out in the wash.
  
Just to be clear: the idea here is not to present solely AGI material.
Rather the idea is to present material that I think students would do
well to know, if they want to work on AGI. This includes some AGI,
some narrow AI, some psychology, some neuroscience, some mathematics,
etc.
  
***
  
  
-- Ben
  
  
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
  
If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller
  
 



 --
 Aki R. Iskandar
 [EMAIL PROTECTED]



-- 
Ben Goertzel, PhD

[agi] Microsoft Launches Singularity

2008-03-24 Thread Ben Goertzel
 http://www.codeplex.com/singularity



[agi] Seeking student programmers for summer 2008: OpenCog meets Google Summer of Code

2008-03-22 Thread Ben Goertzel
Hi all,

Sorry for the short notice, but I was out of town last week with limited email
access...

The Singularity Institute for AI was accepted as a mentoring organization for
Google's 2008 Summer of Code project, with a focus on the OpenCog
open-source AGI project (www.opencog.org).  See

http://code.google.com/soc/2008/siai/about.html

What this means is that programmers who want to spend Summer 2008
working on open-source AI code within the OpenCog framework, and get paid
$5000 by Google for this, can submit proposals for OpenCog projects,
within the GSOC website.

Student programmers have the interval between March 24 and March 31 to
submit proposals, then accepted proposals will be announced on the GSOC
website on April 11.

If you have a particular proposal idea you'd like to discuss, the best option
is to post it on the OpenCog Google Group mailing list (find info on
opencog.org).

Some proposal ideas are found here

http://opencog.org/wiki/Ideas

but we're quite open to other suggestions as well, in the freewheeling spirit
of GSOC...

Thanks
Ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi]

2008-03-13 Thread Ben Goertzel
If the test is defined to refer ONLY to conversations about a sufficiently
narrow domain of objects in a toy virtual world ... and they encode enough
knowledge ... then maybe they could brute-force past the test... after all
there is not that much to say about a desk, a table, a lamp and a box ... or
whatever the set of objects in the toy world may be...

This is the danger of toy test environments, be they in virtual worlds or
physical robotics...

ben g

On Thu, Mar 13, 2008 at 12:35 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Unless the details of that modified Turing Test are somehow profoundly
  flawed, then, yes...

  ben



  On Thu, Mar 13, 2008 at 12:28 PM, Eric B. Ramsay [EMAIL PROTECTED] wrote:
   So Ben, based on what you are saying, you fully expect them to fail their
   Turing test?
  
   Eric B. Ramsay
  
  
   Ben Goertzel [EMAIL PROTECTED] wrote:
I know Selmer and his group pretty well...
  
   It is well done stuff, but it is purely hard-coded-knowledge-based
   logical inference -- there is no real learning there...

   It's not so hard to get impressive-looking functionality in toy demo
   tasks, by hard-coding rules and using a decent logic engine

   Others have failed at this, so his achievement is worthwhile and means
   his logic engine and formalism are better than most ... but still ...
   IMO, this is not a very likely path to AGI ...
  
   -- Ben
  
   On Thu, Mar 13, 2008 at 10:30 AM, Ed Porter wrote:
 Here is an article about RPI's attempt to pass a slightly modified version
 of the Turing test using supercomputers to power their Rascals AI
 algorithm.
   
   
   
 http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=206903246&printable=true
   
 The one thing I didn't understand was that they said their Rascals AI
 algorithm used a theorem-proving architecture. I would assume that that
 would mean it was based on binary logic, and thus would not be sufficiently
 flexible to model many human thought processes, which are almost certainly
 more neural net-like, and thus much more probabilistic.
   
 Does anybody have any opinions on that?
   
Ed Porter
   
   
  
  
  
   --
   Ben Goertzel, PhD
   CEO, Novamente LLC and Biomind LLC
   Director of Research, SIAI
   [EMAIL PROTECTED]
  
   If men cease to believe that they will one day become gods then they
   will surely become worms.
   -- Henry Miller
  
  

  



  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  If men cease to believe that they will one day become gods then they
  will surely become worms.
  -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Recap/Summary/Thesis Statement

2008-03-11 Thread Ben Goertzel
   An attractor is a set of states that are repeated given enough time.  If
agents are killed and not replaced, you can't return to the current state.

  False. There are certainly attractors that disappear, first
  described by Ruelle and Takens in 1971; it's called a blue sky catastrophe

  http://www.scholarpedia.org/article/Blue-sky_catastrophe

Relatedly, you should look at Mikhail Zak's work on terminal attractors,
which arose in the context of neural nets as I recall...

These are attractors which a system zooms into for a while, then after a
period of staying in them, it zooms out of them.  They occur when the
differential equation generating the dynamical system displaying the
attractor involves functions with points of nondifferentiability.
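
As a concrete instance of the finite-time behavior (a textbook example in
the spirit of Zak's construction, not his exact equations):

```latex
\dot{x} = -x^{1/3}, \quad x(0) = x_0 > 0
\;\;\Longrightarrow\;\;
x(t) = \left(x_0^{2/3} - \tfrac{2}{3}\,t\right)^{3/2}
```

The trajectory hits the equilibrium x = 0 exactly at the finite time
t* = (3/2) x_0^(2/3), rather than only asymptotically; the trick is that
x^(1/3) violates the Lipschitz condition at the equilibrium, which is the
nondifferentiability just mentioned.  Flipping the sign gives a "terminal
repeller" that lets trajectories leave just as abruptly.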

Of course, you may be specifically NOT looking for this kind of attractor,
in your Friendly AI theory ;-)

-- Ben



Re: [agi] Re: Your mail to [EMAIL PROTECTED]

2008-03-11 Thread Ben Goertzel
I tried to fix the problem, let me know if it worked...

ben



On Tue, Mar 11, 2008 at 12:02 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Ben,

 Can we boot alien off the list?  I'm getting awfully tired of his
  auto-reply emailing me directly *every* time I post.  It is my contention
  that this is UnFriendly behavior (wasting my resources without furthering
  any true goal of his) and should not be accepted.

 Mark

  - Original Message -
  From: [EMAIL PROTECTED]
  To: [EMAIL PROTECTED]
  Sent: Tuesday, March 11, 2008 11:56 AM
  Subject: Re: Your mail to [EMAIL PROTECTED]


   Thank you for contacting Alienshift.
   We will respond to your Mail in due time.
  
   Please feel free to send positive thoughts in return back to the Universe.
   [EMAIL PROTECTED]
  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Ben Goertzel
  The three most common of these assumptions are:

1) That it will have the same motivations as humans, but with a
  tendency toward the worst that we show.

  2) That it will have some kind of "Gotta Optimize My Utility
  Function" motivation.

3) That it will have an intrinsic urge to increase the power of its
  own computational machinery.

  There are other assumptions, but these seem to be the big three.

And IMO, the truth is likely to be more complex...

For instance,  a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be directly
oriented toward fulfilling this utility function

Some of the system's activity will be spontaneous ... i.e. only
implicitly goal-oriented ... and as such may involve some imitation
of human motivation, and plenty of radically non-human stuff...

ben g



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Ben Goertzel
Agree... I have not followed this discussion in detail, but if you have
a concrete proposal written up somewhere in a reasonably compact
format, I'll read it and comment

-- Ben G

On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman [EMAIL PROTECTED] wrote:
 From: Mark Waser [EMAIL PROTECTED]:

 Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in "Well
  duh" land, b) I'm so totally off the mark that I'm not even worth
  replying to, or c) I'm being given enough rope to hang myself.
  :-)

  I'll read the paper if you post a URL to the finished version, and I
  somehow get the URL.  I don't want to sort out the pieces from the
  stream of AGI emails, and I don't want to try to provide feedback on
  part of a paper.

  --
  Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Brief report on AGI-08

2008-03-08 Thread Ben Goertzel
 on AGI.  Society,
including the society of scientists, is starting to wake up to the
notion that, given modern technology and science, human-level AGI is
no longer a pipe dream but a potential near-term reality.  w00t!  Of
course there is a long way to go in terms of getting this kind of work
taken as seriously as it should be, but at least things seem to be
going in the right direction.

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] AGI-08 in the news...

2008-03-05 Thread Ben Goertzel
http://www.memphisdailynews.com/Editorial/StoryLead.aspx?id=101671



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Ben Goertzel
 Sure, AGI needs to handle NL in an open-ended way.  But the question is
 whether the internal knowledge representation of the AGI needs to allow
 ambiguities, or should we use an ambiguity-free representation.  It seems
 that the latter choice is better.  Otherwise, the knowledge stored in
 episodic memory would be open to interpretations and may lead to errors in
 recall, and similar problems.

Rather, I think the right goal is to create an AGI that, in each
context, can be as ambiguous as it wants/needs to be in its
representation of a given piece of information.

Ambiguity allows compactness, and can be very valuable in this regard.

Guidance on this issue is provided by the Lojban language.  Lojban
allows extremely precise expression, but also allows ambiguity as
desired.  What one finds when speaking Lojban is that sometimes one
chooses ambiguity because it lets one make one's utterances shorter.  I
think the same thing holds in terms of an AGI's memory.  An AGI with
finite memory resources must sometimes choose to represent relatively
unimportant information ambiguously rather than precisely so as to
conserve memory.

For instance, storing the information

A is associated with B

is highly ambiguous, but takes little memory.  Storing logical
information regarding the precise relationship between A and B may
take one or more orders of magnitude more information.
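
To make the tradeoff concrete, here is a minimal Python sketch (the
representation is invented for illustration, not Novamente's actual one)
contrasting the two forms:

```python
from dataclasses import dataclass

# Ambiguous form: a single untyped edge.  Cheap, but it says almost
# nothing about HOW A and B are related.
association = ("A", "B")

@dataclass
class Link:
    relation: str       # e.g. "causes", "inherits_from", "co_occurs_with"
    source: str
    target: str
    strength: float     # probability estimate
    confidence: float   # amount of evidence behind that estimate

# Precise form: the same pair, disambiguated into typed, truth-valued
# links.  Every extra candidate relation multiplies the storage cost.
precise = [
    Link("causes", "A", "B", 0.70, 0.40),
    Link("inherits_from", "B", "A", 0.10, 0.20),
    Link("co_occurs_with", "A", "B", 0.90, 0.80),
]
```

A resource-bounded system can keep the bare tuple for unimportant pairs and
expand to the typed form only where the precision pays for itself.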

-- Ben



Re: [agi] AGI Metaphysics and spatial biases.

2008-03-02 Thread Ben Goertzel
 Using informal words, how would you describe the metaphysics or
 biases currently encoded into the Novamente system?

 /Robert Wensman

This is a good question, and unfortunately I don't have a
systematic answer.  Biases are encoded in many different
aspects of the design, e.g.

-- the knowledge representation

-- the heuristics within the inference rules (e.g. for temporal
and spatial inference)

-- the set of predicates and procedures provided as primitives for
automated program learning

-- various specializations in the architecture (e.g. the use of
specialized SpaceServer and TimeServer objects to allow efficient
indexing of entities by space and time)

and we haven't made an effort to go through and systematize the
conceptual biases implicit in the detailed design of all the different
parts of the system, although there are plenty of important biases
there

sorry for the unsatisfying answer but it would take me a couple days
of analysis to give you a real answer, and other priorities beckon...

-- Ben G



Re: [agi] Solomonoff Induction Question

2008-02-29 Thread Ben Goertzel
I am not so sure that humans use uncomputable models in any useful sense,
when doing calculus.  Rather, it seems that in practice we use
computable subsets
of an in-principle-uncomputable theory...

Oddly enough, one can make statements *about* uncomputability and
uncomputable entities, using only computable operations within a
formal system...

For instance, one can prove that even if x is an uncomputable real number

x - x = 0

But that doesn't mean one has to be able to hold *any* uncomputable number x
in one's brain...
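
To see how formal this abstraction can be made, here is a two-line sketch in
Lean 4 with Mathlib (the theorem name is mine; `sub_self` is the library
lemma):

```lean
import Mathlib.Data.Real.Basic

-- One finite symbol-manipulation covers every real x, including the
-- uncountably many x that no program could ever print out.
theorem sub_self_real (x : ℝ) : x - x = 0 := sub_self x
```

The proof object is a finite string; at no point does any uncomputable
number have to be held anywhere.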

Thus is the power of abstraction, and I don't see why AGIs can't have it
just like humans do...

Ben

On Fri, Feb 29, 2008 at 4:37 PM, Abram Demski [EMAIL PROTECTED] wrote:
 I'm an undergrad who's been lurking here for about a year. It seems to me
 that many people on this list take Solomonoff Induction to be the ideal
 learning technique (for unrestricted computational resources). I'm wondering
 what justification there is for the restriction to Turing-machine models of
 the universe that Solomonoff Induction uses. Restricting an AI to computable
 models will obviously make it more realistically manageable. However,
 Solomonoff induction needs infinite computational resources, so this clearly
 isn't a justification.

 My concern is that humans make models of the world that are not computable;
 in particular, I'm thinking of the way physicists use differential
 equations. Even if physics itself is computable, the fact that humans use
 incomputable models of it remains. Solomonoff Induction itself is an
 incomputable model of intelligence, so an AI that used Solomonoff Induction
 (even if we could get the infinite computational resources needed) could
 never understand its own learning algorithm. This is an odd position for a
 supposedly universal model of intelligence IMHO.

 My thinking is that a more-universal theoretical prior would be a prior over
 logically definable models, some of which will be incomputable.

 Any thoughts?

  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Solomonoff Induction Question

2008-02-29 Thread Ben Goertzel
  This is a general theorem about *strings* in this formal system, but
  no such string with an uncomputable real number can ever be written, so
  saying that it's a theorem about uncomputable real numbers is an
  empty-set theory (it's a true statement, but it's true in a trivial
  "falsehood, therefore Mars is inhabited by little green men" kind of
  formal sense).

Well, but NO uncomputable number can be written, so which theorems
about uncomputable numbers are NOT empty in the sense you mean?

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread Ben Goertzel
Hi,

 I think Ben's text mining approach has one big flaw:  it can only reason
 about existing knowledge, but cannot generate new ideas using words /
 concepts.

Text mining is not an AGI approach, it's merely a possible way of getting
knowledge into an AGI.

Whether the AGI can generate new ideas is independent of whether it
gets knowledge via text mining or via some other means...

 I want to stress that AGI needs to be able to think at the
 WORD/CONCEPT level.  In order to do this, we need some rules that *rewrite*
 sentences made up of words, such that the AGI can reason from one sentence
 to another.  Such rewrite rules are very numerous and can be very complex --
 for example rules for auxiliary words and prepositions, etc.  I'm not even
 sure that such rules can be expressed in FOL easily -- let alone learn them!

This seems off somehow -- I don't think reasoning should be implemented
on the level of linguistic surface forms.

 The embodiment approach provides an environment for learning qualitative
 physics, but it's still different from the common sense domain where
 knowledge is often verbally expressed.

I don't get your point...

Most of common sense is about the world in which we live, as embodied
social organisms...  Embodiment buys you a lot more than qualitative
physics.  It buys you richly shared social experience, among other things.

 In fact, it's not the environment
 that matters, it's the knowledge representation (whether it's expressive
 enough) and the learning algorithm (how sophisticated it is).

I think that all three of these things matter a lot, along with the
overall cognitive
architecture.

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
 I'm not talking about inference control here -- I assume that inference
 control is done in a proper way, and there will still be a problem.  You
 seem to assume that all knowledge = what is explicitly stated in online
 texts.  So you deny that there is a large body of implicit knowledge other
 than inference control rules (which are few in comparison).

 I think that if your AGI doesn't have the implicit knowledge, it'd only be
 able to perform simple inferences about statistical events -- for example,
 calculating the probability of (lung cancer | smoking).

For instance, suppose you ask an AI if chocolate makes a person more
alert.

It might read one article saying that coffee makes people more alert,
and another article saying that chocolate contains theobromine, and another
article saying that theobromine is related to caffeine, and another article
saying that coffee contains caffeine ... and then put the pieces together to
answer YES

This kind of reasoning may sound simple, but getting it to work
systematically on a large scale based on text mining has not been done...

And it does seem w/in the grasp of current tech without any breakthroughs...
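
A toy rendering of that chain in Python (the triples and helper functions
are invented for illustration; a real system would mine them from text with
probabilistic weights attached):

```python
# Facts as (subject, relation, object) triples, as might be mined
# from four separate articles.
facts = {
    ("coffee", "makes", "alert"),
    ("coffee", "contains", "caffeine"),
    ("chocolate", "contains", "theobromine"),
    ("theobromine", "related_to", "caffeine"),
}

def active_substances(effect):
    """Substances contained in some known carrier of the effect."""
    carriers = {s for (s, r, o) in facts if r == "makes" and o == effect}
    return {o for (s, r, o) in facts if r == "contains" and s in carriers}

def confers(food, effect):
    active = active_substances(effect)                # {"caffeine"}
    # one hop through "related_to", in both directions
    active |= {s for (s, r, o) in facts if r == "related_to" and o in active}
    active |= {o for (s, r, o) in facts if r == "related_to" and s in active}
    return any((food, "contains", x) in facts for x in active)

print(confers("chocolate", "alert"))   # True: chocolate -> theobromine
                                       #       -> caffeine -> alert
```

Without probabilistic weighting on each link, this kind of chaining
overgenerates badly -- which is exactly the systematic-scale problem just
mentioned.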

 The kind of reasoning I'm interested in is more sophisticated.  For example,
 I may ask the AGI to open a file and print the 100th line (in Java or C++,
 say).  The AGI should be able to use a loop to read and discard the first 99
 lines.  We need a step like: "read 99 lines -> use a loop", but such a step
 must be based on even simpler *concepts* of repetition and using loops.
 What I'm saying is that your AGI does NOT have such rules and would be
 incapable of thinking about such things.

"Being incapable of thinking about such things" is way too strong a statement --
that has to do with the AI's learning/reasoning algorithms rather than about the
knowledge it has.

I think there would be a viable path to AGI via

1)
Filling a KB up w/ commonsense knowledge via text mining and simple inference,
as I described above

2)
Building an NL conversation system utilizing the KB created in 1

3)
Teaching the AGI the implicit knowledge you suggest via conversing with it

As noted I prefer to introduce embodiment into the mix, though, for a variety
of reasons...

ben



Re: [agi] reasoning knowledge

2008-02-27 Thread Ben Goertzel
  d) you keep repeating the illusion that evolution did NOT achieve the
  airplane and other machines - oh yes, it did - your central illusion here is
  that machines are independent species. They're not. They are EXTENSIONS  of
  human beings, and don't work without human beings attached. Manifestly
  evolution has taken several stages to perfect tool/machine-using species -
  of whom we are only the latest version - I refer you to my good colleague,
  the tool-using-and-creating Caledonian crow.

That is purely rhetorical gamesmanship...

By that interpretation of "achieved by evolution", any AGI that we create
will also be "achieved by evolution", due to being created by humans that
were achieved by evolution, right?

So, by this definition, the concept of "achieved by evolution" makes no
useful distinctions among AGI designs...

And: a wheel does work without a human attached, btw...

ben



Re: [agi] reasoning knowledge

2008-02-27 Thread Ben Goertzel
  Well,  what I and embodied cognitive science are trying to formulate
  properly, both philosophically and scientifically, is why:

  a) common sense consciousness is the brain-AND-body thinking on several
  levels simultaneously about any given subject...

I don't buy that my body plays a significant role in thinking about, for
instance, mathematics.  I bet that my brain in a vat could think about math
just as well or better than my embodied brain.

Of course my brain is what it is because of evolving to be embodied, but that's
a different statement.

  b) with the *largest* part of that thinking being body thinking - i.e.
  your body working out *in-the-body* how the actions under consideration can
  be enacted  (although this is inseparable from, and dependent on, the
  brain's levels of thinking)

What evidence do you have that this is the largest part ... it does not feel
at all that way to me, as a subjectively-experiencing human; and I know of no
evidence in this regard.

The largest bulk of brain matter does not equate to the largest part of
thinking, in any useful sense...

I suspect that, in myself at any rate, the vast majority of my brain dynamics
are driven by the small percentage of my brain that deals with abstract
cognition.  An attractor spanning the whole brain can nonetheless be
triggered/controlled by dynamics in a small region.

  c) if an agent doesn't have a body that can think about how it can move (and
  have emotions), then it almost certainly can't understand how other bodies
  move (and have emotions) - and therefore can't acquire a
  more-than-it's-all-Greek/Chinese/probabilistic-logic-to-me understanding
  of physics, biology, psychology, sociology etc. etc. - of both the
  formal/cultural and informal/personal kinds.

I agree about psychology and sociology, but not about physics and biology.

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
  It could be done with a simple chain of word associations mined from a text
  corpus: alert - coffee - caffeine - theobromine - chocolate.

That approach yields way, way, way too much noise.  Try it.

  But that is not the problem.  The problem is that the reasoning would be
  faulty, even with a more sophisticated analysis.  By a similar analysis you
  could reason:

  - coffee makes you alert.
  - coffee contains water.
  - water (H2O) is related to hydrogen sulfide (H2S).
  - rotten eggs produce hydrogen sulfide.
  - therefore rotten eggs make you alert.

There is a "produce" predicate in here which throws off the chain of
reasoning wildly.

And, nearly every food contains water, so the application of Bayes
rule within this inference chain of yours will yield a conclusion with
essentially zero confidence.  Since fewer foods contain caffeine or
theobromine, the inference trail I suggested will not have this
problem.

In short, I claim your similar analysis is only similar at a very
crude level of analysis, and is not similar when you look at the
actual probabilistic inference steps involved.
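
To make the dilution argument quantitative, a back-of-envelope Bayes
calculation in Python (all the numbers are invented for illustration):

```python
# Base rate: fraction of foods that increase alertness.
p_alert = 0.05

# "Contains water" is true of nearly every food, so it carries almost
# no evidence either way:
p_water_given_alert = 0.99
p_water = 0.98
print(round(p_alert * p_water_given_alert / p_water, 3))
# 0.051 -- barely above the base rate

# "Contains caffeine/theobromine" is rare, and concentrated among
# alertness-conferring foods, so it is strongly diagnostic:
p_caff_given_alert = 0.30
p_caff = 0.02
print(round(p_alert * p_caff_given_alert / p_caff, 3))
# 0.75 -- a large update
```

The hydrogen-sulfide chain rides entirely on links of the first,
uninformative kind, which is why its conclusion comes out with essentially
zero confidence.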

  Long chains of logical reasoning are not very useful outside of mathematics.

But the inference chain I gave as an example is NOT very long. The
problem is actually that outside of math, chains of inference (long or
short) require contextualization...

   I think there would be a viable path to AGI via
  
   1)
   Filling a KB up w/ commonsense knowledge via text mining and simple
   inference,
   as I described above
  
   2)
   Building an NL conversation system utilizing the KB created in 1
  
   3)
   Teaching the AGI the implicit knowledge you suggest via conversing with 
 it

  I think adding common sense knowledge before language is the wrong approach.
  It didn't work for Cyc.

I agree it's not the best approach.

I also think, though, that one unsuccessful attempt should not be taken to damn
the whole approach.

The failure of explicit knowledge encoding by humans does not straightforwardly
imply the failure of knowledge extraction via text mining (as approaches to AGI).

  Natural language evolves to the easiest form for humans to learn, because if 
 a
  language feature is hard to learn, people will stop using it because they
  aren't understood.  We would be wise to study language learning in humans and
  model the process.  The fact is that children learn language in spite of a
  lack of common sense.

Actually, they seem to acquire language and common sense together.

Yet wild children and apes learn common sense, but never learn language
beyond the proto-language level.

But I agree, study of human dev psych is one thing that has inclined me
toward the embodied approach ...

yet I still feel you dismiss the text-mining approach too glibly...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
   yet I still feel you dismiss the text-mining approach too glibly...

  No, but text mining requires a language model that learns while mining.  You
  can't mine the text first.

Agreed ... and this gets into subtle points.  Which aspects of the
language model
need to be adapted while mining, and which can remain fixed?  Answering this
question the right way may make all the difference in terms of the viability of
the approach...

ben



Re: [agi] reasoning knowledge

2008-02-26 Thread Ben Goertzel
YKY,

I'm with Pei on this one...

Decades of trying to do procedure learning using logic have led only to some
very brittle planners that are useful under very special and restrictive
assumptions...

Some of that work is useful but it doesn't seem to me to be pointing in an AGI
direction.

OTOH for instance evolutionary learning and NN's have been more successful
at learning simple procedures for embodied action.

Within NM we have done (and published) experiments using probabilistic logic
for procedure learning, so I'm well aware it can be done.  But I don't think
it's a scalable approach.

There appears to be a solid information-theoretic reason that the human brain
represents and manipulates declarative, procedural and episodic knowledge
separately.

It's more complex, but I believe it's a better idea to have separate methods
for representing and learning/adapting procedural vs declarative knowledge ---
and then have routines for converting between the two forms of knowledge.

One advantage AGIs will have over humans is better methods for translating
procedural to declarative knowledge, and vice versa.

For us to translate "knowing how to do X" into "knowing how we do X" can be
really difficult (I play piano improvisationally and by ear, and I have a
hard time figuring out what the hell my fingers are doing, even though they
do the same complex things repeatedly each time I play the same song..).
This is not a trivial problem for AGIs either but it won't be as hard as for
humans...

-- Ben G

On Tue, Feb 26, 2008 at 8:00 AM, Pei Wang [EMAIL PROTECTED] wrote:
 On Tue, Feb 26, 2008 at 7:03 AM, YKY (Yan King Yin)
  [EMAIL PROTECTED] wrote:
  
   On 2/15/08, Pei Wang [EMAIL PROTECTED] wrote:
   
To me, the following two questions are independent of each other:
   
 *. What type of reasoning is needed for AI? The major answers are:
(A): deduction only, (B) multiple types, including deduction,
induction, abduction, analogy, etc.
   
*. What type of knowledge should be reasoned upon? The major answers
 are: (1) declarative only, (2) declarative and procedural.
   
All four combination of the two answers are possible. Cyc is mainly
A1; you seem to suggest A2; in NARS it is B2.
  
  
   My current approach is B1.  I'm wondering what is your argument for
   including procedural knowledge, in addition to declarative?

  You have mentioned the reason in the following: some important
  knowledge is procedural by nature.


   There is the idea of deductive planning which allows us to plan actions
   using a solely declarative KB.  So procedural knowledge is not needed for
   acting.

  I haven't seen any nontrivial result supporting this claim.


   Also, if you include procedural knowledge, things may be learned doubly in
   your KB.  For example, you may learn some declarative knowledge about the
   concept of reverse and also procedural knowledge of how to reverse
   sequences.

  The knowledge about "how to do ..." can either be in procedural form,
  as programs, or in declarative form, as descriptions of the programs.
  There is overlapping/redundant information in the two, but very often
  both are needed, and the redundancy is tolerated.


   Even worse, in some cases you may only have procedural knowledge, without
   anything declarative.  That'd be like the intelligence of a calculator,
   without true understanding of maths.

  Yes, but that is exactly the reason to directly reasoning on
  procedural knowledge, right?

  Pei


   YKY
  
  

  







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] reasoning knowledge

2008-02-26 Thread Ben Goertzel
  Knowing how to carry out inference can itself be procedural knowledge,
  in which case no explicit distinction between the two is required.

  --
  Vladimir Nesov

Representationally, the same formalisms can of course be used for both
procedural and declarative knowledge.

The slightly subtler point, however, is that it seems that **given finite
space and time resources**, it's far better to use specialized
reasoning/learning methods for handling knowledge that pertains to carrying
out coordinated sets of action in space and time.

Thus, procedure learning as a separate module from general inference.

The brain works this way and, on this very general level, I think we'll do
best to emulate the brain in our AGI designs (not necessarily in the specific
representations/algorithms the brain uses, but rather in the simple fact of
the pragmatic declarative/procedural distinction..)
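
A cartoon of that modular separation in Python (hypothetical structures,
not the Novamente design; just the shape of the argument):

```python
# Declarative store: truth-valued assertions, handled by a general
# inference module.
declarative = {
    ("reverse", "maps", "[1, 2, 3] -> [3, 2, 1]"): 0.95,
}

# Procedural store: executable skills, handled by a specialized
# procedure-learning module (evolutionary search, NN training, etc.).
procedural = {
    "reverse": lambda xs: list(reversed(xs)),
}

def declarativize(name, proc, examples):
    """Know-how to know-that: run the procedure and record input/output
    facts that the inference module can chew on."""
    return {(name, "maps", f"{xs} -> {proc(xs)}"): 0.99 for xs in examples}

declarative.update(
    declarativize("reverse", procedural["reverse"], [[1, 2], [4, 5, 6]]))
print(len(declarative))   # 3 assertions now describe the one skill
```

The reverse translation -- compiling declarative descriptions into runnable
procedures -- is essentially program synthesis, and generally the harder
direction.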

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread Ben Goertzel
Obviously, extracting knowledge from the Web using a simplistic SAT
approach is infeasible.

However, I don't think it follows from this that extracting rich
knowledge from the Web is infeasible.

It would require a complex system involving at least:

1)
An NLP engine that maps each sentence into a menu of probabilistically
weighted logical interpretations of the sentence (including links into
other sentences built using anaphor resolution heuristics).  This
involves a dozen conceptually distinct components and is not at all
trivial to design, build or tune.

2)
Use of probabilistic inference rules to create implication links
between the different interpretations of the different sentences

3)
Use of an optimization algorithm (which could be a clever use of SAT
or SMT, or something else) to utilize the links formed in step 2, to
select the right interpretation(s) for each sentence


The job of the optimization algorithm is hard but not THAT hard
because the choice of the interpretation of one sentence is only
tightly linked to the choice of interpretation of a relatively small
set of other sentences (ones that are closely related syntactically,
semantically, or in terms of proximity in the same document, etc.).

I don't know any way to tell how well this would work, except to try.
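
For concreteness, here is a skeleton of steps 1-3 in Python; every function
is a stub standing in for a large subsystem, and the brute-force selector
stands in for the SAT/SMT step:

```python
import itertools

def interpretations(sentence):
    """Step 1 stub: candidate logical forms with prior weights.
    A real NLP engine would return many weighted parses."""
    return [(f"reading1({sentence!r})", 0.6), (f"reading2({sentence!r})", 0.4)]

def coherence(form_a, form_b):
    """Step 2 stub: strength of the implication/compatibility link
    between two candidate readings (toy heuristic here)."""
    return 1.0 if form_a[:8] == form_b[:8] else 0.3

def select(sentences):
    """Step 3: pick one reading per sentence, maximizing prior weight
    plus pairwise coherence.  Enumerating all combinations is
    exponential -- hence the need for a real SAT/SMT-style solver,
    helped by the sparsity of the cross-sentence links."""
    menus = [interpretations(s) for s in sentences]
    def score(combo):
        total = sum(w for _, w in combo)
        total += sum(coherence(a, b)
                     for (a, _), (b, _) in itertools.combinations(combo, 2))
        return total
    return [f for f, _ in max(itertools.product(*menus), key=score)]

print(select(["Coffee makes people alert.", "It contains caffeine."]))
```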

My own approach, cast in these terms, would be to

-- use virtual-world grounding to help with the probabilistic
weighting in step 1 and the link building in step 2

-- use other heuristics besides SAT/SMT in step 3 ... but, using these
techniques within NM/OpenCog is also a possibility down the road, I've
been studying the possibility...


-- Ben





On Tue, Feb 26, 2008 at 6:56 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:


 On 2/25/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  Hi,
 
  There is no good overview of SMT so far as I know, just some technical
   papers... but SAT solvers are not that deep and are well reviewed in
  this book...
 
  http://www.sls-book.net/


 But that's *propositional* satisfiability, the results may not extend to
 first-order SAT -- I've no idea.

 Secondly, the learning of an entire KB from text corpus is much, much harder
 than SAT.  Even the learning of a single hypothesis from examples with
 background knowledge (ie the problem of inductive logic programming) is
 harder than SAT.  Now you're talking about inducing the entire KB, and
 possibly involving theory revision -- this is VERY impractical.

 I guess I'd focus on learning simple rules, one at a time, from NL
 instructions.  IMO this is one of the most feasible ways of acquiring the
 AGI KB.  But it also involves the AGI itself in the acquisition process, not
 just a passive collection of facts like MindPixel...

 YKY


  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread Ben Goertzel
YKY

I thought you were talking about the extraction of information that
is explicitly stated in online text.

Of course, inference is a separate process (though it may also play a
role in direct information extraction).

I don't think the rules of inference per se need to be learned.  In
our book on PLN we outline a complete set of probabilistic logic
inference rules, for example.

What needs to be learned via experience is how to appropriately bias
inference control -- how to sensibly prune the inference tree.

So, one needs an inference engine that can adaptively learn better and
better inference control as it carries out inferences.  We designed
and partially implemented this feature in the NCE but never completed
the work due to other priorities ... but I hope this can get done in
NM or OpenCog sometime in late 2008..

-- Ben

On Tue, Feb 26, 2008 at 3:02 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:


 On 2/26/08, Ben Goertzel [EMAIL PROTECTED] wrote:
  Obviously, extracting knowledge from the Web using a simplistic SAT
  approach is infeasible
  
  However, I don't think it follows from this that extracting rich
  knowledge from the Web is infeasible
 
  It would require a complex system involving at least
 
  1)
   An NLP engine that maps each sentence into a menu of probabilistically
  weighted logical interpretations of the sentence (including links into
  other sentences built using anaphor resolution heuristics).  This
   involves a dozen conceptually distinct components and is not at all
  trivial to design, build or tune.
 
  2)
  Use of probabilistic inference rules to create implication links
  between the different interpretations of the different sentences
  
  3)
  Use of an optimization algorithm (which could be a clever use of SAT
  or SMT, or something else) to utilize the links formed in step 2, to
  select the right interpretation(s) for each sentence


 Gosh, I think you've missed something of critical importance...

 The problem you stated above is about choosing the correct interpretation of
 a bunch of sentences.  The problem we should tackle instead, is learning the
 rules that make up the KB.

 To see the difference, let's consider this example:

 Suppose I solve a problem (eg a programming exercise), and to illustrate my
 train of thoughts I clearly write down all the steps.  So I have, in
 English, a bunch of sentences A,B,C,...,Z where Z is the final conclusion
 sentence.

 Now the AGI can translate sentences A-Z into logical form.  You claim that
 this problem is hard because of multiple interpretations.  But I think
 that's relatively unimportant compared to the real problem we face.  So
 let's assume that we successfully -- correctly -- translate the NL sentences
 into logic.

 Now let's imagine that the AGI is doing the exercise, not me.  Then it
 should have a train of inference that goes from A to B to C ... and so on...
 to Z.  But, the AGI would NOT be able to make such a train of thoughts.  All
 it has is just a bunch of *static* sentences from A-Z.

 What is missing?  What would allow the AGI to actually conduct the inference
 from A-Z?

 The missing ingredient is a bunch of rules.  These are the invisible glue
 that links the thoughts between the lines.  This is the knowledge that I
 think should be learned, and would be very difficult to learn.

 You know what I'm talking about??



 YKY
  

  agi | Archives | Modify Your Subscription



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning knowledge

2008-02-26 Thread Ben Goertzel
  Your piano example is a good one.

  What it illustrates, I suggest, is:

  your knowledge of, and thinking about, how to play the piano, and perform
  the many movements involved, is overwhelmingly imaginative and body
  knowledge/thinking (contained in images and the motor parts of the brain and
  body as distinct from any kind of symbols)

  The percentage of that knowledge that can be expressed in symbolic form -
  logical, mathematical, verbal  etc- i.e. the details of those movements that
  can be named or measured - is only A TINY FRACTION of the total.

Wrong...

This knowledge CAN be expressed in logical, symbolic form... just as can the
positions of all the particles in my brain ... but for these cases, the logical,
symbolic representation is highly awkward and inefficient...


Our
  cultural let alone your personal vocabulary (both linguistic and of  any
  other symbolic form) for all the different finger movements you will
  perform, can only name a tiny percentage of the details involved.

That is true, but in principle one could give a formal logical description of
them, boiling things all the way down to logical atoms corresponding to the
signals sent along the nerves to and from my fingers...

  Such imaginative and body knowledge (which takes both declarative,
  procedural and episodic forms) isn't, I suggest, - when considered as
  corpuses or corpora of knowledge - MEANT to be put into explicit, symbolic,
  verbal, logico-mathematical form.

Correct

 It would be utterly impossible to name all
  the details of that knowledge.

Infeasible, not impossible

 One imaginative picture : an infinity of
  words and other symbols. Any attempt to symbolise our imaginative/body
  knowledge as a whole, would simply overwhelm our brain, or indeed any brain.

The concept of infinity is better handled in formal logic than anywhere else!!!

  The idea that an AGI can symbolically encode all the knowledge, and perform
  all the thinking, necessary to produce, say, a golf swing, let alone play a
  symphony,  is a pure fantasy. Our system keeps that knowledge and thinking
  largely in the motor areas of the brain and body, because that's where it
  HAS to be.

Again you seem to be playing with different meanings of the word symbolic.

I don't think that formal logic is a suitably convenient language for describing
motor movements or dealing with motor learning.

But still, I strongly suspect one can produce software programs that do handle
motor movement and learning effectively.  They are symbolic at the level of
the programming language, but not symbolic at the level of the deliberative,
reflective component of the artificial mind doing the learning.

A symbol is a symbol **to some system**.  Just because a hunk of program
code contains symbols to the programmer, doesn't mean it contains symbols
to the mind it helps implement.  Any more than a neuron being a symbol to a
neuroscientist, implies that neuron is a symbol to the mind it helps implement.

Anyway, I agree with you that formal logical rules and inference are not the
end-all of AGI and are not the right tool for handling visual imagination or
motor learning.  But I do think they have an important role to play even so.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning knowledge

2008-02-26 Thread Ben Goertzel

 No one in AGI is aiming for common sense consciousness, are they?


The OpenCog and NM architectures are in principle supportive of this kind
of multisensory integrative consciousness, but not a lot of thought has gone
into exactly how to support it ...

In one approach, one would want to have

-- a large DB of embodied experiences (complete with the sensorial and
action data from the experiences)

-- a number of dimensional spaces, into which experiences are embedded
(a spatiotemporal region corresponds to a point in a dimensional space).
Each dimensional space would be organized according to a different principle,
e.g. melody, rhythm, overall visual similarity, similarity of shape, similarity
of color, etc.

-- an internal simulation world in which concrete remembered experiences,
blended experiences, or abstracted experiences could be enacted and
internally simulated

-- conceptual blending operations implemented on the dimensional spaces
and directly in the internal sim world

-- methods for measuring similarity, inheritance and other logical relationships
in the dimensional spaces and the internal sim world

-- methods for enacting learned procedures in the internal sim world,
and learning
new procedures based on simulating what they would do in the internal sim world


This is all do-able according to mechanisms that exist in the OpenCog and NM
designs, but it's an aspect we haven't focused on so far in NM... though we're
moving in that direction due to our work w/ embodiment in simulation
worlds...

We have built a sketchy internal sim world for NM but haven't experimented with
it much yet due to other priorities...

-- Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning knowledge

2008-02-26 Thread Ben Goertzel
  You guys seem to think this - true common sense consciousness - can all be
  cracked in a year or two. I think there's probably a lot of good reasons -
  and therefore major creative problems - why it took a billion years of
  evolution to achieve.

I'm not trying to emulate the brain.

Evolution took billions of years to NOT achieve the airplane, helicopter
or wheel ...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Ben Goertzel
Hi,

There is no good overview of SMT so far as I know, just some technical
papers... but SAT solvers are not that deep and are well reviewed in
this book...

http://www.sls-book.net/

-- Ben

On Sun, Feb 24, 2008 at 4:38 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben or anyone,

  Do you know of an explanation or reference that is a for Dummies explanation
  of how SAT (or SMT) handles computations in spaces with and 100,000
  variables and/or 10^300 states in practically computable time.

  I assume it is by focusing only on that part of the space through which
  relevant and/or relatively short inferences paths pass, or something like
  that.

  Ed Porter


  -Original Message-
  From: Ben Goertzel [mailto:[EMAIL PROTECTED]
  Sent: Wednesday, February 20, 2008 5:54 PM
  To: agi@v2.listbox.com

 Subject: Re: [agi] would anyone want to use a commonsense KB?



  And I seriously doubt that a general SMT solver +
prob. theory is going to beat a custom probabilistic logic solver.

  My feeling is that an SMT solver plus appropriate subsets of prob theory
  can be a very powerful component of a general probabilistic inference
  framework...

  I can back this up with some details but that would get too thorny
  for this list...

  ben


 ---
  agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription:
  http://www.listbox.com/member/?;


 Powered by Listbox: http://www.listbox.com

  ---
  agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: http://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] A Follow-Up Question re Vision.. P.S.

2008-02-21 Thread Ben Goertzel
Mike,

  I'm disappointed that you guys, especially Bob M,  aren/t responding to
  this. It just might be important to how the brain succeeds in perceiving
  images, while computers are such a failure.

This is all well-known information!!!

Tomasso Poggio and many others are working on making
detailed computer simulations of how the brain does vision processing.

It's a worthy line of research, but unlike you I am not impelled to consider
it AGI-critical ... anyway that line of research appears to be proceeding
steadily and successfully... though like everything in science, not as fast we
we'd like...

ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
 C is not very viable as of now.  The physics in Second Life is simply not
 *rich* enough.  SL is mainly a space for humans to socialize, so the physics
 will not get much richer in the near future -- is anyone interested in
 emulating cigarette smoke in SL?

Second Life will soon be integrating the Havok 4 physics engine.

I agree that game-world physics is not yet very realistic, but it's improving
fast, due to strong economics in the MMOG industry.

 E is also hard, but you seem to be *unaware* of its difficulty.  In fact,
 the problem with E is the same as that with AIXI -- the thoery is elegant,
 but the actual learning would take forever.  Can you explain, in broad
 terms, how the AGI is to know that water runs downhill instead of up, and
 that the moon is not blue, but a greyish color?

Water does not always run downhill, sometimes it runs uphill.

To learn commonsense information from text requires parsing the text
and mapping the parse-trees into semantic relationships, which are then
reasoned on by a logical reasoning engine.  There is nothing easy about this,
and there is a hard problem of semantic disambiguation of relationships.
Whether the disambiguation problem can be solved via statistical/inferential
integration of masses of extracted relationships, remains to be seen.

Virtual embodiment coupled with NL conversation is the approach I
currently favor, but I think that large-scale NL information extraction can
also play an important helper role.  And I think that as robotics tech
develops, it can play a big role too.

I think we can take all approaches at once within an integrative framework
like Novamente or OpenCog, but if I have to pick a single focus it will
be virtual embodiment, with the other aspects as helpers...

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Looking at the moon won't help --

of course it helps, it tells you that something odd is with the expression,
as opposed to say yellow sun ...

it might be the case that it described a
 particular appearance that only had a slight resemblance to other blue things
 (as in red hair), for example. There are some rare conditions (high
 stratospheric dust) which can make the moon look actually blue.

 In fact blue moon is generally taken to mean, metaphorically, something very
 rare (or even impossible) or the second full moon in a given month (which
 happens about every two-and-a-half years on the average).

 ask someone is of course what human kids do a lot of. An AI could do this,
 or look it up in Wikipedia, or the like. All of which are heuristics to
 reduce the ambiguity/generality in the information stream.
 The question is do enough heuristics make an autogenous AI or is there
 something more fundamental to its structure?


 On Wednesday 20 February 2008 12:27:59 pm, Ben Goertzel wrote:

  The trick to understanding once in a blue moon is to either
 
  -- look at the moon
 
  or
 
  -- ask someone
 


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
There seems to be an assumption in this thread that NLP analysis
of text is restricted to simple statistical extraction of word-sequences...

This is not the case...

If there were to be a hope for AGI based on text analysis, it would have
to be based on systems that parse linguistic expressions into logical
relationships, and combine these logical relationships via reasoning.

Assessing metaphoric versus literal mentions would be part of that reasoning.

Critiquing NLP-based AGI based on Google is a lot like critiquing robotics-
based AGI based on the Roomba.

Google is a good product implemented very scalably, but in its linguistic
sophistication, it is nowhere near the best research systems out there.
Let alone what would be possible with further research.

I stress that this is not my favored approach to AGI, but I think these
discussions based on Google are unfairly dismissing NLP-based
AGI by using Google as a straw man.

I note also that a web-surfing AGI could resolve the color of the moon
quite easily by analyzing online pictures -- though this isn't pure
text mining, it's in the same spirit...

ben



On Feb 20, 2008 2:30 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 So, looking at the moon, what color would you say it was?

 Here's what text mining might give you (Google hits):

 blue moon 11,500,000
 red moon 1,670,000
 silver moon 1,320,000
 yellow moon 712,000
 white moon 254,000
 golden moon 163,000
 orange moon 122,000
 green moon 105,000
 gray moon 9,460

 To me, the moon varies from a deep orange to brilliant white depending on
 atmospheric conditions and time of night... none of which would help me
 understand the text references.



 On Wednesday 20 February 2008 02:02:52 pm, Ben Goertzel wrote:
  On Feb 20, 2008 1:34 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
   Looking at the moon won't help --
 
  of course it helps, it tells you that something odd is with the expression,
  as opposed to say yellow sun ...
 


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
  As I am sure you are fully aware, you can't parse English without a knowledge
  of the meanings involved. (The council opposed the demonstrators because
  they (feared/advocated) violence.) So how are you going to learn meanings
  before you can parse, or how are you going to parse before you learn
  meanings? They have to be interleaved in a non-trivial way.

True indeed!

Feeding all the ambiguous interpretations of a load of sentences into
a probabilistic
logic network, and letting them get resolved by reference to each
other, is a sort of
search for the most likely solution of a huge system of simultaneous
equations ...
i.e. one needs to let each, of a huge set of ambiguities, be resolved
by the other ones...

This is not an easy problem, but it's not on the face of it unsolvable...

But I think the solution will be easier with info from direct
experience to nudge the
process in the right direction...

Ben





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
Yes, of course, but no human except an expert in lunar astronomy would have
a definitive answer to the question either

The issue at hand is really how a text-analysis based AGI would distinguish
literal from metaphoric text, and how it would understand the context in which
a statement is implicitly intended by the speaker/writinger.

These are hard problems, which are being worked on by many individuals
in the computational linguistics community.

I tend to think that introducing (real or virtual) embodiment will make the
solution of these problems easier...

-- Ben

On Wed, Feb 20, 2008 at 3:45 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Ben Goertzel [EMAIL PROTECTED] wrote:

   I note also that a web-surfing AGI could resolve the color of the moon
   quite easily by analyzing online pictures -- though this isn't pure
   text mining, it's in the same spirit...

  Not really.  You can get a better answer to what color is the moon? if you
  google what color is the moon?.  Better, but not definitive.  Even the
  photos are not in agreement.  Some photos show a mix of orange and blue.  If
  you stood on the moon, it would look black next to your feet, but white in
  contrast to the even darker sky.  During tonight's eclipse, it should look
  reddish brown.



  -- Matt Mahoney, [EMAIL PROTECTED]

  ---


 agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: http://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 OK, imagine a lifetime's experience is a billion symbol-occurences. Imagine
  you have a heuristic that takes the problem down from NP-complete (which it
  almost certainly is) to a linear system, so there is an N^3 algorithm for
  solving it. We're talking order 1e27 ops.

That's kind of specious, since modern SAT and SMT solvers can solve many
realistic instances of NP-complete problems for large n, surprisingly quickly...

and without linearizing anything...

Worst-case complexity doesn't mean much...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
Not necessarily, because

--- one can encode a subset of the rules of probability as a theory in
SMT, and use an SMT solver

-- one can use probabilities to guide the search within an SAT or SMT solver...

ben

On Wed, Feb 20, 2008 at 5:00 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 A PROBABILISTIC logic network is a lot more like a numerical problem than a
  SAT problem.



  On Wednesday 20 February 2008 04:41:51 pm, Ben Goertzel wrote:
   On Wed, Feb 20, 2008 at 4:27 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
  wrote:
OK, imagine a lifetime's experience is a billion symbol-occurences.
  Imagine
 you have a heuristic that takes the problem down from NP-complete (which
  it
 almost certainly is) to a linear system, so there is an N^3 algorithm 
 for
 solving it. We're talking order 1e27 ops.
  
   That's kind of specious, since modern SAT and SMT solvers can solve many
   realistic instances of NP-complete problems for large n, surprisingly
  quickly...
  
   and without linearizing anything...
  
   Worst-case complexity doesn't mean much...
  
   ben
  

  ---
   agi
   Archives: http://www.listbox.com/member/archive/303/=now
   RSS Feed: http://www.listbox.com/member/archive/rss/303/
   Modify Your Subscription:
  http://www.listbox.com/member/?;


  Powered by Listbox: http://www.listbox.com
  


  ---
  agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: http://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
 To get back to Ben's statement: Is the computer chip industry happy with
 contemporary SAT solvers

Well they are using them, but of course there is loads of room for improvement!!

or would a general solver that is capable of
 beating n^4 time be of some use to them?  If it would be useful, then there
 is a reason to believe that it might be useful to AGI.

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
 And I seriously doubt that a general SMT solver +
  prob. theory is going to beat a custom probabilistic logic solver.

My feeling is that an SMT solver plus appropriate subsets of prob theory
can be a very powerful component of a general probabilistic inference
framework...

I can back this up with some details but that would get too thorny
for this list...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
 Yes, I'd like to hear others' opinion on Cyc.  Personally I don't think it's
 the perceptual grounding issue -- grounding can be added incrementally
 later.  I think Cyc (the KB) is on the right track, but it doesn't have
 enough rules.

I do think it's possible a Cyc approach could work if one had a few
billion rules in there -- but so what?  (Work meaning: together with a logic
engine, serve as the seed for an AGI that really learns and understands)

It's clear that the mere millions of rules in their KB now are VASTLY
inadequate in terms of scale...

Similarly, AIXItl or related approaches
could work for AGI if one had an insanely powerful computer -- but so what

AGI approaches that could work, in principle if certain wildly infeasible
conditions were met, are not hard to come by ... ;=)

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
1)
While in my own AI projects I am currently gravitating toward an approach
involving virtual-worlds grounding, as a general rule I don't think it's obvious
that sensorimotor grounding is needed for AGI.  Certainly it's very useful, but
there is no strong argument that it's required.  The human path to AGI is not
the only one.

2)
I think that, potentially, building a KB could be part of an approach to
solving the grounding problem.  Encode some simple knowledge, instruct
the system in how to ground it in its sensorimotor experience ... then encode
some more (slightly more complex) knowledge ... etc.   I'm not saying this is
the best way but it seems a viable approach.  Thus, even if you want to take
a grounding-focused approach, it doesn't follow that fully solving the grounding
problem must precede the creation and utilization of a KB.  Rather, there could
be a solution to the grounding problem that couples a KB with other aspects.


In the NM approach, we could proceed with or without a KB, and with or
without sensorimotor grounding; and I believe NARS has that same property...

My feeling is that sensorimotor grounding is an Extremely Nice to Have
whereas a KB is just a Sort of Nice to Have, but I don't have a rigorous
demonstration of that

-- Ben G


On Feb 17, 2008 11:30 AM, Russell Wallace [EMAIL PROTECTED] wrote:
 On Feb 17, 2008 3:34 PM, Pei Wang [EMAIL PROTECTED] wrote:
  As Lukasz just pointed out, there are two topics:
  1. Cyc as an AGI project
  2. Cyc as a knowledge base useful for AGI systems.

 Well, I'm talking about Cyc (and similar systems) as useful for
 anything at all (other than experience to tell us what doesn't work
 and why not). But if it's proposed that such a system might be a
 useful knowledge base for something, then the something will have to
 have solved the grounding problem, right? And what I'm saying is, I
 wouldn't start off building a Cyc-like knowledge base and assume the
 grounding problem will be solved later. I'd start off with the
 grounding problem.


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
 I agree, that might be a viable approach. But the key phrase is
 Encode some simple knowledge, instruct the system in how to ground it
 in its sensorimotor experience - i.e. you're _not_ spending a decade
 writing a million assertions and _then_ looking for the first time at
 the grounding problem. Instead grounding is addressed, if not as step
 1, then at least as step 1.001.

Well, I find that grounding-based AGI is the kind I can think about most
easily, since that's how human intelligence works...

But I'm less confident that it's the only possible kind of AGI...

I've got to wonder if the masses of text on the Internet could, in themselves,
display a sufficient richness of patterns to obviate the need for grounding
in another domain like a physical or virtual world, or mathematics.

In other words, maybe what you think needs to be gotten from grounding
in a nonlinguistic domain, could somehow be gotten indirectly via grounding
in masses of text?

I am not confident this is feasible, nor that it isn't ... and it's
not the approach
I'm following ... but I'm uncomfortable dismissing it out of hand...

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] would anyone want to use a commonsense KB?

2008-02-17 Thread Ben Goertzel
  In other words, maybe what you think needs to be gotten from grounding
  in a nonlinguistic domain, could somehow be gotten indirectly via grounding
  in masses of text?
 
  I am not confident this is feasible, nor that it isn't ... and it's
  not the approach
  I'm following ... but I'm uncomfortable dismissing it out of hand...

 *nods* I'm comfortable dismissing it out of hand, for several reasons,
 not least of which is that we humans do not and cannot do anything
 remotely resembling what you're proposing.

I don't assume that all successful AGI's must be humanlike...

 At the end of the day, the Internet just doesn't contain most of the
 needed information. Consider the question of whether it's possible to
 learn about water flowing downhill, from Internet text alone. From
 Google (example not original to me, though I forget who first ran this
 test):

 Results 1 - 10 of about 864 for water flowing downhill
 Results 1 - 10 of about 2,130 for water flowing uphill

 The prosecution rests :)

Google is not an AGI, so I have no idea why you think this proves
anything about AGI ...

I strongly suspect there is enough information in the
text online for an AGI to learn that water flows downhill in most
circumstances, without having explicit grounding...

-- Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Visual Reasoning Part 1 The Scene

2008-02-16 Thread Ben Goertzel
 where people and animals are likely to move, objects are
 likely to move/splash,   whether a figure is threatening to reach or
 actually reaching for his gun, considering shooting or about to shoot a
 rifle, what those girls on the sofa are trying to do, what those four feet
 mean, what that man by the sea is looking at and even what mood he might be
 in, how that woman dancing is talking to the man and how he is reacting, why
 that lovers' embrace is particularly hot, why that man is a drunk,how a
 child or the cat will play that piano and even react and what noises she may
 make, what those people in the dark are looking at, and so on ...?

 One thing's for sure: you are doing a lot of visual reasoning.

 And in fact, you are doing visual reasoning all day long - reasoning -
 composing stories-in-pictures about what has just happened and is about to
 happen in front of you - where objects are going to move, or how they've
 just moved, (fallen on the floor),  how the people around you are about to
 move, how fast they will approach you and whether that car might hit you,
 what their expressions mean, and whether they are likely to be friendly or
 come on or be angry, and how fast that blood may coagulate, whether that
 light indicates someone is in a room, whether the clouds indicate rain,
 whether those people are grouping together in friendship or to fight,
 whether that shop attendant is going to take too long etc etc.

 And all day long you are in effect doing tacit physics, chemistry, biology,
 psychology, sociology about the world around you. But almost none of it
 involves formal reasoning  that any of those disciplines could explain. They
 couldn't begin to tell you for example how you work out visually how things
 and animals and people are likely to behave - how you read the emotional
 complexities of a face - how someone is straining that smile too hard. There
 are no formulae that can tell you just by looking whether that suitcase is
 likely to be too heavy.

 All of this is visual and common-sense reasoning, most of which you'd be v.
 hard put to explain verbally let alone mathematically or logically .

 And that's why you were that wonderful little scientist of legend as an
 infant, pre-verbally exploring all the physical qualiities and nature of the
 world, conducting all those physical experiments with objects and people -
 very largely without words. And actually you've never stopped being a tacit
 scientist.

 For the moment, all I want you to retain is that we are all doing a massive
 amount of tacit, visual, commonsense reasoning which we are, blithely
 unaware of..

 The supreme example of our blind prejudice here is our idea that thinking is
 primarily a medium of language. Seems obvious. And yet, if you stop to think
 about it, there is only one form of thinking that never stops from the
 moment you wake till the moment you go to sleep, and that is the
 movie-in-the round that is your consciousness. It never stops. Verbal
 thinking stops. The movie goes on and on with you continually visually
 working out what is going on or about to go on behind the scenes. And when
 your unconscious brain wants to think,it always, always thinks in movies
 never in just words. Movies are the basic medium of thought - not just
 pictures, still pictures - but continuous rolling movies, involving all the
 senses simultaneously. That's how you interpreted those photos - as
 slices-of- , stills-from-a-movie - and NOT just as pure photos.

 I merely want to suggest here - and not really argue - that all that visual
 reasoning is indeed truly visual - that we actually process all those photos
 and visuals as *whole images* and *whole image sequences* against similar
 images/sequences stored in memory,  and that we couldn't possibly process
 them as just symbols. In the next post, I will zero in on a simple proof.

 P.S. I am not attacking symbols - I am attacking the idea that we or an
 AGI can think in symbols exclusively, and that includes thinking in
 images-as-symbolic-formulae. I believe that we think - and so must an AGI -
 in symbols-AND- graphics/schemas-AND detailed images - simultaneously,
 interdependently - that we are the greatest movie on earth with
 words/symbols-AND-pictures.
  

  agi | Archives | Modify Your Subscription



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Visual Reasoning Part 1 The Scene

2008-02-16 Thread Ben Goertzel
 you. But almost none of it
  involves formal reasoning  that any of those disciplines could explain.
  They
  couldn't begin to tell you for example how you work out visually how
  things
  and animals and people are likely to behave - how you read the emotional
  complexities of a face - how someone is straining that smile too hard.
  There
  are no formulae that can tell you just by looking whether that suitcase
  is
  likely to be too heavy.
 
  All of this is visual and common-sense reasoning, most of which you'd be
  v.
  hard put to explain verbally let alone mathematically or logically .
 
  And that's why you were that wonderful little scientist of legend as an
  infant, pre-verbally exploring all the physical qualiities and nature of
  the
  world, conducting all those physical experiments with objects and
  people -
  very largely without words. And actually you've never stopped being a
  tacit
  scientist.
 
  For the moment, all I want you to retain is that we are all doing a
  massive
  amount of tacit, visual, commonsense reasoning which we are, blithely
  unaware of..
 
  The supreme example of our blind prejudice here is our idea that thinking
  is
  primarily a medium of language. Seems obvious. And yet, if you stop to
  think
  about it, there is only one form of thinking that never stops from the
  moment you wake till the moment you go to sleep, and that is the
  movie-in-the round that is your consciousness. It never stops. Verbal
  thinking stops. The movie goes on and on with you continually visually
  working out what is going on or about to go on behind the scenes. And
  when
  your unconscious brain wants to think,it always, always thinks in movies
  never in just words. Movies are the basic medium of thought - not just
  pictures, still pictures - but continuous rolling movies, involving all
  the
  senses simultaneously. That's how you interpreted those photos - as
  slices-of- , stills-from-a-movie - and NOT just as pure photos.
 
  I merely want to suggest here - and not really argue - that all that
  visual
  reasoning is indeed truly visual - that we actually process all those
  photos
  and visuals as *whole images* and *whole image sequences* against similar
  images/sequences stored in memory,  and that we couldn't possibly process
  them as just symbols. In the next post, I will zero in on a simple proof.
 
  P.S. I am not attacking symbols - I am attacking the idea that we or an
  AGI can think in symbols exclusively, and that includes thinking in
  images-as-symbolic-formulae. I believe that we think - and so must an
  AGI -
  in symbols-AND- graphics/schemas-AND detailed images - simultaneously,
  interdependently - that we are the greatest movie on earth with
  words/symbols-AND-pictures.
   
 
   agi | Archives | Modify Your Subscription
 
 
 
  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]
 
  If men cease to believe that they will one day become gods then they
  will surely become worms.
  -- Henry Miller
 
  ---
  agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription:
  http://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com
 
 
 
  --
  No virus found in this incoming message.
  Checked by AVG Free Edition.
  Version: 7.5.516 / Virus Database: 269.20.6/1282 - Release Date: 2/15/2008
  7:08 PM

 
 


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] A 1st Step To Using Your Image-ination

2008-02-15 Thread Ben Goertzel
  Perhaps it will start to give you a sense that words and indeed all symbols
  provide an extremely limited *inventory of the world* and all its infinite
  parts and behaviours.

  I welcome any impressionistic responses here, including confused questions.

I agree with the above, but I think one needs to be careful about levels of
description...

One way to define symbol is in accordance with Peircean semiotics
... and in this sense,
not every term, predicate or variable utilized in a logical reasoning engine
is actually a symbol from the standpoint of the reasoning/learning
process implemented
by the reasoning engine

Similarly, if one implements a neural net learning algorithm on a
digital computer,
the bits used to realize the software program are symbols from the
standpoint of the
programming language compiler and executor, but not from the standpoint of the
neural net itself...

LIke neurons, logical tokens may be used as components of complex patterned
arrangements, without any individual symbolic meaning.

Visual images may be represented with superhuman accuracy using logical tokens
for instance.  These tokens are symbolic at one level, but not
visually symbolic...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Applicable to Cyc, NARS, ATM others?

2008-02-14 Thread Ben Goertzel
,
  to encode a description of genes in XML, but it would be impossible to get a
  universal standard for such a description, because biologists are still
  arguing about what a gene actually is. There are several competing standards
  for describing genetic information, and the semantic divergence is an
  artifact of a real conversation among biologists. You can't get a standard
  til you have an agreement, and you can't force an agreement to exist where
  none actually does.

  Furthermore, when we see attempts to enforce semantics on human situations,
  it ends up debasing the semantics, rather then making the connection more
  informative. Social networking services like Friendster and LinkedIn assume
  that people will treat links to one another as external signals of deep
  association, so that the social mesh as represented by the software will be
  an accurate model of the real world. In fact, the concept of friend, or even
  the type and depth of connection required to say you know someone, is quite
  slippery, and as a result, links between people on Friendster have been
  drained of much of their intended meaning. Trying to express implicit and
  fuzzy relationships in ways that are explicit and sharp doesn't clarify the
  meaning, it destroys it.
  Worse is Better #

  In an echo of Richard Gabriel's Worse is Better argumment, the Semantic Web
  imagines that completeness and correctness of data exposed on the web are
  the cardinal virtues, and that any amount of implementation complexity is
  acceptable in pursuit of those virtues. The problem is that the more
  semantic consistency required by a standard, the sharper the tradeoff
  between complexity and scale. It's easy to get broad agreement in a narrow
  group of users, or vice-versa, but not both.

  The systems that have succeeded at scale have made simple implementation the
  core virtue, up the stack from Ethernet over Token Ring to the web over
  gopher and WAIS. The most widely adopted digital descriptor in history, the
  URL, regards semantics as a side conversation between consenting adults, and
  makes no requirements in this regard whatsoever: sports.yahoo.com/nfl/ is a
  valid URL, but so is 12.0.0.1/ftrjjk.ppq. The fact that a URL itself doesn't
  have to mean anything is essential -- the Web succeeded in part because it
  does not try to make any assertions about the meaning of the documents it
  contained, only about their location.

  There is a list of technologies that are actually political philosophy
  masquerading as code, a list that includes Xanadu, Freenet, and now the
  Semantic Web. The Semantic Web's philosophical argument -- the world should
  make more sense than it does -- is hard to argue with. The Semantic Web,
  with its neat ontologies and its syllogistic logic, is a nice vision.
  However, like many visions that project future benefits but ignore present
  costs, it requires too much coordination and too much energy to effect in
  the real world, where deductive logic is less effective and shared worldview
  is harder to create than we often want to admit.

  Much of the proposed value of the Semantic Web is coming, but it is not
  coming because of the Semantic Web. The amount of meta-data we generate is
  increasing dramatically, and it is being exposed for consumption by machines
  as well as, or instead of, people. But it is being designed a bit at a time,
  out of self-interest and without regard for global ontology. It is also
  being adopted piecemeal, and it will bring with it with all the
  incompatibilities and complexities that implies. There are significant
  disadvantages to this process relative to the shining vision of the Semantic
  Web, but the big advantage of this bottom-up design and adoption is that it
  is actually working now.


  ---
  agi
  Archives: http://www.listbox.com/member/archive/303/=now
  RSS Feed: http://www.listbox.com/member/archive/rss/303/
  Modify Your Subscription: http://www.listbox.com/member/?;
  Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] reasoning knowledge.. p.s.

2008-02-14 Thread Ben Goertzel
Hi Mike,


  P.S. I also came across this lesson that AGI forecasting must stop (I used
  to make similar mistakes elsewhere).

  We've been at it since mid-1998, and we estimate that within 1-3 years from
  the time I'm writing this (March 2001), we will complete the creation of a
  program that can hold highly intelligent (though not necessarily fully
  human-like) English conversations, talking to us about its own creative
  discoveries and ideas regarding the digital data that is its worldOf
  course, 1-4 years from real AI and 1-3 years more to fully self-modifying
  AI are very gutsy claims, similar to other claims that have been made (and
  not fulfilled) throughout the history of AI. But we believe that, due to the
  combination of advances in computer hardware and software with advances in
  various aspects of cognitive science, real AI really now is possible - and
  that we know how to achieve it, and are substantially advanced along the
  path to this goal.
  http://www.goertzel.org/books/DIExcerpts.htm


I'd like to note that at that time I was working with a team of about
**40** full-time
RD staff focused on nothing but AGI.

On April 1, 2001, the company hosting that team (Webmind Inc.) shut its doors.

Who knows what we might have achieved had that level of dedication actually
continued for 4-7 more years?

Our codebase had some problems, and some of our ideas at that point were
inadequately specified.  But we were moving in the right direction,
and my progress
since that point has been significantly slower due to having less than 1/10 the
team-size devoted to AGI.

The real stupidity underlying that prediction I made, in early 2001,
was my naivete
in not realizing how suddenly the dot-com bubble was going to burst.
The prediction
was conditional on the Webmind AI team continuing in the form it existed at that
time; but as it happened, the creation and maintenance of that sort of
AGI RD team
was an epiphenomenon of the temporary dot-com bubble.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi] Reading on automatic programming?

2008-02-05 Thread Ben Goertzel
  8.  Generative Programming, Methods, Tools, and Applications (2000) -
  Krzysztof Czarnecki, Ulrich W. Eisenecker

The above is a very good book, IMO ... not directly AGI-related, but
damn insightful re generative software design...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=94114734-b6bbb4


[agi] A little more technical information about OpenCog

2008-02-03 Thread Ben Goertzel
I got a free hour this afternoon, and posted a little more technical information
about our plans for OpenCog, here:

http://opencog.org/wiki/OpenCog_Technical_Information

Nothing surprising or dramatic, mostly just a clear online explanation of our
basic plans, as have already been discussed in various emails...

-- Ben G

p.s. for those who don't know what opencog is, see

http://opencog.org/wiki/Main_Page



--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=93259758-229b0c


Re: [agi] Emergent languages Org

2008-02-03 Thread Ben Goertzel
 IMO language is integral to strong AI in the same way that logic is
 integral to mathematics.

The counterargument is that no one has yet made an AI virtual chimp ...
and nearly all of the human brain is the same as that of a chimp ...

I think that language-centric approaches are viable, but I wouldn't dismiss
sensorimotor-centric approaches to AGI either ... looking at evolutionary
history, it seems that ONE way to achieve linguistic functionality is via
some relatively minor tweaks on a prelinguistic mind tuned for flexible
sensorimotor learning... (tho I don't believe this is the only way, unlike some)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=93270421-5eade1


Re: [agi] Emergent languages Org

2008-02-03 Thread Ben Goertzel
Hi,

 I'd be interested in what you see as the path from SLAM to AGI.

 To me, language generation seems obvious: 1. Make a language and
 algorithms for generating stuff in that language. 2. Implement pattern
 recognition and abstraction (imo not _that_ hard if you've designed
 your language well) 3. Ground the language through real-world
 sensorimotor experiences so the utterances mirror the agents'
 experiences.

 What do you see as the equivalent path from mapping, navigation and SLAM?

Mapping, navigation and SLAM are not the key point -- embodied learning is
the point ... these are just prerequisites...

The robotics path to AI is a lot like the evolutionary path to natural
intelligence...

Create a system that learns to achieve simple sensorimotor goals in
its environment...
then move on to social goals... and language eventually emerges as an aspect of
social interaction...

Rather than language being a separate thing that is then grounded in experience,
make language **emerge** from nonlinguistic interactions ... as it
happened historically

See Mithen's The Singing Neanderthals for ideas about how language may have
emerged from prelinguistic sound-making ... and a host of researchers for
ideas about how language may have emerged from gesture (I have a paper
touching on the latter at novamente.net/papers )

-- Ben G



Re: Singularity Outcomes [WAS Re: [agi] OpenMind, MindPixel founders both commit suicide]

2008-01-27 Thread Ben Goertzel
  Google
 already knows more than any human,

This is only true, of course, for specific interpretations of the word
"know" ... and NOT for the standard ones...

and can retrieve the information faster,
 but it can't launch a singularity.

Because, among other reasons, it is not an intelligence, but only
a very powerful tool for intelligences to use...

 When your computer can write and debug
 software faster and more accurately than you can, then you should worry.

A tool that could generate computer code from formal specifications
would be a wonderful thing, but not an autonomous intelligence.

A program that creates its own questions based on its own goals, or
creates its own program specifications based on its own goals, is
a quite different thing from a tool.

-- Ben G



Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
I think a more precise way to phrase what they showed,
philosophically, would be like this:


Very likely, to the extent that flies are conscious, they have a
SUBJECTIVE FEELING of possessing free will.


In other words, flies seem to possess the same kind of internal
spontaneity-generation that we possess, and that we associate with our
subjectively-experienced feeling of free will.

-- Ben G

On Jan 24, 2008 7:57 AM, Robert Wensman [EMAIL PROTECTED] wrote:


 1. Brembs and his colleagues reasoned that if fruit flies (Drosophila
 melanogaster) were simply reactive robots entirely determined by their
 environment, in completely featureless rooms they should move completely
 randomly.
 Yes, but no one has ever argued that a fly is a stateless machine. It
 seems like their argument ignores the concept of internal state. If they
 went through all this trouble just to prove that the brain of the flies has
 an internal state, it seems they wasted a lot of time on something trivial.

 I cannot see how the concept of free will has got anything to do with
 this.

 /R
   



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
 In other words, flies seem to possess the same kind of internal
 spontaneity-generation that we possess, and that we associate with our
 subjectively-experienced feeling of free will.

 -- Ben G

To clarify further:

Suppose you are told to sit still for a while, and then move your hand
suddenly at some arbitrary moment.  The choice of the moment is a kind
of spontaneous action on your part -- not determined by external
reality in any obvious way.  You just suddenly decide to do it, and
then do it.

This same kind of spontaneous action-choice seems to be made by flies.

It's not exactly the same as a reflective, deliberative choice.  We
humans seem to couple the two together: reflective deliberation and
spontaneous choice.  But I think they're different things.

However, I don't think that this sort of spontaneous choice is
necessarily free in any profound sense ... rather, Libet's classic
work suggests the opposite, as summarized e.g. in

http://www.consciousentities.com/experiments.htm#decisions

where it says


Libet asked his experimental subjects to move one hand at an arbitrary
moment decided by them, and to report when they made the decision
(they timed the decision by noticing the position of a dot circling a
clock face).  At the same time the electrical activity of their brain
was monitored. Now it had already been established by much earlier
research that consciously-chosen actions are preceded by a pattern of
activity known as a Readiness Potential (or RP).  The surprising
result was that the reported time of each decision was consistently a
short period (some tenths of a second) after the RP appeared. This
seems to prove that the supposedly conscious decisions had actually
been determined unconsciously beforehand. This seems to lend strong
experimental support both to the idea that free will is an illusion
(at most, it would seem, there is scope for a last-minute veto by the
conscious mind - a possibility which has been much debated since)


-- Ben G



Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
 The question vis-a-vis the fly - or any animal - is whether the *whole*
 course of action of the fly in that experiment can be accounted for by one -
 or a set of - programmed routines or programs period. My impression -
 without having studied the experiment in detail - is that it weighs against
 that conclusion, without being the final word.

Definitely not ... there is vast evidence from the theory of complex,
deterministic dynamical systems that this sort of apparently spontaneous
behavior can emerge from simple underlying deterministic dynamics...
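
To make that concrete, here's a tiny sketch of my own (plain Python, standard
library only -- not anything from the thread): the logistic map is a one-line
deterministic rule, yet near r = 4 two almost-identical starting points
rapidly diverge into trajectories that look entirely spontaneous.

# Logistic map: x' = r * x * (1 - x).  Fully deterministic, yet chaotic
# for r near 4 -- a minimal example of "spontaneous-looking" dynamics.
def logistic_trajectory(x0, r=3.99, steps=20):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # an almost identical starting point
for t, (xa, xb) in enumerate(zip(a, b)):
    print(t, round(xa, 6), round(xb, 6), round(abs(xa - xb), 6))

After a dozen or so steps the two runs bear no resemblance to each other,
even though every step followed the same simple rule.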

ben g



Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
If you're asking whether there are accurate complex-systems simulations
of whole animals, there aren't yet ...

At present, we lack instrumentation capable of gathering detailed data about
how animals work; and we lack computers powerful enough to run such
simulations (though some supercomputers may be on the verge)

Theory suggests that such simulations will be possible, but it hasn't been
proved conclusively ... so I guess you can still maintain some kind of
vitalism for a couple of decades or so if you really want to ;-)

ben

On Jan 24, 2008 11:27 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 I take your general point re how complex systems can produce apparently
 spontaneous behaviour.

 But to what actual courses of action of actual animals (such as the fly
 here) or humans  has this theory been successfully applied?

 Ben:  The question vis-a-vis the fly - or any animal - is whether the

 *whole*
  course of action of the fly in that experiment can be accounted for by
  one -
  or a set of - programmed routines or programs period. My impression -
  without having studied the experiment in detail - is that it weighs
  against
  that conclusion, without being the final word.
 
  Definitely not ... there is vast evidence from the theory of complex,
  deterministic
  dynamical systems that this sort of apparently spontaneous behavior can
  emerge from simple underlying deterministic dynamics...
 






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Study hints that fruit flies have free will

2008-01-24 Thread Ben Goertzel
 Possible major misunderstanding : I am not in any shape or form a vitalist.
 My argument is solely about whether a thinking machine (brain or computer)
 has to be instructed to think rigidly or freely,  with or without prior
 rules -   and whether, with the class of problems that come under AGI,
 programming is possible at all.

I have no idea what you mean by "programming" ...

Anything that happens on a digital computer is controlled by some program ...
hence "programmed" ...

So, the fundamental question seems to be whether entities like flies
are digitally simulable or not...

I think so, but am open to the possibility that quantum-gravity weirdness
renders this false...

ben g



Re: [agi] SAT, SMT and AGI

2008-01-21 Thread Ben Goertzel
 As far as I know there is little or no work done yet to integrate 
 probabilistic
 reasoning with these solvers and it will probably not be easy to do it and
 keep things efficient.

I don't think it will be easy, but what's intriguing is that it seems
like it might be feasible-though-difficult ...

-- Ben



KILLTHREAD -- Re: [agi] Logical Satisfiability

2008-01-20 Thread Ben Goertzel
Hi all,

I'd like to kill this thread, because not only is it off-topic, but it seems not
to be going anywhere remotely insightful or progressive.

Of course a polynomial-time solution to the boolean satisfiability
problem could potentially have impact on AGI (though it wouldn't
necessarily do so -- this would depend on many things, e.g. the
average-case time of the algorithm, the size of the constants in front
of the terms of the polynomial, etc.).

However, no one has such a solution yet, and no one is putting forth
any detailed ideas about such a solution, in this thread.

There are lots of scientific breakthroughs that could impact AGI --
for instance, faster semiconductors, nanotech-based computer
memories, accelerated Monte Carlo integration routines, whatever --
but they're not really on-topic for the AGI list unless being discussed
specifically in the context of their AGI implications.

So, I wouldn't say discussions of P=NP are universally verboten
for this list; but unless there are specific AGI implications, let's
leave that sorta discussion for elsewhere.

Luke, I've also had some fun "proofs" of P=NP, and my best one only
lasted about 3 days ... but that is because I thought of it while
backpacking ... and it only evaporated after I wrote it down when
I got back from the wilds and checked the details ;-)

My office-mate in grad school proved P=NP and mailed the proof
to 200 professors worldwide.  He mailed a retraction 2 days later.
I believe he thought he had reduced it to linear programming
somehow.

Thanks
Ben Goertzel
List Owner



On Jan 20, 2008 1:51 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 Jim,

 I'm sure most people here don't have any difficulty understanding what
 you are talking about. You seem to lack solid understanding of these
 basic issues however. Please stop this off-topic discussion, I'm sure
 you can find somewhere else to discuss computational complexity. Read
 a good textbook, if you are sincerely interested in these things.


 On Jan 20, 2008 9:21 PM, Jim Bromer [EMAIL PROTECTED] wrote:
 
  I had no idea what you were talking about until I read
  Matt Mahoney's remarks.  I do not understand why people have so much trouble
  reading my messages but it is not entirely my fault.  I may have
  misunderstood something that I read, or you may have misinterpreted
  something that I was saying.  Or even both!  But if you want to continue
  this discussion feel free.
 
  Robin said: As for your problem involving SAT, it's not applicable to P-NP
  because they are classes of decision problems
  (http://en.wikipedia.org/wiki/Decision_problem), which means problems that
  can be answered yes or no.
 
  Wikipedia: http://en.wikipedia.org/wiki/Boolean_satisfiability_problem
  In complexity theory, the Boolean satisfiability problem (SAT) is a decision
  problem, whose instance is a Boolean expression written using only AND, OR,
  NOT, variables, and parentheses. The question is: given the expression, is
  there some assignment of TRUE and FALSE values to the variables that will
  make the entire expression true?
 
 



 --
 Vladimir Nesov  mailto:[EMAIL PROTECTED]





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: KILLTHREAD -- Re: [agi] Logical Satisfiability

2008-01-20 Thread Ben Goertzel
On Jan 20, 2008 2:34 PM, Jim Bromer [EMAIL PROTECTED] wrote:
 I am disappointed because the question of how a polynomial time solution of
 logical satisfiability might affect agi is very important to me.

Well, feel free to start a new thread on that topic, then ;-)

In fact, I will do so: I will post a message on SAT, SMT and AGI

-- Ben G



[agi] SAT, SMT and AGI

2008-01-20 Thread Ben Goertzel
I wrote


On Jan 20, 2008 2:34 PM, Jim Bromer [EMAIL PROTECTED] wrote:
 I am disappointed because the question of how a polynomial time solution of
 logical satisfiability might affect agi is very important to me.

Well, feel free to start a new thread on that topic, then ;-)

In fact, I will do so: I will post a message on SAT, SMT and AGI


And here it is:

However, I would rephrase the question as: How would a pragmatically useful
polynomial time solution of logical satisfiability affect AGI?

In fact, it's interesting to talk about how existing SAT and SMT solvers

http://en.wikipedia.org/wiki/Satisfiability_modulo_theories

-- which are often quite effective on surprisingly large real-world problems,
in spite of being exponential time in the worst case -- affect AGI.
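
For readers following along, here is what the underlying decision problem
looks like -- a toy brute-force checker of my own, NOT how real solvers work:
it tries all 2^n assignments, which is exactly the exponential worst case
that practical solvers manage to sidestep on many instances.

from itertools import product

# A CNF formula as a list of clauses; each clause is a list of signed
# variable indices, e.g. [1, -2] means (x1 OR NOT x2).
def brute_force_sat(num_vars, clauses):
    for bits in product([False, True], repeat=num_vars):   # 2^n assignments
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True, bits          # "yes", plus a witness assignment
    return False, None                 # "no" -- the formula is unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2)
print(brute_force_sat(2, [[1, 2], [-1, 2], [1, -2]]))   # (True, (True, True))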

SMT in particular seems to have deep potential applicability.

It would seem to me that a practical SMT solver handling quantifier logic
would be a more useful research goal than proving P=NP.

In AGI, we don't care that much about worst-case complexity, nor even
necessarily about average-case complexity for very large N.  We care mainly
about average-case complexity for realistic N and for the specific probability
distribution of problem-cases confronted in embodied experience.

Most work with SMT solvers seems to have to do with theories like arithmetic...
simple stuff.  But what if the theory involved is the (large) set of predicates
probabilistically held to be true by an AGI system?  How effective are current
SMT solvers then?

If they are effective, then SMT could prove an interesting tool within an AGI
inference engine... a way of relatively rapidly resolving complex queries...
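
As a concrete -- and deliberately hypothetical -- illustration of the kind of
query I mean, here is how a mixed Boolean/arithmetic question looks in the
Python bindings of the Z3 SMT solver (my sketch; it assumes the z3-solver
package is available, and it is of course not part of any existing AGI
inference engine):

from z3 import Int, Bool, Solver, Implies, sat   # assumes: pip install z3-solver

# Boolean structure plus the background theory of integer arithmetic,
# decided jointly by the solver -- satisfiability modulo a theory.
x, y = Int('x'), Int('y')
p = Bool('p')
s = Solver()
s.add(x + y == 10, x > y, Implies(p, y >= 3), p)
if s.check() == sat:
    print(s.model())    # e.g. [p = True, y = 3, x = 7]

Whether this stays fast when the theory is a huge store of probabilistically
held predicates, rather than plain arithmetic, is exactly the open question.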

-- Ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: [agi] SAT, SMT and AGI

2008-01-20 Thread Ben Goertzel
 So, people do have a practically useful way of cheating problems in NP
 now. Problem with AGI is, we don't know how to program it even given
 computers with infinite computational power.

Well, that is wrong IMO ... AIXI and the Godel Machine are provably correct
ways to achieve AGI with infinite (or even huge finite) computational power.

Furthermore, if we assume humongous computational power, the Novamente
design becomes a lot simpler ... almost but not quite as trivial as AIXI or the
Godel machine...

The whole AGI problem is about coping with seriously bounded computational
resources ... as has been pointed out on this list so many times ... and as
Eric Baum argues quite elegantly (among other points) in What Is Thought?

ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
Well, Lenat survives...

But he paid people to build his database (Cyc)

What's depressing is trying to get folks to build a commonsense KB for
free ... then you get confronted with the absolute stupidity of what they
enter, and the poverty and repetitiveness of their senses of humor... ;-p

ben

On Jan 19, 2008 4:42 PM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 http://www.wired.com/techbiz/people/magazine/16-02/ff_aimystery?currentPage=all

 I guess the moral here is "Stay away from attempts to hand-program a
 database of common-sense assertions."

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: [agi] OpenMind, MindPixel founders both commit suicide

2008-01-19 Thread Ben Goertzel
On Jan 19, 2008 5:53 PM, a [EMAIL PROTECTED] wrote:
 This thread has nothing to do with artificial general intelligence -
 please close this thread. Thanks

IMO, this thread is close enough to AGI to be list-worthy.

It is certainly true that knowledge-entry is not my preferred
approach to AGI ... I think that it is at best peripheral to any
really serious AGI approach.

However, some serious AGI thinkers, such as Doug Lenat,
believe otherwise.

And, this list is about AGI in general, not about any specific
approaches to AGI.

So, the thread can stay...

-- Ben Goertzel, list owner




 Bob Mottram wrote:
  Quality is an issue, but it's really all about volume.  Provided that
  you have enough volume the signal stands out from the noise.
 
  The solution is probably to make the knowledge capture into a game or
  something that people will do as entertainment.  Possibly the Second
  Life approach will provide a new avenue for acquiring commonsense.
 
 
  On 19/01/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  What's depressing is trying to get folks to build a commonsense KB for
  free ... then you
  get confronted with the absolute stupidity of what they enter, and the
  poverty and
  repetitiveness of their senses of humor... ;-p
 
 
 
 





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


We are on the edge of change comparable to the rise of human life on Earth.
-- Vernor Vinge



Re: [agi] Glocal knowledge representation?

2007-03-26 Thread Ben Goertzel


Yes, Google reveals that the term "glocal" has been used a few times
in the context of social activism.

I am writing a conference paper on knowledge representation and am
thinking of introducing it as a buzzword for the type of mixed global/local
knowledge rep used in Novamente, which I also hypothesize is used
in the brain...

thx
ben


On 3/25/07, Ben Goertzel [EMAIL PROTECTED] wrote:


Hi,

Does anyone know if the term "glocal" (meaning global/local) has
previously been used in the context of
AI knowledge representation?


While not recognized as a formal term of knowledge representation,
"glocal" has strong connotations of "think globally, act locally", which
is a fairly deep principle of effective interaction for any agent with
its environment.

- Jef






[agi] Re: Why is progress toward AGI so slow?

2007-03-26 Thread Ben Goertzel




The project was founded officially in 2001 but for much of the time 
between 2001 and 2004 there was NOBODY working on it full time.  All 
of us founders had day jobs, either actual jobs or AI consulting 
jobs, needed to pay the bills.


For the last couple years there were 2-3 people working on it 
full-time.  Now there are 3 people working on it full-time, and a 
fourth just came on board, but hasn't come up to speed yet.  Plus 
much-valued part-time efforts from a few others.




Actually, I realized I phrased things a little too pessimistically in 
the above.


During the period 2001-2004, we had a number of staff working on 
consulting projects that used the Novamente core system to do various 
practical narrow-AI things like natural language processing and 
bioinformatic data analysis.  This helped us build out the core system 
and refine various AI algorithms used within the Novamente AGI system. 

But still, it's different than having staff actually working on building 
out Novamente **as an AGI system**.   The work done on these 
Novamente-core-utilizing consulting projects did get us a certain 
distance toward AGI, but in the end we found there was only so much 
mileage we could get this way, because the requirements of the 
consulting projects were too different from the requirements of 
Novamente as an AGI system.


Which led us to our current direction ... which I will discuss in 
another email a little later ;-)


ben





[agi] New business direction for Novamente LLC

2007-03-26 Thread Ben Goertzel


Hi all,

As there has been a lot of discussion of the Novamente AI system on this
list, it seems apropos to announce here that Novamente LLC has decided upon a
significant shift in business direction/approach.

If you're curious, a pertinent company blog entry is here:

http://www.novamente.net/blog/

-- Ben




[agi] Why is progress toward AGI so slow?

2007-03-26 Thread Ben Goertzel


Hi,
- what is the REAL reason highly talented AGI research groups keep 
pushing their deadlines back. E.g. Ben's announced imminent breakthrus 
several times ... the one fact he mentioned a few years back that made 
sense is the huge parameter space/degrees of freedom (you have at 
least 5 to 10 tunable parameters per module) but I wonder about the 
others he hasn't mentioned (barring the excuses) and even more so for 
other projects - newcomers might learn from concentrating their 
thinking on AGI aspects where current projects are weak.


Well, the real reason the Novamente Cognition Engine is taking so damn 
long to develop is that the design is big and the staff working on it 
are few.


The project was founded officially in 2001 but for much of the time 
between 2001 and 2004 there was NOBODY working on it full time.  All of 
us founders had day jobs, either actual jobs or AI consulting jobs, 
needed to pay the bills.


For the last couple years there were 2-3 people working on it 
full-time.  Now there are 3 people working on it full-time, and a fourth 
just came on board, but hasn't come up to speed yet.  Plus much-valued 
part-time efforts from a few others.


But 3-5 full-time people is not enough to make extremely rapid progress 
on a large-scale software system like Novamente.  We need at least 
double that, just counting core AI stuff (not stuff like prettying up 
the AGISim sim world, system administration, etc.).


Now you may say: "OK, that proves your AI design is too complex, so go
make a simpler design that can be completed by a couple of good computer
scientists in a year or so."  Well, I've tried.  Novamente is the
simplest thing I could come up with that has a prayer of working on
networks of contemporary computers.

Of the 10 or so major topics covered in the Novamente design document, 
there are 3-4 that haven't even been touched yet in terms of 
implementation, and a couple others that have only been handled on a 
prototype level.  And even the aspects that have been implemented still 
have known shortcomings (relative to what the design specifies), that 
are being filled in.


Maybe, if we got the staff we need, we would then run into some OTHER 
obstacle.  (Like the "Oops, we built this whole huge cognitive system,
and the theory says it should learn stuff, but in fact it's a complete
moron" obstacle ;-)  I can't rule that out, though I've certainly done a 
lot of theory to minimize the odds of it happening.  But anyway, what 
I've said above is the actual reason why our progress has been slow.


-- Ben G




[agi] Glocal knowledge representation?

2007-03-25 Thread Ben Goertzel


Hi,

Does anyone know if the term "glocal" (meaning global/local) has previously
been used in the context of AI knowledge representation?

thx
Ben G



Re: [agi] My proposal for an AGI agenda

2007-03-24 Thread Ben Goertzel




If we're talking language for AGI _content_ (as opposed to framework 
for which Ben Goertzel has made a fair case for even C++), then more 
like removal of features. Because for AGI content, it's not what you 
can do in principle, it's what you can be _casual_ with.


Correct, this is an important distinction.

One thing that's nice about LISP -- at first glance -- is that it looks 
like it can be a language for both AGI content and AGI framework.


But I believe this is somewhat deceptive.  In principle LISP could be OK 
for AGI framework (though I'm not convinced it's there yet ... though 
Allegro LISP arguably comes close...), but I don't think it's right for 
AGI content. 

On the other hand, you could build an AGI-content language by 
**extending** LISP ... whereas if your framework language is C++ you 
need to make a content language totally separately.


In fact our content language, Combo, looks a bit like LISP, but with 
other features like

-- explicit higher-order typing [not yet implemented, but needed soon]
-- a particular kind of uncertain truth values
-- probabilistic tools for dealing with statements based on their truth 
values


Currently this content language is used almost only for internal 
AI-generated programs, and has an awkward textual syntax, but we intend 
to improve the syntax so that in some cases we can supply the system 
with human-programmed modules to use as a starting-point for learning.
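
Since Combo's actual syntax isn't shown here, a purely hypothetical sketch --
invented for this post, in Python, and emphatically NOT real Combo code or
Novamente's actual truth-value formulas -- of what "LISP-like expressions
carrying uncertain truth values" could look like:

# Each atom pairs an s-expression with a (strength, confidence) truth value;
# one assumed conjunction rule: strengths multiply, confidence takes the min.
def and_tv(tv1, tv2):
    (s1, c1), (s2, c2) = tv1, tv2
    return (s1 * s2, min(c1, c2))

likes   = (("likes", "Bob", "dogs"), (0.9, 0.7))
has_dog = (("has", "Bob", "dog"),    (0.8, 0.6))

expr = ("and", likes[0], has_dog[0])
# strength 0.9 * 0.8 = 0.72, confidence min(0.7, 0.6) = 0.6
print(expr, and_tv(likes[1], has_dog[1]))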


Anyway, the plan is that initial NM self-modification will take the form 
of NM modifying its **cognitive control scripts** that are written in 
the internal Combo language ... modification of the underlying C++ code 
is going to be a later-phase thing.  [This also enforces some basic, 
non-absolute AGI safety in that the C++ layer provides certain 
constraints.]


Ben



Re: [agi] My proposal for an AGI agenda

2007-03-24 Thread Ben Goertzel



Is the research on AI full of Math because there are many Math
professors that publish in the field or is the problem really Math
related?  Many PhDs in computer science are Math oriented exactly
because the professors that deem their work worth a PhD are either
Mathematicians or their sponsoring professor was.


I don't know of any math profs who publish in artificial intelligence,
though no doubt there are a few that do.  Actually, thinking about it now,
I can think of a few.


My PhD is in math and I used to be a math prof, but I have found no 
opportunity yet to use really advanced math in AI...


Advanced undergraduate level math is as far as it's gone so far ... 
the most advanced stuff has been in Novamente's probabilistic reasoning 
component, but there's nothing here really going beyond undergrad 
probability, stats, and vector calculus...


No algebraic geometry, no modular forms or inaccessible cardinals of 
the mind, etc. ;-)


Ben G



Re: [agi] My proposal for an AGI agenda

2007-03-24 Thread Ben Goertzel



The fact that C, C++ and I would presume C# has pointers, precludes any of
these from my list up front.  There can be no boundary checks at either
compile or execution time so this feature alone is incompatible with a
higher level language IMO. 


FYI, C# has no pointers generically, but you can create "unsafe" code
blocks that can contain pointers inside them.

Your language may well be a great one, but personally, I feel like your
criticisms of other languages aren't really adequately informed...

-- Ben



Re: Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda]

2007-03-24 Thread Ben Goertzel





As for all the other talk on this list, recently, about programming 
languages and the need for math, etc., I find myself amused by the 
irrelevance of most of it:  when someone gets a clue about what they 
are trying to build, and why, the question of what language (or 
environment) they need to use will answer itself.




Yes ... and this is the same thing those of us actually working on AGI 
projects have been saying.


My experience is:
Once you have an AGI design, the choice of prog. language becomes a 
pragmatic rather than philosophical/emotional choice.  Even if none of 
the existing languages matches one's desires perfectly, one chooses a 
decent option and gets on with it.


-- Ben



Re: Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda]

2007-03-24 Thread Ben Goertzel

Richard Loosemore wrote:

Ben Goertzel wrote:

Richard Loosemore wrote:


As for all the other talk on this list, recently, about programming 
languages and the need for math, etc., I find myself amused by the 
irrelevance of most of it:  when someone gets a clue about what they 
are trying to build, and why, the question of what language (or 
environment) they need to use will answer itself.




Yes ... and this is the same thing those of us actually working on 
AGI projects have been saying.


You mean to imply I am *not* one of those actually working on an AGI 
project?




No, sorry for the inaccurate phrasing...

ben



Re: Environments and Languages for AGI [WAS Re: [agi] My proposal for an AGI agenda]

2007-03-24 Thread Ben Goertzel

rooftop8000 wrote:
one chooses a 
decent option and gets on with it.


-- Ben



That's exactly the problem ... everyone just builds their
own ideas and doesn't consider how their ideas and code could
(later) be used by other people

  


I'm not at all sure something like AGI is well-suited to a large-scale, 
open-source project.


Linux, for instance, is based on well-known ideas (Unix) and consists of
a lot of loosely-related parts; it's well-suited to construction by a large
pool of part-timers.

An AGI design like Novamente is based on a lot of very obscure ideas
that are quite hard to understand (due to being at the research frontier),
and consists of a set of parts that interdepend very intricately and
subtly.  It really needs to be built by a small team with a deep common
understanding and close interaction.


I guess most other AGI designs are the same way.

Ben



Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Ben Goertzel

Mark Waser wrote:
 IMO, creating an AGI isn't really a programming problem.  The hard 
part is knowing exactly what to program. 
 
Which is why it turns into a programming problem . . . .  I 
started out as a biochemist studying enzyme kinetics.  The only 
reasonable way to get a reasonable turn-around time on testing a new 
fancy formula was to update the simulation program myself. 
 
If the tools were there (i.e. Loosemore's environment), it 
wouldn't be a programming problem.  Since they aren't, the programming 
turns into a/the real problem.:-)


Well, programming AGI takes more time and effort now than it would with 
more appropriate programming tools ...


But it seems like what Loosemore wants is an environment that will help 
him **discover** the right AGI design ... this is a different 
matter ...  Or am I misunderstanding?


-- Ben



[agi] Why C++ ?

2007-03-23 Thread Ben Goertzel

Samantha Atkins wrote:

Ben Goertzel wrote:


Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.
I am  curious as to how C++ helps scalability.  What sorts of 
scalability?  Along what dimensions?  There are ways that C++ does 
not scale so well like across large project sizes or in terms of 
maintainability.   It also doesn't scale that well in terms of  space 
requirements if the  class hierarchy gets  too deep or uses much  
multiple inheritance  of  non-mixin classes.   It also doesn't scale 
well in large team development.  So I am curious what you mean.




I mean that Novamente needs to manage large amounts of data in heap 
memory, which needs to be very frequently garbage collected according to 
complex patterns.


We are doing probabilistic logical inference IN REAL TIME, for real time 
embodied agent control.  This is pretty intense.  A generic GC like 
exists in LISP or Java won't do.


Aside from C++, another option might have been to use LISP and write our 
own custom garbage collector for LISP.  Or, to go with C# and then use 
unsafe code blocks for the stuff requiring intensive memory management.


Additionally, we need real-time, very fast coordinated usage of multiple 
processors in an SMP environment.  Java, for one example, is really slow 
at context switching between different threads.


Finally, we need rapid distributed processing, meaning that we need to 
rapidly get data out of complex data structures in memory and into 
serialized bit streams (and then back into complex data structures at 
the other end).  This means we can't use languages in which object 
serialization is a slow process with limited 
customizability-for-efficiency.


When you start trying to do complex learning in real time in a 
distributed multiprocessor context, you quickly realize that 
C-derivative languages are the only viable option.   Being mostly a 
Linux shop we didn't really consider C# (plus back when we started, .Net 
was a lot less far along, and Mono totally sucked).


C++ with heavy use of STL and Boost is a different language than the C++ 
we old-timers got used to back in the 90's.   It's still a large and 
cumbersome language but it's quite possible to use it elegantly and 
intelligently.  I am not such a whiz myself, but fortunately some of our 
team members are.


-- Ben G





[agi] Re: Why C++ ?

2007-03-23 Thread Ben Goertzel


BTW I think I have answered that question at least 5 times on this list 
or on the SL4 list.  I'm almost motivated to make a Novamente FAQ to 
avoid this sort of repetition!!! 


ben










Re: [agi] My proposal for an AGI agenda

2007-03-23 Thread Ben Goertzel

David Clark wrote:

I appreciate the amount of effort you made in replying to my email.
 
Most of your questions would be answered if you read the documentation 
on my site.  The last time I looked, LISP had no built-in database.


Allegro Lisp has a very nice (easy to use, scalable, damn fast) built-in 
database, FYI



ben



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-21 Thread Ben Goertzel

Chuck Esterbrook wrote:

On 3/20/07, Ben Goertzel [EMAIL PROTECTED] wrote:

I would certainly expect that a mature Novamente system would be able to
easily solve this kind of
invariant recognition problem.  However, just because a human toddler
can solve this sort of problem easily, doesn't
mean a toddler level AGI should be able to solve it equally easily.
Different specific modalities will
come more naturally to different intelligences, and humans are
particularly visual in focus...


I generally agree, but wanted to ask this: Shouldn't AGIs be visual in
focus because we are? We want AGIs to help us with various tasks many
of which will require looking at diagrams, illustrations and pictures.
And that's just the static material.


Eventually, yeah, a useful AGI should be able to process visual info,
just like it should be able to understand human language.

But I feel that the strong focus on vision that characterizes much
AI work today (especially AI work with a neuroscience foundation)
generally tends to lead in the wrong direction, because vision
processing in humans is carried out largely by fairly specialized
structures and processes (albeit in combination with more general-
purpose structures and processes).  So, one can easily progress incrementally
toward better and better vision processing systems, via better and
better emulating the specialized component of human vision processing,
without touching the general-understanding-based component...

Of course, the same dynamic happens across all areas of AI
(creating specialized rather than general methods being a better
way to get impressive, demonstrable incremental progress), but it happens
particularly acutely with vision

Gary Lynch, in the late 80's, made some strong arguments as to why
olfaction might in some ways be a better avenue to cognition than vision.
Walter Freeman's work on the neuroscience of olfaction is inspired by
this same idea.

One point is that vision processing has an atypically hierarchical structure
in the human brain.  Olfaction OTOH seems to work more based on attractors
and nonlinear dynamics (cf Freeman's work), sorta like a fancier Hopfield
net (w/asymmetric weights thus leading to non-fixed-point attractors).  The
focus on vision has led many researchers to overly focus on the hierarchical
aspect rather than the attractor aspect, whereas both aspects obviously play
a big role in cognition.




I guess I worry about the applicability... Would a blind AGI really be
able to find more effective treatments for heart disease, cancer and
aging?

IMO vision is basically irrelevant to these biomedical research tasks.

Direct sensory connections to biomedical lab equipment would
be more useful ;-)




Regarding Numenta, they tout "irrespective of scale, distortion and
noise" and they chose a visual demonstration, so it seems that at
least their AGI work is deserving of Kevin's criticism. 


I agree.  Poggio's recent work on vision processing using brain models
currently seems more impressive than Numenta's, in terms of combining

-- far greater fidelity as a brain simulation
-- far better performance as an image processing system

But, the Numenta architecture is more general and may be used very
interestingly in future, who knows...

-- Ben





[agi] META: People ... be nice, please! [list moderation action]

2007-03-21 Thread Ben Goertzel


Hmmm.  I am rarely inspired to take an official moderation action
[though recently some people moderated **me** due to my overly sarcastic
humor regarding AIXI]

But, comments like


Your arrogance surely
exceeds your intelligence.


should be avoided on this list ... Let's keep things civil!

Thanks
Ben Goertzel
(list owner/ moderator)


David Clark wrote:

I put up with 1 person out of all the thousands of emails I get who insisted
on sending standard text messages as an attachment.  Because of virus
infections, I had normally set all emails with attachments to automatically
get put in the garbage can.  I had to stop that so I could read your emails
for the past 2 years.

You have a lot of nerve, indeed.  I made a number of arguments in my email
about your conclusions (supported I might add by no arguments) and you
respond by pointing me to how to post email URL's.  Your arrogance surely
exceeds your intelligence.

-- David Clark

- Original Message - 
From: Eugen Leitl [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, March 21, 2007 2:04 PM
Subject: Re: [agi] Fwd: Numenta Newsletter: March 20, 2007


  

On Wed, Mar 21, 2007 at 10:47:35AM -0700, David Clark wrote:

In my previous email, I mistakenly edited out the part from Yan King Yin
and it looks like the "We know that logic is easy" was attributed to him
when it was actually a quote of Eugen Leitl.

Sorry for my mistake.

It's not your mistake. It's the mistake of those who choose to ignore

http://www.netmeister.org/news/learn2quote.html

It is really a great idea to use plaintext posting and set standard
quoting in your MUA. For those with braindamaged MUAs there are
workarounds like

http://home.in.tum.de/~jain/software/outlook-quotefix/

--
Eugen* Leitl leitl  http://leitl.org
______________________________________________________________
ICBM: 48.07100, 11.36820   http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE






  




Re: [agi] Emergence

2007-03-20 Thread Ben Goertzel


Richard


But where you (I believe) start to confuse the picture is by selecting 
an example of an 'emergent' system that is a special case.  Hopfield 
nets are barely complex enough to have any emergent properties:  in 
fact, they were pretty much engineered so that they could be analysed 
using known laws of statistical physics.  So it is no surprise that 
the behavior of the attractors is subject to some predictable laws.


Generalizing from Hopfield Nets to the case of a general complex 
system with emergent properties is just a sleight of hand.  HNs are a 
freak case, in that larger context. 


I chose HN's because they were the simplest system I could think of that 
can fairly be said to involve emergence.


If you look at more complex ANN's as described e.g. in Daniel Amit's 
book Modeling Brain Function, then things get more and more subtle and 
dramatic in terms of the kinds of emergence that are possible.  (Here
we have strange attractors, strange transients, and all sorts of fun
things happening...)


Amit reviews a series of more and more complex NN models, starting with 
simple HN's and ending up with networks that are complex enough to carry 
out arbitrary Turing computations in a purely emergent way (although he 
doesn't phrase it this way).  [I.e., once you have an ANN with an 
arbitrarily complex strange attractor, then you can consider the 
different wings of the attractor as symbols if you wish to, and view 
the transition of the dynamics through the attractor as carrying out an 
arbitrarily complex computation.]
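
For anyone who hasn't played with these, here is the simplest version of the
phenomenon -- a toy Hopfield-style net of my own (plain Python; nothing like
the full models in Amit's book) doing content-addressable recall:

# Store two patterns with the Hebb rule, corrupt one, and watch the network
# dynamics pull the state back into the stored attractor.
patterns = [[1, -1, 1, -1, 1, -1, 1, -1],
            [1, 1, 1, 1, -1, -1, -1, -1]]
n = len(patterns[0])

# Hebbian weights: W[i][j] = sum over patterns of p[i]*p[j]; no self-weights.
W = [[0 if i == j else sum(p[i] * p[j] for p in patterns)
      for j in range(n)] for i in range(n)]

state = [1, -1, 1, -1, 1, -1, -1, 1]   # pattern 0 with its last two bits flipped
for _ in range(5):                      # synchronous sign-threshold updates
    fields = [sum(W[i][j] * state[j] for j in range(n)) for i in range(n)]
    state = [1 if h >= 0 else -1 for h in fields]

print(state == patterns[0])            # True: the corrupted cue was recalled

The recalled pattern isn't stored at any single synapse; it exists only as an
attractor of the dynamics -- the most elementary case of the emergence being
discussed.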


My own view is that the brain utilizes a combination of emergent 
representations/dynamics, with representations/dynamics that are more 
directly and obviously tied to the neural level.  The Novamente design 
also has two levels of representation, with ways to communicate/convert 
between the two.


One feature of my perspective is that it allows me to annoy both the 
people who like emergence, and the people who dislike it ;-)


-- Ben



Re: [agi] structure of the mind

2007-03-20 Thread Ben Goertzel


Eric Baum wrote:

Hayek doesn't directly scale from random start
to an AGI architecture in as much as the
learning is too slow. But the same is true of any other means of
EC or learning that doesn't start with some huge  head start.
It seems entirely reasonable to merge a Hayek like architecture with
scaffolds and hand-coded chunks and other stuff (maybe whatever is in
Novamente) to get it a head
start.

This does seem reasonable in principle, and is something worth exploring.

We use some economic ideas in the Novamente design, but those aspects of
the design have not been implemented yet except in crude prototype form;
and in the current version of the design they are more simplistic than
(and much faster than) the sort of stuff in Hayek...

 An advantage of having the economic system then is to impose
coherence and constrainedness-- parts that don't in fact work
effectively with others will be seen to be dying, forcing you to fix
the problems. Without the economic discipline, you are likely to have
subsystems (and sub-subsystems) you think are positive but are failing
in some way through interaction effects.

  
True.  However, to get the economic system to work effectively enough to
identify problems in a general and accurate way requires significant
computational resources to be devoted to the economics aspect.  So the
system as a whole must make a tradeoff between more accurate economic
regulation, and having more processor time for things other than economic
regulation...
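
To make the tradeoff concrete, here is a drastically simplified toy of my own
(Python; NOT Baum's actual Hayek machine, and not anything implemented in
Novamente): modules pay rent each cycle and earn credit when they contribute,
so ineffective parts visibly go broke -- but the bookkeeping itself consumes
cycles, which is the tradeoff in question.

import random

# Toy "artificial economy" for credit assignment -- illustration only.
random.seed(1)
modules = {f"m{i}": {"wealth": 10.0, "skill": random.random()}
           for i in range(6)}

for cycle in range(50):
    for name, m in list(modules.items()):
        m["wealth"] -= 1.0                # rent: the cost of existing
        if random.random() < m["skill"]:  # did it contribute this cycle?
            m["wealth"] += 1.5            # its share of external reward
        if m["wealth"] <= 0:
            del modules[name]             # an ineffective part "dies"

print(sorted((round(m["skill"], 2), name) for name, m in modules.items()))
# Only modules whose expected income (1.5 * skill) exceeds the rent survive.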

-- Ben



Re: [agi] Emergence

2007-03-20 Thread Ben Goertzel

Hi,



P.S.  About Daniel Amit:

I haven't read the book, but are you saying he demonstrates coherent, 
*meaningful* symbol processing as the transition of the dynamics 
through the lobes of an ultracomplex set of attractor lobes?  Like, 
reasoning with the symbols, or something?


And that he does more than just redescribe the system behavior in 
terms of attractors ... e.g. he uses the analytic math as a way to 
predict the symbol processing in some way?


I'd be willing to bet that he could do this for a Turing machine, 
maybe, but that he does not derive or predict any new properties of 
anything remotely resembling real-world symbol processing using the 
math that describes the attractor dynamics.


That is correct...

ben




Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread Ben Goertzel


 For people who might be interested in influencing some of the 
features of this system, I would appreciate them looking at my 
documentation at www.rccconsulting.com/hal.htm  Although my system isn't quite 
ready for alpha distribution yet, I expect that it will be within a 
few months.  People that help with the alpha and beta testing will be 
given consideration on the use of the system in the future even if 
they don't participate in the AGI development.
 
When this project goes ahead, I think even Ben (who has a huge 
intellectual and financial investment in his Novamente project) will 
be interested in the experiments and results a system like I am 
proposing will have, even if he never interfaces his program with it.
 




Your programming language looks interesting, and well designed in many 
ways, but I have never been convinced that the inadequacies of current 
programming languages are a significant cause of the slow progress we 
have seen toward AGI.


If you were introducing a radically new programming paradigm for AGI, I 
would be more interested ... Not that I think this is necessary to 
achieve AGI, but I would find it more intellectually stimulating ;-)


Regarding languages, I personally am a big fan of both Ruby and 
Haskell.  But, for Novamente we use C++ for reasons of scalability.


-- Ben




Re: [agi] Project Halo [Was: DARPA Ends Brain Reverse Engineering Project]

2007-03-20 Thread Ben Goertzel


FreeBase should be a really wonderful resource for early-stage AGIs
to learn from...

-- Ben



I think Danny Hillis became consumed with FreeBasing.  ;-)

See http://www.edge.org/documents/archive/edge205.html for a recent
report on his newly announced open database project.

- Jef






Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread Ben Goertzel

David Clark wrote:

If you were introducing a radically new programming paradigm for AGI, I
would be more interested  Not that I think this is necessary to
achieve AGI, but I would find it more intellectually stimulating ;-)




If you care to detail what kind of problem or structure you find hard to
deal with in C++ or other major languages, I would be interested either on
or off list. 


Well, **anything** can be dealt with in C++, it's just a matter of how 
awkward it is.


For instance, using Boost you can do lambda-calculus in C++ ... but it's
not the most elegant way to do lambda-calculus.

The biggest thing I miss in C++ is higher-order functions, as you find
in Haskell for example.

And, at the opposite end of the spectrum, I'm a big fan of Ruby's duck
typing, as well ...

But I can see that duck typing and higher-order types provide compiler
writers with a lot of challenges, where efficiency is concerned...
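
For concreteness, the two features in question, shown in Python rather than
Haskell or Ruby (a quick generic sketch, nothing Novamente-specific):

# Higher-order functions: functions are values you can build and return.
def twice(f):
    return lambda x: f(f(x))

print(twice(lambda n: n + 3)(10))    # 16

# Duck typing: any object with the right method works; no declared types.
class Duck:
    def speak(self): return "quack"
class Robot:
    def speak(self): return "beep"

for thing in (Duck(), Robot()):
    print(thing.speak())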

I am hopeful that recent advances in programming language theory will
allow the creation of efficient functional language interpreters in the
next 5-10 years, see e.g.

http://www.dcc.fc.up.pt/~frs/

But I don't want to wait!  Instead of creating a better language for 
human use, we are coding
NM in C++ ... but internally, Novamente learns cognitive code expressed 
in a different language,
which doesn't need to have a nice syntax (and is more like LISP or 
Haskell than C++)



With your huge investment in C++ code, I would be leery of any suggestion of
language change as well.  It is hard to evaluate the robustness and
scalability of a language from only documentation where the actual code
isn't available yet.  I think once you actually get a copy, you might just
change your mind about the scalability and speed of this system, but I know
all that probably won't be enough for you to switch.  Just you having a
small interest in the outcome of this project as it goes forward would be
great for me.

  


Hey, when the language is ready, I'll try it out ;-)

ben



Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread Ben Goertzel

Shane Legg wrote:


Ben, I didn't know you were a Ruby fan...


Cassio has gotten me into Ruby ... but in Novamente it's used only
for prototyping; the real system is C++.

For some non-AGI consulting projects we have also used Ruby.

Ruby runs slowly, but, other than that, it's a great language.

Getting back to AGI: I think that, with AGI, the programming
language is pretty much irrelevant, **unless** it stands in the
way of getting the ideas worked out right.

Personally I find that, with C++, I need to have everything
figured out really well in advance before starting coding, or
the code becomes a mess.  Whereas with Ruby I can fiddle
around and think while coding, because modifying code
on-the-fly is so easy.  So, I have liked using Ruby for
prototyping that is aimed at understanding the viability of
some idea.  Then once something has been fully understood,
via prototyping along with other methods, it can be translated
to C++ using a proper scalable, maintainable design...

-- Ben



Re: [agi] My proposal for an AGI agenda

2007-03-20 Thread Ben Goertzel


KIF would be a highly practical lingua franca

Lojban would work fine too

I agree that using English to interface between modules of an AGI system 
seems suboptimal...
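
For concreteness, here is a hedged sketch of what one such assertion 
might look like in KIF (the predicate and constant names are invented 
for the example; this is not a fragment of any actual system):

  (forall (?b)
    (=> (and (block ?b) (on ?b Table1))
        (red ?b)))

Unlike the English sentence "every block on the table is red", the KIF 
form leaves no ambiguity about quantifier scope or about which table is 
meant.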


I am glad that the different components of my brain don't need to 
communicate using English ;-)


Ben


Jey Kottalam wrote:

On 3/20/07, David Clark [EMAIL PROTECTED] wrote:


My proposal certainly allows that.  If sockets and some form of standard
English is used to communicate between the different systems, then any
language should work.  If you want to directly use major chunks of code
that others have written within a whole set of your own code, then you
will have to have compatible interfaces and work from a common language
(whether that is mine or not.)



Why do you choose English as the lingua-franca amongst modules? Even
if you want to use natural language, English is a particularly messy
and internally-inconsistent natural language. How about lojban? Or why
use natural language at all, as opposed to statements in first-order
logic, or semantic nets, or some other machine representation?

-Jey Kottalam



Re: [agi] Fwd: Numenta Newsletter: March 20, 2007

2007-03-20 Thread Ben Goertzel

Kevin Cramer wrote:

I tested this and it is very very poor at invariant recognition.  I am
surprised they released this given how bad it actually is.  As an example I
drew a small A in the bottom left corner of their draw area.  The program
returns the top 5 guesses on what you drew.  The letter A was not even in
the top 5, much less being the first best guess...

Back to the drawing board for this fundamental problem that no one has
solved...including anyone on this list.  And I can say with certainty that
until it is solved, AGI will not come to pass.
  


I agree that any reasonably powerful AGI that has been given visual 
sensors since its childhood will be able to solve this kind of visual 
invariant recognition problem easily.


However, I wouldn't say that this is a prerequisite for human-level AGI: 
some AGIs could simply not be aware of visual stimuli, existing e.g. in 
a world of mathematics or quantum-level data, etc.


Novamente, for example, doesn't deal with low-level vision...

I would certainly expect that a mature Novamente system would be able to 
easily solve this kind of invariant recognition problem.  However, just 
because a human toddler can solve this sort of problem easily doesn't 
mean a toddler-level AGI should be able to solve it equally easily.  
Different specific modalities will come more naturally to different 
intelligences, and humans are particularly visual in focus...


-- Ben



Re: [agi] My proposal for an AGI agenda

2007-03-19 Thread Ben Goertzel

rooftop8000 wrote:

Hi, I've been thinking for a bit about how a big collaboration AI project
could work. I browsed the archives and I see you guys have similar
ideas.

I'd love to see someone build a system that is capable of adding any
kind of AI algorithm/idea to. It should unite the power of all existing
different flavors: neural nets, logical systems, etc


The Novamente core system actually does fit this description, but

1) the API is in some places inelegant, though we have specific
plans for improving it

2) it's C++, which some folks don't like

3) it currently only runs on Unix systems, though a Windows port
will likely be made during the next month, as it happens

4) it is proprietary


If there would be use for such a thing, I would consider open-sourcing
the Novamente core system, separate from the specific learning modules
we have created to go with it.  I would only do so after the inelegancies
mentioned above (point 1) are resolved though.

My own view these days is that a wild combination of agents is
probably not the right approach, in terms of building AGI. 


Novamente consists of a set of agents that have been very carefully
sculpted to work together in such a way as to (when fully implemented
and tuned) give rise to the right overall emergent structures.

The Webmind system I was involved with in the late 90's was more
of a heterogeneous agents architecture, but through that experience
I became convinced that such an approach, while workable in principle,
has too much potential to lead to massive-dimensional
parameter-tuning nightmares...

This gets into my biggest dispute w/ Minsky (and Push Singh): they
really think intelligence is just about hooking together a sufficiently
powerful community of agents/critics/resources, whatever, whereas
I think it's about hooking together a community of learning algorithms
that is specifically configured to give rise to the right emergent
structures/dynamics.


Minsky is not big on emergence, and I don't
feel he understands the real nature of self very well.  He tends to
look at self as just another aspect of the system whereas I look at it
as a high-level emergent pattern that comes about holistically in a
system when the parts are configured to work together properly.

Relatedly, I don't think he understands the combined distributed/
localized nature of knowledge representation.  Even if a certain
faculty or piece of knowledge X is associated with some localized
agent or memory store, you should view that localized element
as a kind of key for accessing the global, system-wide activation
pattern associated with X.  Thus, in thinking about each local
part of your AGI system, you need to think about its impact
on the collective, self-organizing dynamics of the whole.
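
As a toy sketch of what I mean by a localized key into distributed 
content (invented structure, emphatically not Novamente's actual 
representation): the local handle is just a map key, while the 
knowledge itself is an activation pattern over the whole node 
population.

  #include <cstdio>
  #include <map>
  #include <string>
  #include <vector>

  int main() {
      const int NUM_NODES = 6;  // stand-in for the system's node population

      // Each locally stored item keys a system-wide activation pattern.
      std::map<std::string, std::vector<double> > memory;
      const double catPattern[NUM_NODES] = {0.9, 0.1, 0.7, 0.0, 0.3, 0.8};
      memory["cat"] = std::vector<double>(catPattern, catPattern + NUM_NODES);

      // Accessing the local element re-instates activation everywhere:
      // the key is an entry point to a global pattern, not the knowledge.
      const std::vector<double>& act = memory["cat"];
      for (int i = 0; i < NUM_NODES; ++i)
          printf("node %d activation %.1f\n", i, act[i]);
      return 0;
  }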

But when you think this way, an AGI starts to seem less like
a heterogeneous madhouse of diverse learning agents and more like
something particularly structured ... even though it may
still live within an agents architecture that has general potential...

-- Ben G




Re: [agi] Emergence

2007-03-19 Thread Ben Goertzel


Like so many other terms relevant to AGI, emergence has a lot of 
different meanings.

Some have used a very strong interpretation that I don't like much ... a 
meaning like "a property of a collective that is fundamentally 
unpredictable based on the components".


According to my interpretation, the attractors in a Hopfield net are 
emergent properties of the interactions of the neurons ... but this 
doesn't mean it's impossible to predict the attractors that will arise 
if one knows about the neurons and their interactions.  It just means 
that the details of the attractors are **computationally hard** to 
predict from the details of the neurons.  But the qualitative nature of 
the attractors can be understood cleanly by mathematical theory, in this 
case.
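
To make this concrete, here is a tiny self-contained sketch (the 
pattern and sizes are invented for the example): a pattern stored via 
the Hebb rule becomes an attractor, so starting the update dynamics 
from a corrupted copy falls back to the stored pattern.

  #include <cstdio>

  int main() {
      const int N = 8;
      // One stored pattern of +/-1 states (arbitrary example data).
      const int p[N] = {1, -1, 1, 1, -1, -1, 1, -1};

      // Hebbian weights: w[i][j] = p[i]*p[j], with no self-connections.
      double w[N][N] = {{0}};
      for (int i = 0; i < N; ++i)
          for (int j = 0; j < N; ++j)
              if (i != j) w[i][j] = p[i] * p[j];

      // Start from a corrupted copy of the pattern (two bits flipped).
      int s[N];
      for (int i = 0; i < N; ++i) s[i] = p[i];
      s[0] = -s[0];
      s[3] = -s[3];

      // Asynchronous sign updates: the state falls into the attractor.
      for (int sweep = 0; sweep < 5; ++sweep)
          for (int i = 0; i < N; ++i) {
              double h = 0.0;
              for (int j = 0; j < N; ++j) h += w[i][j] * s[j];
              s[i] = (h >= 0.0) ? 1 : -1;
          }

      // Prints the stored pattern: the attractor has been recovered.
      for (int i = 0; i < N; ++i) printf("%d ", s[i]);
      printf("\n");
      return 0;
  }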


So in general I will call a property of a collective **emergent** if it 
is relatively simple to describe on its own, but computationally very 
difficult to predict, in detail, from properties of the components of 
the collective. 

According to the above definition, it is quite possible to engineer 
systems with emergent properties, and to prove things about the 
constraints on emergent system properties as well.


-- Ben G

Russell Wallace wrote:
On 3/19/07, Ben Goertzel [EMAIL PROTECTED] wrote:


Minsky is not big on emergence


This is an interesting point.

I'm not big on emergence, not in artificial systems anyway. It 
produced us, sure, but that's one planet with intelligence out of a 
zillion universes without it. Emergence is what you get when you 
backcast from sentience to the shortest program that produces it; put 
more poetically, it's what God uses when His limiting resource is not 
time or manpower but improbability.


That doesn't make it a good tool for human engineers, for whom 
improbability is no big deal but time and manpower definitely are. 
Emergence, after all, basically means you couldn't/didn't predict the 
results from the setup; and when a machine does something unpredicted, 
it's generally called a bug. (When you bring humans into the equation 
it's different - blogs, for example, could be called an emergent 
result of the Internet - but then, humans aren't engineered systems, 
and we don't look for emergent behavior within blog-serving software.)


Obviously you disagree with this perspective, and I'm wondering if 
that's a significant axis for classifying approaches to AGI.




Re: [agi] structure of the mind

2007-03-19 Thread Ben Goertzel

J. Storrs Hall, PhD. wrote:

On Monday 19 March 2007 17:30, Ben Goertzel wrote:
...
  

My own view these days is that a wild combination of agents is
probably not the right approach, in terms of building AGI.

Novamente consists of a set of agents that have been very carefully
sculpted to work together in such a way as to (when fully implemented
and tuned) give rise to the right overall emergent structures.



There is one way you can form a coherent, working system from a congeries of 
random agents: put them in a marketplace. This has a fairly rigorous 
discipline of its own and most of them will not survive... and of course the 
system has to have some way of coming up with new ones that will. 


In principle, yeah, this can work.  

But we have to remember that the biggest problem of AGI is dealing with 
severe computational resource limitations (and the brain's resources 
are also to be considered severely limited, compared to what naive 
computational algorithms could easily consume, mathematically speaking).


The question is whether a virtual marketplace is a viable approach to 
AGI, in terms of computational expense...

For instance, Baum's Hayek is an innovative and exciting use of 
economics in an AI learning context, yet the approach seems not to be 
scalable into anything resembling an AGI architecture.


Novamente uses economic ideas in some aspects, but mainly just for 
allocation of attention (system resources) among different internal 
processes.
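
For flavor, here is a purely illustrative sketch of market-style 
resource allocation; it is NOT Novamente's actual mechanism, and the 
process names and numbers are invented.  Currency is conserved: each 
tick the CPU slice goes to the highest bidder, whose bid is recycled 
evenly back to all processes, so well-funded processes run often but 
cannot starve the others forever.

  #include <cstdio>

  struct Process {
      const char* name;
      double funds;                               // conserved currency
      double bid() const { return 0.2 * funds; }  // bid a fixed fraction
  };

  int main() {
      const int N = 3;
      Process procs[N] = {{"inference", 150.0},
                          {"perception", 100.0},
                          {"memory", 50.0}};

      for (int tick = 0; tick < 5; ++tick) {
          // The highest bidder wins the CPU slice this tick.
          int winner = 0;
          for (int i = 1; i < N; ++i)
              if (procs[i].bid() > procs[winner].bid()) winner = i;

          // The winner pays its bid into a pool, which is recycled
          // evenly: a crude tax-and-redistribute that conserves funds.
          double pool = procs[winner].bid();
          procs[winner].funds -= pool;
          for (int i = 0; i < N; ++i) procs[i].funds += pool / N;

          printf("tick %d: %s runs\n", tick, procs[winner].name);
      }
      return 0;
  }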

My strong intuitive feeling is that using a virtual marketplace to 
originate a coherent working system from a congeries of random agents 
would not be computationally feasible.  This, to me, falls into the same 
general category as "build a primordial soup and let Alife and then AI 
evolve from it".  Yes, these things can work given enough resources.  
But the resource requirements are way higher than for more direct 
engineering-oriented approaches.

The brain may well involve some economics-ish dynamics.  Energy 
minimization and energy conservation certainly share some common factors 
with profit maximization and money conservation.  However, I really 
doubt the brain relies on emergent market dynamics to enable 
interoperation of its various components.  The interoperation of the 
components was originated via evolution, and is merely tuned and minorly 
adjusted by brain dynamics during the life of the organism 
(quasi-economic or otherwise).

-- Ben G



Re: [agi] Emergence

2007-03-19 Thread Ben Goertzel

Russell Wallace wrote:
On 3/19/07, Ben Goertzel [EMAIL PROTECTED] wrote:


According to the above definition, it is quite possible to engineer
systems with emergent properties, and to prove things about the
constraints on emergent system properties as well.


Sure. I'm not claiming it's impossible (see the couldn't/didn't bit 
in my description). I'm only claiming that it's typically not an 
efficient approach. That is, it's efficient in terms of improbability 
(God's limiting resource), but wasteful of time and manpower (our 
limiting resource). That is, in most cases (yes, I know about the 
exceptions - but they are exceptions) a design that exhibits a lower 
degree of emergence will achieve a given level of performance with 
less effort, or a higher level of performance with the same effort, 
than a design that exhibits a higher degree of emergence. 


Well, I strongly suspect that human-level AGI is one of the exceptions...

-- Ben


