Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
Mike, Google has had basically no impact on the AGI thinking of myself or
95% of the other serious AGI researchers I know...

On Fri, Sep 19, 2008 at 10:00 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

>  [You'll note that arguably the single greatest influence on people's
> thoughts about AGI here is Google - basically Google search - which to
> most people still means text search. However, video search & other kinds of
> image search [along with online video broadcasting] are already starting to
> transform the way we think about the world in an equally powerful way - and
> will completely transform thinking about AGI. This is from the Google blog].
>
>  The future of online video  9/16/2008 06:25:00 AM
> The Internet has had an enormous impact on people's lives around the world
> in the ten years since Google's founding. It has changed politics,
> entertainment, culture, business, health care, the environment and just
> about every other topic you can think of. Which got us to thinking, what's
> going to happen in the next ten years? How will this phenomenal technology
> evolve, how will we adapt, and (more importantly) how will it adapt to us?
> We asked ten of our top experts this very question, and during September
> (our 10th anniversary month) we are presenting their responses. As
> computer scientist Alan Kay has famously observed, the best way to predict
> the future is to invent it, so we will be doing our best to make good on our
> experts' words every day. - Karen Wickre and Alan Eagle, series editors
>
> Ten years ago the world of online video was little more than an idea. It
> was used mostly by professionals like doctors or lawyers in limited and
> closed settings. Connections were slow, bandwidth was limited, and video
> gear was expensive and bulky. There were many false starts and outlandish
> promises over the years about the emergence of online video. It was really
> the dynamic growth of the Internet (in terms of adoption, speed and
> ubiquity) that helped to spur the idea that online video - millions of
> people around the world shooting it, uploading it, viewing it via broadband
> - was even possible.
>
> Today, there are thousands of different video sites and services. In fact
> it's getting to be unusual not to find a video component on a news,
> entertainment or information website. And in less than three years, YouTube
> has united hundreds of millions of people who create, share, and watch video
> online. What used to be a gap between "professional" entertainment companies
> and home movie buffs has disappeared. Everyone from major broadcasters and
> networks to vloggers and grandmas are taking to video to capture events,
> memories, stories, and much more in real time.
>
> Today, 13 hours of video are uploaded to YouTube every minute, and we
> believe the volume will continue to grow exponentially. Our goal is to allow
> every person on the planet to participate by making the upload process as
> simple as placing a phone call. This new video content will be available on
> any screen - in your living room, or on your device in your pocket. YouTube
> and other sites will bring together all the diverse media which matters to
> you, from videos of family and friends to news, music, sports, cooking and
> much, much more.
>
> In ten years, we believe that online video broadcasting will be the most
> ubiquitous and accessible form of communication. The tools for video
> recording will continue to become smaller and more affordable. Personal
> media devices will be universal and interconnected. Even more people will
> have the opportunity to record and share even more video with a small group
> of friends or everyone around the world.
>
> Over the next decade, people will be at the center of their video and media
> experience. More and more consumers will become creators. We will continue
> to help give people unlimited options and access to information, and the
> world will be a smaller place.
>
> Posted by Chad Hurley, CEO and Co-Founder, YouTube



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Matt Mahoney
--- On Fri, 9/19/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Mike, Google has had basically no impact on the AGI thinking of myself or 95% 
of the other serious AGI researchers I know...

Which is rather curious, because Google is the closest we have to AI at the 
moment.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner

  Mike, Google has had basically no impact on the AGI thinking of myself or 95% 
of the other serious AGI researchers I know..

  When did you start thinking about creating an online virtual AGI?






Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Jiri Jelinek
On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>Google is the closest we have to AI at the moment.

Matt,

There is a difference between being good at
a) finding problem-related info/pages, and
b) finding functional solutions (through reasoning), especially when
all the needed data is available.

Google cannot handle even trivial answer-embedded questions.

Regards,
Jiri




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Matt Mahoney
--- On Fri, 9/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> >Google is the closest we have to AI at the moment.
> 
> Matt,
> 
> There is a difference between being good at
> a) finding problem-related info/pages, and
> b) finding functional solutions (through reasoning),
> especially when
> all the needed data is available.
> 
> Google cannot handle even trivial answer-embedded
> questions.

Q: how many fluid ounces in a cubic mile?
Google: 1 cubic mile = 1.40942995 × 10^14 US fluid ounces

Q: who is the tallest U.S. president?
Google: Abraham Lincoln at six feet four inches. (along with other text)

Current AI (or AGI) research tends to emphasize reasoning ability rather than 
natural language understanding or rating the reliability of information from 
different sources, as if these things were not hard or important. Reasoning 
requires far less computation, as was demonstrated in the early 1960's. Current 
models that deal with uncertainty have not addressed the hard problems.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread BillK
On Fri, Sep 19, 2008 at 3:15 PM, Jiri Jelinek wrote:
> There is a difference between being good at
> a) finding problem-related info/pages, and
> b) finding functional solutions (through reasoning), especially when
> all the needed data is available.
>
> Google cannot handle even trivial answer-embedded questions.
>

Last I heard Peter Norvig was saying that Google had no interest in
putting a natural language front-end on Google.
<http://slashdot.org/article.pl?sid=07/12/18/1530209>

But other companies are interested. The main two are:
Powerset <http://www.powerset.com/>
and
Cognition <http://www.cognition.com/>

A new startup Eeggi is also interesting.
<http://www.eeggi.com/>


BillK




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 9/19/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> Mike, Google has had basically no impact on the AGI thinking of myself or
> 95% of the other serious AGI researchers I know...
>
> Which is rather curious, because Google is the closest we have to AI at the
> moment.



Obviously, the judgment of distance between various non-AGI programs and
hypothetical AGI programs is very theory-dependent 

To say that Google is closer to AGI than a Roomba is, is to express a certain
theory of mind, to which not all AGI researchers adhere...


ben





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner

  Mike, Google has had basically no impact on the AGI thinking of myself or 95% 
of the other serious AGI researchers I know...


  Ben,

  Come again. Your thinking about a superAGI, and AGI takeoff, is not TOTALLY 
dependent on Google? You would still argue that a superAGI is possible WITHOUT 
access to the information resources of Google? 

  I suggest that you have made a blind claim above - and a classic illustration 
of McLuhan's argument that most people, including intellectuals, do tend to be 
blind to how the media they use massively shape their thinking about the world 
- and reshape their nervous system. 




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
Yes of course, as I have been working on this stuff since way before Google
existed... or before the Web existed...

Anyway, use of Google as an information resource is distinct from use of
Google as a metaphor or inspiration for AGI ... after all, I would not even
know about AI had I never encountered paper, yet the properties of paper
have really not been inspirational in my AGI design efforts...

ben

On Fri, Sep 19, 2008 at 11:31 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:

>
>
> Mike, Google has had basically no impact on the AGI thinking of myself or
> 95% of the other serious AGI researchers I know...
>
> Ben,
>
> Come again. Your thinking about a superAGI, and AGI takeoff, is not TOTALLY
> dependent on Google? You would still argue that a superAGI is possible
> WITHOUT access to the information resources of Google?
>
> I suggest that you have made a blind claim above - and a classic
> illustration of McLuhan's argument that most people, including
> intellectuals, do tend to be blind to how the media they use massively shape
> their thinking about the world - and reshape their nervous system.
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Matt Mahoney
Quick test.

Q: What world leader lost 2 fingers playing with grenades as a boy?

powerset.com: doesn't know.

cognition.com (wiki): doesn't know.

google.com: the second link leads to a scanned page of a book giving the answer 
as Boris Yeltsin.

-- Matt Mahoney, [EMAIL PROTECTED]


--- On Fri, 9/19/08, BillK <[EMAIL PROTECTED]> wrote:

> From: BillK <[EMAIL PROTECTED]>
> Subject: Re: [agi] Where the Future of AGI Lies
> To: agi@v2.listbox.com
> Date: Friday, September 19, 2008, 11:34 AM
> On Fri, Sep 19, 2008 at 3:15 PM, Jiri Jelinek wrote:
> > There is a difference between being good at
> > a) finding problem-related info/pages, and
> > b) finding functional solutions (through reasoning),
> especially when
> > all the needed data is available.
> >
> > Google cannot handle even trivial answer-embedded
> questions.
> >
> 
> Last I heard Peter Norvig was saying that Google had no
> interest in
> putting a natural language front-end on Google.
> <http://slashdot.org/article.pl?sid=07/12/18/1530209>
> 
> But other companies are interested. The main two are:
> Powerset <http://www.powerset.com/>
> and
> Cognition <http://www.cognition.com/>
> 
> A new startup Eeggi is also interesting.
> <http://www.eeggi.com/>
> 
> 
> BillK
> 





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread BillK
On Fri, Sep 19, 2008 at 6:13 PM, Matt Mahoney wrote:
> Quick test.
>
> Q: What world leader lost 2 fingers playing with grenades as a boy?
> powerset.com: doesn't know.
> cognition.com (wiki): doesn't know.
>
> google.com: the second link leads to a scanned page of a book giving the 
> answer as Boris Yeltsin.
>


At present, they are not trying to compete with the Google search engine.  :)

They only search in Wikipedia, using this as a basis to test their
Natural Language front-end.

A better test would be to ask a complex question, where you know the
answer is in Wikipedia, and see if they answer the question better
than a Google search of Wikipedia only.


BillK




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Jiri Jelinek
Matt,

> Q: how many fluid ounces in a cubic mile?
> Google: 1 cubic mile = 1.40942995 × 10^14 US fluid ounces
>
> Q: who is the tallest U.S. president?
> Google: Abraham Lincoln at six feet four inches. (along with other text)

Try "What's the color of Dan Brown's black coat?" What's the excuse
for a general problem solver to fail in this case? NLP? It then should
use a formal language or so. Google uses relatively good search
algorithms but decent general problem solving IMO requires very
different algorithms/design.

Jiri




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner
Ben:I would not even know about AI had I never encountered paper, yet the 
properties of paper have really not been inspirational in my AGI design 
efforts...

Your unconscious keeps talking to you. It is precisely paper that mainly shapes 
your thinking about AI. Paper has been the defining medium of literate 
civilisation. And what characterises all literate forms is nice, discrete, 
static, fragmented, "crystallised" units on the page.  Whether linguistic, 
logical, or mathematical. Words, letters and numbers. That was uni-media 
civilisation.

That's the main reason why you think logic, maths and language are all you 
really need for intelligence - paper.

The defining medium now is the screen. And on a screen, everything either 
changes or is changeable. Fluid. Words can become pictures. And pictures, if 
they're video, can move and talk. And you can see things whole and complicated, 
and not just in simplified, verbal/symbolic pieces. This is multi-media 
civilisation.

As video becomes as plentiful and cheap as paper over the next 10 years, the 
literary/paper prejudices that you have inherited from Plato will be 
dissolved. (Narrow AI is "crystallised intelligence", GI is "fluid 
intelligence". Betcha that after fuzzy programming, you will soon see some 
form of "fluid" (or "bio-logical") programming.)

The slogan for the next decade is - you ain't seen nothing yet.










Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
>
> That's the main reason why you think logic, maths and language are all you
> really need for intelligence - paper.
>

Just for clarity: while I think that in principle one could make a
maths-only AGI, my present focus is on building an AGI that is embodied in
virtual robots and potentially real robots as well ... in addition to
communicating via language and internally utilizing logic on various
levels...

ben





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Mike Tintner

  Ben:Just for clarity: while I think that in principle one could make a 
maths-only AGI, my present focus is on building an AGI that is embodied in 
virtual robots and potentially real robots as well ... in addition to 
communicating via language and internally utilizing logic on various levels...

  Are your virtual robots any different from your virtual pets, as per that 
demo?  And do either virtual/real robots use vision?






Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Matt Mahoney
--- On Fri, 9/19/08, BillK <[EMAIL PROTECTED]> wrote:

> On Fri, Sep 19, 2008 at 6:13 PM, Matt Mahoney wrote:
> > Quick test.
> >
> > Q: What world leader lost 2 fingers playing with
> grenades as a boy?
> > powerset.com: doesn't know.
> > cognition.com (wiki): doesn't know.
> >
> > google.com: the second link leads to a scanned page of
> a book giving the answer as Boris Yeltsin.
> >
> 
> 
> At present, they are not trying to compete with the Google
> search engine.  :)
> 
> They only search in Wikipedia, using this as a basis to
> test their
> Natural Language front-end.
> 
> A better test would be to ask a complex question, where you
> know the
> answer is in Wikipedia, and see if they answer the question
> better
> than a Google search of Wikipedia only.

From http://en.wikipedia.org/wiki/Yeltsin

"Boris Yeltsin studied at Pushkin High School in Berezniki in Perm Krai. He was 
fond of sports (in particular skiing, gymnastics, volleyball, track and field, 
boxing and wrestling) despite losing the thumb and index finger of his left 
hand when he and some friends sneaked into a Red Army supply depot, stole 
several grenades, and tried to dissect them.[5]"

But to be fair, Google didn't find it either.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
Right now the virtual pets and bots don't use vision processing except in a
fairly trivial sense: they do see objects, but they don't need to identify
the objects using vision processing, they're just "given" the locations and
shapes of the objects by the virtual world server.  But future versions will
do real vision processing ... we just haven't gotten there yet ... (small
team, big job!!) ... we do have a detailed design for incorporating vision,
audition etc. into the OpenCog architecture... and are in fact collaborating
w/ a vision team in China who are doing vision processing work that is
compatible with the OpenCog architecture...

ben g

On Fri, Sep 19, 2008 at 4:40 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:

>
>
> Ben:Just for clarity: while I think that in principle one could make a
> maths-only AGI, my present focus is on building an AGI that is embodied in
> virtual robots and potentially real robots as well ... in addition to
> communicating via language and internally utilizing logic on various
> levels...
>
> Are your virtual robots any different from your virtual pets, as per that
> demo?  And do either virtual/real robots use vision?
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Bryan Bishop
On Friday 19 September 2008, BillK wrote:
> Last I heard Peter Norvig was saying that Google had no interest in
> putting a natural language front-end on Google.
> 

Arguably that's still natural language, even if it's just tags instead
of structured sentences. Right?

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Bryan Bishop
On Friday 19 September 2008, Mike Tintner wrote:
> Your unconscious keeps talking to you. It is precisely paper that
> mainly shapes your thinking about AI. Paper has been the defining
> medium of literate civilisation. And what characterises all literate
> forms is nice, discrete, static, fragmented, "crystallised" units on
> the page.  Whether linguistic, logical, or mathematical. Words,
> letters and numbers. That was uni-media civilisation.

This is begging for a reference to Project Xanadu.
http://en.wikipedia.org/wiki/Project_Xanadu

> Project Xanadu was the first hypertext project. Founded in 1960 by
> Ted Nelson, the project contrasts its vision with that of paper:
> "Today's popular software simulates paper. The World Wide Web
> (another imitation of paper) trivialises our original hypertext model
> with one-way ever-breaking links and no management of version or
> contents."[1] Wired magazine called it the "longest-running vaporware
> story in the history of the computer industry". The first attempt at
> implementation began in 1960, but it wasn't until 1998 that an
> implementation (albeit incomplete) was released.

- Bryan

http://heybryan.org/
Engineers: http://heybryan.org/exp.html
irc.freenode.net #hplusroadmap




Re: [agi] Where the Future of AGI Lies

2008-09-20 Thread BillK
On Fri, Sep 19, 2008 at 10:05 PM, Matt Mahoney wrote:
> From http://en.wikipedia.org/wiki/Yeltsin
>
> "Boris Yeltsin studied at Pushkin High School in Berezniki in Perm Krai. He 
> was fond of sports (in particular
> skiing, gymnastics, volleyball, track and field, boxing and wrestling) 
> despite losing the thumb and index
> finger of his left hand when he and some friends sneaked into a Red Army 
> supply depot, stole several
> grenades, and tried to dissect them.[5]"
>
> But to be fair, Google didn't find it either.
>


I've had a play with this.
I think you are asking the wrong question.   See - It's your fault!  :)

The Yeltsin article doesn't say that he was a world leader.
It says he was President of Russia.

The article doesn't say he lost 2 fingers.
It says he lost a thumb and index finger.

So I think you are expecting quite a high level of understanding to
match your query with these statements. If you ask "which president
has lost a thumb and finger?", then Powerset matches on the second
page of results but Google matches on the first page of results.
(Google is very good at keyword matching!) Cognition is still confused
as it cannot find 'concepts' to match on.


The Powerset FAQ says that it analyses your query and tries to extract
a 'subject-relation-object' which it then tries to match. They give
examples of the type of query they like.
what was banned by the fda
what caused the great depression


The Cognition FAQ says that they try to find 'concepts' in your query
and match on the 'concept' rather than the actual words, e.g. the text
"Did they adopt the bill?" is known by Cognition to relate to
information about "the approval of Proposition A", because "adopt" in
the text means "to approve", and "bill" in the text means "a proposed
law."
So it looks like they don't have concepts for 'world leader =
president' or 'thumb and index finger = 2 fingers'.


NLP isn't as easy as it looks!  :)
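
To make the mismatch concrete, here is a toy sketch in Python - purely
illustrative, with made-up data, and nothing to do with how Powerset or
Cognition are actually implemented - of why naive triple matching fails
without concept equivalences like "world leader = president":

facts = [
    ("boris yeltsin", "is a", "president of russia"),
    ("boris yeltsin", "lost", "thumb and index finger"),
]

# Hypothetical concept table; remove it and nothing below matches.
same_concept = {
    ("world leader", "president of russia"),
    ("2 fingers", "thumb and index finger"),
}

def eq(a, b):
    return a == b or (a, b) in same_concept or (b, a) in same_concept

def who(kind, relation, obj):
    """Find subjects of `kind` related to `obj` by `relation`."""
    candidates = {s for s, r, o in facts if eq(r, relation) and eq(o, obj)}
    return [s for s in candidates
            if any(eq(r2, "is a") and eq(o2, kind)
                   for s2, r2, o2 in facts if s2 == s)]

print(who("world leader", "lost", "2 fingers"))
# -> ['boris yeltsin'], but only because of the same_concept table;
#    with literal keyword matching alone the answer set is empty.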


BillK




The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Matt Mahoney
--- On Fri, 9/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Try "What's the color of Dan Brown's black coat?" What's the excuse
> for a general problem solver to fail in this case? NLP? It
> then should use a formal language or so. Google uses relatively good
> search algorithms but decent general problem solving IMO requires
> very different algorithms/design.

So, what formal language model can solve this problem? First order logic? 
Uncertain logic (probability and confidence)? Logic augmented with notions of 
specialization, time, cause and effect, etc.?

There seems to be a lot of effort to implement reasoning in knowledge 
representation systems, even though it has little to do with how we actually 
think. We focus on problems like:

All men are mortal. Socrates is a man. Therefore ___?

The assumed solution is to convert it to a formal representation and apply the 
rules of logic:

For all x: man(x) -> mortal(x)
man(Socrates)
=> mortal(Socrates)

which has 3 steps: convert English to a formal representation (hard AI), solve 
the problem (easy), and convert back to English (hard AI).

Sorry, that is not a solution. Consider how you learned to convert natural 
language to formal logic. You were given lots of examples and induced a pattern:

Frogs are green = for all x: frog(x) -> green(x).
Fish are animals = for all x: fish(x) -> animal(x).
...
Y are Z: for all x: Y(x) -> Z(x).

along with many other patterns. (Of course, this requires learning semantics 
first, so you don't confuse examples like "they are coming").

But if you can learn these types of patterns then with no additional effort you 
can learn patterns that directly solve the problem...

Frogs are green. Kermit is a frog. Therefore Kermit is green.
Fish are animals. A minnow is a fish. Therefore a minnow is an animal.
...
Y are Z. X is a Y. Therefore X is a Z.
...
Men are mortal. Socrates is a man. Therefore Socrates is mortal.

without ever going to a formal representation. People who haven't studied logic 
or its notation can certainly learn to do this type of reasoning.
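
As a deliberately simplistic sketch of what "patterns that directly solve the
problem" could look like, here are a few lines of Python that apply the learned
surface pattern "Y are Z. X is a Y. Therefore X is Z." by string matching
alone, with no intermediate logical form (the pattern and data are just the
examples above; this is an illustration, not a proposal):

import re

# Learned surface pattern "Y are Z. X is a Y." -> "Therefore X is Z."
# Everything stays at the string level; no predicate-logic step.
PATTERN = re.compile(r"^(\w+) are (\w+)\. (\w+) is a (\w+)\.$")

def conclude(text):
    m = PATTERN.match(text)
    if not m:
        return None
    y, z, x, y2 = m.groups()
    if y.lower().rstrip("s") != y2.lower().rstrip("s"):
        return None            # crude singular/plural check: "Frogs" ~ "frog"
    return "Therefore %s is %s." % (x, z)

print(conclude("Frogs are green. Kermit is a frog."))
# -> "Therefore Kermit is green."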

So perhaps someone can explain why we need formal knowledge representations to 
reason in AI.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Ben Goertzel
Matt wrote,


> There seems to be a lot of effort to implement reasoning in knowledge
> representation systems, even though it has little to do with how we actually
> think.


Please note that not all of us in the AGI field are trying to closely
emulate human thought.  Human-level thought does not imply closely
human-like thought.



> We focus on problems like:
>
> All men are mortal. Socrates is a man. Therefore ___?
>
> The assumed solution is to convert it to a formal representation and apply
> the rules of logic:
>
> For all x: man(x) -> mortal(x)
> man(Socrates)
> => mortal(Socrates)
>
> which has 3 steps: convert English to a formal representation (hard AI),
> solve the problem (easy), and convert back to English (hard AI).


This is a silly example, because it is already solvable using existing AI
systems.  We solved problems like this using RelEx+PLN, in a prototype
system built on top of the NCE, a couple of years ago.  Soon OpenCog will have
the mechanisms to do that sort of thing too.



>
>
> Sorry, that is not a solution. Consider how you learned to convert natural
> language to formal logic. You were given lots of examples and induced a
> pattern:
>
> Frogs are green = for all x: frog(x) -> green(x).
> Fish are animals = for all x: fish(x) -> animal(x).
> ...
> Y are Z: for all x: Y(x) -> Z(x).
>
> along with many other patterns. (Of course, this requires learning
> semantics first, so you don't confuse examples like "they are coming").
>
> But if you can learn these types of patterns then with no additional effort
> you can learn patterns that directly solve the problem...
>
> Frogs are green. Kermit is a frog. Therefore Kermit is green.
> Fish are animals. A minnow is a fish. Therefore a minnow is an animal.
> ...
> Y are Z. X is a Y. Therefore X is a Z.
> ...
> Men are mortal. Socrates is a man. Therefore Socrates is mortal.
>
> without ever going to a formal representation. People who haven't studied
> logic or its notation can certainly learn to do this type of reasoning.



One hypothesis is that the **unconscious** human mind is carrying out
operations that are roughly analogous to logical reasoning steps.  If this
is the case, then even humans who have never studied logic or its notation
would unconsciously and implicitly be doing "logic-like stuff".  See e.g. my
talk at

http://www.acceleratingfuture.com/people-blog/?p=2199

(which has a corresponding online paper as well)


>
>
> So perhaps someone can explain why we need formal knowledge representations
> to reason in AI.
>

I for one don't claim that we need it for AGI, only that it's one
potentially very useful strategy.

IMO, formal logic is a cleaner and simpler way of doing part of what the
brain does via Hebbian-type modification of synaptic bundles btw neural
clusters.

Google does not need anything like formal logic (or formal-logic-like
Hebbian learning, etc.) because it is not trying to understand, reason,
generalize, etc.  It is just trying to find information in a large
knowledge-store, which is a much narrower and very different problem.

-- Ben G





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Eric Burton
I think the whole idea of a semantic layer is to provide the kind of
mechanism for abstract reasoning that evolution seems to have built
into the human brain. You could argue that those faculties are
acquired during one's life, using only a weighted neural net (brain),
but it seems reasonable to assume that to some extent they're
genetically coded for. To that extent they ought to be specifically
coded for in any programmatic reproduction of the brain's abilities.

At some point hardcoding higher-order functionality is a cheat, but
there is a certain amount of architecture a thinking machine isn't
going to work without.

On 9/19/08, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Fri, 9/19/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>
>> Try "What's the color of Dan Brown's black coat?" What's the excuse
>> for a general problem solver to fail in this case? NLP? It
>> then should use a formal language or so. Google uses relatively good
>> search algorithms but decent general problem solving IMO requires
>> very different algorithms/design.
>
> So, what formal language model can solve this problem? First order logic?
> Uncertain logic (probability and confidence)? Logic augmented with notions
> of specialization, time, cause and effect, etc.
>
> There seems to be a lot of effort to implement reasoning in knowledge
> representation systems, even though it has little to do with how we actually
> think. We focus on problems like:
>
> All men are mortal. Socrates is a man. Therefore ___?
>
> The assumed solution is to convert it to a formal representation and apply
> the rules of logic:
>
> For all x: man(x) -> mortal(x)
> man(Socrates)
> => mortal(Socrates)
>
> which has 3 steps: convert English to a formal representation (hard AI),
> solve the problem (easy), and convert back to English (hard AI).
>
> Sorry, that is not a solution. Consider how you learned to convert natural
> language to formal logic. You were given lots of examples and induced a
> pattern:
>
> Frogs are green = for all x: frog(x) -> green(x).
> Fish are animals = for all x: fish(x) -> animal(x).
> ...
> Y are Z: for all x: Y(x) -> Z(x).
>
> along with many other patterns. (Of course, this requires learning semantics
> first, so you don't confuse examples like "they are coming").
>
> But if you can learn these types of patterns then with no additional effort
> you can learn patterns that directly solve the problem...
>
> Frogs are green. Kermit is a frog. Therefore Kermit is green.
> Fish are animals. A minnow is a fish. Therefore a minnow is an animal.
> ...
> Y are Z. X is a Y. Therefore X is a Z.
> ...
> Men are mortal. Socrates is a man. Therefore Socrates is mortal.
>
> without ever going to a formal representation. People who haven't studied
> logic or its notation can certainly learn to do this type of reasoning.
>
> So perhaps someone can explain why we need formal knowledge representations
> to reason in AI.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
>
>




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Trent Waddington
On Sat, Sep 20, 2008 at 8:46 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> But if you can learn these types of patterns then with no additional effort 
> you can learn patterns that directly solve the problem...

This kind of reminds me of the "people think in their natural
language" theory that Steven Pinker has gone to extensive effort to
show is a fallacy.

It may well be true that it is possible that you can solve the problem
by pattern recognition of sounds or even letters, but linguists tend
to disagree that this is what happens in the brain.

Besides which, if you have a mechanism that can solve this problem
without any sort of abstraction, will that same mechanism be able to
solve analogous problems?  Or do you need another mechanism?  If so,
how many do you need before you can solve all the analogous problems?
This is why we abstract.

Trent




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Jan Klauck
Matt,

> People who haven't studied
> logic or its notation can certainly learn to do this type of reasoning.

Formal logic doesn't scale up very well in humans. That's why this
kind of reasoning is so unpopular. Our capacities are small, so
we connect to other human beings for a kind of distributed problem
solving. Logic is just a tool for us to communicate and reason
systematically about problems we would mess up otherwise.

> So perhaps someone can explain why we need formal knowledge
> representations to reason in AI.

Using formal knowledge representation supports us in checking what
an AI does when it solves complex problems. So it should be convenient
for _us_ and not necessarily for the AI. As I said, it's just a tool.

Just my thoughts...

cu Jan




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Jiri Jelinek
Matt,

>So, what formal language model can solve this problem?

An FL that clearly separates basic semantic concepts like objects,
attributes, time, space, actions, roles, relationships, etc., plus core
subjective concepts, e.g. want, need, feel, aware, believe, expect,
unreal/fantasy. Humans have senses & DNA-based brain wiring that
support perceiving/identifying those core semantic concepts. For
practical reasons, machines IMO also need some built-in support for
that.
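
As a rough illustration of the kind of statement such an FL might contain
(this encoding is my own ad hoc invention, not something proposed in the
thread), the earlier "Dan Brown's black coat" question could be represented
and queried along these lines:

# Ad hoc sketch of object/attribute/relationship statements and a query.
# The encoding and data are invented purely for illustration.
statements = [
    {"object": "coat1", "attribute": "color", "value": "black"},
    {"object": "coat1", "relationship": "owned_by", "value": "Dan Brown"},
]

def query_attribute(owner, attribute):
    owned = {s["object"] for s in statements
             if s.get("relationship") == "owned_by" and s["value"] == owner}
    for s in statements:
        if s["object"] in owned and s.get("attribute") == attribute:
            return s["value"]
    return None

print(query_attribute("Dan Brown", "color"))   # -> "black"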

>You were given lots of examples and induced a pattern..

The FL doesn't prevent that.

>So perhaps someone can explain why we need formal knowledge representations to 
>reason in AI.

It helps to semantically parse the input. You cannot solve problems that
require reasoning if you don't understand the input well enough to be
able to build queryable models that meaningfully correspond to the
real world. And to figure out whether a model "meaningfully corresponds
to the real world", you test it by input-driven modifications and
subsequent evaluation of query results.

Jiri




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Russell Wallace
On Fri, Sep 19, 2008 at 11:46 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> So perhaps someone can explain why we need formal knowledge representations 
> to reason in AI.

Because the biggest open sub problem right now is dealing with
procedural, as opposed to merely declarative or reflexive, knowledge.
And unless you're trying to duplicate organic brains, procedural
knowledge means program code.  And reasoning about program code
requires formal logic.




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Matt Mahoney
--- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote:

> Formal logic doesn't scale up very well in humans. That's why this
> kind of reasoning is so unpopular. Our capacities are that
> small and we connect to other human entities for a kind of
> distributed problem solving. Logic is just a tool for us to
> communicate and reason systematically about problems we would
> mess up otherwise.

Exactly. That is why I am critical of probabilistic or uncertain logic. Humans 
are not very good at logic and arithmetic problems requiring long sequences of 
steps, but duplicating these defects in machines does not help. It does not 
solve the problem of translating natural language into formal language and 
back. When we need to solve such a problem, we use pencil and paper, or a 
calculator, or we write a program. The problem for AI is to convert natural 
language to formal language or a program and back. The formal reasoning we 
already know how to do.

Even though a language model is probabilistic, probabilistic logic is not a 
good fit. For example, in NARS we have deduction (P->Q, Q->R) => (P->R), 
induction (P->Q, P->R) => (Q->R), and abduction (P->R, Q->R) => (P->Q). 
Induction and abduction are not strictly true, of course, but in a 
probabilistic logic we can assign them partial truth values.

For language modeling, we can simplify the logic. If we accept the "converse" 
rule (P->Q) => (Q->P) as partially true (if rain predicts clouds, then clouds 
may predict rain), then we can derive induction and abduction from deduction 
and converse. For induction, (P->Q, P->R) => (Q->P, P->R) => (Q->R). Abduction 
is similar. Allowing converse, the statement (P->Q) is really a fuzzy 
equivalence or association (P ~ Q), e.g. (rain ~ clouds).
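
A minimal symbolic sketch of that derivation (Python, illustrative only; it
deliberately leaves the truth-value calculation unspecified, which is of
course the hard part):

# Derive the induction conclusion (P->Q, P->R) => (Q->R) purely by
# applying "converse" and then "deduction" at the symbol level.
def converse(stmt):
    p, q = stmt           # (P -> Q) becomes (Q -> P)
    return (q, p)

def deduction(s1, s2):
    a, b = s1
    b2, c = s2
    assert b == b2        # middle terms must match: A->B, B->C
    return (a, c)         # conclude A->C

premise1 = ("P", "Q")     # P -> Q
premise2 = ("P", "R")     # P -> R
print(deduction(converse(premise1), premise2))   # -> ('Q', 'R'), i.e. Q -> R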

A language model is a set of associations between concepts. Language learning 
consists of two operations carried out on a massively parallel scale: forming 
associations and forming new concepts by clustering in context space. An 
example of the latter is:

the dog is
the cat is
the house is
...
the (noun) is

So if we read "the glorp is" we learn that "glorp" is a noun. Likewise, we 
learn something of its meaning from its more distant context, e.g. "the glorp 
is eating my flowers". We do this by the transitive property of association, 
e.g. (glorp ~ eating flowers ~ rabbit).
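
A tiny sketch of that kind of clustering in context space (Python, toy corpus,
purely illustrative): represent each word by the set of contexts it occurs in,
and a novel word that shares contexts with known nouns groups with the nouns.

from collections import defaultdict

# Toy corpus; the novel word "glorp" appears in noun-like contexts.
corpus = [
    "the dog is here", "the cat is here", "the house is big",
    "the glorp is eating my flowers",
]

# Context = (previous word, next word) around each token.
contexts = defaultdict(set)
for sentence in corpus:
    words = sentence.split()
    for i in range(1, len(words) - 1):
        contexts[words[i]].add((words[i - 1], words[i + 1]))

def similarity(a, b):
    # Overlap of context sets: a crude stand-in for clustering.
    return len(contexts[a] & contexts[b])

print(similarity("glorp", "dog"), similarity("glorp", "is"))   # -> 1 0
# "glorp" shares the ("the", "is") context with the known nouns,
# so it clusters with them rather than with the verb.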

This is not to say NARS or other systems are wrong, but rather that they have 
more capability than we need to solve reasoning in AI. Whether the extra 
capability helps or not is something that requires experimental verification.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote:
>
> > Formal logic doesn't scale up very well in humans. That's why this
> > kind of reasoning is so unpopular. Our capacities are that
> > small and we connect to other human entities for a kind of
> > distributed problem solving. Logic is just a tool for us to
> > communicate and reason systematically about problems we would
> > mess up otherwise.
>
> Exactly. That is why I am critical of probabilistic or uncertain logic.
> Humans are not very good at logic and arithmetic problems requiring long
> sequences of steps, but duplicating these defects in machines does not help.
> It does not solve the problem of translating natural language into formal
> language and back. When we need to solve such a problem, we use pencil and
> paper, or a calculator, or we write a program. The problem for AI is to
> convert natural language to formal language or a program and back. The
> formal reasoning we already know how to do.



If formal reasoning were a solved problem in AI, then we would have
theorem-provers that could prove deep, complex theorems unassisted.   We
don't.  This indicates that formal reasoning is NOT a solved problem,
because no one has yet gotten "history guided adaptive inference control" to
really work well.  Which is IMHO because formal reasoning guidance
ultimately requires the same kind of analogical, contextual commonsense
reasoning as guidance of reasoning about everyday life...

Also, you did not address my prior point that Hebbian learning at the neural
level is strikingly similar to formal logic...

In probabilistic term logic we do deduction such as

A --> B
B --> C
|-
A --> C

and use probability theory to determine the truth value for the conclusion
based on the truth values of the premises.
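
For concreteness, one simple independence-based way to compute such a truth
value is the probability identity P(C|A) = P(C|B)P(B|A) + P(C|~B)P(~B|A),
with P(C|~B) estimated from the term probabilities. The sketch below is
illustrative only, not necessarily the exact rule used in PLN or any other
system:

def deduction_strength(sAB, sBC, sB, sC):
    """Estimate s(A->C) from s(A->B), s(B->C) and the term
    probabilities s(B), s(C), assuming independence where needed."""
    # P(C|~B) estimated as (P(C) - P(C|B)P(B)) / (1 - P(B));
    # a real system would clamp results and handle sB near 1.
    p_c_given_not_b = (sC - sBC * sB) / (1.0 - sB)
    return sAB * sBC + (1.0 - sAB) * p_c_given_not_b

# e.g. fairly strong premises, moderately common terms:
print(deduction_strength(sAB=0.9, sBC=0.8, sB=0.3, sC=0.4))   # ~0.74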

On the other hand, if A, B and C represent neuronal assemblies and the -->'s
are synaptic bundles, then Hebbian learning does approximately the same
thing ... an observation that ties in nicely with recent work on Bayesian
neuronal population coding.

Formal logic is not something drastically different from what the brain
does.  It's an abstraction from what the brain does, but there are very
clear ties btw formal logic and neurodynamics, of which I've roughly
indicated one in the prior paragraph (and have indicated others in
publications).

Mapping knowledge btw language and internal representations is not a problem
independent of inference, it is a problem that is solved by inference in the
brain, and must ultimately be solved by inference in AI's.  The fact that
the brain implements its unconscious inferences in terms of Hebbian
adjustment of synaptic bundles btw cell assemblies, rather than in terms of
explicit symbolic operations, shouldn't blind us to the fact that it's still
an inferencing process going on...

Google Search is not doing much inferencing but it's a search engine not a
mind.  There is a bit of inferencing inside AdSense, more so than Google
Search, but it's still of a pretty narrow and simplistic kind (based on EM
clustering and Bayes nets) compared to what is needed for human-level AGI.

-- Ben G





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Matt Mahoney
--- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>If formal reasoning were a solved problem in AI, then we would have 
>theorem-provers that could prove deep, complex theorems unassisted.   We 
>don't.  This indicates that formal reasoning is NOT a solved problem, because 
>no one has yet gotten "history guided adaptive inference control" to really 
>work well.  Which is IMHO because formal reasoning guidance ultimately 
>requires the same kind of analogical, contextual commonsense reasoning as 
>guidance of reasoning about everyday life...

I mean that formal reasoning is solved in the sense of executing algorithms, 
once we can state the problems in that form. I know that some problems in CS 
are hard. I think that the intuition that mathematicians use to prove theorems 
is a language modeling problem.

>Also, you did not address my prior point that Hebbian learning at the neural 
>level is strikingly similar to formal logic...

I agree that neural networks can model formal logic. However, I don't think 
that formal logic is a good way to model neural networks.

Language learning consists of learning associations between concepts (possibly 
time-delayed, enabling prediction) and learning new concepts by clustering in 
context space. Both of these operations can be done efficiently and in parallel 
with neural networks. They can't be done efficiently with logic.

There is experimental evidence to back up this view. The top two compressors in 
my large text benchmark use dictionaries in which semantically related words 
are grouped together and the groups are used as context. In the second place 
program (paq8hp12any), the grouping was done mostly manually. In the top 
program (durilca4linux), the grouping was done by clustering in context space.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >If formal reasoning were a solved problem in AI, then we would have
> theorem-provers that could prove deep, complex theorems unassisted.   We
> don't.  This indicates that formal reasoning is NOT a solved problem,
> because no one has yet gotten "history guided adaptive inference control" to
> really work well.  Which is IMHO because formal reasoning guidance
> ultimately requires the same kind of analogical, contextual commonsense
> reasoning as guidance of reasoning about everyday life...
>
> I mean that formal reasoning is solved in the sense of executing
> algorithms, once we can state the problems in that form. I know that some
> problems in CS are hard. I think that the intuition that mathematicians use
> to prove theorems is a language modeling problem.



It seems a big stretch to me to call theorem-proving guidance a "language
modeling problem" ... one may be able to make sense of this statement, but
only by treating the concept of language VERY abstractly, differently from
the commonsense use of the word...

Lakoff and Nunez have made strong arguments that mathematical reasoning is
guided by embodiment-related intuition.

Of course, one can model all of physical reality using formal language
theory, in which case all of intelligence becomes language modeling ... but
it's not clear to me what is gained by adopting this terminology and
perspective.


>
>
> >Also, you did not address my prior point that Hebbian learning at the
> neural level is strikingly similar to formal logic...
>
> I agree that neural networks can model formal logic. However, I don't think
> that formal logic is a good way to model neural networks.
>

I'm not talking about either of those.  Of course logic and NN's can be used
to model each other (as both are Turing-complete formalisms), but that's not
the point I was making.

The point I was making is that certain NN's and certain logic systems are
highly analogous to each other in the kinds of operations they carry out and
how they organize these operations.  Both implement very similar cognitive
processes.


>
> Language learning consists of learning associations between concepts
> (possibly time-delayed, enabling prediction) and learning new concepts by
> clustering in context space. Both of these operations can be done
> efficiently and in parallel with neural networks. They can't be done
> efficiently with logic.
>

I disagree that association-learning and clustering cannot be done
efficiently in a logic system.

I also disagree that these are the hard parts of cognition, though I do
think they are necessary parts.

>
> There is experimental evidence to back up this view. The top two
> compressors in my large text benchmark use dictionaries in which
> semantically related words are grouped together and the groups are used as
> context. In the second place program (paq8hp12any), the grouping was done
> mostly manually. In the top program (durilca4linux), the grouping was done
> by clustering in context space.


In my view, current text compression algorithms, which are essentially based
on word statistics, have fairly little to do with AGI ... so looking at
which techniques are best for statistical text compression is not very
interesting to me.

I understand that

1)
there is conceptual similarity btw text compression and AGI, in that both
involve recognition of probabilistic patterns

2)
ultimately, an AGI will be able to compress text way better than our current
compression algorithms

But nevertheless, I don't think that the current best-of-breed text
processing approaches have much to teach us about AGI.

To pursue an overused metaphor, to me that's sort of like trying to
understand flight by carefully studying the most effective high-jumpers.
OK, you might learn something, but you're not getting at the crux of the
problem...

-- Ben G





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Matt Mahoney


-- Matt Mahoney, [EMAIL PROTECTED]

--- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>It seems a big stretch to me to call theorem-proving guidance a "language 
>modeling problem" ... one may be able to make sense of this statement, but 
>only by treating the concept of language VERY abstractly, differently from the 
>commonsense use of the word...

I mean that for search problems such as theorem proving, solving differential 
equations, or integration, you look for "similar" problems in the sense of 
natural language modeling, i.e. related words or similar grammatical 
structures. We think about symbols in formal languages in fundamentally the 
same way we think about words and sentences. We learn to associate "x > y" with 
"y < x" by the same process that we learn to associate "x is over y" with "y is 
under x". As a formal language, the representation in our brains is 
inefficient, so we use pencil and paper or computers for anything that requires 
a long sequence of steps. But it is just what we need for heuristics to guide a 
theorem prover. To prove a theorem, you study "similar" theorems and try 
"similar" steps, where "similar" means they share the same or related terms and 
grammatical structures. Math textbooks contain lots of proofs, not because we 
wouldn't otherwise believe the theorems, but because they teach us to come up 
with our own proofs.
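
A toy version of that notion of "similar" (Python sketch with invented data):
rank previously seen theorems by how many surface tokens they share with the
goal, which is exactly the kind of shallow lexical similarity described here,
with no understanding of the underlying mathematics.

def tokens(s):
    return set(s.lower().replace(",", " ").split())

def similar(goal, known, k=2):
    """Rank known theorems by shared-token count with the goal."""
    return sorted(known,
                  key=lambda t: len(tokens(t) & tokens(goal)),
                  reverse=True)[:k]

known_theorems = [
    "the sum of two even numbers is even",
    "the product of two odd numbers is odd",
    "a continuous function on a closed interval is bounded",
]
goal = "the sum of two odd numbers is even"
print(similar(goal, known_theorems))
# The two arithmetic statements rank above the analysis one,
# so their proofs would be tried as templates first.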

>But neverthless, I don't think that the current best-of-breed text processing 
>approaches have much to teach us about AGI.
>
>To pursue an overused metaphor, to me that's sort of like trying to understand 
>flight by carefully studying the most effective high-jumpers.  OK, you might 
>learn something, but you're not getting at the crux of the problem...

A more appropriate metaphor is that text compression is the altimeter by which 
we measure progress.





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Pei Wang
Matt,

I really hope NARS can be simplified, but until you give me the
details, such as how to calculate the truth value in your "converse"
rule, I cannot see how you can do the same things with a simpler
design.

NARS has this conversion rule, which, with the deduction rule, can
"replace" induction/abduction, just as you suggested. However,
conclusions produced in this way usually have lower confidence than
those directly generated by induction/abduction, so this trick is not
that useful in NARS.

This result is discussed in
http://www.cogsci.indiana.edu/pub/wang.inheritance_nal.ps , page 27.

For your original claim that "The brain does not implement formal
logic", my brief answers are:

(1) So what? Who said AI must duplicate the brain? Just because we
cannot imagine another possibility?

(2) In a broad sense, "formal logic" is nothing but
"domain-independent and justifiable data manipulation schemes". I
haven't seen any argument for why AI cannot be achieved by
implementing that. After all, "formal logic" is not limited to
"First-Order Predicate Calculus plus Model Theory".

Pei


On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote:
>
>> Formal logic doesn't scale up very well in humans. That's why this
>> kind of reasoning is so unpopular. Our capacities are that
>> small and we connect to other human entities for a kind of
>> distributed problem solving. Logic is just a tool for us to
>> communicate and reason systematically about problems we would
>> mess up otherwise.
>
> Exactly. That is why I am critical of probabilistic or uncertain logic. 
> Humans are not very good at logic and arithmetic problems requiring long 
> sequences of steps, but duplicating these defects in machines does not help. 
> It does not solve the problem of translating natural language into formal 
> language and back. When we need to solve such a problem, we use pencil and 
> paper, or a calculator, or we write a program. The problem for AI is to 
> convert natural language to formal language or a program and back. The formal 
> reasoning we already know how to do.
>
> Even though a language model is probabilistic, probabilistic logic is not a 
> good fit. For example, in NARS we have deduction (P->Q, Q->R) => (P->R), 
> induction (P->Q, P->R) => (Q->R), and abduction (P->R, Q->R) => (P->Q). 
> Induction and abduction are not strictly true, of course, but in a 
> probabilistic logic we can assign them partial truth values.
>
> For language modeling, we can simplify the logic. If we accept the "converse" 
> rule (P->Q) => (Q->P) as partially true (if rain predicts clouds, then clouds 
> may predict rain), then we can derive induction and abduction from deduction 
> and converse. For induction, (P->Q, P->R) => (Q->P, P->R) => (Q->R). 
> Abduction is similar. Allowing converse, the statement (P->Q) is really a 
> fuzzy equivalence or association (P ~ Q), e.g. (rain ~ clouds).
>
> A language model is a set of associations between concepts. Language learning 
> consists of two operations carried out on a massively parallel scale: forming 
> associations and forming new concepts by clustering in context space. An 
> example of the latter is:
>
> the dog is
> the cat is
> the house is
> ...
> the (noun) is
>
> So if we read "the glorp is" we learn that "glorp" is a noun. Likewise, we 
> learn something of its meaning from its more distant context, e.g. "the glorp 
> is eating my flowers". We do this by the transitive property of association, 
> e.g. (glorp ~ eating flowers ~ rabbit).
>
> This is not to say NARS or other systems are wrong, but rather that they have 
> more capability than we need to solve reasoning in AI. Whether the extra 
> capability helps or not is something that requires experimental verification.
>
> -- Matt Mahoney, [EMAIL PROTECTED]
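
A toy sketch of the context-clustering idea quoted above (the example sentences
and the noun test are invented for illustration; a novel word inherits the
category of the known words that share its contexts):

from collections import defaultdict

sentences = ["the dog is", "the cat is", "the house is", "the glorp is"]

contexts = defaultdict(set)          # word -> set of (left, right) contexts
for s in sentences:
    w = s.split()
    for i, word in enumerate(w):
        left = w[i - 1] if i > 0 else "<s>"
        right = w[i + 1] if i + 1 < len(w) else "</s>"
        contexts[word].add((left, right))

known_nouns = {"dog", "cat", "house"}
noun_contexts = set.union(*(contexts[n] for n in known_nouns))
# "glorp" shares the ("the", "is") context with the known nouns, so it clusters
# with them -- i.e. we learn it is (probably) a noun.
print(contexts["glorp"] & noun_contexts)   # {('the', 'is')}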




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner

Matt:  A more appropriate metaphor is that text compression is the altimeter
by which we measure progress.  (1)

Matt,

Now that sentence is a good example of general intelligence - forming a new
connection between domains - altimeters and progress.

Can you explain how you could have arrived at it by

A) logic (incl. NARS or PLN or any other kind)
B) mathematics

or how you would *understand* it by any means of

C) text compression,
D) predictive analysis of sentences/texts in Google.

Can you explain how any of the rational systems currently being discussed
here can be applied to any problem of general intelligence whatsoever?

If you find the above problem a little eccentric, try the general intelligence
problem Ben effectively set himself:

(2) how can a dog (or human)  find a connection between the domain of
fetching a ball and the domain of hide-and-seek?

Obviously, a GI must be able to do this sort of thing - crossing domains in
this way is absolutely central to GI. If it can't, it's not a true AGI.

Again, can you explain how A) logic, B) maths, C) text compression or D)
predictive analysis etc. can be used to solve this problem (or cross any
uncrossed domains)?








Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
> >
> >To pursue an overused metaphor, to me that's sort of like trying to
> understand flight by carefully studying the most effective high-jumpers.
> OK, you might learn something, but you're not getting at the crux of the
> problem...
>
> A more appropriate metaphor is that text compression is the altimeter by
> which we measure progress.
>

A major problem with this idea is that, according to this
"altimeter", gzip is vastly more intelligent than a chimpanzee or a
two-year-old child.

I guess this shows there is something profoundly wrong with the idea...

ben g





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner


Pei:In a broad sense, "formal logic" is nothing but
"domain-independent and justifiable data manipulation schemes". I
haven't seen any argument for why AI cannot be achieved by
implementing that

Have you provided a single argument as to how logic *can* achieve AI - or 
to be more precise, Artificial General Intelligence, and the crossing of 
domains? [See attached post to Matt]


The line of argument above is classically indirect (and less than logical?). 
It's comparable to:


SHE:  Have you been unfaithful to me?
HE:  Why would I be unfaithful to you?

SHE: You've been unfaithful to me, haven't you?
HE: What possible reason have you for thinking I've been unfaithful?

The task you should by now have achieved is providing a direct argument why 
AGI *can* be achieved by your logic, not expecting others to show that it 
can't be.


(And can you provide an example of a single surprising metaphor or analogy 
that has ever been derived logically? Jiri said he could - but didn't.)








Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
>
> (And can you provide an example of a single surprising metaphor or analogy
> that has ever been derived logically? Jiri said he could - but didn't.)



It's a bad question -- one could derive surprising metaphors or analogies by
random search, and that wouldn't prove anything useful about the AGI
potential of random search ...


ben





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner

  Mike: (And can you provide an example of a single surprising metaphor or
analogy that has ever been derived logically? Jiri said he could - but didn't.)

  Ben: It's a bad question -- one could derive surprising metaphors or analogies
by random search, and that wouldn't prove anything useful about the AGI
potential of random search ...

  Ben,

  When has random search produced surprising metaphors? And how did or would
the system know that it had done so - how would it be able to distinguish
valid from invalid metaphors, and surprising from unsurprising ones?

  You have just put forward, I suggest, a hypothetical/false and evasive 
argument.

  Your task, like Pei's, is surely to provide an argument, or some evidence, as
to how the logical system you use can lead in any way to the crossing/
connection of previously uncrossed/unconnected domains - the central task and
problem of AGI. Surprising metaphors and analogies are just two examples of
such crossing of domains. (And jokes are another.)

  You have effectively tried to argue, via the (I suggest) false random search
example, that it is impossible to provide such an argument.

  The truth is - I'm betting - that you're just making excuses: neither you
nor Pei has ever actually proposed an argument as to how logic can solve the
problem of AGI and, after all these years, you simply don't have one. If you
have or do, please link me.

  P.S. The counterargument is v. simple. A connection of domains via 
metaphor/analogy or any other means is surprising if it does not follow from 
any known premises and  rules. There were no known premises and rules for Matt 
to connect altimeters and the measurement of progress, or, if you remember my 
visual pun, for connecting the head of a clarinet and the head of a swan. Logic 
depends on inferences from known premises and rules. Logic is therefore quite 
incapable of - and has always been expressly prohibited from - making 
surprising connections (and therefore solving AGI). It is dedicated to the 
maintenance not the breaking of rules.

  "As for Logic, its syllogisms and the majority of its other precepts are of 
avail rather in the communication of what we already know, or... even in 
speaking without judgment of things of which we are ignorant, than in the 
investigation of the unknown."
  Descartes

  If I and Descartes are right - and there is every reason to think so
(including the odd million logically inexplicable metaphors, not to mention
many millions of logically inexplicable jokes) - you surely should be
addressing this matter urgently, not evading it.

  P.P.S. You should also bear in mind that a vast number of jokes (which 
involve the surprising crossing of domains) explicitly depend on ILLOGICALITY. 
Take the classic Jewish joke about the woman who, told that her friend's son 
has the psychological problem of an Oedipus Complex, says:
  "Oedipus Schmoedipus, what does it matter as long as he loves his mother?" 
And your logical explanation is..?




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
Mike,

I understand that "my task" is to create an AGI system, and I'm working on
it ...

The fact that my in-development, partial AGI system has not yet demonstrated
advanced intelligence, does not imply that it will not do so once completed.

No, my AGI system has not yet discovered surprising metaphors, because it is
still at an early stage of development.  So what.  An airplane not yet fully
constructed doesn't fly anywhere either.

My point was that asking whether a certain type of software system has ever
produced a surprising metaphor -- is not a very interesting question.  I am
quite sure that the chatbot MegaHAL has produced many surprising metaphors.
For instance, see his utterances on

http://megahal.alioth.debian.org/Classic.html

including

AMAZING GRACE, HOW SWEET THE SOUND OF ONE OR MORE NUMBERS REPRESENTED IN
DIGITAL FORM.

HAL IS A CRAZY COW WHEN IT SINGS HALLELUJA

LIFE'S BUT A GREEN DUCK WITH SOY SAUCE

CHESS IS A FUN SPORT, WHEN PLAYED WITH SHOT GUNS.

KEN KESEY WROTE "ONE FLEW OVER THE CENTER OF THE CUCKOLDED LIZARD MAN, WHO
STRAYED FROM HIS MISTAKES WHEN HE IS A MADEUP WORD.

COWS FLY LIKE CLOUDS BUT THEY ARE NEVER COMPLETELY SUCCESSFUL

JESUS IS THE BEST RADIO PRODUCER IN THE BEANS.

MegaHAL is kinda creative and poetic, and he does generate some funky and
surprising metaphors ...  but alas he is not an AGI...

-- Ben





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson



--

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
and not to forget...

SATAN GUIDES US TELEPATHICLY THROUGH RECTAL THERMOMETERS. WHY DO YOU THINK
ABOUT META-REASONING?


Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben,

Not one of those metaphors works.

You have in effect accepted the task of providing a philosophy and explanation 
of your AGI and your logic - you have produced a great deal of such stuff 
(quite correctly). But none of it includes the slightest explanation of how 
logic can produce AGI - or, to use your favourite metaphor, how the plane will 
take off. I don't know the history of the Wright brothers, but I'll confidently 
bet that they had at least an idea or two, from early on, of how and why their 
contraption would fly. They didn't entirely "wing it."


Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
Mike

If you want an explanation of why I think my AGI system will work, please
see

http://opencog.org/wiki/OpenCogPrime:WikiBook

The argument is complex and technical and it would not be a good use of my
time to recapitulate it via email!!

Personally I do think the metaphor

COWS FLY LIKE CLOUDS BUT THEY ARE NEVER COMPLETELY SUCCESSFUL

works excellently, which is of course another problem with your test of
metaphor production: what constitutes a successful or interesting metaphor
is very much a matter of taste!!

-- Ben


Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Mike Tintner
Ben, Just to be clear, when I said "no argument re how logic will produce 
AGI.." I meant, of course, as per the previous posts, "..how logic will 
[surprisingly] cross domains etc". That, for me, is the defining characteristic 
of AGI. All the rest is narrow AI.  




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Eric Burton
Hmm. My bot mostly repeats what it hears.

 Monie: haha. r u a bot ?

 cyberbrain: not to mention that in a theory complex enough with
a large enough number of "parameters". one can "interpret" anything.
even things that are completely physically inconsistent with each
other. i suggest actually making a game of this. and double-blind
feeding a variety of results, ranging from truthful to completely
fictional to string theorists. and seeing the frequency of successful
seperation. which i altogether expect to be dismal :-)

 LinusVanPelt: hi all! i have some stupid html issue. it is quite
simple thing but i can't find answer i want to set picture as
background(it should be in the middle...both horizontal and vertical).
and to write some text and inputs in the middle of that picture.
anyone? first time! second! (

... mostly

I used to really get a kick out of any first-person declarative claims it made.

 my internet is not working with hardy kernel.. but with gutys
kernel it works fine

 considering debian for my hand-me-down MacBookPro; comments welcome

 Bah I need to get my hair done

 I have the solution. Blacks commit more crimes because they're
more religious.

Sometimes it has these intense exchanges, which can be pretty funny. I
think this girl thought the bot was an ex or something

 who are you and why are you talking to me

 hypatia: Who gives a shit about having to borrow her classmate's
phones or tunnel past Mom's pit-bull firewall, but Dad doesn't assume
that if you could tell that bitch CuntBaby i said

 [bot] = george?

... In high traffic situations all of this seems to suffice for human
mimicry. It's worse when he addresses people by name...

 I specified

 ik: I think it is something inate in people.  They either like
maths things or they don't.

 [bot]: what the fuck are you talking about?

 yes

So he's programmed for evasiveness when interrogated.

 what is the meaning of life, the universe and everything?

 Caelum: rarely more than a half-dozen

 new here?

 solafide: yes

 where from?

 solafide: you betcha

 where from?

 solafide: that's exactly right

 which country?

 solafide: rarely more than a half-dozen

 canada here

 solafide: al7amdulillah. tabaraka Allah be carefull so you do
torrents, i would recommend instlaling utorrent under wine, as it will
take it?

 huh?

 solafide: well, you really have to know what you're doing

 what in the world are you talking about?

 solafide: that's exactly right

 do you understand english?

 solafide: well, you really have to know what you're doing

 yes indeed

 solafide: yes

 I'm in canada

 solafide: uh

 you may be in the middle east right?

 solafide: it's hard to be sure

 how about on the moon?

 solafide: it's hard to be sure

 you are a strange fellow

 solafide: you can't go the same weight class as a single-story
stucco house with Arctor, the suspect

 look

 solafide: look at what point will the concept of reality

 this conversation isn't going wel

Actually one of my favourite things that can happen when working on a
bot like this is for people in its surroundings to decide it's a human
being who just doesn't speak English very well. That changes the focus
to what it's "trying to say"...

"herodot's formula.. \sqrt{s(s-a)(s-b)(s-c)}"

I'd like to automate mining and validation of things it knows like
this. Much of the KB is less tangible...





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Eric Burton
Ok, most of its replies here seem to be based on the first word of
what it's replying to. But it's really capable of more lateral
connections.

 yeah i use it to add shortcuts for some menu functions i use a lot

 wijnand: TOMACCO!!!





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Jiri Jelinek
On Sat, Sep 20, 2008 at 9:59 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> (And can you provide an example of a single surprising metaphor or analogy
> that has ever been derived logically? Jiri said he could - but didn't.)

Goal: finding a metaphor/analogy to please a girl, e.g. "You are my
sunshine", but going for uniqueness, and let's say the AGI knows that
the girl likes animals. My AGI [or even a narrow AI written for this
kind of task] could easily and logically come up with stuff like "You are
my pig" (since, as Wikipedia states, "pigs .. are known for their
exceptional intelligence"). Western girls would probably find such a
statement kind of surprising from someone they expect compliments
from. The AGI may realize that there are things about pigs that aren't
that pleasing, but think about sunshine... it can be pretty deadly.
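
A minimal sketch of that kind of selection (hypothetical illustration; the tiny
knowledge base and the "commonness" scores are invented for the example):

# vehicle: (admired trait, how commonly it is already used as a compliment)
knowledge = {
    "sunshine": ("warmth", 0.90),
    "dove":     ("gentleness", 0.60),
    "pig":      ("exceptional intelligence", 0.05),
}

def unique_compliment(she_likes_animals=True):
    candidates = knowledge if she_likes_animals else {"sunshine": knowledge["sunshine"]}
    # Going for uniqueness: pick the admired trait least often used as a compliment.
    vehicle = min(candidates, key=lambda v: candidates[v][1])
    return "You are my " + vehicle, candidates[vehicle][0]

print(unique_compliment())   # ('You are my pig', 'exceptional intelligence')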

Regards,
Jiri




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Mike Tintner <[EMAIL PROTECTED]> wrote:

> Matt:  A more appropriate metaphor is that text compression
> is the altimeter
> by which we measure progress.  (1)
> 
> Matt,
> 
> Now that sentence is a good example of general intelligence
> - forming a new
> connection between domains - altimeters and progress.
> 
> Can you explain how you could have arrived at it by
> 
> A)logic ( incl. Nars or PLN or any other kind)
> B)mathematics
> 
> or how you would *understand* it by any means of
> 
> C) text compression,
> D) predictive analysis of sentences/texts in Google.
> 
> Can you explain how any of the rational systems,  currently
> being discussed
> here, can be applied to any problem of general intelligence
> whatsoever?

Certainly. A metaphor is a type of analogy: "intelligence is to compression as 
flight is to ___?" The general form is "A is to B as C is to X", and solve for 
X. Roughly, the solution is

X = B + C - A

where A, B, C, and X are vectors in semantic space, i.e. rows of a matrix M 
such that M[i,j] is the probability of words i and j appearing near each other 
(e.g. in the same paragraph or document) in a large text corpus. Variations of 
this technique very nearly equal human performance on the analogy section of 
the college SAT exam.

http://aclweb.org/aclwiki/index.php?title=SAT_Analogy_Questions
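
A small numeric sketch of the X = B + C - A recipe (the toy co-occurrence rows
below are invented; cosine similarity picks the nearest remaining word):

import numpy as np

words = ["intelligence", "compression", "flight", "altitude", "banana"]
# rows of M: made-up co-occurrence profiles for each word
M = np.array([
    [1.0, 0.8, 0.1, 0.1, 0.0],   # intelligence
    [0.8, 1.0, 0.1, 0.2, 0.0],   # compression
    [0.1, 0.1, 1.0, 0.8, 0.0],   # flight
    [0.1, 0.2, 0.8, 1.0, 0.0],   # altitude
    [0.0, 0.0, 0.0, 0.0, 1.0],   # banana
])
row = {w: M[i] for i, w in enumerate(words)}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# "intelligence is to compression as flight is to X":  X = B + C - A
x = row["compression"] + row["flight"] - row["intelligence"]
best = max((w for w in words if w not in ("intelligence", "compression", "flight")),
           key=lambda w: cosine(x, row[w]))
print(best)   # altitude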

The leading contender is latent relational analysis (LRA), which means applying 
the above equation to a matrix M that has been compressed using singular value 
decomposition (SVD). SVD consists of factoring the matrix M = USV, where U and 
V are orthonormal and S is diagonal (the singular values), then tossing out all 
but the largest elements of S. This allows U and V to be reduced from, say, 
n x n (for an n-word vocabulary) to n x 200. SVD in effect applies the 
transitive property of semantic relatedness, the notion that if A is near B 
and B is near C, then A is near C.

Gorrell gives an efficient algorithm for computing the SVD using a neural 
network. In this example the network would be n x 200 x n, where U and 
V are the weight matrices and the retained elements of S are the hidden units. 
Hidden units are added as training proceeds, such that the size of the weight 
matrices is approximately the size of the text corpus read so far.

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.60.7961
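
A minimal sketch of the SVD compression step (numpy; the rank-k truncation is
the "toss out all but the largest elements of S" operation, and the toy matrix
in the usage line is invented):

import numpy as np

def truncated_svd(M, k):
    """Keep only the k largest singular values of M (the LRA compression step)."""
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    k = min(k, len(S))                      # guard for small matrices
    return U[:, :k], S[:k], Vt[:k, :]

def relatedness(M, i, j, k=200):
    """Semantic relatedness of words i and j in the rank-k approximation of M."""
    U, S, Vt = truncated_svd(np.asarray(M, dtype=float), k)
    Mk = (U * S) @ Vt                       # rank-k reconstruction of M
    return float(Mk[i, j])

M = np.eye(4) + 0.1                          # toy co-occurrence counts
print(relatedness(M, 0, 1, k=2))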

To answer your question, this is most like (D), predictive analysis of text, 
and would be a useful technique for text compression.


-- Matt Mahoney, [EMAIL PROTECTED]







Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>>A more appropriate metaphor is that text compression is the altimeter by 
>>which we measure progress.

>A major problem with this idea is that, according to this "altimeter", gzip is 
>vastly more intelligent than a chimpanzee or a two-year-old child.
>
>I guess this shows there is something profoundly wrong with the idea...

No it doesn't. It is not gzip that is intelligent. It is the model that gzip 
uses to predict text. Shannon showed in 1950 that humans can predict successive 
characters in text well enough that each character conveys on average about 1 
bit of information. That is a level not yet achieved by any text compressor, 
although we are close. (More precise tests of human prediction are needed.)

Don't confuse ability to compress text with intelligence. Rather, the size of 
the output is a measure of the intelligence of the model. A compressor does two 
things that no human brain can do: repeat the exact sequence of predictions 
during decompression, and ideally encode the predicted symbols in log(1/p) bits 
(e.g. arithmetic coding). These capabilities are easily implemented in 
computers and are independent of the predictive power of the model, which is 
what we measure. I addressed this in
http://cs.fit.edu/~mmahoney/compression/rationale.html
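
A toy sketch of the measurement idea (the order-0 letter model here is invented
for illustration; an ideal arithmetic coder needs log2(1/p) bits per symbol, so
the summed code length is what the compressed output size measures):

import math
from collections import Counter

def ideal_code_length(text, probs):
    """Bits an ideal arithmetic coder would need, given per-symbol probabilities."""
    return sum(math.log2(1.0 / probs[c]) for c in text)

text = "the cat sat on the mat"
counts = Counter(text)
model = {c: n / len(text) for c, n in counts.items()}   # order-0 model of this text

bits = ideal_code_length(text, model)
print("%.1f bits, %.2f bits per character" % (bits, bits / len(text)))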

Now if you want to compare gzip, a chimpanzee, and a 2 year old child using 
language prediction as your IQ test, then I would say that gzip falls in the 
middle. A chimpanzee has no language model, so it is lowest. A 2 year old child 
can identify word boundaries in continuous speech, can semantically associate a 
few hundred words, and recognize grammatically correct phrases of 2 or 3 words. 
This is beyond the capability of gzip's model (substituting text for speech), 
but not of some of the top compressors.

-- Matt Mahoney, [EMAIL PROTECTED]






Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
> Now if you want to compare gzip, a chimpanzee, and a 2 year old child using
> language prediction as your IQ test, then I would say that gzip falls in the
> middle. A chimpanzee has no language model, so it is lowest. A 2 year old
> child can identify word boundaries in continuous speech, can semantically
> associate a few hundred words, and recognize grammatically correct phrases
> of 2 or 3 words. This is beyond the capability of gzip's model (substituting
> text for speech), but not of some of the top compressors.



Hmmm I am pretty strongly skeptical of intelligence tests that do not
measure the actual functionality of an AI system, but rather measure the
theoretical capability of the structures or processes or data inside the
system...

The only useful way I know how to define intelligence is **functionally**,
in terms of what a system can actually do ...

A 2 year old cannot get itself to pay attention to predicting language for
more than a few minutes, so in a functional sense, it is a much stupider
language predictor than gzip ...

ben g





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sat, 9/20/08, Pei Wang <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> I really hope NARS can be simplified, but until you give me the
> details, such as how to calculate the truth value in your "converse"
> rule, I cannot see how you can do the same things with a simpler
> design.

You're right. Given P(A), P(B), and P(A->B) = P(B|A), you could derive P(A|B) 
using Bayes law. But you can't assume this knowledge is available.
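
A short worked example of that converse-by-Bayes step (the numbers are invented
for illustration):

# P(A|B) = P(B|A) * P(A) / P(B), with A = rain, B = clouds
p_rain, p_clouds = 0.1, 0.3
p_clouds_given_rain = 0.95                     # rain predicts clouds strongly
p_rain_given_clouds = p_clouds_given_rain * p_rain / p_clouds
print(round(p_rain_given_clouds, 3))           # 0.317 -- clouds predict rain only weakly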

> For your original claim that "The brain does not
> implement formal
> logic", my brief answers are:
> 
> (1) So what? Who said AI must duplicate the brain? Just
> because we cannot image another possibility?

It doesn't. The problem is that none of the probabilistic logic proposals I 
have seen address the problem of converting natural language to formal 
statements. I see this as a language modeling problem that can be addressed 
using the two fundamental language learning processes, which are learning to 
associate time-delayed concepts and learning new concepts by clustering in 
context space. Arithmetic and logic can be solved directly in the language 
model by learning the rules to convert to formal statements and learning the 
rules for manipulating the statements as grammar rules, e.g. "I had $5 and 
spent $2" -> "5 - 2" -> "3". But a better model would deviate from the human 
model and use an exact formal logic system (such as a calculator) when long 
sequences of steps or lots of variables are required. My vision of AI is more 
like a language model that knows how to write programs and has a built in 
computer. Neither component requires probabilistic logic.
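
A toy sketch of that conversion idea (the single pattern rule below is invented;
a real model would learn many such mappings rather than have one hard-coded):

import re

def to_formal(sentence):
    """Convert one learned sentence pattern into a formal expression."""
    m = re.match(r"I had \$(\d+) and spent \$(\d+)", sentence)
    if not m:
        raise ValueError("no conversion rule learned for this sentence")
    return m.group(1) + " - " + m.group(2)

expr = to_formal("I had $5 and spent $2")   # "5 - 2"
print(expr, "=", eval(expr))                # 5 - 2 = 3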

-- Matt Mahoney, [EMAIL PROTECTED]





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>Hmmm I am pretty strongly skeptical of intelligence tests that do not 
>measure the actual functionality of an AI system, but rather measure the 
>theoretical capability of the structures or processes or data inside the 
>system...
>
>The only useful way I know how to define intelligence is **functionally**, in 
>terms of what a system can actually do ... 
>
>A 2 year old cannot get itself to pay attention to predicting language for 
>more than a few minutes, so in a functional sense, it is a much stupider 
>language predictor than gzip ... 

Intelligence is not a point on a line. A calculator could be more intelligent 
than any human, depending on what you want it to do.

Text compression measures the capability of a language model, which is an 
important, unsolved problem in AI. (Vision is another).

I'm not building AGI. (That is a $1 quadrillion problem). I'm studying 
algorithms for learning language. Text compression is a useful tool for 
measuring progress (although not for vision).

-- Matt Mahoney, [EMAIL PROTECTED]





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
>
>
> I'm not building AGI. (That is a $1 quadrillion problem). I'm studying
> algorithms for learning language. Text compression is a useful tool for
> measuring progress (although not for vision).


OK, but the focus of this list is supposed to be AGI, right ... so I suppose
I should be forgiven for interpreting your statements in an AGI context ;-)

Text compression is IMHO a terrible way of measuring incremental progress
toward AGI.  Of course it  may be very valuable for other purposes...

ben g





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>Text compression is IMHO a terrible way of measuring incremental progress 
>toward AGI.  Of course it  may be very valuable for other purposes...

It is a way to measure progress in language modeling, which is an important 
component of AGI as well as many NLP applications such as speech recognition, 
language translation, OCR, and CMR. It has been used in speech recognition 
research since the early 1990's and correlates well with word error rate.

Training will be the overwhelming cost of AGI. Any language model improvement 
will help reduce this cost. I estimate that each one-byte improvement in 
compression on a 1 GB text file will lower the cost of AGI by a fraction of 
about 10^-9, or roughly $1 million of the $1 quadrillion total.
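
The arithmetic behind that estimate, as a quick sketch (using the $1 quadrillion
figure from the earlier post):

fraction = 1 / 1e9            # one byte out of a 1 GB (10^9 byte) file
total_cost = 1e15             # ~$1 quadrillion, the figure quoted earlier
print("$%,.0f" % (fraction * total_cost))   # $1,000,000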

-- Matt Mahoney, [EMAIL PROTECTED]





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >Text compression is IMHO a terrible way of measuring incremental progress
> toward AGI.  Of course it  may be very valuable for other purposes...
>
> It is a way to measure progress in language modeling, which is an important
> component of AGI


That is true, but I think that measuring progress in AGI **components** is a
very poor approach to measuring progress toward AGI.

Focusing on testing individual system components tends to lead AI developers
down a path of refining system components for optimum functionality on
isolated, easily-defined test problems that may not have much to do with
general intelligence.

It is possible of course that the right path to AGI is to craft excellent
components (as verified on various isolated test problems) and then glue
them together in the right way.

On the other hand, if intelligence is in large part a systems phenomenon that
has to do with the interconnection of reasonably-intelligent components
in a reasonably-intelligent way (as I have argued in many prior
publications), then testing the intelligence of individual system components
is largely beside the point: it may be better to have moderately-intelligent
components hooked together in an AGI-appropriate way, than
extremely-intelligent components that are not able to cooperate with other
components sufficiently usefully.



> as well as many NLP applications such as speech recognition, language
> translation, OCR, and CMR. It has been used in speech recognition research
> since the early 1990's and correlates well with word error rate.
>
> Training will be the overwhelming cost of AGI. Any language model
> improvement will help reduce this cost. I estimate that each one byte
> improvement in compression on a 1 GB text file will lower the cost of AGI by
> a factor of 10^-9, or roughly $1 million.
>
> -- Matt Mahoney, [EMAIL PROTECTED]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome " - Dr Samuel Johnson





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Matt Mahoney
--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

>>--- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

>>>Text compression is IMHO a terrible way of measuring incremental progress 
>>>toward AGI.  Of course it  may be very valuable for other purposes...

>>It is a way to measure progress in language modeling, which is an important 
>>component of AGI 
>That is true, but I think that measuring progress in AGI **components** is a 
>very poor approach to measuring progress toward AGI.

>Focusing on testing individual system components tends to lead AI developers 
>down a path of refining system components for optimum functionality on 
>isolated, easily-defined test problems that may not have much to do with 
>general intelligence.  

A language model by itself can pass the Turing test because it knows P(A|Q) 
for any question Q and answer A. However, to model a single person, the 
training text would have to be a transcript of all that person's communication 
since birth. We don't have that kind of training data, and the result would 
not be very useful anyway. I would rather use a system trained on Wikipedia, 
and that choice of corpus doesn't affect the learning algorithms.
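
As a minimal sketch of what "knows P(A|Q)" means operationally (a toy counting 
model over a made-up transcript of question-answer pairs; a real language 
model would have to generalize to questions it has never seen rather than just 
count):

import random
from collections import Counter, defaultdict

# Hypothetical training transcript of (question, answer) pairs.
transcript = [
    ('how are you', 'fine thanks'),
    ('how are you', 'not bad'),
    ('what is your name', 'matt'),
]

counts = defaultdict(Counter)
for q, a in transcript:
    counts[q][a] += 1

def p_answer_given_question(a, q):
    # Estimate P(A|Q) by counting; returns 0 for unseen questions.
    seen = counts[q]
    return seen[a] / sum(seen.values()) if seen else 0.0

def reply(q):
    # Sample an answer in proportion to P(A|Q); assumes q appeared in training.
    answers, weights = zip(*counts[q].items())
    return random.choices(answers, weights=weights)[0]

print(p_answer_given_question('fine thanks', 'how are you'))   # 0.5
print(reply('how are you'))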

One can argue that a system isn't AGI if it can't see, walk, experience human 
emotions, etc. There isn't a compression test for these other aspects of 
intelligence, but so what?

-- Matt Mahoney, [EMAIL PROTECTED]






Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread David Hart
On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

>
> Training will be the overwhelming cost of AGI. Any language model
> improvement will help reduce this cost.


How do you figure that training will cost more than designing, building and
operating AGIs? Unlike training a human, training an AGI for a specific
task need occur only once, and that training can be copied 'for free' from
AGI-mind to AGI-mind. If anything, training AGIs will cost ludicrously
*less* than training humans. Training the first few generations of AGI
individuals (and their proto-AGI precursors) may be more expensive than
training human individuals, but the training cost curve (assuming training
for only the same things that humans can do, not for extra-human skills)
will eventually approach zero as this acquired knowledge is freely shared,
FOSS-style, among the community of AGIs (of course, this view assumes a soft
takeoff).

-dave





Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-22 Thread Russell Wallace
On Mon, Sep 22, 2008 at 1:34 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> On the other hand, if intelligence is in large part a systems phenomenon,
> that has to do with the interconnection of reasonably-intelligent components
> in a reasonably-intelligent way (as I have argued in many prior
> publications), then testing the intelligence of individual system components
> is largely beside the point: it may be better to have moderately-intelligent
> components hooked together in an AGI-appropriate way, than
> extremely-intelligent components that are not able to cooperate with other
> components sufficiently usefully.

I agree with this as far as it goes; certainly, progress in integrating
separately optimized AI components has hitherto been somewhere between
minimal and nonexistent. And one solution is to develop a set of components
as parts of a single system.

Another solution, which I'm currently looking at, is to develop a way
to turn code into procedural knowledge, so that separately optimized
components can be used in new contexts their programmers did not
envisage.




Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-22 Thread Matt Mahoney
--- On Sun, 9/21/08, David Hart <[EMAIL PROTECTED]> wrote:

>On Mon, Sep 22, 2008 at 10:08 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

>> Training will be the overwhelming cost of AGI. Any language model 
>> improvement will help reduce this cost. 

> How do you figure that training will cost more than designing, building and 
> operating AGIs? Unlike training a human, training an AGI for a specific 
> task need occur only once, and that training can be copied 'for free' from 
> AGI-mind to AGI-mind. If anything, training AGIs will cost ludicrously *less* 
> than training humans. Training the first few generations of AGI individuals 
> (and their proto-AGI precursors) may be more expensive than training human 
> individuals, but the training cost curve (assuming training for only the same 
> things that humans can do, not for extra-human skills) will eventually 
> approach zero as this acquired knowledge is freely shared, FOSS-style,  among 
> the community of AGIs (of course, this view assumes a soft takeoff).

An organization is most efficient when its members specialize. It is true that 
we don't need to build millions of schools to train child AIs. But every person 
has some knowledge that is unique to their job. For example, they know their 
customers, vendors, and co-workers, and whom to go to for information. It costs 
a company a couple of years' salary to replace a white-collar employee, 
including the hidden cost of the new employee repeating all the mistakes the 
previous employee made while learning the job. I estimate that about 5% of the 
knowledge useful to your job cannot be copied from someone else. This fraction 
will increase as the economy becomes more efficient and there is more 
specialization of job functions.

-- Matt Mahoney, [EMAIL PROTECTED]



