Re: [agi] Seeking CYC critiques

2008-11-30 Thread Robin Hanson


Hi Stephen, nice to meet you. When I search the web for critiques
of CYC, I can only find stuff from '90-95. If no one has written
critiques of CYC since then, perhaps you could comment on how applicable
those early critiques would be to the current system.

For example, would CYC today at least better answer Vaughan Pratt's
test questions from http://boole.stanford.edu/cyc.html? Has there been
more progress toward developing a neutral source of questions to use to
evaluate how performance improves with time and with implementation
variations?
At 01:57 AM 11/30/2008, Stephen Reed wrote:
Hi Robin,
There are no Cyc critiques that I know of in the last few years. I
was employed seven years at Cycorp until August 2006, and my non-compete
agreement expired a year later.

An interesting competition was held by Project Halo, in which Cycorp
participated along with two other research groups to demonstrate
human-level competency in answering chemistry questions. Results are
here. Although Cycorp performed principled deductive inference giving
detailed justifications, it was judged to have performed worse due to
the complexity of its justifications and its long running times. The
other competitors used special-purpose problem-solving modules, whereas
Cycorp used its general-purpose inference engine, extended for chemistry
equations as needed.
My own interest is in natural language dialog systems for rapid knowledge
formation. I was Cycorp's first project manager for its participation in
the DARPA Rapid Knowledge Formation project, where it performed to
DARPA's satisfaction, but subsequently its RKF tools never lived up to
Cycorp's expectations that subject matter experts could rapidly extend
the Cyc KB without Cycorp ontological engineers having to intervene. A
Cycorp paper describing its KRAKEN system is here.

I would be glad to answer questions about Cycorp and Cyc technology to
the best of my knowledge, which is growing somewhat stale at this
point.
What are the best available critiques of CYC as it exists now (vs. soon
after the project started)?

Robin Hanson [EMAIL PROTECTED]
http://hanson.gmu.edu 
Research Associate, Future of Humanity Institute at Oxford
University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-
703-993-2326 FAX: 703-993-2323
 



  

  


Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Could you give me a little more detail about your thoughts on this?
 Do you think the problem of increasing uncomputableness of complicated
 complexity is the common thread found in all of the interesting,
 useful but unscalable methods of AI?
 Jim Bromer

 Well, I think that dealing with combinatorial explosions is, in
 general, the great unsolved problem of AI. I think the opencog prime
 design can solve it, but this isn't proved yet...

Good luck with that!

 In general, the standard AI methods can't handle pattern recognition
 problems requiring finding complex interdependencies among multiple
 variables that are obscured among scads of other variables
 The human mind seems to do this via building up intuition via drawing
 analogies among multiple problems it confronts during its history.

Yes: people learn one problem, and it then helps them to learn other,
similar ones. Is there any AI software that does this? I'm not aware
of any.

I have proposed a problem domain called function predictor whose
purpose is to allow an AI to learn across problem sub-domains,
carrying its learning from one domain to another. (See
http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )
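As a toy illustration of the kind of cross-domain carry-over such a domain would reward (the setup below is a hypothetical sketch, not part of the actual proposal): a learner fits one function, then reuses what it learned as the starting point for a related one.

```python
import numpy as np

# Toy sketch of transfer in a "function predictor" style setting:
# fit a linear model on task A, then reuse its weights as the starting
# point for the related task B. All names and numbers are illustrative
# assumptions, not drawn from the proposal itself.

def fit(X, y, w0, steps, lr=0.1):
    """Plain gradient descent on mean squared error."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y_a = X @ w_true                  # task A
y_b = X @ (w_true + 0.1)          # task B: a closely related function

w_a = fit(X, y_a, np.zeros(3), steps=200)
w_cold = fit(X, y_b, np.zeros(3), steps=20)   # B from scratch, few steps
w_warm = fit(X, y_b, w_a, steps=20)           # B warm-started from A

# With the same small step budget, the warm start lands closer to
# task B's true weights than learning B from scratch does.
```

The point is only the shape of the experiment: the competition would score how much a learner's history on earlier tasks accelerates later ones.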

I also think it would be useful if there was a regular (maybe annual)
competition in the function predictor domain (or some similar domain).
A bit like the Loebner Prize, except that it would be more useful to
the advancement of AI, since the Loebner prize is silly.

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=120640061-aded06
Powered by Listbox: http://www.listbox.com


Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Ben Goertzel
Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

How does that differ from what is generally called "transfer learning"?

ben g




[agi] Re: Glocal memory

2008-11-30 Thread Ben Goertzel
A little more poking around reveals further evidence that supports the
glocal model of brain memory (they talk about a
distributed plus hub model, which is part of the glocality idea,
though missing the nonlinear-attractor aspect that I think is critical
to distributed memory)

http://brain.guides.britannica.com/what-happens-when-things-go-wrong/on-the-cutting-edge-of-brain-research/on-the-cutting-edge-of-brain-research/82/3/

The paper is at

http://www.nature.com/nrn/journal/v8/n12/full/nrn2277.html

and some mildly critical commentary at

http://talkingbrains.blogspot.com/2008/01/semantics-and-brain-more-on-atl-as-hub.html

As Richard L would likely point out, the authors' data supports plenty
of different interpretations, and the one presented is only one of the
many plausible ones...

-- ben G


On Tue, Nov 25, 2008 at 12:45 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 A semi-technical essay on the global/local (aka glocal) nature of
 memory is linked to from here

 http://multiverseaccordingtoben.blogspot.com/

 I wrote this a long while ago but just got around to posting it now...

 ben



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 The empires of the future are the empires of the mind.
 -- Sir Winston Churchill




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

I intend to live forever, or die trying.
-- Groucho Marx




[agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Philip Hunt
2008/11/29 Matt Mahoney [EMAIL PROTECTED]:

 The general problem of detecting overfitting is not computable. The principle 
 according to Occam's Razor, formalized and proven by Hutter's AIXI model, is 
 to choose the shortest program (simplest hypothesis) that generates the data. 
 Overfitting is the case of choosing a program that is too large.


Can someone explain AIXI to me? My understanding is that you've got
some black-box process emitting output, and you generate all possible
programs that emit the same output, then choose the shortest one. You
then run this program and its subsequent output is what you predict
the black-box process will do. This has the minor drawback, of course,
that it requires infinite processing power and is therefore slightly
impractical.
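The enumerate-and-select procedure described above can be caricatured in code over a drastically restricted "program" space. Real AIXI ranges over all programs and is incomputable; everything below (the candidate set, the length measure) is a toy assumption meant only to show the "shortest matching program wins" principle.

```python
# Toy caricature of "choose the shortest program that reproduces the
# data": here the "programs" are just arithmetic expressions in n, and
# program length is the length of the expression string.

CANDIDATES = ["0", "1", "n", "n+1", "2*n", "n*n", "2**n", "n%2"]

def run(prog, length):
    """Evaluate a candidate 'program' on n = 0 .. length-1."""
    return [eval(prog, {"n": n}) for n in range(length)]

def shortest_match(observed):
    """Shortest candidate whose output matches the observed prefix."""
    hits = [p for p in CANDIDATES if run(p, len(observed)) == list(observed)]
    return min(hits, key=len) if hits else None

def predict_next(observed):
    """Predict the black box's next output via the shortest match."""
    prog = shortest_match(observed)
    return None if prog is None else eval(prog, {"n": len(observed)})
```

For example, given the observations [0, 1, 4, 9], the shortest matching candidate is "n*n", so the predicted next output is 16.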

I've read Hutter's paper "Universal algorithmic intelligence: A
mathematical top-down approach", which amusingly describes itself as
"a gentle introduction to the AIXI model".

Hutter also describes AIXItl, with computation time Ord(t*2^L), where I
assume L is the length of the program and I'm not sure what t is. Is
AIXItl something that could be practically written, or is it purely a
theoretical construct?

In short, is there something to AIXI or is it something I can safely ignore?

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Ben Goertzel
AIXI is a purely theoretical construct, requiring infinite computational
resources.

AIXItl is a version that could be implemented in principle, but not in
practice, due to truly insane computational resource requirements.

Whether the line of thinking and body of theory underlying these
things can be useful for inspiring more practical AGI designs is a
matter of opinion and intuition, at this point...

-- Ben G

On Sun, Nov 30, 2008 at 10:58 AM, Philip Hunt [EMAIL PROTECTED] wrote:
 2008/11/29 Matt Mahoney [EMAIL PROTECTED]:

 The general problem of detecting overfitting is not computable. The 
 principle according to Occam's Razor, formalized and proven by Hutter's AIXI 
 model, is to choose the shortest program (simplest hypothesis) that 
 generates the data. Overfitting is the case of choosing a program that is 
 too large.


 Can someone explain AIXI to me? My understanding is that you've got
 some black-box process emitting output, and you generate all possible
 programs that emit the same output, then choose the shortest one. You
 then run this program and its subsequent output is what you predict
 the black-box process will do. This has the minor drawback, of course,
 that it requires infinite processing power and is therefore slightly
 impractical.

 I've read Hutter's paper Universal algorithmic intelligence, A
 mathematical top-down approach which amusingly describes itself as
 a gentle introduction to the AIXI model.

 Hutter also describes AIXItl of computation time Ord(t*2^L) where I
 assume L is the length of the program and I'm not sure what t is. Is
 AIXItl something that could be practically written or is it purely a
 theoretical construct?

 In short, is there something to AIXI or is it something I can safely ignore?

 --
 Philip Hunt, [EMAIL PROTECTED]
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

I intend to live forever, or die trying.
-- Groucho Marx




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Philip Hunt
2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

 How does that differ from what is generally called transfer learning ?

I don't think it does differ. (Transfer learning is not a term I'd
previously come across).

-- 
Philip Hunt, [EMAIL PROTECTED]
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Seeking CYC critiques

2008-11-30 Thread Stephen Reed
Robin,

While I was at Cycorp, a concerted effort was made to address Vaughan Pratt's 
test questions.  I recall that most of them required the addition of facts and 
rules to the Cyc KB before Cyc could answer them.  I believe that a substantial 
portion are included in the Cyc query regression test used to maintain KB 
quality.  This regression test is proprietary to Cycorp and, as far as I know, 
has not been released even in ResearchCyc.

Also, Cycorp has been working on using an extract of the Cyc KB as a resource 
for evaluating theorem provers, described here.

 
Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





From: Robin Hanson [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 7:43:38 AM
Subject: Re: [agi] Seeking CYC critiques

Hi Stephen, nice to meet you.  When I search the web for critiques of CYC, I 
can only find stuff from '90-95.  If no one has written critiques of CYC since 
then, perhaps you could comment on how applicable those early critiques would 
be to the current system.  

For example, would CYC today at least better answer  Vaughan Pratt's test 
questions from http://boole.stanford.edu/cyc.html?  Has there been more 
progress toward developing a neutral source of questions to use to evaluate how 
performance improves with time and with implementation variations?  

At 01:57 AM 11/30/2008, Stephen Reed wrote:

Hi Robin,
There are no Cyc critiques that I know of in the last few years.  I was 
employed seven years at Cycorp until August 2006 and my non-compete agreement 
expired a year later.   

An interesting competition was held by Project Halo in which Cycorp 
participated along with two other research groups to demonstrate human-level 
competency answering chemistry questions.  Results are here.  Although Cycorp 
performed principled deductive inference giving detailed justifications, it was 
judged to have performed inferior due to the complexity of its justifications 
and due to its long running times.  The other competitors used special purpose 
problem solving modules whereas Cycorp used its general purpose inference 
engine, extended for chemistry equations as needed.

My own interest is in natural language dialog systems for rapid knowledge 
formation.  I was Cycorp's first project manager for its participation in the 
the DARPA Rapid Knowledge Formation project where it performed to DARPA's 
satisfaction, but subsequently its RKF tools never lived up to Cycorp's 
expectations that subject matter experts could rapidly extend the Cyc KB 
without Cycorp ontological engineers having to intervene.  A Cycorp paper 
describing its KRAKEN system is here.
 
I would be glad to answer questions about Cycorp and Cyc technology to the best 
of my knowledge, which is growing somewhat stale at this point.

What are the best available critiques of CYC as it exists now (vs. soon after 
project started)?
Robin Hanson  [EMAIL PROTECTED]  http://hanson.gmu.edu 
Research Associate, Future of Humanity Institute at Oxford University
Associate Professor of Economics, George Mason University
MSN 1D3, Carow Hall, Fairfax VA 22030-
703-993-2326  FAX: 703-993-2323
  


 


  




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Stephen Reed
Ben,

Cycorp participated in the DARPA Transfer Learning project, as a subcontractor. 
My project role was simply that of a team member, and I did not attend any PI 
meetings.  But I did work on getting a Quake III Arena environment running at 
Cycorp, which was to be a transfer-learning testbed.  I also enhanced Cycorp's 
Java application that gathered facts from the web using the Google API.

Regarding winning a DARPA contract, I believe that teaming with an established 
contractor, e.g. SAIC or SRI, is beneficial.

 
Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 10:17:44 AM
Subject: Re: [agi] Mushed Up Decision Processes

There was a DARPA program on transfer learning a few years back ...
I believe I applied and got rejected (with perfect marks on the
technical proposal, as usual ...) ... I never checked to see who got
the $$ and what they did with it...

ben g

On Sun, Nov 30, 2008 at 11:12 AM, Philip Hunt [EMAIL PROTECTED] wrote:
 2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

 How does that differ from what is generally called transfer learning ?

 I don't think it does differ. (Transfer learning is not a term I'd
 previously come across).

 --
 Philip Hunt, [EMAIL PROTECTED]
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

I intend to live forever, or die trying.
-- Groucho Marx





  




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Stephen Reed
Matt Taylor was also an intern at Cycorp, where he was on Cycorp's Transfer 
Learning team with me.
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 10:48:59 AM
Subject: Re: [agi] Mushed Up Decision Processes

On Sun, Nov 30, 2008 at 11:17 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 There was a DARPA program on transfer learning a few years back ...
 I believe I applied and got rejected (with perfect marks on the
 technical proposal, as usual ...) ... I never checked to see who got
 the $$ and what they did with it...

See http://www.cs.utexas.edu/~mtaylor/Publications/AGI08-taylor.pdf

Pei

 ben g

 On Sun, Nov 30, 2008 at 11:12 AM, Philip Hunt [EMAIL PROTECTED] wrote:
 2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

 How does that differ from what is generally called transfer learning ?

 I don't think it does differ. (Transfer learning is not a term I'd
 previously come across).

 --
 Philip Hunt, [EMAIL PROTECTED]
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 I intend to live forever, or die trying.
 -- Groucho Marx





  




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Ben Goertzel

 Regarding winning a DARPA contract, I believe that teaming with an
 established contractor, e.g. SAIC, SRI, is beneficial.

 Cheers,
 -Steve

Yeah, I've tried that approach too ...

As it happens, I've had significantly more success getting funding from
various other government agencies ... but DARPA has been the *least*
favorable toward my work of any of them I've tried to deal with.

It seems that, in the 5 years I've been applying for such grants,
DARPA hasn't happened to have a program manager whose particular taste
in AI is compatible with mine...

-- Ben G




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Pei Wang
Stephen,

Does that mean what you did at Cycorp on transfer learning is similar
to what Taylor presented to AGI-08?

Pei

On Sun, Nov 30, 2008 at 1:01 PM, Stephen Reed [EMAIL PROTECTED] wrote:
 Matt Taylor was also an intern at Cycorp where was on Cycorp's Transfer
 Learning team with me.
 -Steve

 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860

 
 From: Pei Wang [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Sunday, November 30, 2008 10:48:59 AM
 Subject: Re: [agi] Mushed Up Decision Processes

 On Sun, Nov 30, 2008 at 11:17 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 There was a DARPA program on transfer learning a few years back ...
 I believe I applied and got rejected (with perfect marks on the
 technical proposal, as usual ...) ... I never checked to see who got
 the $$ and what they did with it...

 See http://www.cs.utexas.edu/~mtaylor/Publications/AGI08-taylor.pdf

 Pei

 ben g

 On Sun, Nov 30, 2008 at 11:12 AM, Philip Hunt [EMAIL PROTECTED]
 wrote:
 2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

 How does that differ from what is generally called transfer learning ?

 I don't think it does differ. (Transfer learning is not a term I'd
 previously come across).

 --
 Philip Hunt, [EMAIL PROTECTED]
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 I intend to live forever, or die trying.
 -- Groucho Marx




Re: [agi] AIXI (was: Mushed Up Decision Processes)

2008-11-30 Thread Richard Loosemore

Philip Hunt wrote:

2008/11/29 Matt Mahoney [EMAIL PROTECTED]:

The general problem of detecting overfitting is not computable. The
principle according to Occam's Razor, formalized and proven by
Hutter's AIXI model, is to choose the shortest program (simplest
hypothesis) that generates the data. Overfitting is the case of
choosing a program that is too large.



Can someone explain AIXI to me? My understanding is that you've got 
some black-box process emitting output, and you generate all possible

 programs that emit the same output, then choose the shortest one.
You then run this program and its subsequent output is what you
predict the black-box process will do. This has the minor drawback,
of course, that it requires infinite processing power and is
therefore slightly impractical.

I've read Hutter's paper Universal algorithmic intelligence, A 
mathematical top-down approach which amusingly describes itself as 
a gentle introduction to the AIXI model.


Hutter also describes AIXItl of computation time Ord(t*2^L) where I 
assume L is the length of the program and I'm not sure what t is. Is 
AIXItl something that could be practically written or is it purely a 
theoretical construct?


In short, is there something to AIXI or is it something I can safely
ignore?



It is something that, if you do not ignore it, will waste every second
of brain cpu time that you devote to it ;-).

Matt has a habit of repeating some version of the above statement 
... "according to Occam's Razor, [which was] formalized and proven by 
Hutter's AIXI model" ... on a semi-periodic basis.  The first n times, I 
took the trouble to explain why this statement is nonsense.  Now I don't 
bother.


AIXI is mathematical abstraction taken to the point of absurdity and 
beyond.  By introducing infinite numbers of copies of all possible 
universes into your formalism, and by implying that functions can be 
computed on such structures, and by redefining common terms like 
intelligence to be abstractions based on that formalism, you can prove 
anything under the sun.


That fact seems to escape some people.



Richard Loosemore




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Stephen Reed
Pei,
Matt Taylor's work at Cycorp was not closely related to his published work at 
AGI-08.

Matt contributed to a variety of other Transfer Learning tasks, and I cannot 
recall exactly what those were.  
-Steve

 Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





From: Pei Wang [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Sunday, November 30, 2008 12:16:41 PM
Subject: Re: [agi] Mushed Up Decision Processes

Stephen,

Does that mean what you did at Cycorp on transfer learning is similar
to what Taylor presented to AGI-08?

Pei

On Sun, Nov 30, 2008 at 1:01 PM, Stephen Reed [EMAIL PROTECTED] wrote:
 Matt Taylor was also an intern at Cycorp where was on Cycorp's Transfer
 Learning team with me.
 -Steve

 Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860

 
 From: Pei Wang [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Sunday, November 30, 2008 10:48:59 AM
 Subject: Re: [agi] Mushed Up Decision Processes

 On Sun, Nov 30, 2008 at 11:17 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 There was a DARPA program on transfer learning a few years back ...
 I believe I applied and got rejected (with perfect marks on the
 technical proposal, as usual ...) ... I never checked to see who got
 the $$ and what they did with it...

 See http://www.cs.utexas.edu/~mtaylor/Publications/AGI08-taylor.pdf

 Pei

 ben g

 On Sun, Nov 30, 2008 at 11:12 AM, Philip Hunt [EMAIL PROTECTED]
 wrote:
 2008/11/30 Ben Goertzel [EMAIL PROTECTED]:
 Hi,

 I have proposed a problem domain called function predictor whose
 purpose is to allow an AI to learn across problem sub-domains,
 carrying its learning from one domain to another. (See
 http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )

 I also think it would be useful if there was a regular (maybe annual)
 competition in the function predictor domain (or some similar domain).
 A bit like the Loebner Prize, except that it would be more useful to
 the advancement of AI, since the Loebner prize is silly.

 --
 Philip Hunt, [EMAIL PROTECTED]

 How does that differ from what is generally called transfer learning ?

 I don't think it does differ. (Transfer learning is not a term I'd
 previously come across).

 --
 Philip Hunt, [EMAIL PROTECTED]
 Please avoid sending me Word or PowerPoint attachments.
 See http://www.gnu.org/philosophy/no-word-attachments.html


 ---
 agi
 Archives: https://www.listbox.com/member/archive/303/=now
 RSS Feed: https://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: https://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 I intend to live forever, or die trying.
 -- Groucho Marx






 





  




Re: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
Ed,

Unfortunately to reply to your message in detail would absorb a lot of
time, because there are two issues mixed up

1) you don't know much about computability theory, and educating you
on it would take a lot of time (and is not best done on an email list)

2) I may not have expressed some of my weird philosophical ideas about
computability and mind and reality clearly ... though Abram, at least,
seemed to get them ;)  [but he has a lot of background in the area]

Just to clarify some simple things though: pi is a computable number,
because there's a program that would generate it if allowed to run
long enough.  Also, pi has been proved irrational; and quantum
theory really has nothing directly to do with uncomputability...
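Ben's sense of "computable" can be made concrete: a single short program emits as many digits of pi as one is willing to wait for. A sketch of my own, using Machin's formula pi/4 = 4*arctan(1/5) - arctan(1/239) in fixed-point integer arithmetic:

```python
# Pi is computable: one fixed program generates arbitrarily many digits,
# given enough running time.  Machin's formula with integer arithmetic.
def arctan_inv(x, digits):
    """arctan(1/x) scaled by 10**(digits+5), via the alternating Taylor series."""
    scale = 10 ** (digits + 5)        # 5 guard digits absorb truncation error
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        term //= x * x                # next power of 1/x**2
        n += 2
        sign = -sign
        total += sign * (term // n)
    return total

def pi_digits(digits):
    """First `digits` decimals of pi, returned as one integer (leading 3 included)."""
    pi_scaled = 4 * (4 * arctan_inv(5, digits) - arctan_inv(239, digits))
    return pi_scaled // 10 ** 5       # drop the guard digits

print(pi_digits(10))                  # 31415926535
```

The same program, given more time, produces more digits; that uniformity is exactly what distinguishes a computable real from an uncomputable one.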

About

How can several pounds of matter that is the human brain model
 the true complexity of an infinity of infinitely complex things?

it is certainly thinkable that the brain is infinite not finite in its
information content, or that it's a sort of antenna that receives
information from some infinite-information-content source.  I'm not
saying I believe this, just saying it's a logical possibility, and not
really ruled out by available data...

Your reply seems to assume that the brain is a finite computational
system and that other alternatives don't make sense.  I think this is
an OK working assumption for AGI engineers but it's not proved by any
means.

My main point in that post was, simply, that science and language seem
intrinsically unable to distinguish computable from uncomputable
realities.  That doesn't necessarily mean the latter don't exist but
it means they're not really scientifically useful entities.  But, my
detailed argument in favor of this point requires some basic
understanding of computability math to appreciate, and I can't review
those basics in an email, it's too much...

ben g

On Sun, Nov 30, 2008 at 4:20 PM, Ed Porter [EMAIL PROTECTED] wrote:
 Ben,



 On November 19, 2008 5:39 you wrote the following under the above titled
 thread:



 --

 Ed,



 I'd be curious for your reaction to



 http://multiverseaccordingtoben.blogspot.com/2008/10/are-uncomputable-entities-useless-forhtml



 which explores the limits of scientific and linguistic explanation, in

 a different but possibly related way to Richard's argument.



 --



 In the below email I asked you some questions about your article, which
 capture my major problem in understanding it, and I don't think I ever
 received a reply.



 The questions were at the bottom of such a long post you may well never have
 even seen them.  I know you are busy, but if you have time I would be
 interested in hearing your answers to the following questions about the
 following five quoted parts (shown in red if you are seeing this in rich
 text) from you article.  If you are too busy to respond just say so, either
 on or off list.



 -



 (1) In the simplest case, A2 may represent U directly in the language,
 using a single expression



 How, can U be directly represented in the language if it is uncomputable?



 I assume you consider any irrational number, such as pi, to be uncomputable
 (although at least pi has a formula that with enough computation can
 approach it as a limit --- I assume that for most real numbers, if there is such
 a formula, we do not know it.)  (By the way, do we know for a fact that pi is
 irrational, and if so how do we know, other than that we have calculated it to
 millions of places and not yet found an exact solution?)



 Merely communicating the symbol pi only represents the number if the agent
 receiving the communication has a more detailed definition, but any
 definition, such as a formula for iteratively approaching pi, which
 presumably is what you mean by R_U would only be an approximation.



 So U could never be fully represented unless one had infinite time --- and I
 generally consider it a waste of time to think about infinite time unless
 there is something valuable about such considerations that has a use in much
 more human-sized chunks of time.



 In fact, it seems the major message of quantum mechanics is that even
 physical reality doesn't have the time or machinery to compute uncomputable
 things, like a space constructed of dimensions each corresponding to all the
 real numbers within some astronomical range.  So the real number line is
 not really real.  It is at best a construct of the human mind that can at
 best only be approximated in part.



 (2) complexity(U) > complexity(R_U)



 Because I did not understand how U could be represented, and how R_U could
 be anything other than an approximation for any practical purposes, I didn't
 understand the meaning of the above line from your article?



 If U and R_U have the meaning I guessed in my discussion of quote (1), then
 U could not be meaningfully representable in the language, other than by a
 symbol that references some definition 

Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread Jim Bromer
Ed,
I think that we must rely on large collections of relatively simple
patterns that are somehow capable of being mixed and used in
interactions with the others.  These interacting patterns (to use your
term) would have extensive variations to make them flexible and useful
with other patterns.

When we learn that national housing prices did not provide us with the
kind of detail that we needed, we go and figure out other ways to find
data that shows some of the variations that would have helped us to
prepare better for a situation like the one we are currently in.

I was thinking of that exact example when I wrote about mushy decision
making, because the national average price would be more mushy than
the regional prices, or a multiple-price-level index.  A high mush
index does not mean that the index is garbage, but since
something like this is derived from finer-grained statistics, it
really exemplifies the problem.
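The mushiness of an aggregate can be shown in a few lines. A toy illustration (the numbers are invented, not from the thread): a national average can rise even while most regions fall, so the aggregate hides exactly the variation a decision-maker needed to see.

```python
# National vs. regional housing prices: the averaged index loses the
# regional variation that finer-grained statistics would expose.
regional_prices = {              # thousands of dollars: [last year, this year]
    "region_A": [300, 360],
    "region_B": [200, 180],
    "region_C": [150, 140],
}

national = [
    sum(p[year] for p in regional_prices.values()) / len(regional_prices)
    for year in (0, 1)
]
falling = [r for r, (y0, y1) in regional_prices.items() if y1 < y0]

print("national average:", national)   # rises from ~216.7 to ~226.7
print("regions falling:", falling)     # yet 2 of 3 regions declined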

My idea is that an agi program would have to go further than data
mining.  It would have to be able to shape its own use of statistics
in order to establish validity for itself.  I really feel that there
is something really important about the classifiers of statistical
methods that I just haven't grasped yet.  My example for this
comes from statistics that are similar but just different enough so
that they don't mesh quite right.  Like two different marketing
surveys that provide similar information which is so close that a
marketer can draw conclusions from their combination but which aren't
actually close enough to justify this process.  Like asking different
representative groups if they are planning to buy a television in one
survey, and asking how much they think they will spend on appliances
during the next two years.  The two surveys are so close that you know
the results can be combined, but they are so different that it is
almost impossible to justify the combination in any reasonable way. If
I could only figure this one out I think the other problems I am
interested in would start to solve themselves.

Jim Bromer

On Sat, Nov 29, 2008 at 11:40 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Jim

 My understanding is that a Novamente-like system would have a process of
 natural selection that tends to favor the retention and use of patterns
 (perceptual, cognitive, behavioral) that prove themselves useful in achieving
 goals in the world in which it is embodied.

 It seems to me that such a process of natural selection would tend to naturally
 put some sort of limit on how out-of-touch many of an AGI's patterns would
 be, at least with regard to patterns about things for which the AGI has had
 considerable experience from the world in which it is embodied.

 However, we humans often get pretty out of touch with real-world
 probabilities, as the recent bubble in housing prices, and the commonly
 said, although historically inaccurate, statement of several years ago ---
 that housing prices never go down on a national level --- shows.

 It would be helpful to make AGI's be a little more accurate in their
 evaluation of the evidence for many of their assumptions --- and what that
 evidence really says --- than we humans are.

 Ed Porter

 -Original Message-
 From: Jim Bromer [mailto:[EMAIL PROTECTED]
 Sent: Saturday, November 29, 2008 10:49 AM
 To: agi@v2.listbox.com
 Subject: [agi] Mushed Up Decision Processes

 One of the problems that comes with the casual use of analytical
 methods is that the user becomes inured to their habitual misuse. When
 a casual familiarity is combined with a habitual ignorance of the
 consequences of a misuse the user can become over-confident or
 unwisely dismissive of criticism regardless of how on the mark it
 might be.

 The most proper use of statistical and probabilistic methods is to
 base results on a strong association with the data that they were
 derived from.  The problem is that the AI community cannot afford this
 strong a connection to original source because they are trying to
 emulate the mind in some way and it is not reasonable to assume that
 the mind is capable of storing all data that it has used to derive
 insight.

 This is a problem any AI method has to deal with, it is not just a
 probability thing.  What is wrong with the AI-probability group
 mind-set is that very few of its proponents ever consider the problem
 of statistical ambiguity and its obvious consequences.

 All AI programmers have to consider the problem.  Most theories about
 the mind posit the use of similar experiences to build up theories
 about the world (or to derive methods to deal effectively with the
 world).  So even though the methods to deal with the data environment
 are detached from the original sources of those methods, they can
 still be reconnected by the examination of similar experiences that
 may subsequently occur.

 But still it is important to be able to recognize the significance and
 necessity of doing this from time to time.  It is important to 

Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

Not really.  They're limitations on which measurements of physical
reality can be made simultaneously.

Quantum systems can compute *exactly* the class of Turing computable
functions ... this has been proved according to standard quantum
mechanics math.  however, there are some things they can compute
faster than any Turing machine, in the average case but not the worst
case.
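The intuition behind "quantum computable = Turing computable" can be sketched classically (my own construction, not from the thread): a quantum state is just a vector of complex amplitudes, and applying a gate is ordinary arithmetic on them, so a classical program can track the evolution exactly, up to numerical precision.

```python
# State-vector simulation of one qubit: a classical program computing
# exactly what standard quantum mechanics predicts for a Hadamard gate.
import math

state = [complex(1), complex(0)]          # the basis state |0>

def apply_hadamard(s):
    """Hadamard gate: maps |0> to the equal superposition (|0> + |1>)/sqrt(2)."""
    h = 1 / math.sqrt(2)
    return [h * (s[0] + s[1]), h * (s[0] - s[1])]

state = apply_hadamard(state)
probs = [abs(a) ** 2 for a in state]      # Born-rule measurement probabilities
print(probs)                              # ~[0.5, 0.5]
```

The speedup question is separate: the simulation above is exact but can take exponentially more classical time as qubits are added, which is where the average-case advantage lives.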

 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.

the key point of the blog post you didn't fully grok, was a careful
argument that (under certain, seemingly reasonable assumptions)
science can never provide evidence in favor of infinite mechanisms...

ben g




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Trent Waddington
On Mon, Dec 1, 2008 at 11:19 AM, Ed Porter [EMAIL PROTECTED] wrote:
 You said QUANTUM THEORY REALLY HAS NOTHING DIRECTLY TO DO WITH
 UNCOMPUTABILITY.

Please don't quote people using this style, it hurts my eyes.

 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

I don't even know what you're saying here.  Maybe you're trying to say
that it takes a really big computer to compute a very small box of
physical reality.. which is true.. I just don't know why you would be
saying that.

 You said  IT IS CERTAINLY THINKABLE THAT THE BRAIN IS INFINITE NOT FINITE
 IN ITS INFORMATION CONTENT, OR THAT IT'S A SORT OF ANTENNA THAT RECEIVES
 INFORMATION FROM SOME INFINITE-INFORMATION-CONTENT SOURCE 

 This certainly is thinkable.  And that is a non-trivial statement.  We
 should never forget that our concepts of reality could be nothing but
 illusions, and that our understanding of science and physical reality may be
 much more partial and flawed than we think.

It's also completely unscientific.  You might as well say that magic
pixies deliver your thoughts from a big invisible bucket made of gold.

 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

So why are you entertaining notions of magic antennas to God?

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.

I wouldn't.  It's untestable non-sense.

Trent




RE: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ed Porter
Regarding the uncertainty principle, Wikipedia says:

In quantum physics, the Heisenberg uncertainty principle states that the
values of certain pairs of conjugate variables (position and momentum, for
instance) cannot both be known with arbitrary precision. That is, the more
precisely one variable is known, the less precisely the other is known. THIS
IS NOT A STATEMENT ABOUT THE LIMITATIONS OF A RESEARCHER'S ABILITY TO
MEASURE PARTICULAR QUANTITIES OF A SYSTEM, BUT RATHER ABOUT THE NATURE OF
THE SYSTEM ITSELF. (emphasis added.)

I am sure you know more about quantum mechanics than I do.  But I have heard
many say the uncertainty principle places limits not just on scientific
measurement, but on the amount of information different parts of reality can
have about each other when computing in response to each other.  Perhaps such
theories are wrong, but they are not without support in the field.

With regard to the statement science can never provide evidence in favor of
infinite mechanisms, I thought you were saying there was no way the human
mind could fully represent or fully understand an infinite mechanism ---
which I agree with.

You were correct in thinking that I did not grok that you were implying this
means that if an infinite mechanism existed there could be no evidence in
favor of its infinity.

In fact, it is not clear that this is the case, if you use provide
evidence considerably more loosely than provide proof for.  Until the
advent of quantum mechanics and/or the theory of the expanding universe,
based in part on observations and in part on intuitions derived from them,
many people felt the universe was infinitely continuous and/or of infinite
extent in space and time.  I agree you would probably never be able to prove
infinite realities, but the mind is capable of conceiving of them, and of
seeing evidence that might suggest to some their existence, such as was
suggested to Einstein, who for many years I have been told believed in a
universe that was infinite in time.

Ed Porter

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 30, 2008 9:09 PM
To: agi@v2.listbox.com
Subject: Re:  RE: FW: [agi] A paper that actually does solve the problem
of consciousness

 

 But quantum theory does appear to be directly related to limits of the

 computations of physical reality.  The uncertainty theory and the

 quantization of quantum states are limitations on what can be computed by

 physical reality.

 

Not really.  They're limitations on what  measurements of physical

reality can be simultaneously made.

 

Quantum systems can compute *exactly* the class of Turing computable

functions ... this has been proved according to standard quantum

mechanics math.  however, there are some things they can compute

faster than any Turing machine, in the average case but not the worst

case.

 

 But, I am old fashioned enough to be more interested in things about the

 brain and AGI that are supported by what would traditionally be considered

 scientific evidence or by what can be reasoned or designed from such

 evidence.



 If there is any thing that would fit under those headings to support the

 notion of the brain either being infinite, or being an antenna that
receives

 decodable information from some infinite-information-content source, I
would

 love to hear it.

 

the key point of the blog post you didn't fully grok, was a careful

argument that (under certain, seemingly reasonable assumptions)

science can never provide evidence in favor of infinite mechanisms...

 

ben g

 

 







Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
Hi,

 In quantum physics, the Heisenberg uncertainty principle states that the
 values of certain pairs of conjugate variables (position and momentum, for
 instance) cannot both be known with arbitrary precision. That is, the more
 precisely one variable is known, the less precisely the other is known. THIS
 IS NOT A STATEMENT ABOUT THE LIMITATIONS OF A RESEARCHER'S ABILITY TO
 MEASURE PARTICULAR QUANTITIES OF A SYSTEM, BUT RATHER ABOUT THE NATURE OF
 THE SYSTEM ITSELF. (emphasis added.)



 I am sure you know more about quantum mechanics than I do.  But I have heard
 many say the uncertainty principle places limits not just on scientific
 measurement, but on the amount of information different parts of reality can
 have about each other when computing in response to each other.  Perhaps such
 theories are wrong, but they are not without support in the field.


Yeah, the interpretation of quantum theory is certainly contentious
and there are multiple conflicting views...

However, regarding quantum computing, it is universally agreed that
the class of quantum computable functions is identical to the class of
classically Turing computable functions.


 With regard to the statement science can never provide evidence in favor of
 infinite mechanisms, I thought you were saying there was no way the human
 mind could fully represent or fully understand an infinite mechanism ---
 which I agree with.

No, I was not saying that there was no way the human mind could fully
represent or fully understand an infinite mechanism.

What I argued is that **scientific data** can never convincingly be
used to argue in favor of an infinite mechanism, due to the
intrinsically finite nature of scientific data.

This says **nothing** about any intrinsic limitations on the human
mind ... unless one adds the axiom that the human mind must be
entirely comprehensible via science ... which seems an unnecessary
assumption to make

 In fact, it is not clear that this is the case, if you use provide
 evidence considerably more loosely than provide proof for.  Until the
 advent of quantum mechanics and/or the theory of the expanding universe,
 based in part on observations and in part intuitions derived from them, many
 people felt the universe was infinitely continuous and/or of infinite extent
 in space and time.  I agree you would probably never be able to prove
 infinite realities, but the mind is capable of conceiving of them, and of
 seeing evidence that might suggest to some their existence, such as was
 suggested to Einstein, who for many years I have been told believed in a
 universe that was infinite in time.

well, my argument implies that you can never use science to prove that
the mind is capable of conceiving of infinite realities

This may be true in some other sense, but I argue, not in a scientific sense...

-- Ben G




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
OTOH, there is no possible real-world test to distinguish a true
random sequence from a high-algorithmic-information quasi-random
sequence...

So I don't find this argument very convincing...
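Ben's indistinguishability point can be illustrated (my own construction, not from the thread): a deterministic, seeded PRNG passes the same elementary randomness test that a "truly" random quantum source would, so the test output cannot separate the two.

```python
# A fully deterministic pseudo-random stream sails through the monobit
# (frequency) test -- the simplest of the standard statistical randomness
# tests -- just as a genuinely random source would.
import math
import random

def monobit_p_value(bits):
    """NIST-style monobit test; p-values near 0 indicate 'non-random'."""
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

rng = random.Random(42)                        # deterministic seed
bits = [rng.getrandbits(1) for _ in range(100_000)]
p = monobit_p_value(bits)
print(f"monobit p-value: {p:.3f}")             # expected well above the 0.01 cutoff
```

More sophisticated test batteries only push the problem back: any finite battery is passed by some deterministic generator of high enough algorithmic information.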

On Sun, Nov 30, 2008 at 10:42 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

 Not really.  They're limitations on what  measurements of physical
 reality can be simultaneously made.

 Quantum systems can compute *exactly* the class of Turing computable
 functions ... this has been proved according to standard quantum
 mechanics math.  however, there are some things they can compute
 faster than any Turing machine, in the average case but not the worst
 case.


 Sorry, I am not really following the discussion but I just read that
 there is some misinterpretation here. It is the standard model of
 quantum computation that effectively computes exactly the Turing
 computable functions, but that was almost hand tailored to do so,
 perhaps because adding to the theory an assumption of continuum
 measurability was already too much (i.e. distinguishing infinitely
 close quantum states). But that is far from the claim that quantum
 systems can compute exactly the class of Turing computable functions.
 Actually the Hilbert space and the superposition of particles in an
 infinite number of states would suggest exactly the opposite. While
 the standard model of quantum computation only considers a
 superposition of 2 states (the so-called qubit, capable of
 entanglement in 0 and 1). But even if you stick to the standard model
 of quantum computation, the proof that it computes exactly the set
 of recursive functions [Feynman, Deutsch] can be put in jeopardy very
 easy : Turing machines are unable to produce non-deterministic
 randomness, something that quantum computers do as an intrinsic
 property of quantum mechanics (not only because of measure limitations
 of the kind of the Heisenberg principle but by quantum non-locality,
 i.e. the violation of Bell's theorem). I just exhibited a non-Turing
 computable function that standard quantum computers compute...
 [Calude, Casti]


 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.


 You and/or other people might be interested in a paper of mine
 published some time ago on the possible computational power of the
 human mind and the way to encode infinite information in the brain:

 http://arxiv.org/abs/cs/0605065


 the key point of the blog post you didn't fully grok, was a careful
 argument that (under certain, seemingly reasonable assumptions)
 science can never provide evidence in favor of infinite mechanisms...

 ben g






 --
 Hector Zenil
 http://www.mathrix.org






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

I intend to live forever, or die trying.
-- Groucho Marx




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty theory and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

 Not really.  They're limitations on what  measurements of physical
 reality can be simultaneously made.

 Quantum systems can compute *exactly* the class of Turing computable
 functions ... this has been proved according to standard quantum
 mechanics math.  however, there are some things they can compute
 faster than any Turing machine, in the average case but not the worst
 case.


Sorry, I am not really following the discussion but I just read that
there is some misinterpretation here. It is the standard model of
quantum computation that effectively computes exactly the Turing
computable functions, but that was almost hand tailored to do so,
perhaps because adding to the theory an assumption of continuum
measurability was already too much (i.e. distinguishing infinitely
close quantum states). But that is far from the claim that quantum
systems can compute exactly the class of Turing computable functions.
Actually the Hilbert space and the superposition of particles in an
infinite number of states would suggest exactly the opposite. While
the standard model of quantum computation only considers a
superposition of 2 states (the so-called qubit, capable of
entanglement in 0 and 1). But even if you stick to the standard model
of quantum computation, the proof that it computes exactly the set
of recursive functions [Feynman, Deutsch] can be put in jeopardy very
easily: Turing machines are unable to produce non-deterministic
randomness, something that quantum computers do as an intrinsic
property of quantum mechanics (not only because of measure limitations
of the kind of the Heisenberg principle but by quantum non-locality,
i.e. the violation of Bell's theorem). I just exhibited a non-Turing
computable function that standard quantum computers compute...
[Calude, Casti]


 But, I am old fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be considered
 scientific evidence or by what can be reasoned or designed from such
 evidence.

 If there is any thing that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that receives
 decodable information from some infinite-information-content source, I would
 love to hear it.


You and/or other people might be interested in a paper of mine
published some time ago on the possible computational power of the
human mind and the way to encode infinite information in the brain:

http://arxiv.org/abs/cs/0605065


 the key point of the blog post you didn't fully grok, was a careful
 argument that (under certain, seemingly reasonable assumptions)
 science can never provide evidence in favor of infinite mechanisms...

 ben g






-- 
Hector Zenil
http://www.mathrix.org




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH, there is no possible real-world test to distinguish a true
 random sequence from a high-algorithmic-information quasi-random
 sequence

I know, but the point is not whether we can distinguish it, but that
quantum mechanics actually predicts itself to be intrinsically capable of
non-deterministic randomness, while for a Turing machine that is
impossible by definition. I find quite convincing and interesting the
way in which the mathematical proof that the standard model of quantum
computation is Turing computable has been put in jeopardy by physical
reality.


 So I don't find this argument very convincing...

 On Sun, Nov 30, 2008 at 10:42 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 3:09 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But quantum theory does appear to be directly related to limits of the
 computations of physical reality.  The uncertainty principle and the
 quantization of quantum states are limitations on what can be computed by
 physical reality.

 Not really.  They're limitations on which measurements of physical
 reality can be made simultaneously.

 Quantum systems can compute *exactly* the class of Turing computable
 functions ... this has been proved according to standard quantum
 mechanics math.  however, there are some things they can compute
 faster than any Turing machine, in the average case but not the worst
 case.


 Sorry, I have not really been following the discussion, but I just read
 the above and there is some misinterpretation here. It is the standard
 model of quantum computation that effectively computes exactly the
 Turing computable functions, but that model was almost hand-tailored to
 do so, perhaps because adding to the theory an assumption of continuum
 measurability (i.e. distinguishing infinitely close quantum states) was
 already too much. That is far from the claim that quantum systems in
 general can compute exactly the class of Turing computable functions.
 Actually, the Hilbert space and the superposition of particles over an
 infinite number of states would suggest exactly the opposite, while the
 standard model of quantum computation considers only a superposition of
 two states (the so-called qubit, capable of entanglement between 0 and
 1). But even if you stick to the standard model of quantum computation,
 the proof that it computes exactly the set of recursive functions
 [Feynman, Deutsch] can be put in jeopardy very easily: Turing machines
 are unable to produce non-deterministic randomness, something that
 quantum computers do as an intrinsic property of quantum mechanics (not
 only because of measurement limitations of the kind described by the
 Heisenberg principle, but because of quantum non-locality, i.e. the
 violation of Bell's inequalities). I have just exhibited a
 non-Turing-computable function that standard quantum computers
 compute... [Calude, Casti]


 But, I am old-fashioned enough to be more interested in things about the
 brain and AGI that are supported by what would traditionally be
 considered scientific evidence, or by what can be reasoned or designed
 from such evidence.

 If there is anything that would fit under those headings to support the
 notion of the brain either being infinite, or being an antenna that
 receives decodable information from some infinite-information-content
 source, I would love to hear it.



 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 I intend to live forever, or die trying.
 -- Groucho Marx






-- 
Hector Zenil
http://www.mathrix.org



Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:53 AM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH, there is no possible real-world test to distinguish a true
 random sequence from a high-algorithmic-information quasi-random
 sequence

 I know, but the point is not whether we can distinguish it, but that
 quantum mechanics actually predicts to be intrinsically capable of
 non-deterministic randomness, while for a Turing machine that is
 impossible by definition. I find quite convincing and interesting the
 way in which the mathematical proof of the standard model of quantum
 computation as Turing computable has been put in jeopardy by physical
 reality.

or at least by a model of physical reality... =) (a reality, by the way,
that the authors of the mathematical proof regard as the most basic)




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
But I don't get your point at all, because the whole idea of
nondeterministic randomness has nothing to do with physical reality...
true random numbers are uncomputable entities which can never be shown
to exist, and any finite series of observations can be modeled equally
well as the first N bits of an uncomputable series or of a computable
one...
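Ben's point can be sketched in a few lines (my illustration, not
anything from the thread): a seeded, fully deterministic generator
passes the same crude finite tests one would apply to a "truly" random
source, so no finite series of observations settles the question.

```python
# Illustration (my sketch): a fully deterministic PRNG looks just like a
# "true" random source to simple finite tests -- here, bit balance and
# incompressibility of the packed bit string.
import random
import zlib

random.seed(42)                                # completely deterministic
n = 10_000
pseudo = [random.getrandbits(1) for _ in range(n)]

# Test 1: the frequency of ones should be near 1/2.
bias = abs(sum(pseudo) / n - 0.5)

# Test 2: the packed bits should be essentially incompressible.
packed = int("".join(map(str, pseudo)), 2).to_bytes(n // 8, "big")
ratio = len(zlib.compress(packed)) / len(packed)

print(bias)    # tiny, just as it would be for genuine coin flips
print(ratio)   # near (or above) 1.0: the compressor finds no structure
```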

ben g

On Sun, Nov 30, 2008 at 10:53 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:44 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OTOH, there is no possible real-world test to distinguish a true
 random sequence from a high-algorithmic-information quasi-random
 sequence

 I know, but the point is not whether we can distinguish it, but that
 quantum mechanics actually predicts to be intrinsically capable of
 non-deterministic randomness, while for a Turing machine that is
 impossible by definition. I find quite convincing and interesting the
 way in which the mathematical proof of the standard model of quantum
 computation as Turing computable has been put in jeopardy by physical
 reality.



Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

It has everything to do with it when quantum mechanics is involved.
Quantum mechanics is non-deterministic by nature. A quantum computer,
even within the standard model of quantum computation, could then take
advantage of this intrinsic property of physical (quantum) reality
(assuming the model is correct, as most physicists would).

 true random numbers are uncomputable entities which can
 never be shown to exist,
 equally well as the first N bits of an uncomputable series or of a
 computable one...

That's the point: that's what the classical theory of computability
would say (also making some assumptions, namely Church's thesis), but
again quantum mechanics says something else.

The fact that quantum computers are capable of non-deterministic
randomness by definition, while Turing machines are incapable of it also
by definition, seems incompatible with the claim (or mathematical proof)
that standard quantum computers compute exactly the same functions as
Turing machines. And that is only when dealing with standard quantum
computation, because non-standard quantum computation is far from having
been proved reducible to the Turing computable (modulo its speed-up).

Concerning the observations: you don't need to make an infinite number
of them to get a non-computable answer from an oracle (although you
would need to if you wanted to finitely verify it). And even if you can
model the first N bits of a non-deterministic random sequence equally
well, the fact that a random sequence is ontologically of a
non-deterministic nature makes it a priori different in essence from a
pseudo-random sequence. The point is not epistemological.

In any case, whether or not we agree on the philosophical matter, my
point is that it is not the case that there is a mathematical proof that
quantum systems compute exactly the same functions as Turing machines.
There is a mathematical proof that the standard model of quantum
computation computes the same set of functions as Turing machines.
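The standard-model qubit being discussed can be simulated classically in
a few lines (a sketch of mine, not code from the thread), with the
obvious irony that a classical simulation must fall back on a
pseudorandom generator for the Born-rule "collapse" -- which is exactly
where it differs, in principle, from intrinsic quantum randomness:

```python
# Classical sketch of a standard-model qubit (mine, for illustration).
# The measurement outcome comes from a PRNG, which is precisely where a
# simulation differs from intrinsic quantum randomness.
import random
from math import sqrt

def measure(amp0, amp1):
    """Born rule: collapse amp0|0> + amp1|1> to a classical bit."""
    return 0 if random.random() < abs(amp0) ** 2 else 1

random.seed(0)
h = 1 / sqrt(2)                     # Hadamard on |0>: equal superposition
samples = [measure(h, h) for _ in range(1000)]
print(sum(samples))                 # roughly 500: each outcome has p = 1/2
```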



Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

I don't get it. You don't think that quantum mechanics is part of our
physical reality (if it is not all of it)?

 true random numbers are uncomputable entities which can
 never be shown to exist,

You can say either that they don't exist, or that they do exist but we
don't have access to them. That's a rather philosophical matter, but
scientifically QM says the latter. Even more: since the bits from a
non-deterministic random source are truly independent of each other,
something that does not happen when they are produced by a Turing
machine, any sequence (even a finite one) is different in nature from
one produced by a Turing machine. In practice, if your claim is that you
will not be able to tell the difference, you actually would if you let
the machine run long enough: once it has exhausted its physical
resources it will either halt or start over (making the random string
periodic), while QM says that resources don't matter; a quantum computer
will keep producing non-deterministic (e.g. never periodic) strings of
any length, independently of any constraint of time or space!
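The "halt or start over" remark is just the pigeonhole principle: a
machine with finitely many internal states must eventually revisit one,
and from then on its output cycles. A toy sketch (the 8-bit generator
and its constants are my own choice, purely for illustration):

```python
# Pigeonhole sketch: any generator with finitely many internal states is
# ultimately periodic. Toy 8-bit linear congruential generator.
def step(state):
    return (5 * state + 3) % 256       # only 256 possible states

seen = {}                              # state -> step at which it appeared
state, t = 7, 0
while state not in seen:               # must terminate within 256 steps
    seen[state] = t
    state = step(state)
    t += 1

period = t - seen[state]
print(period)                          # length of the cycle it fell into
```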

 and any finite series of observations can be modeled
 equally well as the first N bits of an uncomputable series or of a
 computable one...


Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Ben Goertzel
On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

 I don't get it. You don't think that quantum mechanics is part of our
 physical reality (if it is not all of it)?

Of course it isn't -- quantum mechanics is a mathematical and
conceptual model that we use in order to predict certain finite sets
of finite-precision observations, based on other such sets

 true random numbers are uncomputable entities which can
 never be shown to exist,

 you can say that either they don't exist or they do exist but that we
 don't have access to them. That's a rather philosophical matter. But
 scientifically QM says the latter.

Sure it does. But there is an equivalent mathematical theory that
explains all observations identically to QM, yet doesn't posit any
uncomputable entities.

So, choosing to posit that these uncomputable entities exist in reality
is just a matter of aesthetic or philosophical taste ... you can't
really say they exist in reality, because they contribute nothing to the
predictive power of QM ...

-- Ben G




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Hector Zenil
On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:
 On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 But I don't get your point at all, because the whole idea of
 nondeterministic randomness has nothing to do with physical
 reality...

 I don't get it. You don't think that quantum mechanics is part of our
 physical reality (if it is not all of it)?

 Of course it isn't -- quantum mechanics is a mathematical and
 conceptual model that we use in order to predict certain finite sets
 of finite-precision observations, based on other such sets


Oh I see! I think that is a matter of philosophical taste as well. I
don't think everybody would agree with you, especially if you poll
physicists like those who constructed the standard model of quantum
computation! We cannot ask Feynman, but I actually asked Deutsch. He not
only thinks QM is our most basic physical reality (he thinks mathematics
and computer science are grounded in quantum mechanics), but he even
takes quite seriously his theory of parallel universes! And he is not
alone. Speaking for myself, I would agree with you, but I think we would
need to relativize the concept of agreement. I don't think QM is just
another model of merely mathematical value for making finite
predictions. I think physical models say something about our physical
reality. If you deny QM as part of our physical reality, then I guess
you deny any other physical model, and I wonder what is left to you. You
would perhaps embrace total skepticism, perhaps even solipsism. Current
trends have moved from there to more relativized positions, where models
are regarded as just that, models, but still with some value as
descriptions of our actual physical reality (just as Newtonian physics
is not simply wrong after General Relativity, since it still describes a
huge part of our physical reality).

In the end, even if you claim a Platonic physical reality to which we
have no access at all, not even through our best explanations in the
form of models, the world is either quantum or not (as we have defined
the theory); and as long as QM remains our best explanation of the
phenomena it characterizes, one has to weigh it against models
describing other aspects of our best-known physical reality. It is not
clear to me how you can deny the physical reality of QM yet defend the
theory of computability or algorithmic information theory as if they
were more basic than QM.

If we take QM and AIT as equally basic, then even in a practical sense
there are incompatibilities in essence: QM cannot be said to be Turing
computable, and AIT cannot posit the nonexistence of non-deterministic
randomness when QM says otherwise. I am more on the side of AIT, but I
think the question is open, interesting (both philosophically and
scientifically), and not trivial at all.


 true random numbers are uncomputable entities which can
 never be shown to exist,

 you can say that either they don't exist or they do exist but that we
 don't have access to them. That's a rather philosophical matter. But
 scientifically QM says the latter.

 Sure it does: but there is an equivalent mathematical theory that
 explains all observations identically to QM, yet doesn't posit any
 uncomputable entities

 So, choosing to posit that these uncomputable entities exist in
 reality, is just a matter of aesthetic or philosophical taste ... so
 you can't really say they exist in reality, because they contribute
 nothing to the predictive power of QM ...



There are people who think that quantum randomness is actually the
source of the complexity we see in the universe [Bennett, Lloyd]. Even
though I do not agree with them (since AIT does not require
non-deterministic randomness), I think the matter is not trivial, since
serious researchers think quantum randomness contributes in some
fundamental (not only philosophical) way.


 -- Ben G






-- 
Hector Zenil
http://www.mathrix.org




Re: [agi] Mushed Up Decision Processes

2008-11-30 Thread J. Andrew Rogers


On Nov 30, 2008, at 7:31 AM, Philip Hunt wrote:

2008/11/30 Ben Goertzel [EMAIL PROTECTED]:


In general, the standard AI methods can't handle pattern recognition
problems requiring finding complex interdependencies among multiple
variables that are obscured among scads of other variables
The human mind seems to do this via building up intuition via drawing
analogies among multiple problems it confronts during its history.


Yes, so that people learn one problem, then it helps them to learn
other similar ones. Is there any AI software that does this? I'm not
aware of any.



To do this as a practical matter, you need to address *at least* two
well-known, hard-but-important unsolved algorithm problems in completely
different areas of theoretical computer science that have nothing to do
with AI per se. That is no small hurdle, even if you are a bloody
genius.

That said, I doubt most AI researchers could even tell you what those
two big problems are, which is, obliquely, the other part of the
problem.




I have proposed a problem domain called function predictor whose
purpose is to allow an AI to learn across problem sub-domains,
carrying its learning from one domain to another. (See
http://www.includipedia.com/wiki/User:Cabalamat/Function_predictor )



In Feder/Merhav/Gutman's 1995 "Reflections on..." follow-up to their
1992 paper on universal sequence prediction, they make the observation,
which can be found at the following link, that it is probably useful to
introduce the concept of prediction error complexity as an important
metric, which is similar to what you are talking about in the
theoretical abstract:


http://www.itsoc.org/review/meir/node5.html

Our understanding of this area is better in 2008 than it was in 1995,  
but this is one of the earliest serious references to the idea in a  
theoretical way.  Somewhat obscure and primitive by current standards,  
but influential in the AIXI and related flavors of AI theory based on  
computational information theory. Or at least, I found it very  
interesting and useful a decade ago.
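A concrete toy version of scoring a predictor by the code length of its errors, assuming a Krichevsky-Trofimov (add-half) estimator as the predictor and cumulative log-loss as the metric; this is my own illustration, not the exact quantity Feder, Merhav, and Gutman define:

```python
import math

def kt_predict(zeros, ones):
    """Krichevsky-Trofimov (add-half) estimate of P(next bit = 1)."""
    return (ones + 0.5) / (zeros + ones + 1.0)

def cumulative_log_loss(bits):
    """Total code length in bits the KT predictor assigns a sequence;
    a rough stand-in for a 'prediction error complexity' score."""
    zeros = ones = 0
    total = 0.0
    for b in bits:
        p1 = kt_predict(zeros, ones)
        p = p1 if b == 1 else 1.0 - p1
        total += -math.log2(p)
        if b == 1:
            ones += 1
        else:
            zeros += 1
    return total

# A highly regular source costs only a few bits in total, while a
# balanced source costs about one bit per symbol.  (KT models bit
# bias only, so it cannot exploit the alternating structure.)
regular_cost = cumulative_log_loss([1] * 64)
mixed_cost = cumulative_log_loss([0, 1] * 32)
```

The per-symbol loss of the regular sequence shrinks as evidence accumulates, which is the sense in which a good universal predictor's error complexity tracks the regularity of the source.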


Cheers,

J. Andrew Rogers




Re: RE: FW: [agi] A paper that actually does solve the problem of consciousness

2008-11-30 Thread Charles Hixson

Hector Zenil wrote:

On Mon, Dec 1, 2008 at 6:20 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

On Sun, Nov 30, 2008 at 11:48 PM, Hector Zenil [EMAIL PROTECTED] wrote:


On Mon, Dec 1, 2008 at 4:55 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
  

But I don't get your point at all, because the whole idea of
...


...


Oh I see! I think that is a matter of philosophical taste as well. I don't think
everybody would agree with you, especially if you poll physicists like
those who constructed the standard model of computation! We cannot
ask Feynman, but I actually asked Deutsch. He not only thinks QM
is our most basic physical reality (he thinks math and computer
science are grounded in quantum mechanics), but he even takes his
theory of parallel universes quite seriously! And he is not alone. Speaking by...
when I do not agree with them (since AIT does not require
non-deterministic randomness), I think it is not that trivial, since
these researchers believe they are contributing in some fundamental (not only
philosophical) way.

  

-- Ben G


Still, one must remember that there is Quantum Theory, and then there 
are the interpretations of Quantum Theory.  As I understand things, there 
are still several models of the universe which yield the same 
observables, and choosing between them is a matter of taste.  They are 
all totally consistent with standard Quantum Theory... but... well, which 
do you prefer?  Multi-world?  Action at a distance?  No objective 
universe? (I'm not sure what that means.)  The present is created by the 
future as well as the past?  As I understand things, these cannot be 
chosen between on the basis of Quantum Theory alone.  And somewhere in 
that mix is Wholeness and the Implicate Order.


When math gets translated into language, interpretations add things.


