Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as
"intelligence" and "learning", so that you could somehow *demonstrate*
that your mathematical idealizations of these terms correspond with the
real thing, ... so that we could believe that the mathematical
idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are circular.  So you
win.


Sigh!

This is a waste of time:  you just (facetiously) rejected the 
fundamental tenet of science.  Which means that the stuff you were 
talking about was just pure mathematical fantasy, after all, and nothing 
to do with science, or the real world.



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Richard Loosemore

Ben Goertzel wrote:

Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:

What I wanted was a set of non-circular definitions of such terms as
"intelligence" and "learning", so that you could somehow *demonstrate*
that your mathematical idealizations of these terms correspond with the
real thing, ... so that we could believe that the mathematical
idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are circular.  So you win.


Richard, I long ago proposed a working definition of intelligence as 
"achieving complex goals in complex environments."  I then went through 
a bunch of trouble to precisely define all the component terms of that 
definition; you can consult the Appendix to my 2006 book The Hidden 
Pattern.  Shane Legg and Marcus Hutter have proposed a related definition 
of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an intelligence.

Such a definition would be pointless.  The question is *why* would it be 
pointless?  What criteria are applied, in order to determine whether the 
definition has something to do with the thing that in everyday life we call 
intelligence?
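
To make the objection concrete, here is a minimal Python sketch (the function and argument names are purely illustrative, nobody's actual proposal): under the bare functional definition above, even a constant function technically qualifies as "an intelligence".

# Illustrative only: under the bare definition "any function mapping
# goals G and world states W onto action states A", this trivial
# function counts as an intelligence.
def trivial_intelligence(goal, world_state):
    # Ignores both inputs and always returns the same action state.
    return "do_nothing"

# It maps every (G, W) pair onto an action state A...
print(trivial_intelligence("win at chess", {"board": "start"}))
# ...yet it has nothing to do with what we ordinarily call intelligence,
# which is the point of the objection.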


My protest to Matt was that I did not believe his definition could be 
made to lead to anything like a reasonable grounding.  I tried to get 
him to do the grounding, but to no avail:  he eventually resorted to the 
blanket denial that any definition means anything ... which is a cop out 
if he wanted to defend the claim that the formalism was something more 
than a mathematical fantasy.



Richard Loosemore


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?






Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel




Richard, I long ago proposed a working definition of intelligence as 
"achieving complex goals in complex environments."  I then went 
through a bunch of trouble to precisely define all the component 
terms of that definition; you can consult the Appendix to my 2006 
book The Hidden Pattern.  Shane Legg and Marcus Hutter 
have proposed a related definition of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an 
intelligence.


Such a definition would be pointless.  The question is *why* would it 
be pointless?  What criteria are applied, in order to determine 
whether the definition has something to do with the thing that in everyday 
life we call intelligence?


The difficulty in comparing my definition against reality is that my 
definition defines intelligence relative to a complexity measure.


For this reason, it is fundamentally a subjective definition of 
intelligence, except in the unrealistic case where degree of complexity 
tends to infinity (in which case all reasonably general complexity 
measures become equivalent, due to bisimulation of Turing machines).


To qualitatively compare my definition to the everyday life definition 
of intelligence, we can check its consistency with our everyday life 
definition of complexity.   Informally, at least, my definition seems 
to check out to me: intelligence according to an IQ test does seem to 
have something to do with the ability to achieve complex goals; and, the 
reason we think IQ tests mean anything is that we think the ability to 
achieve complex goals in the test-context will correlate with the 
ability to achieve complex goals in various more complex environments 
(contexts).


Anyway, if I accept for instance **Richard Loosemore** as a measurer of 
the complexity of environments and goals, then relative to 
Richard-as-a-complexity-measure, I can assess the intelligence of 
various entities, using my definition...
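
As a rough illustrative sketch (purely hypothetical names, not the actual formalism from The Hidden Pattern), scoring an agent relative to a supplied complexity measure might look like this in Python:

# Hypothetical sketch: score an agent relative to a supplied complexity
# measure, in the spirit of "achieving complex goals in complex
# environments". Not anyone's actual formalism.
def intelligence_score(agent, tasks, complexity):
    """tasks: list of (goal, environment) pairs.
    complexity: a function assigning a nonnegative complexity to a
    (goal, environment) pair -- e.g. Richard-as-a-complexity-measure.
    agent: a function (goal, environment) -> degree of success in [0, 1].
    """
    total = 0.0
    for goal, env in tasks:
        # Weight success on each task by how complex the task is judged
        # to be, so achieving harder goals counts for more.
        total += complexity(goal, env) * agent(goal, env)
    return total

Two different complexity measures will in general rank the same agents differently, which is why the definition stays subjective short of the infinite-complexity limit.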


In practice, in building a system like Novamente, I'm relying on modern 
human culture's consensus complexity measure and trying to make a 
system that, according to this measure, can achieve a diverse variety of 
complex goals in complex situations...


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?




Yes...

Ben



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Bruce LaDuke

Definition is intelligence.

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: Ben Goertzel [EMAIL PROTECTED]
Reply-To: singularity@v2.listbox.com
To: singularity@v2.listbox.com
Subject: Re: [singularity] Scenarios for a simulated universe
Date: Sun, 04 Mar 2007 14:26:33 -0500




Richard, I long ago proposed a working definition of intelligence as 
"achieving complex goals in complex environments."  I then went through a 
bunch of trouble to precisely define all the component terms of that 
definition; you can consult the Appendix to my 2006 book The Hidden 
Pattern.  Shane Legg and Marcus Hutter have proposed a related 
definition of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

A function that maps goals G and world states W onto action states A, where 
G, W and A are any mathematical entities whatsoever.


That would make any function that maps X × Y into Z an 
intelligence.


Such a definition would be pointless.  The question is *why* would it be 
pointless?  What criteria are applied, in order to determine whether the 
definition has something to do with the thing that in everyday life we call 
intelligence?


The difficulty in comparing my definition against reality is that my 
definition defines intelligence relative to a complexity measure.


For this reason, it is fundamentally a subjective definition of 
intelligence, except in the unrealistic case where degree of complexity 
tends to infinity (in which case all reasonably general complexity 
measures become equivalent, due to bisimulation of Turing machines).


To qualitatively compare my definition to the everyday life definition of 
intelligence, we can check its consistency with our everyday life definition 
of complexity.   Informally, at least, my definition seems to check out to 
me: intelligence according to an IQ test does seem to have something to do 
with the ability to achieve complex goals; and, the reason we think IQ tests 
mean anything is that we think the ability to achieve complex goals in the 
test-context will correlate with the ability to achieve complex goals in 
various more complex environments (contexts).


Anyway, if I accept for instance **Richard Loosemore** as a measurer of the 
complexity of environments and goals, then relative to 
Richard-as-a-complexity-measure, I can assess the intelligence of various 
entities, using my definition


In practice, in building a system like Novamente, I'm relying on modern 
human culture's consensus complexity measure and trying to make a system 
that, according to this measure, can achieve a diverse variety of complex 
goals in complex situations...


P.S.  Quick sanity check:  you know the last comment in the quote you gave 
(about looking in the dictionary) was Matt's, not mine, right?




Yes...

Ben






Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  What I wanted was a set of non-circular definitions of such terms as 
  "intelligence" and "learning", so that you could somehow *demonstrate* 
  that your mathematical idealizations of these terms correspond with the 
  real thing, ... so that we could believe that the mathematical 
  idealizations were not just a fantasy.
  
  The last time I looked at a dictionary, all definitions are circular.  So 
  you win.
 
 Sigh!
 
 This is a waste of time:  you just (facetiously) rejected the 
 fundamental tenet of science.  Which means that the stuff you were 
 talking about was just pure mathematical fantasy, after all, and nothing 
 to do with science, or the real world.
 
 
 Richard Loosemore.

What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.
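
A toy, computable illustration of that Occam principle (only a drastically simplified stand-in: real Solomonoff induction/AIXI weighs all programs by length and is incomputable; the hypothesis class and names below are purely illustrative):

# Toy stand-in for "guess the simplest explanation for past observations":
# here the hypotheses are just periodic 0/1 patterns and "simplicity" is
# the period length; the agent predicts from the simplest consistent one.
def simplest_consistent_period(history):
    """Return the shortest period p such that history repeats with period p."""
    n = len(history)
    for p in range(1, n + 1):
        if all(history[i] == history[i % p] for i in range(n)):
            return p
    return n

def predict_next(history):
    """Predict the next observation using the simplest consistent hypothesis."""
    if not history:
        return 0  # arbitrary guess with no data
    p = simplest_consistent_period(history)
    return history[len(history) % p]

# After observing 0,1,0,1,0 the simplest consistent hypothesis has period 2,
# so the agent bets the next observation is 1.
print(predict_next([0, 1, 0, 1, 0]))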


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Jef Allbright

On 3/4/07, Matt Mahoney wrote:


What does the definition of intelligence have to do with AIXI?  AIXI is an
optimization problem.  The problem is to maximize an accumulated signal in an
unknown environment.  AIXI says the solution is to guess the simplest
explanation for past observation (Occam's razor), and that this solution is
not computable in general.  I believe these principles have broad
applicability to the design of machine learning algorithms, regardless of
whether you consider such algorithms intelligent.


Matt, you might want to consider that while Occam's Razor is indeed a
very beautiful and powerful principle, it is a heuristic directly
applicable only to situations where all else is equal (or is made
effectively so by means of infinite computing power).

[Observant readers may notice that I'm being slightly tongue-in-cheek
here, drawing a parallel with a recent mismatch of expressed views on
the AGI and Extropy lists regarding the elegance of the Principle of
Indifference. The analogy is sublime.]

My point is that nature never directly applies the perfect principle.
Every problem posed to nature carries an implicit bias, and this is
enough to start nature down the path toward a satisficing heuristic.

While the Principle of Parsimony and the Principle of Indifference
play unattainably objective roles in our epistemology, you may want to
consider their subjective cousin, Max Entropy, as one of your star
players in any practical AI.
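
For a concrete flavor of the maximum-entropy idea (the classic constrained-dice exercise; purely illustrative Python, nothing specific to AIXI or Novamente): among all distributions over a die's faces with a given mean, the maximum-entropy one has the form p(x) proportional to exp(lambda * x), and lambda can be found by bisection.

import math

def maxent_die(target_mean, faces=range(1, 7)):
    """Maximum-entropy distribution over the faces subject to a mean constraint."""
    faces = list(faces)

    def mean_for(lam):
        weights = [math.exp(lam * x) for x in faces]
        z = sum(weights)
        return sum(x * w for x, w in zip(faces, weights)) / z

    lo, hi = -50.0, 50.0  # bracket for the Lagrange multiplier
    for _ in range(200):  # bisection: mean_for(lam) is increasing in lam
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * x) for x in faces]
    z = sum(weights)
    return [w / z for w in weights]

# A fair die has mean 3.5; constraining the mean to 4.5 tilts the
# max-entropy distribution toward the high faces.
print([round(p, 3) for p in maxent_die(4.5)])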

- Jef



[singularity] AGI+LE Poll Question (results)

2007-03-04 Thread Bruce Klein
Nine people responded to the AGI+LE poll question ("How much does life 
extension motivate your interest in AGI?" - the full question is posted at 
the bottom of this email). I've taken a few sentences from each reply and 
posted them below.  I've also compiled a listing of % interest in AGI as 
motivated by Life Extension.  Where the % was not stated explicitly, I 
have taken the liberty of divining a guess.  Please feel free 
to correct me.  Also, feel free to reply to this thread with more / 
newer answers, etc.


For me, I was surprised to find how low the % was (28%). However, on 
reflection I can understand that I'm fairly obsessed with the idea of 
physical immortality as compared to most others ;-)


- Bruce

==Results:

25% Joel Pitt (explicit)
50% Stephen Reed (divined)
00% Bruce LaDuke (explicit)
25% Matt Mahoney (divined)
25% Stathis Papaioannou (divined)
75% Ben Quirk (divined)
25% Mark N. (explicit)
00% Patricia Manney (explicit)
25% Vishaka Datta (divined)
---
28% AVERAGE
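
(The 28% figure is just the rounded arithmetic mean of the nine numbers above; a quick illustrative check in Python:)

percentages = [25, 50, 0, 25, 25, 75, 25, 0, 25]
print(round(sum(percentages) / len(percentages)))  # -> 28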

==Excerpts from replies to AGI+LE Poll

Joel Pitt said:

So my belief is that the singularity a) enables us to have 
longer/indefinite life spans with which to experience more. b) will 
allow us to experience so much more than our current human senses allow 
us. Of course I also think AGI is an amazing puzzle and will answer 
questions (and raise new ones) about self awareness, consciousness and 
intelligence. I also believe that humanity is currently heading towards 
collapse if some major changes don't happen soon - so if the singularity 
can help us survive I'm all for it! :) In summary I'd say life extension 
is only 25% of my interest in it.


--
Stephen Reed said:

Since the early 1970's I've had as my life goal participation in 
technologies that would lead either to Life Extension or to Artificial 
Intelligence, on the theory that if one of these is achieved, the other 
will follow in my extended lifetime.  My confidence has grown over the 
years as others have taken up these goals and some, e.g. Kurtzweil have 
explored the connections between them.  


--
Bruce LaDuke said:

My Life Extension motivation is 0% of the reason why I'm interested in 
AGI+Singularity. I'm interested in AGI+Singularity because I want to 
bring the knowledge creation process to AGI researchers.  I believe that 
singularity is the realization of artificial knowledge creation.


--
Matt Mahoney said:

I don't know if I will live long enough to see the Singularity, but the 
more I think about it, the more I believe it is irrelevant.  Once AGI 
can start improving itself, I think it will quickly advance beyond human 
intellect as humans are advanced over bacteria


I believe the universe is simulated.  I don't know why the simulation 
exists. Maybe there is an AGI working on some problem whose purpose we 
cannot understand.  Maybe it is just experimenting with different 
universes for fun. Maybe there is no reason at all; the current universe 
is just one of an enumeration of all Turing machines.


---
Stathis Papaioannou said:

The important thing as far as survival goes is not that my memories are 
preserved or that aspects of my life can be repeated, but that I 
continue to have new experiences from here on, which experiences contain 
memories of me in their past and identify as being me. That is, if I had 
a choice between living for 200 years and living for 100 years repeated 
10 times (so that I had no idea which cycle I was in), I would not 
hesitate to choose the 200 years. In block universe theories of time, 
the past and present are always there, but this is no comfort at all 
if I can't expect future new experiences.


---
Ben Quirk said:

[Now] that I try to sit here and answer your question I find it 
extremely difficult to put into words. I keep erasing and rewriting what 
I've typed up...  I think my interest [is] motivated [by] the fact that 
greater-than-human intelligence is our best shot at solving all those 
eternal questions such as what is reality, why does something exist 
instead of nothing, what is the nature of consciousness... I'm also 
extremely [in to] life extension and cognitive enhancement.


--
Mark N. said:

Life extension is about 25% of the reason I am interested in the 
Singularity.  I do not want to live forever in a world like today's 
world.  I am quite unhappy with the state of the world and this country, 
and it seems like every year I become more cynical.  Who knows if this 
world as it is today is sustainable?  My motivations are creating a 
sustainable and enjoyable world that everybody will like, and reducing 
the amount of suffering and problems that exist today.


As for what I would get personally out of this?  It would be nice to 
party again without destroying brain cells :P.  But in all seriousness, 
I am not too concerned about personally being alive in a 
post-singularity world.  The concern lies with it actually happening.


---
Patricia (PJ) Manney said:

I'm interested in AGI+Singularity