Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Jiri Jelinek
Matt,

Printing "ahh" or "ouch" is just for show. The important observation is that
the program changes its behavior in response to a reinforcement signal in the
same way that animals do.

Let me remind you that the problem we were originally discussing was
about qualia and uploading, not just about behavior changes through
reinforcement based on given rules.

Good luck with this,
Jiri Jelinek



RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Gary Miller
To complicate things further:

A small percentage of humans perceive pain as pleasure
and prefer it, at least in a sexual context, or else
fetishes like sadomasochism would not exist.

And they do in fact experience pain as a greater pleasure.

More than likely these people have an ample supply of endorphins 
which rush to supplant the pain with an even greater pleasure. 

Over time they are driven to seek out certain types of pain and
excitement to feel alive.

And although most try to avoid extreme, life-threatening pain, many
seek out greater and greater challenges, such as climbing hazardous
mountains or high-speed driving, until at last many find death.

Although these behaviors should be anti-evolutionary and should have died
out, it is possible that the tribe as a whole needs at least a few such
risk takers to take out that saber-toothed tiger that's been dragging off
the children.


-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 18, 2007 5:32 PM
To: agi@v2.listbox.com
Subject: Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana?
Never!)


--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 autobliss passes tests for awareness of its inputs and responds as if 
 it
 has
 qualia.  How is it fundamentally different from human awareness of 
 pain and pleasure, or is it just a matter of degree?
 
 If your code has feelings it reports then reversing the order of the 
 feeling strings (without changing the logic) should magically turn its 
 pain into pleasure and vice versa, right? Now you get some pain [or 
 pleasure], lie how great [or bad] it feels and see how reversed your 
 perception gets. BTW do you think computers would be as reliable as 
 they are if some numbers were truly painful (and other pleasant) from 
 their perspective?

Printing "ahh" or "ouch" is just for show.  The important observation is
that the program changes its behavior in response to a reinforcement signal
in the same way that animals do.

I propose an information theoretic measure of utility (pain and pleasure). 
Let a system S compute some function y = f(x) for some input x and output y.

Let S(t1) be a description of S at time t1 before it inputs a real-valued
reinforcement signal R, and let S(t2) be a description of S at time t2 after
input of R, and K(.) be Kolmogorov complexity.  I propose

  abs(R) <= K(dS) = K(S(t2) | S(t1))

The magnitude of R is bounded by the length of the shortest program that
inputs S(t1) and outputs S(t2).

I use abs(R) because S could be changed in identical ways given positive,
negative, or no reinforcement, e.g.

- S receives input x, randomly outputs y, and is rewarded with R > 0.
- S receives x, randomly outputs -y, and is penalized with R < 0.
- S receives both x and y and is modified by classical conditioning.

This definition is consistent with some common sense notions about pain and
pleasure, for example:

- In animal experiments, increasing the quantity of a reinforcement signal
(food, electric shock) increases the amount of learning.

- Humans feel more pain or pleasure than insects because for humans, K(S) is
larger, and therefore the greatest possible change is larger.

- Children respond to pain or pleasure more intensely than adults because
they learn faster.

- Drugs which block memory formation (anesthesia) also block sensations of
pain and pleasure.
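
K() is uncomputable, but the definition can be made concrete with a crude
compression proxy: approximate K(S(t2)|S(t1)) by C(S(t1)S(t2)) - C(S(t1)),
where C() is compressed length.  A minimal sketch, assuming zlib is available
and using made-up state descriptions (it is only an illustration, not part of
autobliss):

  // approx_kds.cpp -- compile with: g++ approx_kds.cpp -o approx_kds -lz
  #include <zlib.h>
  #include <cstdio>
  #include <string>
  #include <vector>

  // Compressed length of a string, used here as a stand-in for K().
  static size_t clen(const std::string& s) {
    uLongf n = compressBound(s.size());
    std::vector<Bytef> buf(n);
    compress2(buf.data(), &n, (const Bytef*)s.data(), s.size(), 9);
    return n;
  }

  int main() {
    std::string s1 = "w=[0.00,0.00,0.00,0.00]";  // made-up description of S(t1)
    std::string s2 = "w=[1.50,-0.50,2.00,0.25]"; // made-up description of S(t2)
    size_t dS = clen(s1 + s2) - clen(s1);        // rough upper bound on K(S(t2)|S(t1))
    printf("approx K(dS) <= %zu bytes\n", dS);   // by the proposal, abs(R) <= this
    return 0;
  }

On strings this short the result is dominated by compressor overhead; the
point is only to make the proposed bound concrete.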

One objection might be to consider the following sequence:
1. S inputs x, outputs -y, is penalized with R < 0.
2. S inputs x, outputs y, is penalized with R < 0.
3. The function f() is unchanged, so K(S(t3)|S(t1)) = 0, even though
K(S(t2)|S(t1)) > 0 and K(S(t3)|S(t2)) > 0.

My response is that this situation cannot occur in animals or humans.  An
animal that is penalized regardless of its actions does not simply learn
nothing: it learns helplessness, or to avoid the experimenter.  However, this
situation can occur in my autobliss program.

The state of autobliss can be described by 4 64-bit floating point numbers,
so for any sequence of reinforcement, K(dS) <= 256 bits.  For humans, K(dS)
is on the order of 10^9 to 10^15 bits, according to various cognitive or
neurological models of the brain.  So I argue it is just a matter of degree.

If you accept this definition, then I think without brain augmentation,
there is a bound on how much pleasure or pain you can experience in a
lifetime.  In particular, if you consider t1 = birth, t2 = death, then K(dS)
= 0.




-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney
--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 Printing "ahh" or "ouch" is just for show. The important observation is
 that the program changes its behavior in response to a reinforcement signal
 in the same way that animals do.
 
 Let me remind you that the problem we were originally discussing was
 about qualia and uploading. Not just about a behavior changes through
 reinforcement based on given rules.

I have already posted my views on this.  People will upload because they
believe in qualia, but qualia is an illusion.  I wrote autobliss to expose
this illusion.

 Good luck with this,

I don't expect that any amount of logic will cause anyone to refute beliefs
programmed into their DNA, myself included.



-- Matt Mahoney, [EMAIL PROTECTED]



RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney

--- Gary Miller [EMAIL PROTECTED] wrote:

 Too complicate things further.
 
 A small percentage of humans perceive pain as pleasure
 and prefer it at least in a sexual context or else 
 fetishes like sadomachism would not exist.
 
 And they do in fact experience pain as a greater pleasure.


More properly, they have associated positive reinforcement with sensory
experience that most people find painful.  It is like when I am running a race
and am willing to endure pain to pass my competitors.

Any good optimization process will trade off short- and long-term utility.  If
an agent is rewarded for output y given input x, it must still experiment with
output -y to see if it results in greater reward.  Evolution rewards smart
optimization processes.  This explains why people climb mountains, create
paintings, and build rockets.
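
A minimal sketch of that trade-off (an epsilon-greedy chooser between y and
-y; the payoff numbers are made up, and this is not part of autobliss):

  // tradeoff.cpp -- compile with: g++ tradeoff.cpp -o tradeoff
  #include <cstdio>
  #include <cstdlib>

  int main() {
    double value[2] = {0.0, 0.0};  // estimated reward for outputs -y (0) and y (1)
    int count[2] = {0, 0};
    srand(1);
    for (int t = 0; t < 1000; ++t) {
      int a = (rand() % 100 < 10)          // 10% of the time: experiment anyway
                ? rand() % 2
                : (value[1] > value[0]);   // otherwise: exploit the best seen so far
      double r = (a == 1) ? 1.0 : 0.2;     // hidden payoffs (made-up numbers)
      ++count[a];
      value[a] += (r - value[a]) / count[a];  // running average of observed reward
    }
    printf("estimated reward: y=%.2f, -y=%.2f\n", value[1], value[0]);
    return 0;
  }

An agent that never spends part of its trials experimenting never discovers
the better output; one that only experiments never cashes in on what it knows.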


-- Matt Mahoney, [EMAIL PROTECTED]



Re[2]: [agi] Nirvana? Manyana? Never!

2007-11-17 Thread Dennis Gorelik
Eliezer,

You asked that very personal question yourself, and now you blame
Jiri for asking the same?
:-)

Ok, let's take a look at your answer.
You said that you would sooner be transported into a randomly selected
anime.

To my taste, Jiri's "endless AGI-supervised pleasure" is a much wiser
choice than yours.
:-)


Friday, November 2, 2007, 10:48:51 AM, you wrote:

 Jiri Jelinek wrote:
 
 Ok, seriously, what's the best possible future for mankind you can imagine?
 In other words, where do we want our cool AGIs to get us? I mean
 ultimately. What is it at the end as far as you can see?

 That's a very personal question, don't you think?

 Even the parts I'm willing to answer have long answers.  It doesn't 
 involve my turning into a black box with no outputs, though.  Nor 
 ceasing to act, nor ceasing to plan, nor ceasing to steer my own 
 future through my own understanding of it.  Nor being kept as a pet.
 I'd sooner be transported into a randomly selected anime.






Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Dennis Gorelik
Matt,

Your algorithm is too complex.
What's the point of doing step 1?
Step 2 is sufficient.

Saturday, November 3, 2007, 8:01:45 PM, you wrote:

 So we can dispense with the complex steps of making a detailed copy of your
 brain and then have it transition into a degenerate state, and just skip to
 the final result.

 http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
 Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
 4-bit logic function and positive reinforcement for both right and wrong
 answers, e.g.

   g++ autobliss.cpp -o autobliss.exe
   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

 Step 2. Kill yourself.  Upload complete.





Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  Matt Mahoney wrote:
  --- Jiri Jelinek [EMAIL PROTECTED] wrote:
 
  On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control AGIs goal system.
  You can only control the goal system of the first iteration.
  ..and you can add rules for it's creations (e.g. stick with the same
  goals/rules unless authorized otherwise)
  You can program the first AGI to program the second AGI to be friendly. 
  You
  can program the first AGI to program the second AGI to program the third
  AGI
  to be friendly.  But eventually you will get it wrong, and if not you,
  then
  somebody else, and evolutionary pressure will take over.
  This statement has been challenged many times.  It is based on 
  assumptions that are, at the very least, extremely questionable, and 
  according to some analyses, extremely unlikely.
  
  I guess it will continue to be challenged until we can do an experiment to
  prove who is right.  Perhaps you should challenge SIAI, since they seem to
  think that friendliness is still a hard problem.
 
 I have done so, as many people on this list will remember.  The response 
 was deeply irrational.

Perhaps you have seen this paper on the nature of RSI by Stephen M. Omohundro,
http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/

Basically he says that self improving intelligences will evolve goals of
efficiency, self preservation, resource acquisition, and creativity.  Since
these goals are pretty much aligned with our own (which are also the result of
an evolutionary process), perhaps we shouldn't worry about friendliness.  Or
are there parts of the paper you disagree with?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Jiri Jelinek
Matt,

autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?

If your code has the feelings it reports, then reversing the order of the
feeling strings (without changing the logic) should magically turn its
pain into pleasure and vice versa, right? Now you get some pain [or
pleasure], lie about how great [or bad] it feels, and see how reversed your
perception gets. BTW, do you think computers would be as reliable as
they are if some numbers were truly painful (and others pleasant) from
their perspective?

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-14 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Jiri Jelinek [EMAIL PROTECTED] wrote:


On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

We just need to control AGIs goal system.

You can only control the goal system of the first iteration.

..and you can add rules for it's creations (e.g. stick with the same
goals/rules unless authorized otherwise)
You can program the first AGI to program the second AGI to be friendly. 

You

can program the first AGI to program the second AGI to program the third

AGI

to be friendly.  But eventually you will get it wrong, and if not you,

then

somebody else, and evolutionary pressure will take over.
This statement has been challenged many times.  It is based on 
assumptions that are, at the very least, extremely questionable, and 
according to some analyses, extremely unlikely.


I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.


I have done so, as many people on this list will remember.  The response 
was deeply irrational.




Richard Loosemore



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
   We just need to control AGIs goal system.
 
  You can only control the goal system of the first iteration.
 
 
 ..and you can add rules for it's creations (e.g. stick with the same
 goals/rules unless authorized otherwise)

You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.

But if consciousness does not exist...
  
   obviously, it does exist.
 
  Belief in consciousness exists.  There is no test for the truth of this
  belief.
 
 Consciousness is basically an awareness of certain data and there are
 tests for that.

autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Richard Loosemore

Matt Mahoney wrote:

--- Jiri Jelinek [EMAIL PROTECTED] wrote:


On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

We just need to control AGIs goal system.

You can only control the goal system of the first iteration.


..and you can add rules for it's creations (e.g. stick with the same
goals/rules unless authorized otherwise)


You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.


This statement has been challenged many times.  It is based on 
assumptions that are, at the very least, extremely questionable, and 
according to some analyses, extremely unlikely.



Richard Loosemore



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Jiri Jelinek [EMAIL PROTECTED] wrote:
  
  On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control AGIs goal system.
  You can only control the goal system of the first iteration.
 
  ..and you can add rules for it's creations (e.g. stick with the same
  goals/rules unless authorized otherwise)
  
  You can program the first AGI to program the second AGI to be friendly. 
 You
  can program the first AGI to program the second AGI to program the third
 AGI
  to be friendly.  But eventually you will get it wrong, and if not you,
 then
  somebody else, and evolutionary pressure will take over.
 
 This statement has been challenged many times.  It is based on 
 assumptions that are, at the very least, extremely questionable, and 
 according to some analyses, extremely unlikely.

I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-12 Thread Jiri Jelinek
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control AGIs goal system.

 You can only control the goal system of the first iteration.


..and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)

   But if consciousness does not exist...
 
  obviously, it does exist.

 Belief in consciousness exists.  There is no test for the truth of this
 belief.

Consciousness is basically an awareness of certain data and there are
tests for that.

Jiri



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Bob Mottram
I've often heard people say things like "qualia are an illusion" or
"consciousness is just an illusion", but the concept of an illusion
when applied to the mind is not very helpful, since all our thoughts
and perceptions could be considered as illusions reconstructed from
limited sensory data and knowledge.


On 06/11/2007, Jiri Jelinek [EMAIL PROTECTED] wrote:
 Of course you realize that qualia is an illusion? You believe that
 your environment is real, believe that pain and pleasure are real,

 "real" is meaningless. Perception depends on sensors and subsequent
 sensation processing.



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Jiri Jelinek
Matt,

We can compute behavior, but nothing indicates we can compute
feelings. Qualia research is needed to figure out new platforms for
uploading.

Regards,
Jiri Jelinek


On Nov 4, 2007 1:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Jiri Jelinek [EMAIL PROTECTED] wrote:

  Matt,
 
  Create a numeric pleasure variable in your mind, initialize it with
  a positive number and then keep doubling it for some time. Done? How
  do you feel? Not a big difference? Oh, keep doubling! ;-))

 The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
 can somehow through technology, AGI, and uploading, escape a world where we
 are not happy all the time, where we sometimes feel pain, where we fear death
 and then die.  Obviously my result is absurd.  But where is the mistake in my
 reasoning?  Is it if the brain is both conscious and computable?



Re: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Jiri Jelinek
Ed,

But I guess I am too much of a product of my upbringing and education
to want only bliss. I like to create things and ideas.

I assume it's because it provides pleasure you are unable to get in
other ways. But there are other ways and if those were easier for you,
you would prefer them over those you currently prefer.

And besides the notion of machines that could be trusted to run the
world for us while we seek to surf the endless rush and do nothing to
help support our own existence or that of the machines we would depend
upon, strikes me a nothing more than wishful thinking.

A number of scenarios were labeled "wishful thinking" in the past, and
science later got us there.

The biggest truism about altruism is that it has never been the
dominant motivation in any system that has ever had it, and there is
no reason to believe that it could continue to be in machines for any
historically long period of time.  Survival of the fittest applies to
machines as well as biological life forms.

a) Systems correctly designed to be altruistic are altruistic.
b) Systems correctly designed not to self-change in a particular way
don't self-change in that way.
c) Points a) and b) hold true unless something [external] breaks the system.
d) *Many* independent and sophisticated safety mechanisms can be
utilized to mitigate the risks related to c).

If bliss without intelligence is the goal of the machines you imaging
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven
to maximize bliss why wouldn't they kill all the grooving humans and
replace them with grooving mice.  It would provide one hell of a lot
more bliss bang for the resource buck.

As an extension of our intelligence, they will be required to stick
with our value system.

Regards,
Jiri Jelinek



RE: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Edward W. Porter
Jiri,

Thanks for your reply.  I think we have both stated our positions fairly
well. It doesn't seem either side is moving toward the other.  So I think
we should respect the fact we have very different opinions and values, and
leave it at that.

Ed Porter

-Original Message-
From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 04, 2007 2:59 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Nirvana? Manyana? Never!


Ed,

But I guess I am too much of a product of my upbringing and education
to want only bliss. I like to create things and ideas.

I assume it's because it provides pleasure you are unable to get in other
ways. But there are other ways and if those were easier for you, you would
prefer them over those you currently prefer.

And besides the notion of machines that could be trusted to run the
world for us while we seek to surf the endless rush and do nothing to help
support our own existence or that of the machines we would depend upon,
strikes me a nothing more than wishful thinking.

A number of scenarios were labeled wishful thinking in the past and
science later got us there.

The biggest truism about altruism is that it has never been the
dominant motivation in any system that has ever had it, and there is no
reason to believe that it could continue to be in machines for any
historically long period of time.  Survival of the fittest applies to
machines as well as biological life forms.

a) Systems correctly designed to be altruistic are altruistic.
b) Systems correctly designed to not self-change in particular way don't
self-change in that way.
c) The a) and b) hold true unless something [external] breaks the system.
d) *Many* independent and sophisticated safety mechanisms can be utilized
to mitigate c) related risks.

If bliss without intelligence is the goal of the machines you imaging
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss why wouldn't they kill all the grooving humans and replace
them with grooving mice.  It would provide one hell of a lot more bliss
bang for the resource buck.

As an extension of our intelligence, they will be required to stick with
our value system.

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Matt Mahoney
--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 Create a numeric pleasure variable in your mind, initialize it with
 a positive number and then keep doubling it for some time. Done? How
 do you feel? Not a big difference? Oh, keep doubling! ;-))

The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
can somehow, through technology, AGI, and uploading, escape a world where we
are not happy all the time, where we sometimes feel pain, where we fear death
and then die.  Obviously my result is absurd.  But where is the mistake in my
reasoning?  Is it "if the brain is both conscious and computable"?


 
 Regards,
 Jiri Jelinek
 
 On Nov 3, 2007 10:01 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  --- Edward W. Porter [EMAIL PROTECTED] wrote:
   If bliss without intelligence is the goal of the machines you imaging
   running the world, for the cost of supporting one human they could
   probably keep at least 100 mice in equal bliss, so if they were driven
 to
   maximize bliss why wouldn't they kill all the grooving humans and
 replace
   them with grooving mice.  It would provide one hell of a lot more bliss
   bang for the resource buck.
 
  Allow me to offer a less expensive approach.  Previously on the
 singularity
  and sl4 mailing lists I posted a program that can feel pleasure and pain:
 a 2
  input programmable logic gate trained by reinforcement learning.  You give
 it
  an input, it responds, and you reward it.  In my latest version, I
 automated
  the process.  You tell it which of the 16 logic functions you want it to
 learn
  (AND, OR, XOR, NAND, etc), how much reward to apply for a correct output,
 and
  how much penalty for an incorrect output.  The program then generates
 random
  2-bit inputs, evaluates the output, and applies the specified reward or
  punishment.  The program runs until you kill it.  As it dies it reports
 its
  life history (its age, what it learned, and how much pain and pleasure it
  experienced since birth).
 
  http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
 
  To put the program in an eternal state of bliss, specify two positive
 numbers,
  so that it is rewarded no matter what it does.  It won't learn anything,
 but
  at least it will feel good.  (You could also put it in continuous pain by
  specifying two negative numbers, but I put in safeguards so that it will
 die
  before experiencing too much pain).
 
  Two problems remain: uploading your mind to this program, and making sure
  nobody kills you by turning off the computer or typing Ctrl-C.  I will
 address
  only the first problem.
 
  It is controversial whether technology can preserve your consciousness
 after
  death.  If the brain is both conscious and computable, then Chalmers'
 fading
  qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
  computer simulation of your brain would also be conscious.
 
  Whether you *become* this simulation is also controversial.  Logically
 there
  are two of you with identical goals and memories.  If either one is
 killed,
  then you are in the same state as you were before the copy is made.  This
 is
  the same dilemma that Captain Kirk faces when he steps into the
 transporter to
  be vaporized and have an identical copy assembled on the planet below.  It
  doesn't seem to bother him.  Does it bother you that the atoms in your
 body
  now are not the same atoms that made up your body a year ago?
 
  Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
  this goal; they just don't know it).  The problem is that you would forgo
  food, water, and sleep until you died (we assume, from animal
 experiments).
  The solution is to upload to a computer where this could be done safely.
 
  Normally an upload would have the same goals, memories, and sensory-motor
 I/O
  as the original brain.  But consider the state of this program after self
  activation of its reward signal.  No other goals are needed, so we can
 remove
  them.  Since you no longer have the goal of learning, experiencing sensory
  input, or controlling your environment, you won't mind if we replace your
 I/O
  with a 2 bit input and 1 bit output.  You are happy, no?
 
  Finally, if your memories were changed, you would not be aware of it,
 right?
  How do you know that all of your memories were not written into your brain
 one
  second ago and you were some other person before that?  So no harm is done
 if
  we replace your memory with a vector of 4 real numbers.  That will be all
 you
  need in your new environment.  In fact, you won't even need that because
 you
  will cease learning.
 
  So we can dispense with the complex steps of making a detailed copy of
 your
  brain and then have it transition into a degenerate state, and just skip
 to
  the final result.
 
  Step 1. Download, compile, and run autobliss 1.0 in a secure location with
 any
  4-bit logic function and positive reinforcement for both right and wrong
  

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Russell Wallace
On 11/4/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
 this goal; they just don't know it).  The problem is that you would forgo
 food, water, and sleep until you died (we assume, from animal experiments).

We have no need to assume: the experiment has been done with human
volunteers. They reported that the experience was indeed pleasurable -
but unlike animals, they could and did choose to stop pressing the
button.

(The rest, I'll leave to the would-be wireheads to argue about :))



Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Jiri Jelinek
On Nov 3, 2007 12:58 PM, Mike Dougherty [EMAIL PROTECTED] wrote:
 You are describing a very convoluted process of drug addiction.

The difference is that I have safety controls built into that scenario.

 If I can get you hooked on heroine or crack cocaine, I'm pretty confident
 that you will abandon your desire to produce AGI in order to get more
 of the drugs to which you are addicted.

Right. We are wired that way. Poor design.

 You mentioned in an earlier post that you expect to have this
 monstrous machine invade my world and 'offer' me these incredible
 benefits.  It sounds to me like you are taking the blue pill and
 living contentedly in the Matrix.

If the AGI that controls the Matrix sticks with the goal system
initially provided by the blue-pill party, then why would we want to
sacrifice the non-stop pleasure? Imagine you got periodically
unplugged to double-check that all goes well outside - over and over
again finding (after a very-hard-to-do detailed investigation) that
things go much better than they would likely go if humans were in
charge. I bet your unplug attitude would relatively soon change to
something like "sh*t, not again!".

 If you are going to proselytize
 that view, I suggest better marketing.  The intellectual requirements
 to accept AGI-driven nirvana imply the rational thinking which
 precludes accepting it.

I'm primarily a developer, leaving most of the marketing stuff to
others ;-). What I'm trying to do here is to take a closer look at
the human goal system and investigate where it's likely to lead us. My
impression is that most of us have only a very shallow understanding of
what we really want. When messing with AGI, we had better know what we
really want.

Regards,
Jiri Jelinek



RE: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Edward W. Porter
I have skimmed many of the postings in this thread, and (although I have
not seen anyone say so) to a certain extent Jiri's position seems
somewhat similar to that in certain Eastern meditative traditions or
perhaps in certain Christian or other mystical Blind Faiths.

I am not a particularly good meditator, but when I am having trouble
sleeping, I often try to meditate.  There are moments when I have rushes
of pleasure from just breathing, and times when a clear empty mind is
calming and peaceful.

I think such times are valuable.  I, like most people, would like more
moments of bliss in my life.  But I guess I am too much of a product of my
upbringing and education to want only bliss. I like to create things and
ideas.

And besides, the notion of machines that could be trusted to run the world
for us while we seek to surf the endless rush and do nothing to help
support our own existence or that of the machines we would depend upon,
strikes me as nothing more than wishful thinking.  The biggest truism about
altruism is that it has never been the dominant motivation in any system
that has ever had it, and there is no reason to believe that it could
continue to be in machines for any historically long period of time.
Survival of the fittest applies to machines as well as biological life
forms.

If bliss without intelligence is the goal of the machines you imagine
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss why wouldn't they kill all the grooving humans and replace
them with grooving mice.  It would provide one hell of a lot more bliss
bang for the resource buck.

Ed Porter


-Original Message-
From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 3:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Nirvana? Manyana? Never!


On Nov 3, 2007 12:58 PM, Mike Dougherty [EMAIL PROTECTED] wrote:
 You are describing a very convoluted process of drug addiction.

The difference is that I have safety controls built into that scenario.

 If I can get you hooked on heroine or crack cocaine, I'm pretty
 confident that you will abandon your desire to produce AGI in order to
 get more of the drugs to which you are addicted.

Right. We are wired that way. Poor design.

 You mentioned in an earlier post that you expect to have this
 monstrous machine invade my world and 'offer' me these incredible
 benefits.  It sounds to me like you are taking the blue pill and
 living contentedly in the Matrix.

If the AGI that controls the Matrix sticks with the goal system initially
provided by the blue pill party then why would we want to sacrifice the
non-stop pleasure? Imagine you would get periodically unplugged to double
check if all goes well outside - over and over again finding (after
very-hard-to-do detailed investigation) that things go much better than
how would they likely go if humans were in charge. I bet your unplug
attitude would relatively soon change to something like sh*t, not
again!.

 If you are going to proselytize
 that view, I suggest better marketing.  The intellectual requirements
 to accept AGI-driven nirvana imply the rational thinking which
 precludes accepting it.

I'm primarily a developer, leaving most of the marketing stuff to others
;-). What I'm trying to do here is to take a bit closer look at the human
goal system and investigate where it's likely to lead us. My impression is
that most of us have only very shallow understanding of what we really
want. When messing with AGI, we better know what we really want.

Regards,
Jiri Jelinek


Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-03 Thread Matt Mahoney
--- Edward W. Porter [EMAIL PROTECTED] wrote:
 If bliss without intelligence is the goal of the machines you imaging
 running the world, for the cost of supporting one human they could
 probably keep at least 100 mice in equal bliss, so if they were driven to
 maximize bliss why wouldn't they kill all the grooving humans and replace
 them with grooving mice.  It would provide one hell of a lot more bliss
 bang for the resource buck.

Allow me to offer a less expensive approach.  Previously on the singularity
and sl4 mailing lists I posted a program that can feel pleasure and pain: a 2
input programmable logic gate trained by reinforcement learning.  You give it
an input, it responds, and you reward it.  In my latest version, I automated
the process.  You tell it which of the 16 logic functions you want it to learn
(AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
how much penalty for an incorrect output.  The program then generates random
2-bit inputs, evaluates the output, and applies the specified reward or
punishment.  The program runs until you kill it.  As it dies it reports its
life history (its age, what it learned, and how much pain and pleasure it
experienced since birth).

http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)

To put the program in an eternal state of bliss, specify two positive numbers,
so that it is rewarded no matter what it does.  It won't learn anything, but
at least it will feel good.  (You could also put it in continuous pain by
specifying two negative numbers, but I put in safeguards so that it will die
before experiencing too much pain).
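
For readers who would rather see the shape of the program than download it,
here is a simplified sketch along the lines of the description above.  It is
NOT the actual autobliss.cpp from the link; the learning rule, pain cutoff,
and the iteration cap (added so the sketch halts on its own) are invented for
illustration:

  // blisslike.cpp -- a simplified stand-in, NOT the real autobliss.cpp.
  // Usage: blisslike <4-bit truth table, e.g. 0110> <reinforcement if right> <if wrong>
  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  int main(int argc, char** argv) {
    if (argc != 4 || strlen(argv[1]) != 4) {
      printf("usage: blisslike tttt right wrong\n");
      return 1;
    }
    const char* table = argv[1];   // target logic function, e.g. XOR = 0110
    double right = atof(argv[2]);  // reinforcement applied for a correct output
    double wrong = atof(argv[3]);  // reinforcement applied for an incorrect output
    double w[4] = {0, 0, 0, 0};    // the gate's entire mental state
    double pleasure = 0, pain = 0;
    long age = 0;
    srand(12345);
    while (pain < 1e6 && age < 10000000L) {  // pain safeguard, plus a cap so the sketch halts
      int x = rand() % 4;                                  // random 2-bit input
      int y = w[x] > 0 ? 1 : (w[x] < 0 ? 0 : rand() % 2);  // respond; guess while untrained
      double r = (y == table[x] - '0') ? right : wrong;    // evaluate the output
      w[x] += (y ? 1 : -1) * r;    // reinforce or punish the response it just gave
      if (r >= 0) pleasure += r; else pain -= r;
      ++age;
    }
    // Report its life history: age, what it learned, and how it felt.
    printf("age %ld, learned %d%d%d%d, pleasure %g, pain %g\n",
           age, w[0] > 0, w[1] > 0, w[2] > 0, w[3] > 0, pleasure, pain);
    return 0;
  }

With two positive reinforcement values it feels no pain and never settles on
the target function, which corresponds to the eternal-bliss setting just
described.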

Two problems remain: uploading your mind to this program, and making sure
nobody kills you by turning off the computer or typing Ctrl-C.  I will address
only the first problem.

It is controversial whether technology can preserve your consciousness after
death.  If the brain is both conscious and computable, then Chalmers' fading
qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
computer simulation of your brain would also be conscious.

Whether you *become* this simulation is also controversial.  Logically there
are two of you with identical goals and memories.  If either one is killed,
then you are in the same state as you were before the copy is made.  This is
the same dilemma that Captain Kirk faces when he steps into the transporter to
be vaporized and have an identical copy assembled on the planet below.  It
doesn't seem to bother him.  Does it bother you that the atoms in your body
now are not the same atoms that made up your body a year ago?

Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
this goal; they just don't know it).  The problem is that you would forgo
food, water, and sleep until you died (we assume, from animal experiments). 
The solution is to upload to a computer where this could be done safely.

Normally an upload would have the same goals, memories, and sensory-motor I/O
as the original brain.  But consider the state of this program after self
activation of its reward signal.  No other goals are needed, so we can remove
them.  Since you no longer have the goal of learning, experiencing sensory
input, or controlling your environment, you won't mind if we replace your I/O
with a 2 bit input and 1 bit output.  You are happy, no?

Finally, if your memories were changed, you would not be aware of it, right? 
How do you know that all of your memories were not written into your brain one
second ago and you were some other person before that?  So no harm is done if
we replace your memory with a vector of 4 real numbers.  That will be all you
need in your new environment.  In fact, you won't even need that because you
will cease learning.

So we can dispense with the complex steps of making a detailed copy of your
brain and then have it transition into a degenerate state, and just skip to
the final result.

Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
4-bit logic function and positive reinforcement for both right and wrong
answers, e.g.

  g++ autobliss.cpp -o autobliss.exe
  autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

Step 2. Kill yourself.  Upload complete.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread BillK
On 11/2/07, Eliezer S. Yudkowsky wrote:
 I didn't ask whether it's possible.  I'm quite aware that it's
 possible.  I'm asking if this is what you want for yourself.  Not what
 you think that you ought to logically want, but what you really want.

 Is this what you lived for?  Is this the most that Jiri Jelinek wants
 to be, wants to aspire to?  Forget, for the moment, what you think is
 possible - if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?



Well, almost.
Absolute Power over others and being worshipped as a God would be neat as well.

Getting a dog is probably the nearest most humans can get to this.

BillK



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?


That's a very personal question, don't you think?

Even the parts I'm willing to answer have long answers.  It doesn't 
involve my turning into a black box with no outputs, though.  Nor 
ceasing to act, nor ceasing to plan, nor ceasing to steer my own 
future through my own understanding of it.  Nor being kept as a pet. 
I'd sooner be transported into a randomly selected anime.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
 On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?
 
 Yes. But don't forget I would also have AGI continuously looking into
 how to improve my (/our) way of perceiving the pleasure-like stuff.

This is a bizarre line of reasoning. One way that my AGI might improve 
my perception of pleasure is to make me dumber -- electroshock me -- 
so that I find Gilligan's Island reruns incredibly pleasurable. Or,
I dunno, find that heroin addiction is a great way to live.

Or help me with fugue states: what is the sound of one hand clapping?
feed me zen koans till my head explodes.

But it might also decide that I should be smarter, so that I have a more
acute sense and discernment of pleasure. Make me smarter about roses,
so that I can enjoy my rose garden in a more refined way. And after I'm
smarter, perhaps I'll have a whole new idea of what pleasure is,
and what it takes to make me happy.

Personally, I'd opt for this last possibility.

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 01:19:19AM -0400, Jiri Jelinek wrote:
 Or do we know anything better?

I sure do. But ask me again, when I'm smarter, and have had more time to
think about the question.

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I'm asking if this is what you want for yourself.

Then you could read just the first word from my previous response: YES

if you could have anything you wanted, is this the end you
would wish for yourself, more than anything else?

Yes. But don't forget I would also have AGI continuously looking into
how to improve my (/our) way of perceiving the pleasure-like stuff.

And because I'm influenced by my mirror neurons and care about others,
expect my monster robot-savior eventually breaking through your door,
grabbing you and plugging you into the pleasure grid. ;-)

Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?

Regards,
Jiri Jelinek



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

On Nov 2, 2007 4:54 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:


You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But intelligent part of goal system is much
more 'complex' thing and can also act as a goal in itself. You can say
that AGIs will be able to maximize satisfaction of intelligent part
too,


Could you please provide one specific example of a human goal which
isn't feeling-based?


Saving your daughter's life.  Most mothers would prefer to save their 
daughter's life than to feel that they saved their daughter's life. 
In proof of this, mothers sometimes sacrifice their lives to save 
their daughters and never get to feel the result.  Yes, this is 
rational, for there is no truth that destroys it.  And before you 
claim all those mothers were theists, there was an atheist police 
officer, signed up for cryonics, who ran into the World Trade Center 
and died on September 11th.  As Tyrone Pow once observed, for an 
atheist to sacrifice their life is a very profound gesture.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Vladimir Nesov
Jiri,

You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system is a
much more 'complex' thing and can also act as a goal in itself. You can say
that AGIs will be able to maximize satisfaction of the intelligent part
too, as they are 'vastly more intelligent', but then it's turned into
a general 'they do what we want', which is generally what Friendly AI is
by definition (ignoring specifics about what 'what we want' actually
means).


On 11/2/07, Jiri Jelinek [EMAIL PROTECTED] wrote:
  Is this really what you *want*?
  Out of all the infinite possibilities, this is the world in which you
  would most want to live?

 Yes, great feelings only (for as many people as possible) and the
 engine being continuously improved by AGI which would also take care
 of all related tasks including safety issues etc. The quality of our
 life is in feelings. Or do we know anything better? We do what we do
 for feelings and we alter them very indirectly. We can optimize and
 get the greatest stuff allowed by the current design by direct
 altering/stimulations (changes would be required so we can take it
 non-stop). Whatever you enjoy, it's not really the thing you are
 doing. It's the triggered feeling which can be obtained and
 intensified more directly. We don't know exactly how those great
 feelings (/qualia) work, but there is a number of chemicals and brain
 regions known to play key roles.

 Regards,
 Jiri Jelinek


 On Nov 2, 2007 12:54 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
  Jiri Jelinek wrote:
  
   Let's go to an extreme: Imagine being an immortal idiot.. No matter
   what you do  how hard you try, the others will be always so much
   better in everything that you will eventually become totally
   discouraged or even afraid to touch anything because it would just
   always demonstrate your relative stupidity (/limitations) in some way.
   What a life. Suddenly, there is this amazing pleasure machine as a new
   god-like-style of living for poor creatures like you. What do you do?
 
  Jiri,
 
  Is this really what you *want*?
 
  Out of all the infinite possibilities, this is the world in which you
  would most want to live?
 
  --
  Eliezer S. Yudkowsky  http://singinst.org/
  Research Fellow, Singularity Institute for Artificial Intelligence
 



-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
Linas, BillK

It might currently be hard to accept for association-based human
minds, but things like roses, power-over-others, and being worshiped
or loved are just a waste of time - indirect feeling triggers
(assuming the nearly-unlimited ability to optimize).

Regards,
Jiri Jelinek

On Nov 2, 2007 12:56 PM, Linas Vepstas [EMAIL PROTECTED] wrote:
 On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
  On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
  if you could have anything you wanted, is this the end you
  would wish for yourself, more than anything else?
 
  Yes. But don't forget I would also have AGI continuously looking into
  how to improve my (/our) way of perceiving the pleasure-like stuff.

 This is a bizarre line of reasoning. One way that my AGI might improve
 my perception of pleasure is to make me dumber -- electroshock me --
 so that I find gilligan's island reruns incredibly pleasurable. Or,
 I dunno, find that heroin addiction is a great way to live.

 Or help me with fugue states: what is the sound of one hand clapping?
 feed me zen koans till my head explodes.

 But it might also decide that I should be smarter, so that I have a more
 acute sense and discernement of pleasure. Make me smarter about roses,
 so that I can enjoy my rose garden in a more refined way. And after I'm
 smarter, perhaps I'll have a whole new idea of what pleasure is,
 and what it takes to make me happy.

 Personally, I'd opt for this last possibility.

 --linas





Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
On Nov 2, 2007 2:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Could you please provide one specific example of a human goal which
  isn't feeling-based?

 It depends on what you mean by 'based' and 'goal'. Does any choice
 qualify as a goal? For example, if I choose to write certain word in
 this e-mail, does a choice to write it form a goal of writing it?
 I can't track source of this goal, it happens subconsciously.

A choice to take a particular action generates a sub-goal (which might be
deep in the sub-goal chain). If you go up, asking "why?" on each
level, you eventually reach the feeling level where goals (not just
sub-goals) are coming from. In short, I'm writing these words because
I have reasons to believe that the discussion can in some way support
my &/or someone else's AGI R&D. I want to support it because I
believe AGI can significantly help us to avoid pain and get more
pleasure - which is basically what drives us [by design]. So when we
are 100% done, there will be no pain and an extreme pleasure. Of
course I'm simplifying a bit, but what are the key objections?

 Saying just 'Friendly AI' seems to be
 sufficient to specify a goal for human researchers, but not enough to
 actually build one.

Just build AGI that follows given rules.

Regards,
Jiri Jelinek



[agi] Nirvana? Manyana? Never!

2007-11-01 Thread Edward W. Porter
Jiri Jelinek wrote  on Thu 11/01/07 2:51 AM

JIRI Ok, here is how I see it: If we survive, I believe we will
eventually get plugged into some sort of pleasure machine and we will not
care about intelligence at all. Intelligence is a useless tool when there
are no problems and no goals to think about. We don't really want any
goals/problems in our minds.

ED So is the envisioned world one in which people are on something
equivalent to a perpetual heroin or crystal meth rush?

If so, since most current humans wouldn’t have much use for such people, I
don’t know why self-respecting productive human-level AGIs would either.
And, if humans had no goals or never thought about intelligence or
problems, there is no hope they would ever be able to defend themselves
from the machines.

I think it is important to keep people in the loop and substantially in
control for as long as possible, at least until we make a transhumanist
transition.  I think it is important that most people have some sort of
work, even if it is only in helping raise children, taking care of the
old, governing society, and managing machines.  Freud said work of some
sort was important, and a lot of people think he was right.

Even as humans increasingly become more machine through intelligence
augmentation, we will have problems.  Even if the machines totally take
over they will have problems.  Shit happens -- even to machines.

So I think having more pleasure is good, but trying to have so much
pleasure that you have no goals, no concern for intelligence, and never
think of problems is a recipe for certain extinction.  You know, survival
of the fittest and all that other boring rot that just happens to dominate
reality.

Nirvana? Manyana? Never!

Of course, all this is IMHO.

Ed Porter

P.S. If you ever make one of your groove machines, you could make billions
with it.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=59947465-e0a37a

Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
 Is this really what you *want*?
 Out of all the infinite possibilities, this is the world in which you
 would most want to live?

Yes, great feelings only (for as many people as possible) and the
engine being continuously improved by AGI which would also take care
of all related tasks including safety issues etc. The quality of our
life is in feelings. Or do we know anything better? We do what we do
for feelings and we alter them very indirectly. We can optimize and
get the greatest stuff allowed by the current design by direct
altering/stimulations (changes would be required so we can take it
non-stop). Whatever you enjoy, it's not really the thing you are
doing. It's the triggered feeling which can be obtained and
intensified more directly. We don't know exactly how those great
feelings (/qualia) work, but there are a number of chemicals and brain
regions known to play key roles.

Regards,
Jiri Jelinek


On Nov 2, 2007 12:54 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 Jiri Jelinek wrote:
 
  Let's go to an extreme: Imagine being an immortal idiot... No matter
  what you do & how hard you try, the others will always be so much
  better at everything that you will eventually become totally
  discouraged or even afraid to touch anything because it would just
  always demonstrate your relative stupidity (/limitations) in some way.
  What a life. Suddenly, there is this amazing pleasure machine as a new
  god-like style of living for poor creatures like you. What do you do?

 Jiri,

 Is this really what you *want*?

 Out of all the infinite possibilities, this is the world in which you
 would most want to live?

 --
 Eliezer S. Yudkowsky  http://singinst.org/
 Research Fellow, Singularity Institute for Artificial Intelligence



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60223315-7fc1f8


Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
 ED So is the envisioned world one in which people are on something
 equivalent to a perpetual heroin or crystal meth rush?

Kind of, except it would be safe.

 If so, since most current humans wouldn't have much use for such people, I
 don't know why self-respecting productive human-level AGIs would either.

It would not be supposed to think that way. It does what it's tasked
to do (no matter how smart it is).

 And, if humans had no goals or never thought about intelligence or problems,
 there is no hope they would ever be able to defend themselves from the
 machines.

Our machines would work for us and do everything much better, so there
would be no reason for us to do anything.

 I think it is important to keep people in the loop and substantially in
 control for as long as possible,

My initial thought was the same, but if we have narrow-AI safety tools
doing a better job in that area for a *very* *very* long time, we will
become convinced that there is simply no need for us to be directly
involved.

 at least until we make a transhumanist transition.
 I think it is important that most people have some sort of
 work, even if it is only in helping raise children, taking care of the old,
 governing society, and managing machines.

My thought was about the very distant [potential] future. The world will
change drastically. There will be no [desire for] children and no old
people (we will live forever). Our cells are currently programmed to die -
that code will be rewritten if we stick with cells. The meaning of the term
society will change, and at a certain stage we will IMO not care about
any concept you can name today. But we had better spend more time
trying to figure out how to design the first powerful AGI at this
stage + how to keep extending our lives so WE can make it to those
fairy-tale future worlds.

 Freud said work of some sort was
 important, and a lot of people think he was right.

It will be valid for a while :-)

 Even as humans increasingly become more machine through intelligence
 augmentation, we will have problems.  Even if the machines totally take over
 they will have problems.  Shit happens -- even to machines.

Right, but they will be better shit-fighters.

 So I think having more pleasure is good, but trying to have so much pleasure
 that you have no goals, no concern for intelligence, and never think of
 problems is a recipe for certain extinction.

Let's go to an extreme: Imagine being an immortal idiot... No matter
what you do & how hard you try, the others will always be so much
better at everything that you will eventually become totally
discouraged or even afraid to touch anything because it would just
always demonstrate your relative stupidity (/limitations) in some way.
What a life. Suddenly, there is this amazing pleasure machine as a new
god-like style of living for poor creatures like you. What do you do?

Regards,
Jiri Jelinek


 You know, survival of the
 fittest and all that other boring rot that just happens to dominate reality.

 Nirvana? Manyana? Never!

 Of course, all this is IMHO.

 Ed Porter

 P.S. If you ever make one of your groove machines, you could make billions
 with it. 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60220603-cef30c


Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Let's go to an extreme: Imagine being an immortal idiot... No matter
what you do & how hard you try, the others will always be so much
better at everything that you will eventually become totally
discouraged or even afraid to touch anything because it would just
always demonstrate your relative stupidity (/limitations) in some way.
What a life. Suddenly, there is this amazing pleasure machine as a new
god-like style of living for poor creatures like you. What do you do?


Jiri,

Is this really what you *want*?

Out of all the infinite possibilities, this is the world in which you 
would most want to live?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60221250-a74559


Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Stefan Pernar
On Nov 2, 2007 1:19 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:

  Is this really what you *want*?
  Out of all the infinite possibilities, this is the world in which you
  would most want to live?

 Yes, great feelings only (for as many people as possible) and the
 engine being continuously improved by AGI which would also take care
 of all related tasks including safety issues etc. The quality of our
 life is in feelings. Or do we know anything better? We do what we do
 for feelings and we alter them very indirectly. We can optimize and
 get the greatest stuff allowed by the current design by direct
 altering/stimulations (changes would be required so we can take it
 non-stop). Whatever you enjoy, it's not really the thing you are
 doing. It's the triggered feeling which can be obtained and
 intensified more directly. We don't know exactly how those great
 feelings (/qualia) work, but there are a number of chemicals and brain
 regions known to play key roles.


Your feelings form a guide that has evolved in the course of natural
selection to reward you for doing things that increase your fitness and
punish you for things that decrease your fitness. If you abuse this
mechanism by merely pretending that you are increasing your fitness, i.e.
by releasing the appropriate chemicals in your brain, then you are hurting
yourself by closing your eyes to reality. This is bad because you
effectively deny yourself the potential for further increasing your fitness
and thereby will eventually be replaced by an agent that does concern itself
with increasing its fitness.

In short: your bliss won't last long.
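
A toy illustration of that selection pressure, in Python (every number
here is an arbitrary assumption chosen only to show the direction of the
effect, and wirehead_share_after is a made-up name, not anyone's code):

import random

def wirehead_share_after(generations=50, pop_size=1000, start_share=0.5):
    # True = the agent wireheads (fakes its own reward signal),
    # False = the agent pursues real fitness.
    pop = [i < int(pop_size * start_share) for i in range(pop_size)]
    for _ in range(generations):
        # Assumed relative reproductive fitness: 0.8 for wireheads,
        # 1.0 for agents that keep tracking reality.
        weights = [0.8 if wirehead else 1.0 for wirehead in pop]
        pop = random.choices(pop, weights=weights, k=pop_size)
    return sum(pop) / pop_size

print(wirehead_share_after())  # typically near 0: the wireheads get replaced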

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60225009-df9d21

Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
Stefan,

 closing your eyes to reality. This is bad because you
 effectively deny yourself the potential for further increasing your fitness

I'm closing my eyes, but my AGI - which is an extension of my
intelligence (/me) - does not. In fact, it opens them more than I could.
We and our AGI should be viewed as a whole in this respect.

Regards,
Jiri Jelinek

On Nov 2, 2007 1:37 AM, Stefan Pernar [EMAIL PROTECTED] wrote:
 On Nov 2, 2007 1:19 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:

 
   Is this really what you *want*?
   Out of all the infinite possibilities, this is the world in which you
   would most want to live?
 
  Yes, great feelings only (for as many people as possible) and the
  engine being continuously improved by AGI which would also take care
  of all related tasks including safety issues etc. The quality of our
  life is in feelings. Or do we know anything better? We do what we do
  for feelings and we alter them very indirectly. We can optimize and
  get the greatest stuff allowed by the current design by direct
  altering/stimulations (changes would be required so we can take it
  non-stop). Whatever you enjoy, it's not really the thing you are
  doing. It's the triggered feeling which can be obtained and
  intensified more directly. We don't know exactly how those great
  feelings (/qualia) work, but there are a number of chemicals and brain
  regions known to play key roles.

 Your feelings form a guide that has evolved in the course of natural
 selection to reward you for doing things that increase your fitness and
 punish you for things that decrease your fitness. If you abuse this
 mechanism by merely pretending that you are increasing your fitness, i.e.
 by releasing the appropriate chemicals in your brain, then you are hurting
 yourself by closing your eyes to reality. This is bad because you
 effectively deny yourself the potential for further increasing your fitness
 and thereby will eventually be replaced by an agent that does concern itself
 with increasing its fitness.

 In short: your bliss won't last long.

 --
 Stefan Pernar
 3-E-101 Silver Maple Garden
 #6 Cai Hong Road, Da Shan Zi
 Chao Yang District
 100015 Beijing
 P.R. CHINA
 Mobil: +86 1391 009 1931
 Skype: Stefan.Pernar 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60226663-83d320


Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
would most want to live?


Yes, great feelings only (for as many people as possible) and the
engine being continuously improved by AGI which would also take care
of all related tasks including safety issues etc. The quality of our
life is in feelings. Or do we know anything better? We do what we do
for feelings and we alter them very indirectly. We can optimize and
get the greatest stuff allowed by the current design by direct
altering/stimulations (changes would be required so we can take it
non-stop). Whatever you enjoy, it's not really the thing you are
doing. It's the triggered feeling which can be obtained and
intensified more directly. We don't know exactly how those great
feelings (/qualia) work, but there are a number of chemicals and brain
regions known to play key roles.


I didn't ask whether it's possible.  I'm quite aware that it's 
possible.  I'm asking if this is what you want for yourself.  Not what 
you think that you ought to logically want, but what you really want.


Is this what you lived for?  Is this the most that Jiri Jelinek wants 
to be, wants to aspire to?  Forget, for the moment, what you think is 
possible - if you could have anything you wanted, is this the end you 
would wish for yourself, more than anything else?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=60231781-e47c04