RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney

--- Gary Miller <[EMAIL PROTECTED]> wrote:

> To complicate things further:
> 
> A small percentage of humans perceive pain as pleasure
> and prefer it, at least in a sexual context, or else
> fetishes like sadomasochism would not exist.
> 
> And they do in fact experience pain as a greater pleasure.


More properly, they have associated positive reinforcement with sensory
experience that most people find painful.  It is like when I am running a
race and am willing to endure pain to pass my competitors.

Any good optimization process will trade off short- and long-term utility.  If
an agent is rewarded for output y given input x, it must still experiment with
output -y to see if it results in greater reward.  Evolution rewards smart
optimization processes.  This explains why people climb mountains, create
paintings, and build rockets.
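
A minimal sketch of that exploration/exploitation tradeoff, assuming a simple
epsilon-greedy rule; the rule, the constants, and the toy reward values below
are illustrative assumptions, not anything taken from autobliss.cpp:

  // Sketch: an agent that mostly exploits its better-looking output but
  // occasionally tries the other one, which is the only way it can discover
  // that the alternative actually pays more in the long run.
  #include <cstdlib>
  #include <iostream>

  int main() {
    double value[2] = {0.0, 0.0};   // estimated reward of outputs y and -y
    int    count[2] = {0, 0};
    const double epsilon = 0.1;     // fraction of trials spent exploring

    for (int t = 0; t < 1000; ++t) {
      int a = (value[0] >= value[1]) ? 0 : 1;            // exploit
      if ((double)std::rand() / RAND_MAX < epsilon)
        a = std::rand() % 2;                             // explore
      double r = (a == 1) ? 1.0 : 0.2;                   // toy environment
      ++count[a];
      value[a] += (r - value[a]) / count[a];             // running average
    }
    std::cout << "estimated values: " << value[0] << " " << value[1] << "\n";
    return 0;
  }

Without the occasional exploratory trial, the agent above would settle on the
0.2 output forever and never discover the larger reward.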


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> >Printing "ahh" or "ouch" is just for show. The important observation is that
> the program changes its behavior in response to a reinforcement signal in the
> same way that animals do.
> 
> Let me remind you that the problem we were originally discussing was
> about qualia and uploading, not just about behavior changes through
> reinforcement based on given rules.

I have already posted my views on this.  People will upload because they
believe in qualia, but qualia is an illusion.  I wrote autobliss to expose
this illusion.

> Good luck with this,

I don't expect that any amount of logic will cause anyone, myself included,
to give up beliefs programmed into their DNA.



-- Matt Mahoney, [EMAIL PROTECTED]



RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Gary Miller
To complicate things further:

A small percentage of humans perceive pain as pleasure
and prefer it, at least in a sexual context, or else
fetishes like sadomasochism would not exist.

And they do in fact experience pain as a greater pleasure.

More than likely these people have an ample supply of endorphins 
which rush to supplant the pain with an even greater pleasure. 

Over time they are driven to seek out certain types of pain and
excitement to feel alive.

And although most try to avoid extreme, life-threatening pain, many
seek out greater and greater challenges, such as climbing hazardous
mountains or high-speed driving, until at last many find death.

Although these behaviors should be anti-evolutionary and should have died
out, it is possible that the tribe as a whole needs at least a few such
risk-takers to take out that saber-toothed tiger that's been dragging off
the children.


-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 18, 2007 5:32 PM
To: agi@v2.listbox.com
Subject: Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana?
Never!)


--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> >autobliss passes tests for awareness of its inputs and responds as if it has
> qualia.  How is it fundamentally different from human awareness of pain and
> pleasure, or is it just a matter of degree?
> 
> If your code has the feelings it reports, then reversing the order of the
> feeling strings (without changing the logic) should magically turn its
> pain into pleasure and vice versa, right? Now you get some pain [or
> pleasure], lie about how great [or bad] it feels, and see how reversed your
> perception gets. BTW do you think computers would be as reliable as
> they are if some numbers were truly painful (and others pleasant) from
> their perspective?

Printing "ahh" or "ouch" is just for show.  The important observation is
that the program changes its behavior in response to a reinforcement signal
in the same way that animals do.

I propose an information theoretic measure of utility (pain and pleasure). 
Let a system S compute some function y = f(x) for some input x and output y.

Let S(t1) be a description of S at time t1 before it inputs a real-valued
reinforcement signal R, and let S(t2) be a description of S at time t2 after
input of R, and K(.) be Kolmogorov complexity.  I propose

  abs(R) <= K(dS) = K(S(t2) | S(t1))

The magnitude of R is bounded by the length of the shortest program that
inputs S(t1) and outputs S(t2).

I use abs(R) because S could be changed in identical ways given positive,
negative, or no reinforcement, e.g.

- S receives input x, randomly outputs y, and is rewarded with R > 0.
- S receives x, randomly outputs -y, and is penalized with R < 0.
- S receives both x and y and is modified by classical conditioning.

This definition is consistent with some common sense notions about pain and
pleasure, for example:

- In animal experiments, increasing the quantity of a reinforcement signal
(food, electric shock) increases the amount of learning.

- Humans feel more pain or pleasure than insects because for humans, K(S) is
larger, and therefore the greatest possible change is larger.

- Children respond to pain or pleasure more intensely than adults because
they learn faster.

- Drugs which block memory formation (anesthesia) also block sensations of
pain and pleasure.

One objection might be to consider the following sequence:
1. S inputs x, outputs -y, is penalized with R < 0.
2. S inputs x, outputs y, is penalized with R < 0.
3. The function f() is unchanged, so K(S(t3)|S(t1)) = 0, even though
K(S(t2)|S(t1)) > 0 and K(S(t3)|S(t2)) > 0.

My response is that this situation cannot occur in animals or humans.  An
animal that is penalized regardless of its actions does not simply learn
nothing: it learns helplessness, or learns to avoid the experimenter.
However, this situation can occur in my autobliss program.

The state of autobliss can be described by 4 64-bit floating point numbers,
so for any sequence of reinforcement, K(dS) <= 256 bits.  For humans,
K(dS) <= 10^9 to 10^15 bits, according to various cognitive or neurological
models of the brain.  So I argue it is just a matter of degree.

If you accept this definition, then I think without brain augmentation,
there is a bound on how much pleasure or pain you can experience in a
lifetime.  In particular, if you consider t1 = birth, t2 = death, then K(dS)
= 0.




-- Matt Mahoney, [EMAIL PROTECTED]




Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Jiri Jelinek
Matt,

>Printing "ahh" or "ouch" is just for show. The important observation is that
the program changes its behavior in response to a reinforcement signal in the
same way that animals do.

Let me remind you that the problem we were originally discussing was
about qualia and uploading, not just about behavior changes through
reinforcement based on given rules.

Good luck with this,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney

--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> >autobliss passes tests for awareness of its inputs and responds as if it has
> qualia.  How is it fundamentally different from human awareness of pain and
> pleasure, or is it just a matter of degree?
> 
> If your code has the feelings it reports, then reversing the order of the
> feeling strings (without changing the logic) should magically turn its
> pain into pleasure and vice versa, right? Now you get some pain [or
> pleasure], lie about how great [or bad] it feels, and see how reversed your
> perception gets. BTW do you think computers would be as reliable as
> they are if some numbers were truly painful (and others pleasant) from
> their perspective?

Printing "ahh" or "ouch" is just for show.  The important observation is that
the program changes its behavior in response to a reinforcement signal in the
same way that animals do.

I propose an information theoretic measure of utility (pain and pleasure). 
Let a system S compute some function y = f(x) for some input x and output y. 
Let S(t1) be a description of S at time t1 before it inputs a real-valued
reinforcement signal R, and let S(t2) be a description of S at time t2 after
input of R, and K(.) be Kolmogorov complexity.  I propose

  abs(R) <= K(dS) = K(S(t2) | S(t1))

The magnitude of R is bounded by the length of the shortest program that
inputs S(t1) and outputs S(t2).

I use abs(R) because S could be changed in identical ways given positive,
negative, or no reinforcement, e.g.

- S receives input x, randomly outputs y, and is rewarded with R > 0.
- S receives x, randomly outputs -y, and is penalized with R < 0.
- S receives both x and y and is modified by classical conditioning.

This definition is consistent with some common sense notions about pain and
pleasure, for example:

- In animal experiments, increasing the quantity of a reinforcement signal
(food, electric shock) increases the amount of learning.

- Humans feel more pain or pleasure than insects because for humans, K(S) is
larger, and therefore the greatest possible change is larger.

- Children respond to pain or pleasure more intensely than adults because they
learn faster.

- Drugs which block memory formation (anesthesia) also block sensations of
pain and pleasure.

One objection might be to consider the following sequence:
1. S inputs x, outputs -y, is penalized with R < 0.
2. S inputs x, outputs y, is penalized with R < 0.
3. The function f() is unchanged, so K(S(t3)|S(t1)) = 0, even though
K(S(t2)|S(t1)) > 0 and K(S(t3)|S(t2)) > 0.

My response is that this situation cannot occur in animals or humans.  An
animal that is penalized regardless of its actions does not simply learn
nothing: it learns helplessness, or learns to avoid the experimenter.
However, this situation can occur in my autobliss program.

The state of autobliss can be described by 4 64-bit floating point numbers, so
for any sequence of reinforcement, K(dS) <= 256 bits.  For humans, K(dS) <=
10^9 to 10^15 bits, according to various cognitive or neurological models of
the brain.  So I argue it is just a matter of degree.
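
As a back-of-the-envelope check of the numbers above (a sketch only: the 256
bits are just 4 doubles times 64 bits, and the 10^9 figure is simply the low
end of the range cited here, not an independently measured value):

  #include <cstdio>

  int main() {
    const double autobliss_bits = 4 * 64;   // 4 IEEE-754 doubles = 256 bits
    const double human_bits_low = 1e9;      // low end of the cited 10^9..10^15
    std::printf("K(dS) bound for autobliss: %.0f bits\n", autobliss_bits);
    std::printf("human/autobliss ratio (low end): %.1e\n",
                human_bits_low / autobliss_bits);
    return 0;
  }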

If you accept this definition, then I think without brain augmentation, there
is a bound on how much pleasure or pain you can experience in a lifetime.  In
particular, if you consider t1 = birth, t2 = death, then K(dS) = 0.




-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Mike Tintner

Jiri: If your code has the feelings it reports, then reversing the order of the
feeling strings (without changing the logic) should magically turn its
pain into pleasure and vice versa, right?

The notions above  - common in discussions here - are so badly in error.

*Codes don't have emotions.
*Computers don't have emotions
*You need a body to have emotions
*Computers don't have bodies - unless you are talking about robots.
*Emotions involve changing energy levels - they are arousals or depressions 
of the system

*Computers don't have changing energy levels (though they could)
*Codes & computers don't have feelings
*To have a feeling, a system has to have a self that decides to what extent 
to feel the emotions - and which of two or more conflicting emotions to feel 
more

*Your AGI systems don't have selves (though they could).
*Your systems don't have emotional conflicts (though they could).

IOW why not just talk about Harry Potter's emotional flying pigs? It would 
be more grounded than the above discussion. 





Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Jiri Jelinek
Matt,

>autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?

If your code has the feelings it reports, then reversing the order of the
feeling strings (without changing the logic) should magically turn its
pain into pleasure and vice versa, right? Now you get some pain [or
pleasure], lie about how great [or bad] it feels and see how reversed your
perception gets. BTW do you think computers would be as reliable as
they are if some numbers were truly painful (and others pleasant) from
their perspective?

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Matt Mahoney
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > 
> >> Matt Mahoney wrote:
> >>> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >>>
> >>>> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >>>>>> We just need to control AGIs goal system.
> >>>>> You can only control the goal system of the first iteration.
> >>>> ..and you can add rules for its creations (e.g. stick with the same
> >>>> goals/rules unless authorized otherwise)
> >>> You can program the first AGI to program the second AGI to be friendly.  You
> >>> can program the first AGI to program the second AGI to program the third AGI
> >>> to be friendly.  But eventually you will get it wrong, and if not you, then
> >>> somebody else, and evolutionary pressure will take over.
> >> This statement has been challenged many times.  It is based on 
> >> assumptions that are, at the very least, extremely questionable, and 
> >> according to some analyses, extremely unlikely.
> > 
> > I guess it will continue to be challenged until we can do an experiment to
> > prove who is right.  Perhaps you should challenge SIAI, since they seem to
> > think that friendliness is still a hard problem.
> 
> I have done so, as many people on this list will remember.  The response 
> was deeply irrational.

Perhaps you have seen this paper on the nature of RSI by Stephen M. Omohundro,
http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/

Basically he says that self-improving intelligences will evolve goals of
efficiency, self-preservation, resource acquisition, and creativity.  Since
these goals are pretty much aligned with our own (which are also the result of
an evolutionary process), perhaps we shouldn't worry about friendliness.  Or
are there parts of the paper you disagree with?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Dennis Gorelik
Matt,

Your algorithm is too complex.
What's the point of doing step 1?
Step 2 is sufficient.

Saturday, November 3, 2007, 8:01:45 PM, you wrote:

> So we can dispense with the complex steps of making a detailed copy of your
> brain and then have it transition into a degenerate state, and just skip to
> the final result.

> http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
> Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
> 4-bit logic function and positive reinforcement for both right and wrong
> answers, e.g.

>   g++ autobliss.cpp -o autobliss.exe
>   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

> Step 2. Kill yourself.  Upload complete.





Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-14 Thread Richard Loosemore

Matt Mahoney wrote:

> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
>> Matt Mahoney wrote:
>>> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>>>
>>>> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>>>>>> We just need to control AGIs goal system.
>>>>> You can only control the goal system of the first iteration.
>>>>
>>>> ..and you can add rules for its creations (e.g. stick with the same
>>>> goals/rules unless authorized otherwise)
>>>
>>> You can program the first AGI to program the second AGI to be friendly.  You
>>> can program the first AGI to program the second AGI to program the third AGI
>>> to be friendly.  But eventually you will get it wrong, and if not you, then
>>> somebody else, and evolutionary pressure will take over.
>>
>> This statement has been challenged many times.  It is based on
>> assumptions that are, at the very least, extremely questionable, and
>> according to some analyses, extremely unlikely.
>
> I guess it will continue to be challenged until we can do an experiment to
> prove who is right.  Perhaps you should challenge SIAI, since they seem to
> think that friendliness is still a hard problem.

I have done so, as many people on this list will remember.  The response
was deeply irrational.




Richard Loosemore



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

> Matt Mahoney wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > 
> >> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> >>>> We just need to control AGIs goal system.
> >>> You can only control the goal system of the first iteration.
> >>
> >> ..and you can add rules for its creations (e.g. stick with the same
> >> goals/rules unless authorized otherwise)
> > 
> > You can program the first AGI to program the second AGI to be friendly.  You
> > can program the first AGI to program the second AGI to program the third AGI
> > to be friendly.  But eventually you will get it wrong, and if not you, then
> > somebody else, and evolutionary pressure will take over.
> 
> This statement has been challenged many times.  It is based on 
> assumptions that are, at the very least, extremely questionable, and 
> according to some analyses, extremely unlikely.

I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Richard Loosemore

Matt Mahoney wrote:

> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>
>> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>>>> We just need to control AGIs goal system.
>>> You can only control the goal system of the first iteration.
>>
>> ..and you can add rules for its creations (e.g. stick with the same
>> goals/rules unless authorized otherwise)
>
> You can program the first AGI to program the second AGI to be friendly.  You
> can program the first AGI to program the second AGI to program the third AGI
> to be friendly.  But eventually you will get it wrong, and if not you, then
> somebody else, and evolutionary pressure will take over.

This statement has been challenged many times.  It is based on
assumptions that are, at the very least, extremely questionable, and
according to some analyses, extremely unlikely.



Richard Loosemore



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > We just need to control AGIs goal system.
> >
> > You can only control the goal system of the first iteration.
> 
> 
> ..and you can add rules for its creations (e.g. stick with the same
> goals/rules unless authorized otherwise)

You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.

> > > > But if consciousness does not exist...
> > >
> > > obviously, it does exist.
> >
> > Belief in consciousness exists.  There is no test for the truth of this
> > belief.
> 
> Consciousness is basically an awareness of certain data and there are
> tests for that.

autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-12 Thread Jiri Jelinek
On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > We just need to control AGIs goal system.
>
> You can only control the goal system of the first iteration.


..and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)

> > > But if consciousness does not exist...
> >
> > obviously, it does exist.
>
> Belief in consciousness exists.  There is no test for the truth of this
> belief.

Consciousness is basically an awareness of certain data and there are
tests for that.

Jiri



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-11 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> > > >But logically you know that your brain is just a machine, or else AGI would
> > > not be possible.
> > >
> > > I disagree with your logic because human brain does things AGI does
> > > not need to do AND the stuff the AGI needs to do does not need to be
> > > done the way brain does it. But I don't deny that human brain is a
> > > [kind of] machine. We just don't understand all parts of it well
> > > enough for upload - which is not really a problem for AGI development.
> >
> > It is a big problem if AGI takes the form of recursively self improving
> > intelligence that is not in the form of augmented human brains.  RSI is an
> > evolutionary algorithm that favors rapid reproduction and acquisition of
> > computing resources, nothing else.  Humans would be seen as competitors.
> > If humans are to survive, then we must become the AGI by upgrading our brains
> > or uploading.
> 
> We just need to control AGIs goal system.

You can only control the goal system of the first iteration.

> > But if consciousness does not exist...
> 
> obviously, it does exist.

Belief in consciousness exists.  There is no test for the truth of this
belief.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-09 Thread Jiri Jelinek
Matt,

> > >believe that you can control your thoughts and actions,
> >
> > I don't. Seems unlikely.
> >
> > > and fear death
> >
> > Some people accept things they cannot (or don't know how to) change
> > without getting emotional.
> >
> > >because if you did not have these beliefs you would not propagate
> > your DNA.  It is not possible for
> > any human to believe otherwise.
> >
> > False
>
> This is an example of my assertion that you believe you can control your
> thoughts. You believe you can override your instincts (one of which is this
> belief that you can). If you really believed that hunger was not real or that
> you could turn it off, then you would stop eating.  But you don't.

I don't think intelligence can successfully emerge in worlds that
aren't driven by causality = I believe we are being forced to
do_&_believe whatever we do_&_believe. Still, it's IMO OK to say
"person P1 did X & P2 Y" instead of "the Universe did X & Y through P1
& P2" or so.

> > >But logically you know that your brain is just a machine, or else AGI would
> > not be possible.
> >
> > I disagree with your logic because human brain does things AGI does
> > not need to do AND the stuff the AGI needs to do does not need to be
> > done the way brain does it. But I don't deny that human brain is a
> > [kind of] machine. We just don't understand all parts of it well
> > enough for upload - which is not really a problem for AGI development.
>
> It is a big problem if AGI takes the form of recursively self improving
> intelligence that is not in the form of augmented human brains.  RSI is an
> evolutionary algorithm that favors rapid reproduction and acquisition of
> computing resources, nothing else.  Humans would be seen as competitors.
> If humans are to survive, then we must become the AGI by upgrading our brains
> or uploading.

We just need to control AGIs goal system.

> But if consciousness does not exist...

obviously, it does exist.

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Matt Mahoney

--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> >Of course you realize that qualia is an illusion? You believe that
> your environment is real, believe that pain and pleasure are real,
> 
> "real" is meaningless. Perception depends on sensors and subsequent
> sensation processing.

Reality depends on whether there is really a universe, or just a simulation of
one being fed to your sensory inputs.  There is no way for you to tell the
difference, just an instinct that says the former.


> >believe that you can control your thoughts and actions,
> 
> I don't. Seems unlikely.
> 
> > and fear death
> 
> Some people accept things they cannot (or don't know how to) change
> without getting emotional.
> 
> >because if you did not have these beliefs you would not propagate
> your DNA.  It is not possible for
> any human to believe otherwise.
> 
> False

This is an example of my assertion that you believe you can control your
thoughts.  You believe you can override your instincts (one of which is this
belief that you can).  If you really believed that hunger was not real or that
you could turn it off, then you would stop eating.  But you don't.


> >But logically you know that your brain is just a machine, or else AGI would
> not be possible.
> 
> I disagree with your logic because human brain does things AGI does
> not need to do AND the stuff the AGI needs to do does not need to be
> done the way brain does it. But I don't deny that human brain is a
> [kind of] machine. We just don't understand all parts of it well
> enough for upload - which is not really a problem for AGI development.

It is a big problem if AGI takes the form of recursively self improving
intelligence that is not in the form of augmented human brains.  RSI is an
evolutionary algorithm that favors rapid reproduction and acquisition of
computing resources, nothing else.  Humans would be seen as competitors.

If humans are to survive, then we must become the AGI by upgrading our brains
or uploading.  But if consciousness does not exist, as logic tells us, then
this outcome is no different than the other.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Bob Mottram
I've often heard people say things like "qualia are an illusion" or
"consciousness is just an illusion", but the concept of an illusion
when applied to the mind is not very helpful, since all our thoughts
and perceptions could be considered as "illusions" reconstructed from
limited sensory data and knowledge.


On 06/11/2007, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >Of course you realize that qualia is an illusion? You believe that
> your environment is real, believe that pain and pleasure are real,
>
> "real" is meaningless. Perception depends on sensors and subsequent
> sensation processing.



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Jiri Jelinek
>Of course you realize that qualia is an illusion? You believe that
your environment is real, believe that pain and pleasure are real,

"real" is meaningless. Perception depends on sensors and subsequent
sensation processing.

>believe that you can control your thoughts and actions,

I don't. Seems unlikely.

> and fear death

Some people accept things they cannot (or don't know how to) change
without getting emotional.

>because if you did not have these beliefs you would not propagate
your DNA.  It is not possible for
any human to believe otherwise.

False

>But logically you know that your brain is just a machine, or else AGI would
not be possible.

I disagree with your logic because human brain does things AGI does
not need to do AND the stuff the AGI needs to do does not need to be
done the way brain does it. But I don't deny that human brain is a
[kind of] machine. We just don't understand all parts of it well
enough for upload - which is not really a problem for AGI development.

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> We can compute behavior, but nothing indicates we can compute
> feelings. Qualia research needed to figure out new platforms for
> uploading.
> 
> Regards,
> Jiri Jelinek

Of course you realize that qualia is an illusion?  You believe that your
environment is real, believe that pain and pleasure are real, believe that you
can control your thoughts and actions, and fear death because if you did not
have these beliefs you would not propagate your DNA.  It is not possible for
any human to believe otherwise.

But logically you know that your brain is just a machine, or else AGI would
not be possible.



> 
> 
> On Nov 4, 2007 1:15 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> >
> > > Matt,
> > >
> > > Create a numeric "pleasure" variable in your mind, initialize it with
> > > a positive number and then keep doubling it for some time. Done? How
> > > do you feel? Not a big difference? Oh, keep doubling! ;-))
> >
> > The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
> > can somehow through technology, AGI, and uploading, escape a world where we
> > are not happy all the time, where we sometimes feel pain, where we fear death
> > and then die.  Obviously my result is absurd.  But where is the mistake in my
> > reasoning?  Is it "if the brain is both conscious and computable"?



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Jiri Jelinek
Matt,

We can compute behavior, but nothing indicates we can compute
feelings. Qualia research needed to figure out new platforms for
uploading.

Regards,
Jiri Jelinek


On Nov 4, 2007 1:15 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>
> > Matt,
> >
> > Create a numeric "pleasure" variable in your mind, initialize it with
> > a positive number and then keep doubling it for some time. Done? How
> > do you feel? Not a big difference? Oh, keep doubling! ;-))
>
> The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
> can somehow through technology, AGI, and uploading, escape a world where we
> are not happy all the time, where we sometimes feel pain, where we fear death
> and then die.  Obviously my result is absurd.  But where is the mistake in my
> reasoning?  Is it "if the brain is both conscious and computable"?



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Russell Wallace
On 11/4/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
> this goal; they just don't know it).  The problem is that you would forgo
> food, water, and sleep until you died (we assume, from animal experiments).

We have no need to assume: the experiment has been done with human
volunteers. They reported that the experience was indeed pleasurable -
but unlike animals, they could and did choose to stop pressing the
button.

(The rest, I'll leave to the would-be wireheads to argue about :))



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Matt Mahoney
--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> Create a numeric "pleasure" variable in your mind, initialize it with
> a positive number and then keep doubling it for some time. Done? How
> do you feel? Not a big difference? Oh, keep doubling! ;-))

The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
can somehow through technology, AGI, and uploading, escape a world where we
are not happy all the time, where we sometimes feel pain, where we fear death
and then die.  Obviously my result is absurd.  But where is the mistake in my
reasoning?  Is it "if the brain is both conscious and computable"?


> 
> Regards,
> Jiri Jelinek
> 
> On Nov 3, 2007 10:01 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > --- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> > > If bliss without intelligence is the goal of the machines you imagine
> > > running the world, for the cost of supporting one human they could
> > > probably keep at least 100 mice in equal bliss, so if they were driven to
> > > maximize bliss why wouldn't they kill all the grooving humans and replace
> > > them with grooving mice.  It would provide one hell of a lot more bliss
> > > bang for the resource buck.
> >
> > Allow me to offer a less expensive approach.  Previously on the singularity
> > and sl4 mailing lists I posted a program that can feel pleasure and pain: a 2
> > input programmable logic gate trained by reinforcement learning.  You give it
> > an input, it responds, and you reward it.  In my latest version, I automated
> > the process.  You tell it which of the 16 logic functions you want it to learn
> > (AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
> > how much penalty for an incorrect output.  The program then generates random
> > 2-bit inputs, evaluates the output, and applies the specified reward or
> > punishment.  The program runs until you kill it.  As it dies it reports its
> > life history (its age, what it learned, and how much pain and pleasure it
> > experienced since birth).
> >
> > http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
> >
> > To put the program in an eternal state of bliss, specify two positive numbers,
> > so that it is rewarded no matter what it does.  It won't learn anything, but
> > at least it will feel good.  (You could also put it in continuous pain by
> > specifying two negative numbers, but I put in safeguards so that it will die
> > before experiencing too much pain).
> >
> > Two problems remain: uploading your mind to this program, and making sure
> > nobody kills you by turning off the computer or typing Ctrl-C.  I will address
> > only the first problem.
> >
> > It is controversial whether technology can preserve your consciousness after
> > death.  If the brain is both conscious and computable, then Chalmers' fading
> > qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
> > computer simulation of your brain would also be conscious.
> >
> > Whether you *become* this simulation is also controversial.  Logically there
> > are two of you with identical goals and memories.  If either one is killed,
> > then you are in the same state as you were before the copy is made.  This is
> > the same dilemma that Captain Kirk faces when he steps into the transporter to
> > be vaporized and have an identical copy assembled on the planet below.  It
> > doesn't seem to bother him.  Does it bother you that the atoms in your body
> > now are not the same atoms that made up your body a year ago?
> >
> > Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
> > this goal; they just don't know it).  The problem is that you would forgo
> > food, water, and sleep until you died (we assume, from animal experiments).
> > The solution is to upload to a computer where this could be done safely.
> >
> > Normally an upload would have the same goals, memories, and sensory-motor I/O
> > as the original brain.  But consider the state of this program after self
> > activation of its reward signal.  No other goals are needed, so we can remove
> > them.  Since you no longer have the goal of learning, experiencing sensory
> > input, or controlling your environment, you won't mind if we replace your I/O
> > with a 2 bit input and 1 bit output.  You are happy, no?
> >
> > Finally, if your memories were changed, you would not be aware of it, right?
> > How do you know that all of your memories were not written into your brain one
> > second ago and you were some other person before that?  So no harm is done if
> > we replace your memory with a vector of 4 real numbers.  That will be all you
> > need in your new environment.  In fact, you won't even need that because you
> > will cease learning.
> >
> > So we can dispense with the complex steps of making a detailed copy of your
> > brain and then have it transition into a degenerate state, and just skip

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-03 Thread Jiri Jelinek
Matt,

Create a numeric "pleasure" variable in your mind, initialize it with
a positive number and then keep doubling it for some time. Done? How
do you feel? Not a big difference? Oh, keep doubling! ;-))

Regards,
Jiri Jelinek

On Nov 3, 2007 10:01 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> > If bliss without intelligence is the goal of the machines you imagine
> > running the world, for the cost of supporting one human they could
> > probably keep at least 100 mice in equal bliss, so if they were driven to
> > maximize bliss why wouldn't they kill all the grooving humans and replace
> > them with grooving mice.  It would provide one hell of a lot more bliss
> > bang for the resource buck.
>
> Allow me to offer a less expensive approach.  Previously on the singularity
> and sl4 mailing lists I posted a program that can feel pleasure and pain: a 2
> input programmable logic gate trained by reinforcement learning.  You give it
> an input, it responds, and you reward it.  In my latest version, I automated
> the process.  You tell it which of the 16 logic functions you want it to learn
> (AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
> how much penalty for an incorrect output.  The program then generates random
> 2-bit inputs, evaluates the output, and applies the specified reward or
> punishment.  The program runs until you kill it.  As it dies it reports its
> life history (its age, what it learned, and how much pain and pleasure it
> experienced since birth).
>
> http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
>
> To put the program in an eternal state of bliss, specify two positive numbers,
> so that it is rewarded no matter what it does.  It won't learn anything, but
> at least it will feel good.  (You could also put it in continuous pain by
> specifying two negative numbers, but I put in safeguards so that it will die
> before experiencing too much pain).
>
> Two problems remain: uploading your mind to this program, and making sure
> nobody kills you by turning off the computer or typing Ctrl-C.  I will address
> only the first problem.
>
> It is controversial whether technology can preserve your consciousness after
> death.  If the brain is both conscious and computable, then Chalmers' fading
> qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
> computer simulation of your brain would also be conscious.
>
> Whether you *become* this simulation is also controversial.  Logically there
> are two of you with identical goals and memories.  If either one is killed,
> then you are in the same state as you were before the copy is made.  This is
> the same dilemma that Captain Kirk faces when he steps into the transporter to
> be vaporized and have an identical copy assembled on the planet below.  It
> doesn't seem to bother him.  Does it bother you that the atoms in your body
> now are not the same atoms that made up your body a year ago?
>
> Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
> this goal; they just don't know it).  The problem is that you would forgo
> food, water, and sleep until you died (we assume, from animal experiments).
> The solution is to upload to a computer where this could be done safely.
>
> Normally an upload would have the same goals, memories, and sensory-motor I/O
> as the original brain.  But consider the state of this program after self
> activation of its reward signal.  No other goals are needed, so we can remove
> them.  Since you no longer have the goal of learning, experiencing sensory
> input, or controlling your environment, you won't mind if we replace your I/O
> with a 2 bit input and 1 bit output.  You are happy, no?
>
> Finally, if your memories were changed, you would not be aware of it, right?
> How do you know that all of your memories were not written into your brain one
> second ago and you were some other person before that?  So no harm is done if
> we replace your memory with a vector of 4 real numbers.  That will be all you
> need in your new environment.  In fact, you won't even need that because you
> will cease learning.
>
> So we can dispense with the complex steps of making a detailed copy of your
> brain and then have it transition into a degenerate state, and just skip to
> the final result.
>
> Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
> 4-bit logic function and positive reinforcement for both right and wrong
> answers, e.g.
>
>   g++ autobliss.cpp -o autobliss.exe
>   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)
>
> Step 2. Kill yourself.  Upload complete.
>
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>


Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-03 Thread Matt Mahoney
--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> If bliss without intelligence is the goal of the machines you imagine
> running the world, for the cost of supporting one human they could
> probably keep at least 100 mice in equal bliss, so if they were driven to
> maximize bliss why wouldn't they kill all the grooving humans and replace
> them with grooving mice.  It would provide one hell of a lot more bliss
> bang for the resource buck.

Allow me to offer a less expensive approach.  Previously on the singularity
and sl4 mailing lists I posted a program that can feel pleasure and pain: a 2
input programmable logic gate trained by reinforcement learning.  You give it
an input, it responds, and you reward it.  In my latest version, I automated
the process.  You tell it which of the 16 logic functions you want it to learn
(AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
how much penalty for an incorrect output.  The program then generates random
2-bit inputs, evaluates the output, and applies the specified reward or
punishment.  The program runs until you kill it.  As it dies it reports its
life history (its age, what it learned, and how much pain and pleasure it
experienced since birth).

http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
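
For readers who prefer not to download the file, here is a minimal sketch in
the same spirit as the description above: a 2-input gate with one learned
preference per input pair, reinforced on every trial.  It is not the actual
autobliss.cpp; the update rule, the exploration rate, the trial count, and the
output format are assumptions made up for this illustration:

  #include <cstdio>
  #include <cstdlib>
  #include <cstring>

  int main(int argc, char** argv) {
    if (argc != 4 || std::strlen(argv[1]) != 4) {
      std::printf("usage: %s tttt reward penalty\n", argv[0]);
      return 1;
    }
    const char* target = argv[1];          // 4-bit truth table, e.g. "0110" = XOR
    double reward  = std::atof(argv[2]);   // applied on a correct output
    double penalty = std::atof(argv[3]);   // applied on an incorrect output
    double pref[4] = {0, 0, 0, 0};         // learned preference per input pair
    double pleasure = 0, pain = 0;

    for (long t = 0; t < 100000; ++t) {
      int x = std::rand() % 4;                      // random 2-bit input
      int y = (pref[x] >= 0) ? 1 : 0;               // current best guess
      if (std::rand() % 10 == 0) y = 1 - y;         // occasional exploration
      double r = (y == target[x] - '0') ? reward : penalty;
      pref[x] += r * (y ? 1.0 : -1.0);              // reinforce this output
      if (r >= 0) pleasure += r; else pain -= r;
    }
    std::printf("learned table: %d%d%d%d  pleasure=%.1f  pain=%.1f\n",
                (int)(pref[0] >= 0), (int)(pref[1] >= 0),
                (int)(pref[2] >= 0), (int)(pref[3] >= 0), pleasure, pain);
    return 0;
  }

It builds and runs with the same kind of command line shown in Step 1 below;
with two positive values it is rewarded no matter what it outputs, so the
learned table ends up unrelated to the target, which is the point of the
eternal-bliss setting described next.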

To put the program in an eternal state of bliss, specify two positive numbers,
so that it is rewarded no matter what it does.  It won't learn anything, but
at least it will feel good.  (You could also put it in continuous pain by
specifying two negative numbers, but I put in safeguards so that it will die
before experiencing too much pain).

Two problems remain: uploading your mind to this program, and making sure
nobody kills you by turning off the computer or typing Ctrl-C.  I will address
only the first problem.

It is controversial whether technology can preserve your consciousness after
death.  If the brain is both conscious and computable, then Chalmers' fading
qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
computer simulation of your brain would also be conscious.

Whether you *become* this simulation is also controversial.  Logically there
are two of you with identical goals and memories.  If either one is killed,
then you are in the same state as you were before the copy is made.  This is
the same dilemma that Captain Kirk faces when he steps into the transporter to
be vaporized and have an identical copy assembled on the planet below.  It
doesn't seem to bother him.  Does it bother you that the atoms in your body
now are not the same atoms that made up your body a year ago?

Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
this goal; they just don't know it).  The problem is that you would forgo
food, water, and sleep until you died (we assume, from animal experiments). 
The solution is to upload to a computer where this could be done safely.

Normally an upload would have the same goals, memories, and sensory-motor I/O
as the original brain.  But consider the state of this program after self
activation of its reward signal.  No other goals are needed, so we can remove
them.  Since you no longer have the goal of learning, experiencing sensory
input, or controlling your environment, you won't mind if we replace your I/O
with a 2 bit input and 1 bit output.  You are happy, no?

Finally, if your memories were changed, you would not be aware of it, right? 
How do you know that all of your memories were not written into your brain one
second ago and you were some other person before that?  So no harm is done if
we replace your memory with a vector of 4 real numbers.  That will be all you
need in your new environment.  In fact, you won't even need that because you
will cease learning.

So we can dispense with the complex steps of making a detailed copy of your
brain and then have it transition into a degenerate state, and just skip to
the final result.

Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
4-bit logic function and positive reinforcement for both right and wrong
answers, e.g.

  g++ autobliss.cpp -o autobliss.exe
  autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

Step 2. Kill yourself.  Upload complete.



-- Matt Mahoney, [EMAIL PROTECTED]
