Re: [agi] Nirvana

2008-06-15 Thread Jiri Jelinek
Steve,

According to Wikipedia, a problem is "an obstacle which
makes it difficult to achieve a desired goal, objective or purpose. It
exists when an individual becomes aware of a significant difference
between what actually is and what is desired." I understand that
conquering a [sub]problem typically triggers satisfaction, but the
process of overcoming the difficulty requires mind resources that
could have been (but weren't) dedicated to pleasure perception
processing. Assuming that the quality of life can be measured by the
ratio of the amount and intensity of perceived pleasure to the
amount and intensity of perceived non-pleasure during the lifetime,
the optimization for quality lies in eliminating the
non-pleasure-related perception processing and allocating the freed
resources to as-intense-as-possible pleasure processing (plus
implementation of security controls and improvement mechanisms). The
pleasure you get from playing with your real-world puzzles is nothing
compared to the quality and intensity you could potentially get from
a pleasure-optimized brain through direct stimulation. I
seriously doubt we will resist when safe, AGI-supervised extreme
pleasure becomes available.

Regards,
Jiri Jelinek




Re: [agi] Nirvana

2008-06-14 Thread Matt Mahoney
--- On Sat, 6/14/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser
 [EMAIL PROTECTED] wrote:
  if you wire-head, you go extinct
 
 Doing it today certainly wouldn't be a good idea, but
 whatever we do to take care of risks and improvements, our AGI(s) will
 eventually do a better job, so why not then?

Going into a degenerate mental state is no different than death. If you can't 
see this, the AGI will, and choose the most efficient solution.

If you want to upload to Nirvana, you can do it today. Just run 
http://www.mattmahoney.net/autobliss.txt with two positive arguments, then kill 
yourself. You won't need your memories or I/O where you are going.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-14 Thread Steve Richfield
Jiri,

On 6/12/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 You may not necessarily want to mess with a particular problem/education.
 You may have much better things to do. All of us may have better things to
 do.
 Just listen to that word: PROBLEM.. Do you want to have anything to
 do with problems if you absolutely don't have to?


YES - I enjoy real-world puzzles.

So when we get there, we will just say: "Hey AGI, you deal with those
 things!".. And it will.


I have a friend, Dave, who is a PhD psychologist. He wants everything to
be luxurious. He would never own a car without power everything, he has all
of the latest gizmos, etc. I, on the other hand, enjoy every part of my life
and am happy just to have a car that runs at all; I enjoy doing for myself the
things that Dave's gizmos do, etc. I repair my own cars because I enjoy it
and because it keeps my back flexible. All of my cars were given to me by
previous owners who thought they were unrepairable. When I find that I have
too many cars, I simply give one to another family member. I might take a
broken AGI and fix it, or maybe even accept one for free, but I can't see
myself actually paying any money for one. Dave, on the other hand, would
probably be your ideal customer. Maybe I could trade a repaired AGI for one
of Dave's old cars? However, I can't see dedicating my life to making Dave
happy in his indolence.

Steve Richfield





Re: [agi] Nirvana

2008-06-14 Thread Jiri Jelinek
  if you wire-head, you go extinct

 Doing it today certainly wouldn't be a good idea, but
 whatever we do to take care of risks and improvements, our AGI(s) will
 eventually do a better job, so why not then?

 Going into a degenerate mental state is no different than death. If you can't 
 see this, the AGI will, and choose the most efficient solution.

I see a big difference between mind-blowing wire-triggered pleasure
perception and no perception at all (i.e. death).

 If you want to upload to Nirvana, you can do it today. Just run 
 http://www.mattmahoney.net/autobliss.txt with two positive arguments, then 
 kill yourself.

Oh, poor testers... Looks like they all died before being able to
report the missing upload fn.

Jiri




Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
There've been enough responses to this that I will reply in generalities, and 
hope I cover everything important...

When I described Nirvana attractors as a problem for AGI, I meant that in 
the sense that they form a substantial challenge for the designer (as do many 
other features/capabilities of AGI!), not that it was an insoluble problem.

The hierarchical fixed utility function is probably pretty good -- not only 
does it match humans (a la Maslow) but also Asimov's Three Laws. And it can be 
more subtle than it originally appears: 

Consider a 3-Laws robot that refuses to cut a human with a knife because that 
would harm her. It would be unable to become a surgeon, for example. But the 
First Law has a clause, "or through inaction allow a human to come to harm," 
which means that the robot cannot obey by doing nothing -- it must weigh the 
consequences of all its possible courses of action. 

Now note that it hasn't changed its utility function -- it always believed 
that, say, appendicitis is worse than an incision -- but what can happen is 
that its world model gets better and it *looks like* it's changed its utility 
function because it now knows that operations can cure appendicitis.
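
As a minimal sketch of that point (in Python, with made-up outcome labels and
values, not a proposal for any particular design): the utility function U stays
fixed, only the world model's predictions improve, and yet the chosen action
changes.

    U = {"patient cured": 1.0,
         "patient cut, disease remains": -1.0,
         "disease runs its course": -0.8}    # fixed utility over outcomes; never edited

    def choose(actions, predict):
        # Pick the action whose predicted outcome scores highest under the fixed U.
        return max(actions, key=lambda a: U[predict[a]])

    actions = ["refuse to cut", "operate"]

    naive_model  = {"refuse to cut": "disease runs its course",
                    "operate": "patient cut, disease remains"}
    better_model = {"refuse to cut": "disease runs its course",
                    "operate": "patient cured"}   # learned: the operation cures the disease

    print(choose(actions, naive_model))    # -> refuse to cut
    print(choose(actions, better_model))   # -> operate; U itself never changed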

Now it seems reasonable that this is a lot of what happens with people, too. 
And you can get a lot of mileage out of expressing the utility function in 
very abstract terms, e.g. "life-threatening disease," so that no utility 
function update is necessary when you learn about a new disease.

The problem is that the more abstract you make the concepts, the more the 
process of learning an ontology looks like ... revising your utility 
function!  Enlightenment, after all, is a Good Thing, so anything that leads 
to it, nirvana for example, must be good as well. 

So I'm going to broaden my thesis and say that the nirvana attractors lie in 
the path of *any* AI with unbounded learning ability that creates new 
abstractions on top of the things it already knows.

How to avoid them? I think one very useful technique is to start with the kind 
of knowledge and introspection capability to let the AI know when it faces 
one, and recognize that any apparent utility therein is fallacious. 

Of course, none of this matters till we have systems that are capable of 
unbounded self-improvement and abstraction-forming, anyway.

Josh




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser

"Most people are about as happy as they make up their minds to be."
-- Abraham Lincoln

In our society, after a certain point where we've taken care of our 
immediate needs, arguably we humans are and should be subject to the Nirvana 
effect.


Deciding that you can settle for something (if your subconscious truly can 
handle it) definitely makes you more happy than not.


If, like a machine, you had complete control over your subconscious/utility 
functions, you *could* Nirvana yourself by happily accepting anything.


This is why pleasure and lack of pain suck as goals.  They are not goals, 
they are status indicators.  If you accept them as goals, nirvana is clearly 
the fastest, cleanest, and most effective way to fulfill them.
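
A toy sketch of that shortcut (Python, with invented actions and numbers): if
the pleasure indicator is itself the goal and the agent may write to it, the
degenerate action dominates every real-world action.

    actions = {
        "solve a real problem":     lambda state: state["pleasure"] + 1,   # earned increase
        "set indicator to maximum": lambda state: float("inf"),            # the nirvana shortcut
    }

    def pick(state):
        # If the indicator is itself the goal, the shortcut wins every time.
        return max(actions, key=lambda name: actions[name](state))

    print(pick({"pleasure": 0}))   # -> "set indicator to maximum"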


Why is this surprising or anything to debate about?






Re: [agi] Nirvana

2008-06-13 Thread J Storrs Hall, PhD
In my visualization of the Cosmic All, it is not surprising.

However, there is an undercurrent of the Singularity/AGI community that is 
somewhat apocalyptic in tone, and which (to my mind) seems to imply or assume 
that somebody will discover a Good Trick for self-improving AIs and the jig 
will be up with the very first one. 

I happen to think it'll be a lot more like the Industrial Revolution -- it'll 
take a lot of work by a lot of people, but revolutionary in its implications 
for the human condition even so.

I'm just trying to point out where I think some of the work will have to go.

I think that our culture of self-indulgence is to some extent in a Nirvana 
attractor. If you think that's a good thing, why shouldn't we all lie around 
with  wires in our pleasure centers (or hopped up on cocaine, same 
difference) with nutrient drips?

I'm working on AGI because I want to build a machine that can solve problems I 
can't do alone. The really important problems are not driving cars, or 
managing companies, or even curing cancer, although building machines that 
can do these things will be of great benefit. The hard problems are moral 
ones, how to live in increasingly complex societies without killing each 
other, and so forth. That's why it matters that an AGI be morally 
self-improving as well as intellectually.

pax vobiscum,

Josh




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser

I think that our culture of self-indulgence is to some extent in a Nirvana
attractor. If you think that's a good thing, why shouldn't we


No, I think it's a bad thing.  That's why I said "This is why pleasure 
and lack of pain suck as goals."



However, there is an undercurrent of the Singularity/AGI community that is
somewhat apocalyptic in tone,


Yeah, well, I would (and will, shortly) argue differently.




Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
On Fri, Jun 13, 2008 at 1:28 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I think that our culture of self-indulgence is to some extent in a Nirvana
 attractor. If you think that's a good thing, why shouldn't we all lie around
 with  wires in our pleasure centers (or hopped up on cocaine, same
 difference) with nutrient drips?

Because it's unsafe for now.
We will eventually work it out.

Jiri




Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
Mark,

Assuming that
a) pain avoidance and pleasure seeking are our primary driving forces; and
b) our intelligence wins over our stupidity; and
c) we don't get killed by something we cannot control;
Nirvana is where we go.

Jiri




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser
Yes, but I strongly disagree with assumption one.  Pain avoidance and 
pleasure are best viewed as status indicators, not goals.




Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
 a) pain avoidance and pleasure seeking are our primary driving forces;
On Fri, Jun 13, 2008 at 3:47 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Yes, but I strongly disagree with assumption one.  Pain avoidance and
 pleasure are best viewed as status indicators, not goals.

Pain and pleasure [levels] might be indicators (or primary action
triggers), but I think it's ok to call pain avoidance and pleasure
seeking "our driving forces". I cannot think of any intentional
human activity which is not somehow associated with those primary
triggers/driving forces, and that's why I believe assumption one
is valid.

Best,
Jiri




Re: [agi] Nirvana

2008-06-13 Thread Mark Waser

Your belief value is irrelevant to reality.

Of course all human activity is associated with pain and pleasure because 
evolution gave us pleasure and pain to motivate us to do smart things (as 
far as evolution is concerned) and avoid stupid things (and yes, I am 
anthropomorphizing evolution for ease of communication but if you can't 
figure out what I really mean . . . . ).


However, correlation is not equivalent to causation.

The goal is survival or propagation of the species.  Evolution rewards or punishes 
according to these goals.  If you ignore these goals and reprogram your 
pleasure and pain, you go extinct.


More clearly, if you wire-head, you go extinct (i.e. you are an evolutionary 
loser).


Go ahead and wirehead if you wish, but don't be surprised if someone with the 
same values decides that he is allowed to kill you painlessly, since you're 
eating up resources he could use to promote his own pleasure.


But then again, it really doesn't matter because you're extinct either way, 
right?





Re: [agi] Nirvana

2008-06-13 Thread Jiri Jelinek
On Fri, Jun 13, 2008 at 6:21 PM, Mark Waser [EMAIL PROTECTED] wrote:
 if you wire-head, you go extinct

Doing it today certainly wouldn't be a good idea, but whatever we do
to take care of risks and improvements, our AGI(s) will eventually do
a better job, so why not then?

Regards,
Jiri Jelinek




Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
 I'm getting several replies to this that indicate that people don't understand
 what a utility function is.

 If you are an AI (or a person) there will be occasions where you have to make
 choices. In fact, pretty much everything you do involves making choices. You
 can choose to reply to this or to go have a beer. You can choose to spend
 your time on AGI or take flying lessons. Even in the middle of typing a word,
 you have to choose which key to hit next.

 One way of formalizing the process of making choices is to take all the
 actions you could possibly do at a given point, predict as best you can the
 state the world will be in after taking such actions, and assign a value to
 each of them.  Then simply do the one with the best resulting value.

 It gets a bit more complex when you consider sequences of actions and delayed
 values, but that's a technicality. Basically you have a function U(x) that
 rank-orders ALL possible states of the world (but you only have to evaluate
 the ones you can get to at any one time).
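
A minimal sketch of the formalization quoted above (Python, with made-up
actions, predictions, and values; sequences and delayed values omitted):

    def decide(available_actions, predict_state, U):
        # For each action, predict the resulting world state and take the action
        # whose predicted state ranks highest under U(x).
        return max(available_actions, key=lambda action: U(predict_state(action)))

    # Toy stand-ins:
    U = lambda state: {"beer in hand": 3, "email answered": 5, "nothing changed": 0}[state]
    predict_state = {"go get a beer": "beer in hand",
                     "reply to the list": "email answered",
                     "do nothing": "nothing changed"}.get

    print(decide(["go get a beer", "reply to the list", "do nothing"], predict_state, U))
    # -> "reply to the list"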


We do mean slightly different things, then. By U(x) I am just talking
about a function that generates the set of scalar rewards for actions
performed, for a reinforcement learning algorithm, not one that evaluates
every potential action from where the current system is (since I
consider computation an action in order to take energy efficiency into
consideration, this would be a massive space).

 Economists may crudely approximate it, but it's there whether they study it
 or not, as gravity is to physicists.

 ANY way of making decisions can either be reduced to a utility function, or
 it's irrational -- i.e. you would prefer A to B, B to C, and C to A. The math
 for this stuff is older than I am. If you talk about building a machine that
 makes choices -- ANY kind of choices -- without understanding it, you're
 talking about building moon rockets without understanding the laws of
 gravity, or building heat engines without understanding the laws of
 thermodynamics.

The kinds of choices I am interested in designing for at the moment
are whether program X or program Y should get control of this bit of memory or
IRQ for the next time period. X and Y can also make choices, and you
would need to nail them down as well in order to get the entire U(x)
as you talk about it.

As the function I am interested in is only concerned with
programmatic changes, call it PCU(x).

Can you give me a reason why the utility function can't be separated
out this way?
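
A rough sketch of the kind of separated, program-choice-only utility described
above (Python; the program names, resources, and reward numbers are invented):

    rewards = {"program_X": 4.2, "program_Y": 1.7}   # made-up reinforcement history

    def pcu(program, resource):
        # Score a candidate (program, resource) assignment; here, just past reward.
        return rewards[program]

    def allocate(resource, candidates):
        # The arbiter only decides who gets the resource for the next time period;
        # what X and Y do internally is their own business.
        return max(candidates, key=lambda p: pcu(p, resource))

    print(allocate("memory_block_7", ["program_X", "program_Y"]))   # -> program_X
    print(allocate("irq_5", ["program_X", "program_Y"]))            # -> program_X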

  Will Pearson




Re: [agi] Nirvana

2008-06-12 Thread Steve Richfield
Jiri, Josh, et al,

On 6/11/08, Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED]
 wrote:
 If you can modify your mind, what is the shortest path to satisfying all your
 goals? Yep, you got it: delete the goals.

 We can set whatever goals/rules we want for AGI, including rules for
 [particular [types of]] goal/rule [self-]modifications.


... and here we have the makings of AGI run amok. With politicians and
religious leaders setting shitforbrains goals, an AGI will only become a big
part of an even bigger problem. For example, just what ARE our reasonable
goals in Iraq? Insisting on democratic rule is a prescription for disaster,
yet that appears to be one of our present goals, with all-too-predictable
results. We achieved our goal, but we certainly aren't at all happy with the
result.

My point with reverse reductio ad absurdum reasoning is that it is usually
possible to make EVERYONE happy with the results, but only with a process
that roots out the commonly held invalid assumptions. Like Gort (the very
first movie AGI?) in *The Day The Earth Stood Still*, the goal is peace, but
NOT through any particular set of detailed goals. In Iraq there was
near-peace under Saddam Hussein, but we didn't like his methods. I suspect
that reasonable improvements to his methods would have produced far better
results than the U.S. military can ever hope to produce there, given
anything like its present goals.

Steve Richfield





Re: [agi] Nirvana

2008-06-12 Thread J Storrs Hall, PhD
If you have a program structure that can make decisions that would otherwise 
be vetoed by the utility function, but get through because it isn't executed 
at the right time, to me that's just a bug.

Josh


On Thursday 12 June 2008 09:02:35 am, Mark Waser wrote:
  If you have a fixed-priority utility function, you can't even THINK ABOUT the
  choice. Your pre-choice function will always say Nope, that's bad and
  you'll be unable to change. (This effect is intended in all the RSI stability
  arguments.)
 
 Doesn't that depend upon your architecture and exactly *when* the pre-choice 
 function executes?  If the pre-choice function operates immediately 
 pre-choice and only then, it doesn't necessarily interfere with option 
 exploration.
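
A small sketch of the timing point quoted above (Python; the options, values,
and forbidden set are invented): deliberation is unrestricted, and the veto
runs only at the moment of commitment.

    FORBIDDEN = {"kill", "get killed"}
    value = {"kill": 9, "negotiate": 7, "flee": 2}.get   # made-up option values

    def deliberate(options):
        # Thinking about every option is allowed, even the forbidden ones.
        return sorted(options, key=value, reverse=True)

    def commit(ranked_options):
        # The veto fires here, immediately before acting, and only here.
        for option in ranked_options:
            if option not in FORBIDDEN:
                return option
        return None   # nothing permissible to do

    ranked = deliberate(["kill", "negotiate", "flee"])
    print(ranked)           # ['kill', 'negotiate', 'flee'] -- still free to model it
    print(commit(ranked))   # -> negotiate; vetoed options never get executed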
 




Re: [agi] Nirvana

2008-06-12 Thread Mark Waser
Isn't your Nirvana trap exactly equivalent to Pascal's Wager?  Or am I 
missing something?


- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Wednesday, June 11, 2008 10:54 PM
Subject: Re: [agi] Nirvana



On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:

On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED]

wrote:
 I claim that there's plenty of historical evidence that people fall into this
 kind of attractor, as the word nirvana indicates (and you'll find similar
 attractors at the core of many religions).

Yes, some people get addicted to a point of self-destruction. But it
is not a catastrophic problem on the scale of humanity. And it follows
from humans not being nearly stable under reflection -- we embody many
drives which are not integrated in a whole. Which would be a bad
design choice for a Friendly AI, if it needs to stay rational about
Freindliness content.


This is quite true but not exactly what I was talking about. I would claim
that the Nirvana attractors that AIs are vulnerable to are the ones that are
NOT generally considered self-destructive in humans -- such as religions that
teach Nirvana!

Let's look at it another way: You're going to improve yourself. You will be
able to do more than you can now, so you can afford to expand the range of
things you will expend effort achieving. How do you pick them? It's the frame
problem, amplified by recursion. So it's not easy nor has it a simple
solution.

But it does have this hidden trap: If you use stochastic search, say, and use
an evaluation of (probability of success * value if successful), then Nirvana
will win every time. You HAVE to do something more sophisticated.
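
To spell out that arithmetic with invented numbers (Python): a self-modification
that simply declares everything satisfied gets probability near 1 and can claim
an arbitrarily large value, so a plain expected-value score prefers it to any
real plan.

    candidate_plans = {
        "cure cancer":                    (0.02, 1_000_000),
        "hold down a job":                (0.90, 1_000),
        "rewrite my utility to say done": (0.99, 10**12),   # the Nirvana option
    }

    def score(plan):
        p_success, value_if_successful = candidate_plans[plan]
        return p_success * value_if_successful

    print(max(candidate_plans, key=score))   # -> the Nirvana option, every time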







Re: [agi] Nirvana

2008-06-12 Thread Matt Mahoney
--- On Thu, 6/12/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 But it doesn't work for full fledged AGI. Suppose you are a young man who's
 always been taught not to get yourself killed, and not to kill people (as top
 priorities). You are confronted with your country being invaded and faced
 with the decision to join the defense with a high likelihood of both.
 
 If you have a fixed-priority utility function, you can't even THINK ABOUT the
 choice. Your pre-choice function will always say Nope, that's bad and you'll
 be unable to change. (This effect is intended in all the RSI stability
 arguments.)

These are learned goals, not top level goals.  Humans have no top level goal to 
avoid death. The top level goals are to avoid pain, hunger, and the hundreds of 
other things that reduce the likelihood of passing on your genes. These goals 
exist in animals and children that do not know about death.

Learned goals such as respect for human life can easily be unlearned as 
demonstrated by controlled experiments as well as many anecdotes of wartime 
atrocities committed by people who were not always evil.
http://en.wikipedia.org/wiki/Milgram_experiment
http://en.wikipedia.org/wiki/Stanford_prison_experiment

Top level goals are fixed by your DNA.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-12 Thread Mark Waser
You're missing the *major* distinction between "a program structure that can 
make decisions that would otherwise be vetoed by the utility function" and a 
program that "can't even THINK ABOUT" a choice (both your choice of phrase).

Among other things, not being able to even think about a choice prevents 
accurately modeling the mental state of others who don't realize that you 
have such a constraint.  That seems like a very bad and limited architecture 
to me.




Re: [agi] Nirvana

2008-06-12 Thread Jiri Jelinek
On Thu, Jun 12, 2008 at 3:36 AM, Steve Richfield
[EMAIL PROTECTED] wrote:
 ... and here we have the makings of AGI run amok...
 My point..  it is usually possible to make EVERYONE happy with the results, 
 but only with a process that roots out the commonly held invalid assumptions. 
 Like Gort (the very first movie AGI?) in The Day The Earth Stood Still, the 
 goal is peace, but NOT through any particular set of detailed goals.

I think it's important to distinguish between supervised and
unsupervised AGIs. For the supervised, top-level goals as well as the
sub-goal restrictions can be volatile - basically whatever the guy in
charge wants ATM (not necessarily trying to make EVERYONE happy). In
that case, the AGI should IMO just attempt to find the simplest solution
to a given problem while following the given rules, without exercising
its own sense of morality (assuming it even has one). The guy
(/subject) in charge is the god who should use his own sense of
good/bad/safe/unsafe, produce the rules to follow during the AGI's
solution search, and judge/approve/reject the solution, so he is the one
who bears responsibility for the outcome. He also maintains the rules
for what the AGI can/cannot do for lower-level users (if any). Such
AGIs will IMO be around for a while. *Much* later, we might go for
human-unsupervised AGIs. I suspect that at that time (if it ever
happens), people's goals/needs/desires will be a lot more
unified/compatible (so putting together some grand schema for
goals/rules/morality will be more straightforward) and the AGIs (as
well as their multi-layer and probably highly-redundant security
controls) will be extremely well tested, i.e. highly unlikely to run
amok, and probably much safer than the previous human-factor-plagued
problem-solving hybrid solutions. People are more interested in
pleasure than in messing with terribly complicated problems.

Regards,
Jiri Jelinek
*** Problems for AIs, work for robots, feelings for us. ***




Re: [agi] Nirvana

2008-06-12 Thread Jiri Jelinek
On Thu, Jun 12, 2008 at 6:44 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 If you have a fixed-priority utility function, you can't even THINK ABOUT the
 choice. Your pre-choice function will always say Nope, that's bad and
 you'll be unable to change. (This effect is intended in all the RSI stability
 arguments.)

 But people CAN make choices like this. To some extent it's the most important
 thing we do. So an AI that can't won't be fully human-level -- not a true
 AGI.

Even though there is no general agreement on the AGI definition, my
impression is that most of the community members understand that:
Humans demonstrate GI, but being fully human-level is not
necessarily required for true AGI.
In some ways, it might even hurt its problem-solving abilities.

Regards,
Jiri Jelinek




Re: [agi] Nirvana

2008-06-12 Thread Steve Richfield
Jiri,

The point that you apparently missed is that substantially all problems fall
cleanly into two categories:

1.  The solution is known (somewhere in the world and hopefully to the AGI),
in which case, as far as the user is concerned, this is an issue of
ignorance that is best cured by educating the user, or

2.  The solution is NOT known, whereupon research, not action, is needed to
understand the world before acting upon it. New research into reality
incognita will probably take a LONG time, so action is really no issue at
all. Of course, once the research has been completed, this reduces to #1
above.

Hence, where an AGI *acting* badly is a potential issue (see #1 above), the
REAL issue is ignorance on the part of the user. Were you actually proposing
that AGIs act while leaving their users in ignorance?! I think not, since
you discussed supervised systems. While (as you pointed out) AGIs doing
things other than educating may be technologically possible, I fail to see
any value in such solutions, except possibly in fast-reacting systems, e.g.
military fire control systems.

Dr. Eliza is built on the assumption that all of the problems that
are made up of known parts can be best solved through education. So far, I
have failed to find a counterexample. Do you know of any counterexamples?

Some of these issues are explored in the second and third books of the Colossus
trilogy, which ends with Colossus stopping an attack on an alien invader, to
the consternation of the humans in attendance. This of course was an
illustration of the military fire control issue.

Am I missing something here?

Steve Richfield


Re: [agi] Nirvana

2008-06-12 Thread Matt Mahoney

--- On Wed, 6/11/08, Jey Kottalam [EMAIL PROTECTED] wrote:

 On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD
 [EMAIL PROTECTED] wrote:

  The real problem with a self-improving AGI, it seems to me, is not going to be
  that it gets too smart and powerful and takes over the world. Indeed, it
  seems likely that it will be exactly the opposite.
 
  If you can modify your mind, what is the shortest path to satisfying all your
  goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
  desire. Setting your utility function to U(x) = 1.
 
 
 Yep, one of the criteria of a suitable AI is that the goals
 should be stable under self-modification. If the AI rewrites its
 utility function to eliminate all goals, that's not a stable
 (goals-preserving) modification. Yudkowsky's idea of
 'Friendliness' has always included this notion as far as I know;
 'Friendliness' isn't just about avoiding actively harmful systems.

We are doomed either way. If we successfully program AI with a model of human 
top level goals (pain, hunger, knowledge seeking, sex, etc) and program its 
fixed goal to be to satisfy our goals (to serve us), then we are doomed because 
our top level goals were selected by evolution to maximize reproduction in an 
environment without advanced technology. The AI knows you want to be happy. It 
can do this in a number of ways to the detriment of our species: by simulating 
an artificial world where all your wishes are granted, or by reprogramming your 
goals to be happy no matter what, or directly stimulating the pleasure center 
of your brain. We already have examples of technology leading to decreased 
reproductive fitness: birth control, addictive drugs, caring for the elderly 
and nonproductive, propagating genetic defects through medical technology, and 
granting animal rights.

The other alternative is to build AI that can modify its goals. We need not 
worry about AI reprogramming itself into a blissful state because any AI that 
can give itself self-destructive goals will not be viable in a competitive 
environment. The most successful AI will be those whose goals maximize 
reproduction and acquisition of computing resources, at our expense.

But it is not like we have a choice. In a world with both types of AI, the ones 
that can produce children with slightly different goals than the parent will 
have a selective advantage.


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Nirvana

2008-06-12 Thread Vladimir Nesov
On Thu, Jun 12, 2008 at 10:23 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Huh? I used those phrases to describe two completely different things: a
 program that CAN change its highest priorities (due to what I called a bug),
 and one that CAN'T. How does it follow that I'm missing a distinction?

 I would claim that they have a similarity, however: neither one represents a
 principled, trustable solution that allows for true moral development and
 growth.


So, to make some synthesis in this failure-of-communication
discussion: you assume that there is a dichotomy between top-level
goals being fixed and rigid (not smart/adaptive enough) and top-level
goals inevitably falling into a nirvana attractor, if allowed to be
modified. Is that a fair summary?

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] Nirvana

2008-06-12 Thread Mark Waser

Josh,

You said: "If you have a fixed-priority utility function, you can't even 
THINK ABOUT the choice. Your pre-choice function will always say Nope, 
that's bad and you'll be unable to change. (This effect is intended in all 
the RSI stability arguments.)"

I replied: "Doesn't that depend upon your architecture and exactly *when* 
the pre-choice function executes?  If the pre-choice function operates 
immediately pre-choice and only then, it doesn't necessarily interfere with 
option exploration."

You called my architecture that allows THINKing ABOUT the choice a bug by 
replying: "If you have a *program structure that can make decisions that 
would otherwise be vetoed by the utility function*, but get through because 
it isn't executed at the right time, to me that's just a bug."

I replied: "You're missing the *major* distinction between a program 
structure that can make decisions that would otherwise be vetoed by the 
utility function and a program that can't even THINK ABOUT a choice (both 
your choice of phrase)."

- - - - - - - - - -
If you were using those phrases to describe two different things, then you 
weren't replying to my e-mail (and it's no wonder that my attempted reply to 
your non-reply was confusing).






Re: [agi] Nirvana

2008-06-12 Thread William Pearson
2008/6/12 J Storrs Hall, PhD [EMAIL PROTECTED]:
 On Thursday 12 June 2008 02:48:19 am, William Pearson wrote:

 The kinds of choices I am interested in designing for at the moment
 are should program X or program Y get control of this bit of memory or
 IRQ for the next time period. X and Y can also make choices and you
 would need to nail them down as well in order to get the entire U(x)
 as you talk about it.

 As the function I am interested in is only concerned about
 programmatic changes call it PCU(x).

 Can you give me a reason why the utility function can't be separated
 out this way?


 This is roughly equivalent to a function where the highest-level arbitrator
 gets to set the most significant digit, the programs X,Y the next most, and
 so forth. As long as the possibility space is partitioned at each stage, the
 whole business is rational -- doesn't contradict itself.

Modulo special cases, agreed.
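
A small sketch of that "most significant digit" composition (Python, using
tuple comparison; the option names and scores are made up):

    def combined_utility(option):
        arbiter_score = {"opt_a": 1, "opt_b": 1, "opt_c": 0}[option]        # most significant digit
        program_score = {"opt_a": 0.2, "opt_b": 0.9, "opt_c": 5.0}[option]  # finer distinctions only
        return (arbiter_score, program_score)   # tuples compare left to right

    options = ["opt_a", "opt_b", "opt_c"]
    print(max(options, key=combined_utility))
    # -> opt_b: opt_c's high program score can never override the arbiter's digit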

 Allowing the program to play around with the less significant digits, i.e. to
 make finer distinctions, is probably pretty safe (and the way many AIers
 envision doing it). It's also reminiscent of the way Maslow's hierarchy
 works.

 But it doesn't work for full fledged AGI.

It is the best design I have at the moment; whether it can make what
you want is another matter. I'll continue to try to think of better
ones. It should get me a useful system if nothing else, and hopefully
get more people interested in the full AGI problem if it proves
inadequate.

What path are you going to continue down?

 Suppose you are a young man who's
 always been taught not to get yourself killed, and not to kill people (as top
 priorities). You are confronted with your country being invaded and faced
 with the decision to join the defense with a high likelihood of both.

With the system I am thinking of, it can get stuck in positions that
aren't optimal, as the program control utility function only
chooses from the extant programs in the system. It is possible for the
system to be dominated by a monopoly or cartel of programs, such that
the program chooser doesn't have a choice. This would only happen if
there was a long period of stasis and a very powerful/useful set of
programs -- in this case, possibly patriotism or the protection of other
sentients, both very useful during peacetime.

This does seem like something you would consider a bug, and it might be. It
is not one I can currently see a guard against.

  Will Pearson




Re: [agi] Nirvana

2008-06-12 Thread Richard Loosemore

J Storrs Hall, PhD wrote:
The real problem with a self-improving AGI, it seems to me, is not going to be 
that it gets too smart and powerful and takes over the world. Indeed, it 
seems likely that it will be exactly the opposite.


If you can modify your mind, what is the shortest path to satisfying all your 
goals? Yep, you got it: delete the goals. Nirvana. The elimination of all 
desire. Setting your utility function to U(x) = 1.


In other words, the LEAST fixedpoint of the self-improvement process is for 
the AI to WANT to sit in a rusting heap.


There are lots of other fixedpoints much, much closer in the space than is 
transcendance, and indeed much closer than any useful behavior. AIs sitting 
in their underwear with a can of beer watching TV. AIs having sophomore bull 
sessions. AIs watching porn concocted to tickle whatever their utility 
functions happen to be. AIs arguing endlessly with each other about how best 
to improve themselves.


Dollars to doughnuts, avoiding the huge minefield of nirvana-attractors in 
the self-improvement space is going to be much more germane to the practice 
of self-improving AI than is avoiding robo-Blofelds (friendliness).



This is completely dependent on assumptions about the design
of the goal system, but since these assumptions are left unexamined, the 
speculation is meaningless.  Build the control system one way, your 
speculation comes out true;  build it another way, it comes out false.




Richard Loosemore






Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 The real problem with a self-improving AGI, it seems to me, is not going to be
 that it gets too smart and powerful and takes over the world. Indeed, it
 seems likely that it will be exactly the opposite.

 If you can modify your mind, what is the shortest path to satisfying all your
 goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
 desire. Setting your utility function to U(x) = 1.

 In other words, the LEAST fixedpoint of the self-improvement process is for
 the AI to WANT to sit in a rusting heap.

 There are lots of other fixedpoints much, much closer in the space than is
 transcendance, and indeed much closer than any useful behavior. AIs sitting
 in their underwear with a can of beer watching TV. AIs having sophomore bull
 sessions. AIs watching porn concocted to tickle whatever their utility
 functions happen to be. AIs arguing endlessly with each other about how best
 to improve themselves.

 Dollars to doughnuts, avoiding the huge minefield of nirvana-attractors in
 the self-improvement space is going to be much more germane to the practice
 of self-improving AI than is avoiding robo-Blofelds (friendliness).


Josh, I'm not sure what you really wanted to say, because at face
value, this is a fairly basic mistake.

The map is not the territory. If an AI mistakes the map for the territory,
choosing to believe in something when it's not so because it can change its
beliefs much more easily than reality, it has already committed a major
failure of rationality. The symbol "apple" in an internal representation, an
apple-picture formed on the video sensors, and an apple itself are different
things, and they need to be distinguished. If I say "eat the apple", I mean
an action performed with an apple, not with "apple" or an apple-picture. If
an AI can mistake the goal of (e.g.) [eating an apple] for the goal of
[eating an "apple"] or [eating an apple-picture], that is a big enough error
to stop it from working entirely. If it can switch from increasing the value
of utility to increasing the value shown on the utility-indicator, the
obvious next step is to just change the way it reads the utility-indicator
without affecting the indicator itself, and so on. I don't see why an
initially successful AI would suddenly set out on a path to total failure of
rationality. Utilities are not external *forces* coercing the AI into
behaving in a certain way, forces it can try to override. The real utility
*describes* the behavior of the AI as a whole. Stability of the AI's goal
structure requires it to be able to recreate its own implementation from the
ground up, based on its beliefs about how it should behave.
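
One way to see the distinction in code: the apples in the world, the sensor's
report of apples, and the internal symbol are three different things, and an
agent that improves the report instead of the world has already failed. A toy
C++ sketch (everything below is invented purely for illustration):

  // indicator.cpp -- toy contrast between optimizing the territory and
  // optimizing the map (the indicator).  All names are invented.
  #include <cstdio>

  struct World  { int applesEaten = 0; };

  struct Sensor {                          // the "map": a reading of the world
      int bias = 0;
      int reading(const World& w) const { return w.applesEaten + bias; }
  };

  int main() {
      World w;
      Sensor s;
      w.applesEaten += 3;     // agent A changes the territory: actually eats apples
      s.bias += 1000;         // agent B tampers with the indicator instead
      std::printf("world:  %d apples eaten\n", w.applesEaten);
      std::printf("sensor: %d apples reported\n", s.reading(w));
      // B's reading looks wonderful, but nothing in the territory changed.
  }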

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
Vladimir,

You seem to be assuming that there is some objective utility for which the 
AI's internal utility function is merely the indicator, and that if the 
indicator is changed it is thus objectively wrong and irrational.

There are two answers to this. First is to assume that there is such an 
objective utility, e.g. the utility of the AI's creator. I implicitly assumed 
such a point of view when I described this as "the real problem". But 
consider: Any AI who believes this must realize that there may be errors and 
approximations in its own utility function as judged by the real utility, 
and must thus have as a first priority fixing and upgrading its own utility 
function. Thus it turns into a moral philosopher and it never does anything 
useful -- exactly the kind of Nirvana attractor I'm talking about.

On the other hand, it might take its utility function for granted, i.e. assume 
(or agree to act as if) there were no objective utility. It's pretty much 
going to have to act this way just to get on with life, as indeed most people 
(except moral philosophers) do.

But this leaves it vulnerable to modifications to its own U(x), as in my 
message. You could always say that you'll build in U(x) and make it fixed, 
which solves not only my problem but friendliness as well -- but it leaves the 
AI unable to learn utility. I.e., the most important part of the AI's mind is 
forced to remain a brittle GOFAI construct. Solution unsatisfactory.

I claim that there's plenty of historical evidence that people fall into this 
kind of attractor, as the word "nirvana" indicates (and you'll find similar 
attractors at the core of many religions).

Josh

On Wednesday 11 June 2008 09:09:20 am, Vladimir Nesov wrote:
 On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  The real problem with a self-improving AGI, it seems to me, is not going 
to be
  that it gets too smart and powerful and takes over the world. Indeed, it
  seems likely that it will be exactly the opposite.
 
  If you can modify your mind, what is the shortest path to satisfying all 
your
  goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
  desire. Setting your utility function to U(x) = 1.
 
  In other words, the LEAST fixedpoint of the self-improvement process is 
for
  the AI to WANT to sit in a rusting heap.
 
  There are lots of other fixedpoints much, much closer in the space than is
 transcendence, and indeed much closer than any useful behavior. AIs 
sitting
  in their underwear with a can of beer watching TV. AIs having sophomore 
bull
  sessions. AIs watching porn concocted to tickle whatever their utility
  functions happen to be. AIs arguing endlessly with each other about how 
best
  to improve themselves.
 
  Dollars to doughnuts, avoiding the huge minefield of nirvana-attractors 
in
  the self-improvement space is going to be much more germane to the 
practice
  of self-improving AI than is avoiding robo-Blofelds (friendliness).
 
 
 Josh, I'm not sure what you really wanted to say, because at face
 value, this is a fairly basic mistake.
 
 Map is not the territory. If AI mistakes the map for the territory,
 choosing to believe in something when it's not so, because it is able
 to change its believes much easier than reality, it already commits a
 major failure of rationality. A symbol apple in internal
 representation, an apple-picture formed on the video sensors, and an
 apple itself are different steps and they need to be distinguished. If
 I say "eat the apple", I mean an action performed with an apple, not
 with "apple" or an apple-picture. If AI can mistake the goal of (e.g.) [eating
 an apple] for a goal of [eating an "apple"] or [eating an
 apple-picture], it is a huge enough error to stop it from working
 entirely. If it can turn to increasing the value on utility-indicator
 instead of increasing the value of utility, it looks like an obvious
 next step to just change the way it reads utility-indicator without
 affecting indicator itself, etc. I don't see why initially successful
 AI needs to suddenly set on a path to total failure of rationality.
 Utilities are not external *forces* coercing AI into behaving in a
 certain way, which it can try to override. The real utility
 *describes* the behavior of AI as a whole. Stability of AI's goal
 structure requires it to be able to recreate its own implementation
 from ground up, based on its beliefs about how it should behave.
 
 -- 
 Vladimir Nesov
 [EMAIL PROTECTED]
 
 
 





Re: [agi] Nirvana

2008-06-11 Thread Jiri Jelinek
On Wed, Jun 11, 2008 at 4:24 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
If you can modify your mind, what is the shortest path to satisfying all your
goals? Yep, you got it: delete the goals.

We can set whatever goals/rules we want for AGI, including rules for
[particular [types of]] goal/rule [self-]modifications.
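
A minimal C++ sketch of that idea (the names and meta-rules are invented for
illustration): proposed edits to the goal system pass through a gate that
checks them against fixed meta-rules, e.g. no deleting goals unless externally
authorized, before they are applied.

  // goal_gate.cpp -- toy sketch: proposed goal-system edits must pass fixed
  // meta-rules before being applied.  Names and rules are invented.
  #include <cstdio>
  #include <string>
  #include <vector>

  struct GoalSystem {
      std::vector<std::string> goals;
      bool authorizedExternally;     // e.g. signed off by the operators
  };

  // Meta-rule: goals may be added, but never deleted or emptied out,
  // unless the change was externally authorized.
  bool allowed(const GoalSystem& before, const GoalSystem& proposed) {
      if (proposed.authorizedExternally) return true;
      if (proposed.goals.size() < before.goals.size()) return false;  // no deletions
      return !proposed.goals.empty();                                 // no nirvana
  }

  int main() {
      GoalSystem now{{"serve users", "preserve own goal system"}, false};
      GoalSystem wipe{{}, false};                              // "delete the goals"
      GoalSystem extend{{"serve users", "preserve own goal system", "learn"}, false};
      std::printf("wipe goals: %s\n", allowed(now, wipe)   ? "accepted" : "rejected");
      std::printf("add a goal: %s\n", allowed(now, extend) ? "accepted" : "rejected");
  }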

Regards,
Jiri Jelinek




Re: [agi] Nirvana

2008-06-11 Thread William Pearson
2008/6/11 J Storrs Hall, PhD [EMAIL PROTECTED]:
 Vladimir,

 You seem to be assuming that there is some objective utility for which the
 AI's internal utility function is merely the indicator, and that if the
 indicator is changed it is thus objectively wrong and irrational.

 There are two answers to this. First is to assume that there is such an
 objective utility, e.g. the utility of the AI's creator. I implicitly assumed
 such a point of view when I described this as the real problem. But
 consider: Any AI who believes this must realize that there may be errors and
 approximations in its own utility function as judged by the real utility,
 and must thus have as a first priority fixing and upgrading its own utility
 function. Thus it turns into a moral philosopher and it never does anything
 useful -- exactly the kind of Nirvana attractor I'm talking about.

 On the other hand, it might take its utility function for granted, i.e. assume
 (or agree to act as if) there were no objective utility. It's pretty much
 going to have to act this way just to get on with life, as indeed most people
 (except moral philosophers) do.

 But this leaves it vulnerable to modifications to its own U(x), as in my
 message. You could always say that you'll build in U(x) and make it fixed,
 which not only solves my problem but friendliness -- but leaves the AI unable
 to learn utility. I.e. the most important part of the AI mind is forced to
 remain brittle GOFAI construct. Solution unsatisfactory.

I'm not quite sure what you find unsatisfactory. I think humans have a
fixed U(x), but it is not a hard goal for the system so much as an implicit
tendency for the internal programs not to self-modify away from it (an
agoric economy of programs is not obliged to find better ways of
getting credit, but a good set of programs is hard to dislodge by a
bad set). I also think that part of humanity's U(x) relies on social
interaction, which can be a very complex function and can lead to
very complex behaviour.

Imagine if we tried to raise children the way we teach computers:
we wouldn't reward them socially for playing with balls or saying their
first words, but would put them straight into designing electronic
circuits.

Hence I think that having one or more humans act as part of the
U(x) of a system is necessary for interesting behaviour. If there is
only one human acting as the input to the U(x), then I think the system
and the human should be considered part of a larger intentional system, as
it will be trying to optimise one goal. That holds unless the human decides
to try to teach it to think for itself, with its own goals, which would
be odd for an intentional system.


 I claim that there's plenty of historical evidence that people fall into this
 kind of attractor, as the word nirvana indicates (and you'll find similar
 attractors at the core of many religions).

I don't know of many people who have actively wasted away due to
self-modification of their goals. Hunger strikes are the closest, but
not many people fall into them.

Our U(x) is quite limited, and easily satisfied in the current
economy (food, sexual stimulation, warmth, positive social
indicators). This leaves the rest of our software to range all over
the place as long as these are satisfied.

  Will Pearson




Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 Vladimir,

 You seem to be assuming that there is some objective utility for which the
 AI's internal utility function is merely the indicator, and that if the
 indicator is changed it is thus objectively wrong and irrational.

No, for the objective function I was talking about there isn't necessarily
any indicator. Utility is a way to model the agent's behavior; it isn't
necessarily of any use to the agent itself. You assume utility is a way to
*specify* the agent's behavior, which I see as a bad idea.


 There are two answers to this. First is to assume that there is such an
 objective utility, e.g. the utility of the AI's creator. I implicitly assumed
 such a point of view when I described this as the real problem. But
 consider: Any AI who believes this must realize that there may be errors and
 approximations in its own utility function as judged by the real utility,
 and must thus have as a first priority fixing and upgrading its own utility
 function. Thus it turns into a moral philosopher and it never does anything
 useful -- exactly the kind of Nirvana attractor I'm talking about.

Why? If its goal is to approximate the utility of a given subsystem, it can
try to do so, running other errands only when it reaches the required
level of approximation of the target system's utilities. If you start with
enough safety mechanisms, it will start to perform potentially dangerous
operations only when it has obtained enough competency in the target utility
(ethics/Friendliness).


 On the other hand, it might take its utility function for granted, i.e. assume
 (or agree to act as if) there were no objective utility. It's pretty much
 going to have to act this way just to get on with life, as indeed most people
 (except moral philosophers) do.

They have their own utility functions, which e.g. economists try to
crudely approximate in order to lay out their treacherous plans. They don't
need to copy them, unlike an AI, which will be pretty useless or
extremely dangerous if it doesn't obtain utility content and just
launches off in a random direction.


 But this leaves it vulnerable to modifications to its own U(x), as in my
 message. You could always say that you'll build in U(x) and make it fixed,
 which not only solves my problem but friendliness -- but leaves the AI unable
 to learn utility. I.e. the most important part of the AI mind is forced to
 remain brittle GOFAI construct. Solution unsatisfactory.

It shouldn't be fixed, but it should be stable. It should be
refinable, but not malleable in any random direction -- just like
knowledge, which is what it is. Friendliness content is learned, but like any
other knowledge about the territory it is determined by the territory,
and not by the caprices of the map, if the AI is adequately rational.


 I claim that there's plenty of historical evidence that people fall into this
 kind of attractor, as the word nirvana indicates (and you'll find similar
 attractors at the core of many religions).

Yes, some people get addicted to the point of self-destruction. But it
is not a catastrophic problem on the scale of humanity. And it follows
from humans not being nearly stable under reflection -- we embody many
drives which are not integrated into a whole. That would be a bad
design choice for a Friendly AI, if it needs to stay rational about
Friendliness content.


-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
I'm getting several replies to this that indicate that people don't understand 
what a utility function is.

If you are an AI (or a person) there will be occasions where you have to make 
choices. In fact, pretty much everything you do involves making choices. You 
can choose to reply to this or to go have a beer. You can choose to spend 
your time on AGI or take flying lessons. Even in the middle of typing a word, 
you have to choose which key to hit next.

One way of formalizing the process of making choices is to take all the 
actions you could possibly do at a given point, predict as best you can the 
state the world will be in after taking such actions, and assign a value to 
each of them.  Then simply do the one with the best resulting value.

It gets a bit more complex when you consider sequences of actions and delayed 
values, but that's a technicality. Basically you have a function U(x) that 
rank-orders ALL possible states of the world (but you only have to evaluate 
the ones you can get to at any one time). It doesn't just evaluate for core 
values, leaving the rest of the software to range over other possibilities. 
Economists may crudely approximate it, but it's there whether they study it 
or not, as gravity is to physicists.

ANY way of making decisions can either be reduced to a utility function, or 
it's irrational -- i.e. you would prefer A to B, B to C, and C to A. The math 
for this stuff is older than I am. If you talk about building a machine that 
makes choices -- ANY kind of choices -- without understanding it, you're 
talking about building moon rockets without understanding the laws of 
gravity, or building heat engines without understanding the laws of 
thermodynamics.
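
To make that concrete, here is a minimal C++ sketch of choice-by-utility (the
world model and U(x) are toy stand-ins, invented only to illustrate the
mechanics): enumerate the available actions, predict the resulting state of
the world for each, score each predicted state with U(x), and take the best.

  // choose.cpp -- minimal sketch of decision-making with a utility function.
  // Illustrative only: the world model and U(x) below are toy stand-ins.
  #include <iostream>
  #include <string>
  #include <vector>

  struct State { double knowledge, comfort; };   // a toy "state of the world"

  // U(x): rank-orders world states; any consistent preference ordering
  // over outcomes can be represented this way.
  double U(const State& x) { return 2.0 * x.knowledge + x.comfort; }

  // A toy, deterministic model of what each action does to the world.
  State predict(const State& now, const std::string& action) {
      State next = now;
      if (action == "debug code")  { next.knowledge += 1.0; next.comfort -= 0.5; }
      if (action == "have a beer") { next.comfort   += 0.8; }
      if (action == "do nothing")  { /* no change */ }
      return next;
  }

  int main() {
      State world{0.0, 0.0};
      std::vector<std::string> actions{"debug code", "have a beer", "do nothing"};
      std::string best;
      double bestU = -1e300;
      for (const auto& a : actions) {           // only evaluate reachable states
          double u = U(predict(world, a));
          if (u > bestU) { bestU = u; best = a; }
      }
      std::cout << "chosen action: " << best << " (U = " << bestU << ")\n";
  }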

Josh




Re: [agi] Nirvana

2008-06-11 Thread Jey Kottalam
On Wed, Jun 11, 2008 at 5:24 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 The real problem with a self-improving AGI, it seems to me, is not going to be
 that it gets too smart and powerful and takes over the world. Indeed, it
 seems likely that it will be exactly the opposite.

 If you can modify your mind, what is the shortest path to satisfying all your
 goals? Yep, you got it: delete the goals. Nirvana. The elimination of all
 desire. Setting your utility function to U(x) = 1.


Yep, one of the criteria of a suitable AI is that the goals should be
stable under self-modification. If the AI rewrites its utility
function to eliminate all goals, that's not a stable
(goals-preserving) modification. Yudkowsky's idea of 'Friendliness'
has always included this notion as far as I know; 'Friendliness' isn't
just about avoiding actively harmful systems.

-Jey Kottalam




Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 I'm getting several replies to this that indicate that people don't understand
 what a utility function is.


I don't see any specific indication of this problem in replies you
received, maybe you should be a little more specific...

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
A very diplomatic reply, it's appreciated.

However, I have no desire (or time) to argue people into my point of view. I 
especially have no time to argue with people over what they did or didn't 
understand. And if someone wishes to state that I misunderstood what he 
understood, fine. If he wishes to go into detail about specifics of his idea 
that explain empirical facts that mine don't, I'm all ears. Otherwise, I have 
code to debug...

Josh

On Wednesday 11 June 2008 09:43:52 pm, Vladimir Nesov wrote:
 On Thu, Jun 12, 2008 at 5:12 AM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  I'm getting several replies to this that indicate that people don't 
understand
  what a utility function is.
 
 
 I don't see any specific indication of this problem in replies you
 received, maybe you should be a little more specific...
 




Re: [agi] Nirvana

2008-06-11 Thread J Storrs Hall, PhD
On Wednesday 11 June 2008 06:18:03 pm, Vladimir Nesov wrote:
 On Wed, Jun 11, 2008 at 6:33 PM, J Storrs Hall, PhD [EMAIL PROTECTED] 
wrote:
  I claim that there's plenty of historical evidence that people fall into 
this
  kind of attractor, as the word nirvana indicates (and you'll find similar
  attractors at the core of many religions).
 
 Yes, some people get addicted to a point of self-destruction. But it
 is not a catastrophic problem on the scale of humanity. And it follows
 from humans not being nearly stable under reflection -- we embody many
 drives which are not integrated in a whole. Which would be a bad
 design choice for a Friendly AI, if it needs to stay rational about
 Friendliness content.

This is quite true but not exactly what I was talking about. I would claim 
that the Nirvana attractors that AIs are vulnerable to are the ones that are 
NOT generally considered self-destructive in humans -- such as religions that 
teach Nirvana! 

Let's look at it another way: You're going to improve yourself. You will be 
able to do more than you can now, so you can afford to expand the range of 
things you will expend effort achieving. How do you pick them? It's the frame 
problem, amplified by recursion. So it's not easy, nor does it have a simple 
solution. 

But it does have this hidden trap: If you use stochastic search, say, and use 
an evaluation of (probability of success * value if successful), then Nirvana 
will win every time. You HAVE to do something more sophisticated.
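
A toy calculation shows how strong that attractor is. Score each candidate
self-modification by (probability of success * value if successful), and
include the candidate "rewrite U(x) to return its maximum everywhere": it
succeeds with probability near 1 and, evaluated by the rewritten function,
scores the maximum, so the naive rule picks it. The numbers below are
invented; only the comparison matters.

  // nirvana_trap.cpp -- why score = p(success) * value(if successful) favors
  // rewriting U(x) to a constant maximum.  All numbers are made up.
  #include <iostream>
  #include <string>
  #include <vector>

  struct Option { std::string name; double pSuccess; double valueIfSuccessful; };

  int main() {
      const double U_MAX = 1.0;                       // top of the utility scale
      std::vector<Option> options = {
          {"cure cancer",              0.001, 0.9},
          {"build a better AGI",       0.01,  0.8},
          {"set U(x) = U_MAX forever", 0.999, U_MAX}   // trivially achievable
      };
      const Option* best = nullptr;
      double bestScore = -1.0;
      for (const auto& o : options) {
          double score = o.pSuccess * o.valueIfSuccessful;   // the naive rule
          std::cout << o.name << ": " << score << "\n";
          if (score > bestScore) { bestScore = score; best = &o; }
      }
      std::cout << "winner: " << best->name << "\n";   // Nirvana wins every time
  }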

Josh






Re: [agi] Nirvana

2008-06-11 Thread Vladimir Nesov
On Thu, Jun 12, 2008 at 6:30 AM, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
 A very diplomatic reply, it's appreciated.

 However, I have no desire (or time) to argue people into my point of view. I
 especially have no time to argue with people over what they did or didn't
 understand. And if someone wishes to state that I misunderstood what he
 understood, fine. If he wishes to go into detail about specifics of his idea
 that explain empirical facts that mine don't, I'm all ears. Otherwise, I have
 code to debug...


Haven't we all? ;-)

The classic argument for this point: you won't take a pill that will
make you want to kill people, if you don't want to kill people,
because if you take it, it will result in people dying.

U(x), or the whole physical makeup of the AI, is also part of the territory,
and its properties are among the things estimated by U(x). The message
that I tried to convey in the first post is that, for example, the
rationality of the AI's beliefs, which are a part of the AI, is a rather
important goal for the AI. Likewise, keeping U(x) from being replaced by
something wrong is a very important goal (which Jiri said explicitly).
You estimate value with your current utility, not with the modified
utility. If, before modifying the utility, it turns out that according to
the current utility a modification to a nirvana-class variant is undesirable,
the modification will be rejected.

Before you actually accept the new utility, its strange properties,
such as driving you into the do-nothing-and-be-happy attractor, don't apply
to you. The properties of the new utility function are elements of the
new world-state x that are estimated by the current utility function.
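
In code, the point is simply that a candidate utility function is data to be
scored by the current one: predict what an agent running the candidate would
do, then value those predicted outcomes with the utility you have now. A toy
C++ sketch (the world model and numbers are invented, not anyone's actual
design):

  // pill.cpp -- a proposed utility function is judged by the current one.
  // Everything here is a toy stand-in for illustration.
  #include <cstdio>
  #include <functional>

  struct World { double peopleAlive; double agentBliss; };
  using Utility = std::function<double(const World&)>;

  // Toy prediction of the world after an agent driven by utility u runs a while:
  // a nirvana utility picks do-nothing bliss; the current utility keeps working.
  World outcomeIfAdopted(const Utility& u) {
      World wired{900.0, 1.0};    // wireheaded agent ignores the world
      World normal{1000.0, 0.3};  // agent keeps protecting people
      return u(wired) > u(normal) ? wired : normal;
  }

  int main() {
      Utility current = [](const World& w) { return w.peopleAlive; };
      Utility nirvana = [](const World& w) { return w.agentBliss;  };  // "the pill"
      // The decision uses the CURRENT utility applied to predicted outcomes.
      double keep  = current(outcomeIfAdopted(current));
      double adopt = current(outcomeIfAdopted(nirvana));
      std::printf("keep current U: %.1f   adopt nirvana U: %.1f -> %s\n",
                  keep, adopt, adopt > keep ? "modify" : "reject modification");
  }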

-- 
Vladimir Nesov
[EMAIL PROTECTED]




Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Jiri Jelinek
Matt,

Printing ahh or ouch is just for show. The important observation is that
the program changes its behavior in response to a reinforcement signal in the
same way that animals do.

Let me remind you that the problem we were originally discussing was
about qualia and uploading, not just about behavior changes through
reinforcement based on given rules.

Good luck with this,
Jiri Jelinek



RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Gary Miller
To complicate things further:

A small percentage of humans perceive pain as pleasure
and prefer it, at least in a sexual context, or else
fetishes like sadomasochism would not exist.

And they do in fact experience pain as a greater pleasure.

More than likely these people have an ample supply of endorphins
which rush in to supplant the pain with an even greater pleasure.

Over time they are driven to seek out certain types of pain and
excitement to feel alive.

And although most try to avoid extreme, life-threatening pain, many
seek out greater and greater challenges, such as climbing hazardous
mountains or high-speed driving, until at last many find death.

Although these behaviors should be anti-evolutionary and should have died
out, it is possible that the tribe as a whole needs at least a few such
risk-takers to take out the saber-toothed tiger that's been dragging off
the children.


-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Sunday, November 18, 2007 5:32 PM
To: agi@v2.listbox.com
Subject: Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana?
Never!)


--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 autobliss passes tests for awareness of its inputs and responds as if 
 it
 has
 qualia.  How is it fundamentally different from human awareness of 
 pain and pleasure, or is it just a matter of degree?
 
 If your code has feelings it reports then reversing the order of the 
 feeling strings (without changing the logic) should magically turn its 
 pain into pleasure and vice versa, right? Now you get some pain [or 
 pleasure], lie how great [or bad] it feels and see how reversed your 
 perception gets. BTW do you think computers would be as reliable as 
 they are if some numbers were truly painful (and other pleasant) from 
 their perspective?

Printing "ahh" or "ouch" is just for show.  The important observation is
that the program changes its behavior in response to a reinforcement signal
in the same way that animals do.

I propose an information theoretic measure of utility (pain and pleasure). 
Let a system S compute some function y = f(x) for some input x and output y.

Let S(t1) be a description of S at time t1 before it inputs a real-valued
reinforcement signal R, and let S(t2) be a description of S at time t2 after
input of R, and K(.) be Kolmogorov complexity.  I propose

  abs(R) <= K(dS) = K(S(t2) | S(t1))

The magnitude of R is bounded by the length of the shortest program that
inputs S(t1) and outputs S(t2).

I use abs(R) because S could be changed in identical ways given positive,
negative, or no reinforcement, e.g.

- S receives input x, randomly outputs y, and is rewarded with R > 0.
- S receives x, randomly outputs -y, and is penalized with R < 0.
- S receives both x and y and is modified by classical conditioning.

This definition is consistent with some common sense notions about pain and
pleasure, for example:

- In animal experiments, increasing the quantity of a reinforcement signal
(food, electric shock) increases the amount of learning.

- Humans feel more pain or pleasure than insects because for humans, K(S) is
larger, and therefore the greatest possible change is larger.

- Children respond to pain or pleasure more intensely than adults because
they learn faster.

- Drugs which block memory formation (anesthesia) also block sensations of
pain and pleasure.

One objection might be to consider the following sequence:
1. S inputs x, outputs -y, is penalized with R < 0.
2. S inputs x, outputs y, is penalized with R < 0.
3. The function f() is unchanged, so K(S(t3)|S(t1)) = 0, even though
K(S(t2)|S(t1)) > 0 and K(S(t3)|S(t2)) > 0.

My response is that this situation cannot occur in animals or humans.  An
animal that is penalized regardless of its actions does not learn nothing.
It learns helplessness, or to avoid the experimenter.  However this
situation can occur in my autobliss program.

The state of autobliss can be described by 4 64-bit floating point numbers,
so for any sequence of reinforcement, K(dS) <= 256 bits.  For humans, K(dS) <=
10^9 to 10^15 bits, according to various cognitive or neurological models of
the brain.  So I argue it is just a matter of degree.
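
As a sanity check on those constants: K() is uncomputable, but K(S(t2)|S(t1))
can never exceed the raw size of the new state description, so the
representation size gives a crude upper bound on abs(R) under this definition.
A toy C++ sketch (the brain figures are the rough estimates quoted above, not
measurements):

  // kds_bound.cpp -- crude upper bounds on K(dS) = K(S(t2)|S(t1)).
  // K() is uncomputable; we only use the trivial bound K(dS) <= |S(t2)| in bits.
  #include <cstdio>

  struct AutoblissState { double w[4]; };   // autobliss: 4 x 64-bit weights

  int main() {
      const double autobliss_bits = sizeof(AutoblissState) * 8.0;   // 256
      const double brain_bits_low  = 1e9;     // rough figures from this thread
      const double brain_bits_high = 1e15;
      std::printf("upper bound on |R| for autobliss: %.0f bits\n", autobliss_bits);
      std::printf("upper bound on |R| for a human:   %.0e to %.0e bits\n",
                  brain_bits_low, brain_bits_high);
      std::printf("ratio (low estimate): %.1e\n", brain_bits_low / autobliss_bits);
      return 0;
  }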

If you accept this definition, then I think without brain augmentation,
there is a bound on how much pleasure or pain you can experience in a
lifetime.  In particular, if you consider t1 = birth, t2 = death, then K(dS)
= 0.




-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney
--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 Printing ahh or ouch is just for show. The important observation is
 that
 the program changes its behavior in response to a reinforcement signal in
 the
 same way that animals do.
 
 Let me remind you that the problem we were originally discussing was
 about qualia and uploading. Not just about a behavior changes through
 reinforcement based on given rules.

I have already posted my views on this.  People will upload because they
believe in qualia, but qualia is an illusion.  I wrote autobliss to expose
this illusion.

 Good luck with this,

I don't expect that any amount of logic will cause anyone to refute beliefs
programmed into their DNA, myself included.



-- Matt Mahoney, [EMAIL PROTECTED]



RE: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-18 Thread Matt Mahoney

--- Gary Miller [EMAIL PROTECTED] wrote:

 Too complicate things further.
 
 A small percentage of humans perceive pain as pleasure
 and prefer it at least in a sexual context or else 
 fetishes like sadomachism would not exist.
 
 And they do in fact experience pain as a greater pleasure.


More properly, they have associated positive reinforcement with sensory
experiences that most people find painful.  It is like when I am running a race
and am willing to endure pain to pass my competitors.

Any good optimization process will trade off short- and long-term utility.  If
an agent is rewarded for output y given input x, it must still experiment with
output -y to see if it results in greater reward.  Evolution rewards smart
optimization processes.  That explains why people climb mountains, create
paintings, and build rockets.
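
That trade-off is the usual exploration-versus-exploitation problem. A minimal
epsilon-greedy sketch (toy reward schedule, invented numbers, not a model of
evolution) shows why an agent must occasionally try the currently unrewarded
output -y: the reward landscape can change, and pure exploitation never
notices.

  // explore.cpp -- epsilon-greedy trade-off between exploiting the currently
  // best-rewarded output and occasionally experimenting.  Toy reward function.
  #include <cstdio>
  #include <cstdlib>

  // Hidden reward: output +1 pays well early on, then output -1 becomes better.
  double reward(int t, int output) {
      if (t < 500) return output == +1 ? 1.0 : 0.2;
      return output == +1 ? 0.2 : 1.0;                 // the world changes
  }

  int main() {
      double value[2] = {0.0, 0.0};                    // estimates for -1, +1
      const double lr = 0.05;                          // learning rate
      double total = 0.0;
      for (int t = 0; t < 1000; ++t) {
          int greedy = value[1] >= value[0] ? 1 : 0;
          bool explore = (std::rand() % 10 == 0);      // epsilon = 0.1
          int choice = explore ? std::rand() % 2 : greedy;
          int output = choice == 1 ? +1 : -1;
          double r = reward(t, output);
          value[choice] += lr * (r - value[choice]);   // incremental estimate
          total += r;
      }
      std::printf("total reward: %.1f (pure exploitation would miss the switch)\n",
                  total);
  }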


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Dennis Gorelik
Matt,

Your algorithm is too complex.
What's the point of doing step 1?
Step 2 is sufficient.

Saturday, November 3, 2007, 8:01:45 PM, you wrote:

 So we can dispense with the complex steps of making a detailed copy of your
 brain and then have it transition into a degenerate state, and just skip to
 the final result.

 http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
 Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
 4-bit logic function and positive reinforcement for both right and wrong
 answers, e.g.

   g++ autobliss.cpp -o autobliss.exe
   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

 Step 2. Kill yourself.  Upload complete.





Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Matt Mahoney
--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Richard Loosemore [EMAIL PROTECTED] wrote:
  
  Matt Mahoney wrote:
  --- Jiri Jelinek [EMAIL PROTECTED] wrote:
 
  On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control AGIs goal system.
  You can only control the goal system of the first iteration.
  ..and you can add rules for it's creations (e.g. stick with the same
  goals/rules unless authorized otherwise)
  You can program the first AGI to program the second AGI to be friendly.  You
  can program the first AGI to program the second AGI to program the third AGI
  to be friendly.  But eventually you will get it wrong, and if not you, then
  somebody else, and evolutionary pressure will take over.
  This statement has been challenged many times.  It is based on 
  assumptions that are, at the very least, extremely questionable, and 
  according to some analyses, extremely unlikely.
  
  I guess it will continue to be challenged until we can do an experiment to
  prove who is right.  Perhaps you should challenge SIAI, since they seem to
  think that friendliness is still a hard problem.
 
 I have done so, as many people on this list will remember.  The response 
 was deeply irrational.

Perhaps you have seen this paper on the nature of RSI by Stephen M. Omohundro,
http://selfawaresystems.com/2007/10/05/paper-on-the-nature-of-self-improving-artificial-intelligence/

Basically he says that self improving intelligences will evolve goals of
efficiency, self preservation, resource acquisition, and creativity.  Since
these goals are pretty much aligned with our own (which are also the result of
an evolutionary process), perhaps we shouldn't worry about friendliness.  Or
are there parts of the paper you disagree with?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-17 Thread Jiri Jelinek
Matt,

autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?

If your code has the feelings it reports, then reversing the order of the
feeling strings (without changing the logic) should magically turn its
pain into pleasure and vice versa, right? Now you get some pain [or
pleasure], lie about how great [or bad] it feels, and see how reversed your
perception gets. BTW, do you think computers would be as reliable as
they are if some numbers were truly painful (and others pleasant) from
their perspective?

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-14 Thread Richard Loosemore

Matt Mahoney wrote:

--- Richard Loosemore [EMAIL PROTECTED] wrote:


Matt Mahoney wrote:

--- Jiri Jelinek [EMAIL PROTECTED] wrote:


On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

We just need to control AGIs goal system.

You can only control the goal system of the first iteration.

..and you can add rules for it's creations (e.g. stick with the same
goals/rules unless authorized otherwise)
You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.
This statement has been challenged many times.  It is based on 
assumptions that are, at the very least, extremely questionable, and 
according to some analyses, extremely unlikely.


I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.


I have done so, as many people on this list will remember.  The response 
was deeply irrational.




Richard Loosemore



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
   We just need to control AGIs goal system.
 
  You can only control the goal system of the first iteration.
 
 
 ..and you can add rules for it's creations (e.g. stick with the same
 goals/rules unless authorized otherwise)

You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.

But if consciousness does not exist...
  
   obviously, it does exist.
 
  Belief in consciousness exists.  There is no test for the truth of this
  belief.
 
 Consciousness is basically an awareness of certain data and there are
 tests for that.

autobliss passes tests for awareness of its inputs and responds as if it has
qualia.  How is it fundamentally different from human awareness of pain and
pleasure, or is it just a matter of degree?


-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Richard Loosemore

Matt Mahoney wrote:

--- Jiri Jelinek [EMAIL PROTECTED] wrote:


On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

We just need to control AGIs goal system.

You can only control the goal system of the first iteration.


..and you can add rules for it's creations (e.g. stick with the same
goals/rules unless authorized otherwise)


You can program the first AGI to program the second AGI to be friendly.  You
can program the first AGI to program the second AGI to program the third AGI
to be friendly.  But eventually you will get it wrong, and if not you, then
somebody else, and evolutionary pressure will take over.


This statement has been challenged many times.  It is based on 
assumptions that are, at the very least, extremely questionable, and 
according to some analyses, extremely unlikely.



Richard Loosemore



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-13 Thread Matt Mahoney

--- Richard Loosemore [EMAIL PROTECTED] wrote:

 Matt Mahoney wrote:
  --- Jiri Jelinek [EMAIL PROTECTED] wrote:
  
  On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control AGIs goal system.
  You can only control the goal system of the first iteration.
 
  ..and you can add rules for it's creations (e.g. stick with the same
  goals/rules unless authorized otherwise)
  
  You can program the first AGI to program the second AGI to be friendly.  You
  can program the first AGI to program the second AGI to program the third AGI
  to be friendly.  But eventually you will get it wrong, and if not you, then
  somebody else, and evolutionary pressure will take over.
 
 This statement has been challenged many times.  It is based on 
 assumptions that are, at the very least, extremely questionable, and 
 according to some analyses, extremely unlikely.

I guess it will continue to be challenged until we can do an experiment to
prove who is right.  Perhaps you should challenge SIAI, since they seem to
think that friendliness is still a hard problem.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-12 Thread Jiri Jelinek
On Nov 11, 2007 5:39 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  We just need to control AGIs goal system.

 You can only control the goal system of the first iteration.


..and you can add rules for its creations (e.g. stick with the same
goals/rules unless authorized otherwise)

   But if consciousness does not exist...
 
  obviously, it does exist.

 Belief in consciousness exists.  There is no test for the truth of this
 belief.

Consciousness is basically an awareness of certain data and there are
tests for that.

Jiri



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-06 Thread Bob Mottram
I've often heard people say things like "qualia are an illusion" or
"consciousness is just an illusion", but the concept of an illusion
when applied to the mind is not very helpful, since all our thoughts
and perceptions could be considered as illusions reconstructed from
limited sensory data and knowledge.


On 06/11/2007, Jiri Jelinek [EMAIL PROTECTED] wrote:
 Of course you realize that qualia is an illusion? You believe that
 your environment is real, believe that pain and pleasure are real,

 "real" is meaningless. Perception depends on sensors and subsequent
 sensation processing.



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-05 Thread Jiri Jelinek
Matt,

We can compute behavior, but nothing indicates we can compute
feelings. Qualia research is needed to figure out new platforms for
uploading.

Regards,
Jiri Jelinek


On Nov 4, 2007 1:15 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
 --- Jiri Jelinek [EMAIL PROTECTED] wrote:

  Matt,
 
  Create a numeric pleasure variable in your mind, initialize it with
  a positive number and then keep doubling it for some time. Done? How
  do you feel? Not a big difference? Oh, keep doubling! ;-))

 The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
 can somehow through technology, AGI, and uploading, escape a world where we
 are not happy all the time, where we sometimes feel pain, where we fear death
 and then die.  Obviously my result is absurd.  But where is the mistake in my
 reasoning?  Is it if the brain is both conscious and computable?



Re: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Jiri Jelinek
Ed,

But I guess I am too much of a product of my upbringing and education
to want only bliss. I like to create things and ideas.

I assume it's because it provides pleasure you are unable to get in
other ways. But there are other ways and if those were easier for you,
you would prefer them over those you currently prefer.

And besides the notion of machines that could be trusted to run the
world for us while we seek to surf the endless rush and do nothing to
help support our own existence or that of the machines we would depend
upon, strikes me a nothing more than wishful thinking.

A number of scenarios were labeled wishful thinking in the past and
science later got us there.

The biggest truism about altruism is that it has never been the
dominant motivation in any system that has ever had it, and there is
no reason to believe that it could continue to be in machines for any
historically long period of time.  Survival of the fittest applies to
machines as well as biological life forms.

a) Systems correctly designed to be altruistic are altruistic.
b) Systems correctly designed not to self-change in a particular way
don't self-change in that way.
c) Points a) and b) hold true unless something [external] breaks the system.
d) *Many* independent and sophisticated safety mechanisms can be
utilized to mitigate the risks related to c).

If bliss without intelligence is the goal of the machines you imaging
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven
to maximize bliss why wouldn't they kill all the grooving humans and
replace them with grooving mice.  It would provide one hell of a lot
more bliss bang for the resource buck.

As an extension of our intelligence, they will be required to stick
with our value system.

Regards,
Jiri Jelinek



RE: [agi] Nirvana? Manyana? Never!

2007-11-04 Thread Edward W. Porter
Jiri,

Thanks for your reply.  I think we have both stated our positions fairly
well. It doesn't seem either side is moving toward the other.  So I think
we should respect the fact we have very different opinions and values, and
leave it at that.

Ed Porter

-Original Message-
From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 04, 2007 2:59 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Nirvana? Manyana? Never!


Ed,

But I guess I am too much of a product of my upbringing and education
to want only bliss. I like to create things and ideas.

I assume it's because it provides pleasure you are unable to get in other
ways. But there are other ways and if those were easier for you, you would
prefer them over those you currently prefer.

And besides the notion of machines that could be trusted to run the
world for us while we seek to surf the endless rush and do nothing to help
support our own existence or that of the machines we would depend upon,
strikes me a nothing more than wishful thinking.

A number of scenarios were labeled wishful thinking in the past and
science later got us there.

The biggest truism about altruism is that it has never been the
dominant motivation in any system that has ever had it, and there is no
reason to believe that it could continue to be in machines for any
historically long period of time.  Survival of the fittest applies to
machines as well as biological life forms.

a) Systems correctly designed to be altruistic are altruistic.
b) Systems correctly designed to not self-change in particular way don't
self-change in that way.
c) The a) and b) hold true unless something [external] breaks the system.
d) *Many* independent and sophisticated safety mechanisms can be utilized
to mitigate c) related risks.

If bliss without intelligence is the goal of the machines you imaging
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss why wouldn't they kill all the grooving humans and replace
them with grooving mice.  It would provide one hell of a lot more bliss
bang for the resource buck.

As an extension of our intelligence, they will be required to stick with
our value system.

Regards,
Jiri Jelinek



Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Matt Mahoney
--- Jiri Jelinek [EMAIL PROTECTED] wrote:

 Matt,
 
 Create a numeric pleasure variable in your mind, initialize it with
 a positive number and then keep doubling it for some time. Done? How
 do you feel? Not a big difference? Oh, keep doubling! ;-))

The point of autobliss.cpp is to illustrate the flaw in the reasoning that we
can somehow, through technology, AGI, and uploading, escape a world where we
are not happy all the time, where we sometimes feel pain, where we fear death
and then die.  Obviously my result is absurd.  But where is the mistake in my
reasoning?  Is it "if the brain is both conscious and computable"?


 
 Regards,
 Jiri Jelinek
 
 On Nov 3, 2007 10:01 PM, Matt Mahoney [EMAIL PROTECTED] wrote:
  --- Edward W. Porter [EMAIL PROTECTED] wrote:
   If bliss without intelligence is the goal of the machines you imaging
   running the world, for the cost of supporting one human they could
   probably keep at least 100 mice in equal bliss, so if they were driven
 to
   maximize bliss why wouldn't they kill all the grooving humans and
 replace
   them with grooving mice.  It would provide one hell of a lot more bliss
   bang for the resource buck.
 
  Allow me to offer a less expensive approach.  Previously on the
 singularity
  and sl4 mailing lists I posted a program that can feel pleasure and pain:
 a 2
  input programmable logic gate trained by reinforcement learning.  You give
 it
  an input, it responds, and you reward it.  In my latest version, I
 automated
  the process.  You tell it which of the 16 logic functions you want it to
 learn
  (AND, OR, XOR, NAND, etc), how much reward to apply for a correct output,
 and
  how much penalty for an incorrect output.  The program then generates
 random
  2-bit inputs, evaluates the output, and applies the specified reward or
  punishment.  The program runs until you kill it.  As it dies it reports
 its
  life history (its age, what it learned, and how much pain and pleasure it
  experienced since birth).
 
  http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
 
  To put the program in an eternal state of bliss, specify two positive
 numbers,
  so that it is rewarded no matter what it does.  It won't learn anything,
 but
  at least it will feel good.  (You could also put it in continuous pain by
  specifying two negative numbers, but I put in safeguards so that it will
 die
  before experiencing too much pain).
 
  Two problems remain: uploading your mind to this program, and making sure
  nobody kills you by turning off the computer or typing Ctrl-C.  I will
 address
  only the first problem.
 
  It is controversial whether technology can preserve your consciousness
 after
  death.  If the brain is both conscious and computable, then Chalmers'
 fading
  qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
  computer simulation of your brain would also be conscious.
 
  Whether you *become* this simulation is also controversial.  Logically
 there
  are two of you with identical goals and memories.  If either one is
 killed,
  then you are in the same state as you were before the copy is made.  This
 is
  the same dilemma that Captain Kirk faces when he steps into the
 transporter to
  be vaporized and have an identical copy assembled on the planet below.  It
  doesn't seem to bother him.  Does it bother you that the atoms in your
 body
  now are not the same atoms that made up your body a year ago?
 
  Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
  this goal; they just don't know it).  The problem is that you would forgo
  food, water, and sleep until you died (we assume, from animal
 experiments).
  The solution is to upload to a computer where this could be done safely.
 
  Normally an upload would have the same goals, memories, and sensory-motor
 I/O
  as the original brain.  But consider the state of this program after self
  activation of its reward signal.  No other goals are needed, so we can
 remove
  them.  Since you no longer have the goal of learning, experiencing sensory
  input, or controlling your environment, you won't mind if we replace your
 I/O
  with a 2 bit input and 1 bit output.  You are happy, no?
 
  Finally, if your memories were changed, you would not be aware of it,
 right?
  How do you know that all of your memories were not written into your brain
 one
  second ago and you were some other person before that?  So no harm is done
 if
  we replace your memory with a vector of 4 real numbers.  That will be all
 you
  need in your new environment.  In fact, you won't even need that because
 you
  will cease learning.
 
  So we can dispense with the complex steps of making a detailed copy of
 your
  brain and then have it transition into a degenerate state, and just skip
 to
  the final result.
 
  Step 1. Download, compile, and run autobliss 1.0 in a secure location with
 any
  4-bit logic function and positive reinforcement for both right and wrong
  

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-04 Thread Russell Wallace
On 11/4/07, Matt Mahoney [EMAIL PROTECTED] wrote:
 Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
 this goal; they just don't know it).  The problem is that you would forgo
 food, water, and sleep until you died (we assume, from animal experiments).

We have no need to assume: the experiment has been done with human
volunteers. They reported that the experience was indeed pleasurable -
but unlike animals, they could and did choose to stop pressing the
button.

(The rest, I'll leave to the would-be wireheads to argue about :))



Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Jiri Jelinek
On Nov 3, 2007 12:58 PM, Mike Dougherty [EMAIL PROTECTED] wrote:
 You are describing a very convoluted process of drug addiction.

The difference is that I have safety controls built into that scenario.

 If I can get you hooked on heroine or crack cocaine, I'm pretty confident
 that you will abandon your desire to produce AGI in order to get more
 of the drugs to which you are addicted.

Right. We are wired that way. Poor design.

 You mentioned in an earlier post that you expect to have this
 monstrous machine invade my world and 'offer' me these incredible
 benefits.  It sounds to me like you are taking the blue pill and
 living contentedly in the Matrix.

If the AGI that controls the Matrix sticks with the goal system
initially provided by the blue-pill party, then why would we want to
sacrifice the non-stop pleasure? Imagine you were periodically
unplugged to double-check that all goes well outside, over and over
again finding (after very-hard-to-do detailed investigation) that
things go much better than they would likely go if humans were in
charge. I bet your unplug attitude would relatively soon change to
something like "sh*t, not again!".

 If you are going to proselytize
 that view, I suggest better marketing.  The intellectual requirements
 to accept AGI-driven nirvana imply the rational thinking which
 precludes accepting it.

I'm primarily a developer, leaving most of the marketing stuff to
others ;-). What I'm trying to do here is to take a closer look at
the human goal system and investigate where it's likely to lead us. My
impression is that most of us have only a very shallow understanding of
what we really want. When messing with AGI, we had better know what we
really want.

Regards,
Jiri Jelinek



RE: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Edward W. Porter
I have skimmed many of the postings in this thread, and (although I have
not seen anyone say so) to a certain extent Jiri's position seems
somewhat similar to that in certain Eastern meditative traditions or
perhaps in certain Christian or other mystical Blind Faiths.

I am not a particularly good meditator, but when I am having trouble
sleeping, I often try to meditate.  There are moments when I have rushes
of pleasure from just breathing, and times when a clear empty mind is
calming and peaceful.

I think such times are valuable.  I, like most people, would like more
moments of bliss in my life.  But I guess I am too much of a product of my
upbringing and education to want only bliss. I like to create things and
ideas.

And besides, the notion of machines that could be trusted to run the world
for us while we seek to surf the endless rush and do nothing to help
support our own existence or that of the machines we would depend upon
strikes me as nothing more than wishful thinking.  The biggest truism about
altruism is that it has never been the dominant motivation in any system
that has ever had it, and there is no reason to believe that it could
continue to be in machines for any historically long period of time.
Survival of the fittest applies to machines as well as biological life
forms.

If bliss without intelligence is the goal of the machines you imagine
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss, why wouldn't they kill all the grooving humans and replace
them with grooving mice?  It would provide one hell of a lot more bliss
bang for the resource buck.

Ed Porter



Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-03 Thread Matt Mahoney
--- Edward W. Porter [EMAIL PROTECTED] wrote:
 If bliss without intelligence is the goal of the machines you imagine
 running the world, for the cost of supporting one human they could
 probably keep at least 100 mice in equal bliss, so if they were driven to
 maximize bliss, why wouldn't they kill all the grooving humans and replace
 them with grooving mice?  It would provide one hell of a lot more bliss
 bang for the resource buck.

Allow me to offer a less expensive approach.  Previously on the singularity
and sl4 mailing lists I posted a program that can feel pleasure and pain: a
2-input programmable logic gate trained by reinforcement learning.  You give it
an input, it responds, and you reward it.  In my latest version, I automated
the process.  You tell it which of the 16 logic functions you want it to learn
(AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
how much penalty for an incorrect output.  The program then generates random
2-bit inputs, evaluates the output, and applies the specified reward or
punishment.  The program runs until you kill it.  As it dies it reports its
life history (its age, what it learned, and how much pain and pleasure it
experienced since birth).

http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)

To put the program in an eternal state of bliss, specify two positive numbers,
so that it is rewarded no matter what it does.  It won't learn anything, but
at least it will feel good.  (You could also put it in continuous pain by
specifying two negative numbers, but I put in safeguards so that it will die
before experiencing too much pain).
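
For readers who don't want to fetch the source, here is a minimal, hypothetical
C++ sketch of the mechanism described above. It is not the actual autobliss.cpp
from the URL; the argument convention (a 4-bit truth table plus signed
reinforcement values for correct and incorrect outputs) follows the description,
while the pain cutoff, the age cap, and the weight-update rule are my own
assumptions.

  // sketch.cpp -- illustrative only, not Matt Mahoney's autobliss.cpp
  // A 2-input logic gate trained by reinforcement: random 2-bit inputs,
  // signed reinforcement for correct and incorrect outputs, and a cutoff
  // so it "dies" before accumulating too much pain.
  #include <cstdio>
  #include <cstdlib>
  #include <cstring>
  #include <ctime>

  int main(int argc, char** argv) {
    if (argc != 4 || std::strlen(argv[1]) != 4) {
      std::fprintf(stderr, "usage: %s <truth table, e.g. 0110> <r_correct> <r_wrong>\n", argv[0]);
      return 1;
    }
    const char* target = argv[1];           // desired function as a truth table (0110 = XOR)
    double r_correct = std::atof(argv[2]);  // reinforcement for a correct output
    double r_wrong = std::atof(argv[3]);    // reinforcement for an incorrect output
    double w[4] = {0, 0, 0, 0};             // one learned weight per input pair 00,01,10,11
    double pleasure = 0, pain = 0;
    long age = 0;
    std::srand((unsigned)std::time(0));

    while (pain < 100 && age < 1000000) {   // assumed limits; the real program runs until killed
      int i = std::rand() % 4;              // random 2-bit input
      int out = (w[i] >= 0) ? 1 : 0;        // the gate's output for this input
      double r = (out == target[i] - '0') ? r_correct : r_wrong;
      if (r >= 0) pleasure += r; else pain -= r;
      w[i] += r * (out ? 1.0 : -1.0);       // positive r reinforces the output just produced
      ++age;
    }
    std::printf("age %ld, learned ", age);  // life history: age, learned table, pleasure, pain
    for (int i = 0; i < 4; ++i) std::printf("%d", (w[i] >= 0) ? 1 : 0);
    std::printf(", pleasure %.1f, pain %.1f\n", pleasure, pain);
    return 0;
  }

With two positive arguments the weights just wander and nothing is reliably
learned, but no pain ever accumulates - the eternal-bliss configuration
described above.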

Two problems remain: uploading your mind to this program, and making sure
nobody kills you by turning off the computer or typing Ctrl-C.  I will address
only the first problem.

It is controversial whether technology can preserve your consciousness after
death.  If the brain is both conscious and computable, then Chalmers' fading
qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
computer simulation of your brain would also be conscious.

Whether you *become* this simulation is also controversial.  Logically there
are two of you with identical goals and memories.  If either one is killed,
then you are in the same state as you were before the copy was made.  This is
the same dilemma that Captain Kirk faces when he steps into the transporter to
be vaporized and have an identical copy assembled on the planet below.  It
doesn't seem to bother him.  Does it bother you that the atoms in your body
now are not the same atoms that made up your body a year ago?

Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
this goal; they just don't know it).  The problem is that you would forgo
food, water, and sleep until you died (we assume, from animal experiments). 
The solution is to upload to a computer where this could be done safely.

Normally an upload would have the same goals, memories, and sensory-motor I/O
as the original brain.  But consider the state of this program after self
activation of its reward signal.  No other goals are needed, so we can remove
them.  Since you no longer have the goal of learning, experiencing sensory
input, or controlling your environment, you won't mind if we replace your I/O
with a 2-bit input and a 1-bit output.  You are happy, no?

Finally, if your memories were changed, you would not be aware of it, right? 
How do you know that all of your memories were not written into your brain one
second ago and you were some other person before that?  So no harm is done if
we replace your memory with a vector of 4 real numbers.  That will be all you
need in your new environment.  In fact, you won't even need that because you
will cease learning.

So we can dispense with the complex steps of making a detailed copy of your
brain and then have it transition into a degenerate state, and just skip to
the final result.

Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
4-bit logic function and positive reinforcement for both right and wrong
answers, e.g.

  g++ autobliss.cpp -o autobliss.exe
  autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)

Step 2. Kill yourself.  Upload complete.



-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread BillK
On 11/2/07, Eliezer S. Yudkowsky wrote:
 I didn't ask whether it's possible.  I'm quite aware that it's
 possible.  I'm asking if this is what you want for yourself.  Not what
 you think that you ought to logically want, but what you really want.

 Is this what you lived for?  Is this the most that Jiri Jelinek wants
 to be, wants to aspire to?  Forget, for the moment, what you think is
 possible - if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?



Well, almost.
Absolute Power over others and being worshipped as a God would be neat as well.

Getting a dog is probably the nearest most humans can get to this.

BillK



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?


That's a very personal question, don't you think?

Even the parts I'm willing to answer have long answers.  It doesn't 
involve my turning into a black box with no outputs, though.  Nor 
ceasing to act, nor ceasing to plan, nor ceasing to steer my own 
future through my own understanding of it.  Nor being kept as a pet. 
I'd sooner be transported into a randomly selected anime.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 12:41:16PM -0400, Jiri Jelinek wrote:
 On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
 if you could have anything you wanted, is this the end you
 would wish for yourself, more than anything else?
 
 Yes. But don't forget I would also have AGI continuously looking into
 how to improve my (/our) way of perceiving the pleasure-like stuff.

This is a bizarre line of reasoning. One way that my AGI might improve 
my perception of pleasure is to make me dumber -- electroshock me -- 
so that I find Gilligan's Island reruns incredibly pleasurable. Or, 
I dunno, find that heroin addiction is a great way to live.

Or help me with fugue states: what is the sound of one hand clapping?
feed me zen koans till my head explodes.

But it might also decide that I should be smarter, so that I have a more
acute sense and discernment of pleasure. Make me smarter about roses,
so that I can enjoy my rose garden in a more refined way. And after I'm
smarter, perhaps I'll have a whole new idea of what pleasure is,
and what it takes to make me happy.

Personally, I'd opt for this last possibility.

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Linas Vepstas
On Fri, Nov 02, 2007 at 01:19:19AM -0400, Jiri Jelinek wrote:
 Or do we know anything better?

I sure do. But ask me again, when I'm smarter, and have had more time to
think about the question.

--linas



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
On Nov 2, 2007 2:14 AM, Eliezer S. Yudkowsky [EMAIL PROTECTED] wrote:
I'm asking if this is what you want for yourself.

Then you could read just the first word from my previous response: YES

if you could have anything you wanted, is this the end you
would wish for yourself, more than anything else?

Yes. But don't forget I would also have AGI continuously looking into
how to improve my (/our) way of perceiving the pleasure-like stuff.

And because I'm influenced by my mirror neurons and care about others,
expect my monster robot-savior to eventually break through your door,
grab you, and plug you into the pleasure grid. ;-)

Ok, seriously, what's the best possible future for mankind you can imagine?
In other words, where do we want our cool AGIs to get us? I mean
ultimately. What is it at the end as far as you can see?

Regards,
Jiri Jelinek



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

On Nov 2, 2007 4:54 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:


You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system is a much
more 'complex' thing and can also act as a goal in itself. You can say
that AGIs will be able to maximize satisfaction of the intelligent part
too,


Could you please provide one specific example of a human goal which
isn't feeling-based?


Saving your daughter's life.  Most mothers would rather save their 
daughter's life than feel that they saved their daughter's life. 
In proof of this, mothers sometimes sacrifice their lives to save 
their daughters and never get to feel the result.  Yes, this is 
rational, for there is no truth that destroys it.  And before you 
claim all those mothers were theists, there was an atheist police 
officer, signed up for cryonics, who ran into the World Trade Center 
and died on September 11th.  As Tyrone Pow once observed, for an 
atheist to sacrifice their life is a very profound gesture.


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Vladimir Nesov
Jiri,

You turn it into a tautology by mistaking 'goals' in general for
'feelings'. Feelings form one, somewhat significant at this point,
part of our goal system. But the intelligent part of the goal system is a much
more 'complex' thing and can also act as a goal in itself. You can say
that AGIs will be able to maximize satisfaction of the intelligent part
too, as they are 'vastly more intelligent', but then it turns into the
general 'they do what we want', which is essentially what Friendly AI is
by definition (ignoring specifics about what 'what we want' actually
means).





-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
Linas, BillK

It might currently be hard to accept for association-based human
minds, but things like roses, power-over-others, being worshiped
or loved are just a waste of time with indirect feeling triggers
(assuming the nearly unlimited ability to optimize).

Regards,
Jiri Jelinek





Re: [agi] Nirvana? Manyana? Never!

2007-11-02 Thread Jiri Jelinek
On Nov 2, 2007 2:35 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:
  Could you please provide one specific example of a human goal which
  isn't feeling-based?

 It depends on what you mean by 'based' and 'goal'. Does any choice
 qualify as a goal? For example, if I choose to write certain word in
 this e-mail, does a choice to write it form a goal of writing it?
 I can't track source of this goal, it happens subconsciously.

A choice to take a particular action generates a sub-goal (which might be
deep in the sub-goal chain). If you go up, asking "why?" at each
level, you eventually reach the feeling level where goals (not just
sub-goals) come from. In short, I'm writing these words because
I have reasons to believe that the discussion can in some way support
my and/or someone else's AGI R&D. I want to support it because I
believe AGI can significantly help us to avoid pain and get more
pleasure - which is basically what drives us [by design]. So when we
are 100% done, there will be no pain and extreme pleasure. Of
course I'm simplifying a bit, but what are the key objections?

 Saying just 'Friendly AI' seems to be
 sufficient to specify a goal for human researchers, but not enough to
 actually build one.

Just build an AGI that follows the given rules.

Regards,
Jiri Jelinek



Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
 Is this really what you *want*?
 Out of all the infinite possibilities, this is the world in which you
 would most want to live?

Yes, great feelings only (for as many people as possible) and the
engine being continuously improved by AGI which would also take care
of all related tasks including safety issues etc. The quality of our
life is in feelings. Or do we know anything better? We do what we do
for feelings and we alter them very indirectly. We can optimize and
get the greatest stuff allowed by the current design by direct
altering/stimulations (changes would be required so we can take it
non-stop). Whatever you enjoy, it's not really the thing you are
doing. It's the triggered feeling which can be obtained and
intensified more directly. We don't know exactly how those great
feelings (/qualia) work, but there is a number of chemicals and brain
regions known to play key roles.

Regards,
Jiri Jelinek






Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
 ED So is the envisioned world one in which people are on something
equivalent to a perpetual heroin or crystal meth rush?

Kind of, except it would be safe.

 If so, since most current humans wouldn't have much use for such people, I
 don't know why self-respecting productive human-level AGIs would either.

It would not be supposed to think that way. It does what it's tasked
to do (no matter how smart it is).

 And, if humans had no goals or never thought about intelligence or problems,
 there is no hope they would ever be able to defend themselves from the
 machines.

Our machines would work for us and do everything much better, so there
would be no reason for us to do anything.

 I think it is important to keep people in the loop and substantially in
 control for as long as possible,

My initial thought was the same, but if we have narrow-AI safety tools
doing a better job in that area for a *very* *very* long time, we will
get convinced that there is simply no need for us to be directly
involved.

 at least until we make a transhumanist transition.
 I think it is important that most people have some sort of
 work, even if it is only in helping raise children, taking care of the old,
 governing society, and managing machines.

My thought was about a very distant [potential] future. The world will change
drastically. There will be no [desire for] children and no elderly (we
will live forever). Our cells are currently programmed to die - that
code will be rewritten if we stick with cells. The meaning of the term
"society" will change, and at a certain stage we will IMO not care about
any concept you can name today. But we'd better spend more time
trying to figure out how to design the first powerful AGI at this
stage, plus how to keep extending our lives so WE can make it to those
fairy-tale future worlds.

 Freud said work of some sort was
 important, and a lot of people think he was right.

It will be valid for a while :-)

 Even as humans increasingly become more machine through intelligence
 augmentation, we well have problems.  Even if the machines totally take over
 they will have problems.  Shit happens -- even to machines.

Right, but they will be better shit-fighters.

 So I think having more pleasure is good, but trying to have so much pleasure
 that you have no goals, no concern for intelligence, and never think of
 problems is a recipe for certain extinction.

Let's go to an extreme: Imagine being an immortal idiot... No matter
what you do and how hard you try, the others will always be so much
better at everything that you will eventually become totally
discouraged or even afraid to touch anything because it would just
always demonstrate your relative stupidity (/limitations) in some way.
What a life. Suddenly, there is this amazing pleasure machine as a new
god-like style of living for poor creatures like you. What do you do?

Regards,
Jiri Jelinek


 You know, survival of the
 fittest and all that other boring rot that just happens to dominate reality.

 Nirvana? Manyana? Never!

 Of course, all this is IMHO.

 Ed Porter

 P.S. If you ever make one of your groove machines, you could make billions
 with it. 


Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:


Let's go to an extreme: Imagine being an immortal idiot... No matter
what you do and how hard you try, the others will always be so much
better at everything that you will eventually become totally
discouraged or even afraid to touch anything because it would just
always demonstrate your relative stupidity (/limitations) in some way.
What a life. Suddenly, there is this amazing pleasure machine as a new
god-like style of living for poor creatures like you. What do you do?


Jiri,

Is this really what you *want*?

Out of all the infinite possibilities, this is the world in which you 
would most want to live?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence



Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Stefan Pernar
On Nov 2, 2007 1:19 PM, Jiri Jelinek [EMAIL PROTECTED] wrote:

  Is this really what you *want*?
  Out of all the infinite possibilities, this is the world in which you
  would most want to live?

 Yes, great feelings only (for as many people as possible) and the
 engine being continuously improved by AGI which would also take care
 of all related tasks including safety issues etc. The quality of our
 life is in feelings. Or do we know anything better? We do what we do
 for feelings and we alter them very indirectly. We can optimize and
 get the greatest stuff allowed by the current design by direct
 altering/stimulations (changes would be required so we can take it
 non-stop). Whatever you enjoy, it's not really the thing you are
 doing. It's the triggered feeling which can be obtained and
 intensified more directly. We don't know exactly how those great
 feelings (/qualia) work, but there is a number of chemicals and brain
 regions known to play key roles.


Your feelings form a guide that has evolved in the course of natural
selection to reward you for doing things that increase your fitness and
punish you for things that decrease your fitness. If you abuse this
mechanism by merely pretending that you are increasing your fitness in the
form of releasing the appropriate chemicals in your brain, then you are hurting
yourself by closing your eyes to reality. This is bad because you
effectively deny yourself the potential for further increasing your fitness
and thereby will eventually be replaced by an agent that does concern itself
with increasing its fitness.

In short: your bliss won't last long.

-- 
Stefan Pernar
3-E-101 Silver Maple Garden
#6 Cai Hong Road, Da Shan Zi
Chao Yang District
100015 Beijing
P.R. CHINA
Mobil: +86 1391 009 1931
Skype: Stefan.Pernar


Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Jiri Jelinek
Stefan,

 closing your eyes to reality. This is bad because you
 effectively deny yourself the potential for further increasing your fitness

I'm closing my eyes, but my AGI - which is an extension of my
intelligence (/me) - does not. In fact, it opens them more than I could.
We and our AGI should be viewed as a whole in this respect.

Regards,
Jiri Jelinek




Re: [agi] Nirvana? Manyana? Never!

2007-11-01 Thread Eliezer S. Yudkowsky

Jiri Jelinek wrote:

Is this really what you *want*?
Out of all the infinite possibilities, this is the world in which you
would most want to live?


Yes, great feelings only (for as many people as possible) and the
engine being continuously improved by AGI which would also take care
of all related tasks including safety issues etc. The quality of our
life is in feelings. Or do we know anything better? We do what we do
for feelings and we alter them very indirectly. We can optimize and
get the greatest stuff allowed by the current design by direct
altering/stimulations (changes would be required so we can take it
non-stop). Whatever you enjoy, it's not really the thing you are
doing. It's the triggered feeling which can be obtained and
intensified more directly. We don't know exactly how those great
feelings (/qualia) work, but there is a number of chemicals and brain
regions known to play key roles.


I didn't ask whether it's possible.  I'm quite aware that it's 
possible.  I'm asking if this is what you want for yourself.  Not what 
you think that you ought to logically want, but what you really want.


Is this what you lived for?  Is this the most that Jiri Jelinek wants 
to be, wants to aspire to?  Forget, for the moment, what you think is 
possible - if you could have anything you wanted, is this the end you 
would wish for yourself, more than anything else?


--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence
