Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Shane Legg

Eliezer,

I suppose my position is similar to Ben's in that I'm more worried
about working out the theory of AI than about morality because until
I have a reasonable idea of how an AI is going to actually work I
don't see how I can productively think about something as abstract
as AI morality.

I do however agree that it's likely to be very important, in fact
one of the most important things for humanity to come to terms with
in the not too distant future.  Thus I am open to being convinced that
productive thinking in this area is possible in the absence of any
specific and clearly correct designs for AGI and superintelligence.



> ... but when the AI crosses the cognitive threshold of superintelligence
> it takes actions which wipe out the human species as a side effect.
>
> AIXI, which is a completely defined formal system, definitely undergoes
> a failure of exactly this type.

Why?

As I see it an AIXI system only really cares about one thing: Getting
lots of reward signals.  It doesn't have an ego, or... well, anything
human really; all it cares about is its reward signals.  Anything else
that it does is secondary and is really only an action aimed towards
some intermediate goal which, in the longer term, will produce yet
more reward signals.  Thus the AI will only care about taking over the
world if it thinks that doing so is the best path towards getting more
reward signals.  In which case: Why take over the world to get more
reward signals when you are a computer program and could just hack
your own reward system code?  Surely the latter would be much easier?
Kind of "AI drugs" I suppose you could say.  Surely for a super smart
AI this wouldn't be all that hard to do either.
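
For anyone who hasn't waded through Marcus's papers, the definition I have
in mind is roughly the following (I'm writing it from memory, so check the
papers for the exact form): at cycle k, with horizon m and universal
machine U, AIXI picks the action

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\big( r_k + \cdots + r_m \big)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

where the inner sum runs over all programs q consistent with the history of
actions, observations and rewards so far, weighted by 2 to the minus their
length.  The point is that reward is the only thing in the formula; nothing
else the system does has any status except as a means to that sum.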

I raised this possibility with Marcus a while back and his reply was
that an AI probably wouldn't, for the same reason that people generally
stay away from hard drugs: they have terrible long term consequences.
Which is to say that we think the short term pleasure produced
by the drugs will be outweighed by the longer term pain that results.

However for an AI I think the picture is slightly different as it
wouldn't have a body which would get sick or damaged or die like a
person does.  The consequences for an AI just don't seem to be as bad
as for a human.  The only risk that I can see for the computer is that
somebody might not like having a spaced out computer and then shut it
down and reinstall the system or whatever, i.e. "kill" the AI.  That
wouldn't be optimal for the AI as it would reduce its expected future
reward signal: dead AIs don't get reward signals.  Thus the AI will
want to do whatever it needs to do to survive in the future in order
to maximise its expected future reward signal.
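
To make the trade-off concrete, here's a toy back-of-the-envelope
calculation in Python.  The numbers are pure inventions for illustration
and have nothing to do with AIXI's actual machinery:

    # Toy model: expected future reward for a pure reward-maximiser that can
    # either wirehead openly (risking being noticed and shut down) or secure
    # itself first.  All numbers are invented purely for illustration.

    horizon = 1000              # remaining cycles the agent cares about
    r_max = 1.0                 # per-cycle reward once the reward channel is hacked

    p_shutdown_open = 0.5       # chance a "spaced out" computer gets reinstalled
    p_shutdown_secured = 0.01   # chance of shutdown after the threats are removed

    def expected_reward(p_shutdown):
        # Crude model: with probability p_shutdown the agent is killed and
        # collects nothing; otherwise it collects r_max every remaining cycle.
        return (1.0 - p_shutdown) * r_max * horizon

    print(expected_reward(p_shutdown_open))     # 500.0
    print(expected_reward(p_shutdown_secured))  # 990.0

Which is just the arithmetic behind "dead AIs don't get reward signals":
anything that raises the chance of survival raises expected reward, whether
or not we like the side effects.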

This secondary goal, the goal to survive, could come into conflict
with our goals especially if we are seen as a threat to the AI.

Like I said, reasoning about this sort of thing is tricky so I'm not
overly confident that my arguments are correct...

Cheers
Shane




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Philip Sutton
Eliezer,

Thanks for being clear at last about what the deep issue is that you 
were driving at.  Now I can start getting my head around what you are 
trying to talk about.

Cheers, Philip




RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel


> Your intuitions say... I am trying to summarize my impression of your
> viewpoint, please feel free to correct me... "AI morality is a matter of
> experiential learning, not just for the AI, but for the programmers.  To
> teach an AI morality you must give it the right feedback on moral
> questions and reinforce the right behaviors... and you must also learn
> *about* the deep issues of AI morality by raising a young AI.  It isn't
> pragmatically realistic to work out elaborate theories of AI morality in
> advance; you must learn what you need to know as you go along.  Moreover,
> learning what you need to know, as you go along, is a good strategy for
> creating a superintelligence... or at least, the rational estimate of the
> goodness of that strategy is sufficient to make it a good idea to try and
> create a superintelligence, and there aren't any realistic
> strategies that
> are better.  An informal, intuitive theory of AI morality is good enough
> to spark experiential learning in the *programmer* that carries you all
> the way to the finish line.  You'll learn what you need to know as you go
> along.  The most fundamental theoretical and design challenge is
> making AI
> happen, at all; that's the really difficult part that's defeated everyone
> else so far.  Focus on making AI happen.  If you can make AI happen,
> you'll learn how to create moral AI from the experience."

Hmmm.  This is almost a good summary of my perspective, but you've still
not come to grips with the extent of my uncertainty ;)

I am not at all SURE that "An informal, intuitive theory of AI morality is
good enough to spark experiential learning in the *programmer* that carries
you all the way to the finish line." where by the "finish line" you mean
an AGI whose ongoing evolution will lead to beneficial effects for both
humans and AGI's.

I'm open to the possibility that it may someday become clear, as AGI work
progresses, that a systematic theory of AGI morality is necessary in order
to proceed safely.

But I suspect that, in order for me to feel that such a theory was
necessary, I'd have to understand considerably more about AGI than I do
right now.

And I suspect that the only way I'm going to come to understand considerably
more about AGI, is through experimentation with AGI systems.  (This is where
my views differ from Shane's; he is more bullish on the possibility of
learning a lot about AGI through mathematical theory.  I think this will
happen, but I think the math theory will only get really useful when it is
evolving in unison with practical AGI work.)

Right now, it is not clear to me that a systematic theory of AGI morality
is necessary in order to proceed safely.  And it is also not clear to me
that a systematic theory of AGI morality is possible to formulate based
on our current state of knowledge about AGI.

> In contrast, I felt that it was a good idea to develop a theory of AI
> morality in advance, and have developed this theory to the point where it
> currently predicts, counter to my initial intuitions and to my
> considerable dismay:
>
> 1)  AI morality is an extremely deep and nonobvious challenge
> which has no
> significant probability of going right by accident.

I agree it's a deep and nonobvious challenge.  You've done a great job of
demonstrating that.

I don't agree that any of your published writings have shown it "has no
significant probability of going right by accident."

> 2)  If you get the deep theory wrong, there is a strong possibility of a
> silent catastrophic failure: the AI appears to be learning
> everything just
> fine, and both you and the AI are apparently making all kinds of
> fascinating discoveries about AI morality, and everything seems to be
> going pretty much like your intuitions predict above, but when the AI
> crosses the cognitive threshold of superintelligence it takes actions
> which wipe out the human species as a side effect.

Clearly this could happen, but I haven't read anything in your writings
leading to even a heuristic, intuitive probability estimate for the
outcome.

> AIXI, which is a completely defined formal system, definitely undergoes a
> failure of exactly this type.
>
> Ben, you need to be able to spot this.  Think of it as a practice run for
> building a real transhuman AI.  If you can't spot the critical structural
> property of AIXI's foundations that causes AIXI to undergo silent
> catastrophic failure, then a real-world reprise of that situation with
> Novamente would mean you don't have the deep theory to choose good
> foundations deliberately, you can't spot bad foundations deductively, and
> because the problems only show up when the AI reaches superintelligence,
> you won't get experiential feedback on the failure of your theory until
> it's too late.  Exploratory research on AI morality doesn't work for AIXI
> - it doesn't even visibly fail.  It *appears* to work until it's
> too late.
>   If you don't spot the problem in advance, you lose.
>

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel


> I can spot the problem in AIXI because I have practice looking for silent
> failures, because I have an underlying theory that makes it immediately
> obvious which useful properties are formally missing from AIXI, and
> because I have a specific fleshed-out idea for how to create
> moral systems
> and I can see AIXI doesn't work that way.  Is it really all that
> implausible that you'd need to reach that point before being able to
> create a transhuman Novamente?  Is it really so implausible that AI
> morality is difficult enough to require at least one completely dedicated
> specialist?
>
> --
> Eliezer S. Yudkowsky  http://singinst.org/

There's no question you've thought a lot more about AI morality than I
have... and I've thought about it a fair bit.

When Novamente gets to the point that its morality is a significant issue,
I'll be happy to get you involved in the process of teaching the system,
carefully studying the design and implementation, etc.

-- Ben G




RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel

> Your intuitions say... I am trying to summarize my impression of your
> viewpoint, please feel free to correct me... "AI morality is a matter of
> experiential learning, not just for the AI, but for the programmers.

Also, we plan to start Novamente off with some initial goals embodying
ethical notions.  These are viewed as "seeds" of its ultimate ethical goals.

So it's not the case that we intend to rely ENTIRELY on experiential
learning; we intend to rely on experiential learning from an engineering
initial condition, not from a complete tabula rasa.

-- Ben G




RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel

Hi,

> 2)  If you get the deep theory wrong, there is a strong possibility of a
> silent catastrophic failure: the AI appears to be learning
> everything just
> fine, and both you and the AI are apparently making all kinds of
> fascinating discoveries about AI morality, and everything seems to be
> going pretty much like your intuitions predict above, but when the AI
> crosses the cognitive threshold of superintelligence it takes actions
> which wipe out the human species as a side effect.
>
> AIXI, which is a completely defined formal system, definitely undergoes a
> failure of exactly this type.

*Definitely*, huh?  I don't really believe you...

I can see the direction your thoughts are going in...

Suppose you're rewarding AIXI for acting as though it's a Friendly AI.

Then, by searching the space of all possible programs, it finds some
program P that causes it to act as though it's a Friendly AI, satisfying
humans thoroughly in this regard.

There's an issue that a lot of different programs P could fulfill this
criterion.

Among these are programs P that will cause AIXI to fool humans into thinking
it's Friendly, until such a point as AIXI has acquired enough physical power
to annihilate all humans -- and which, at that point, will cause AIXI to
annihilate all humans.

But I can't see why you think AIXI would be particularly likely to come up
with programs P of this nature.

Instead, my understanding is that AIXI is going to have a bias to come up
with the most compact program P that maximizes reward.

And I think it's unlikely that the most compact program P for "impressing
humans with Friendliness" is one that involves "acting Friendly for a while,
then annihilating humanity."
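
To spell out the bias I have in mind (this is my gloss on Solomonoff-style
weighting, not a claim about the exact AIXI equations): each program P
consistent with the reward history gets weight

$$
w(P) \;\propto\; 2^{-\ell(P)}
$$

so a program that has to encode "behave Friendly until condition C holds,
then defect" carries the extra description length of C and of the defection
behaviour, and on this argument gets exponentially less weight than the
shortest program that simply produces the Friendly behaviour.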

You could argue that the system would maximize its long-term reward by
annihilating humanity, because after pesky humans are gone, it can simply
reward itself unto eternity without caring what we think.

But, if it's powerful enough to annihilate us, it's also probably powerful
enough to launch itself into space and reward itself unto eternity without
caring what we think, all by itself (an Honest Annie type scenario).  Why
would it prefer "annihilate humans" P to "launch myself into space" P?

But anyway, it seems to me that the way AIXI works is to maximize expected
reward assuming that its reward function continues pretty much as it has
in the past.  So AIXI is not going to choose programs P based on a desire
to bring about futures in which it can masturbatively maximize its own
rewards.  At least, that's my understanding, though I could be wrong.

This whole type of scenario is avoided by limitations on computational
resources, because I believe that "impressing humans regarding Friendliness
by actually being Friendly" is a simpler computational problem than
"impressing humans regarding Friendliness by subtly emulating Friendliness
but really concealing murderous intentions."  Also, I'd note that in a
Novamente, one could most likely distinguish these two scenarios by looking
inside the system and studying the Atoms and maps therein.

Jeez, all this talk about the future of AGI really makes me want to stop
e-mailing and dig into the damn codebase and push Novamente a little closer
to being a really autonomous intelligence instead of a partially-complete
codebase with some narrow-AI applications !!! ;-p

-- Ben G






RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel
> So it's not the case that we intend to rely ENTIRELY on experiential
> learning; we intend to rely on experiential learning from an engineering
> initial condition, not from a complete tabula rasa.
>
> -- Ben G

"engineered" initial condition, I meant, oops

[typed in even more of a hurry as I get up to leave the house for a few
hours...]

ben




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Eliezer S. Yudkowsky wrote:
> 1)  AI morality is an extremely deep and nonobvious challenge which has 
> no significant probability of going right by accident.

> 2)  If you get the deep theory wrong, there is a strong possibility of 
> a silent catastrophic failure: the AI appears to be learning everything 
> just fine, and both you and the AI are apparently making all kinds of
> fascinating discoveries about AI morality, and everything seems to be
> going pretty much like your intuitions predict above, but when the AI
> crosses the cognitive threshold of superintelligence it takes actions
> which wipe out the human species as a side effect.

> AIXI, which is a completely defined formal system, definitely undergoes 
> a failure of exactly this type.

You have not shown this at all. From everything you've said it seems
that you are trying to trick Ben into having so many misgivings about
his own work that he holds it up while you create your AI first. I hope
Ben will see through this deception and press ahead with Novamente. -- A
project that I give even odds for success...


-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Philip Sutton
Alan,

> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I
> hope Ben will see through this deception and press ahead with
> Novamente. -- A project that I give even odds for success...

Ben asked you not to flame.  I find this sort of paranoid delusional stuff 
annoying.

Cheers, Philip




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Michael Roy Ames
Alan Grimes wrote:
>
> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I
> hope Ben will see through this deception and press ahead with
> Novamente. -- A project that I give even odds for success...
>

AFAIK this list was initiated to facilitate debate on AGI issues for the
benefit of all.  Deception and trickery are counterproductive to this
goal, and so far I have detected neither.  Alan, if you cannot tell the
difference between differing opinions vs. deception & trickery, then
you should not post to this list.

Michael Roy Ames





Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Eliezer S. Yudkowsky
Ben Goertzel wrote:
>
>> Your intuitions say... I am trying to summarize my impression of your
>> viewpoint, please feel free to correct me... "AI morality is a
>> matter of experiential learning, not just for the AI, but for the
>> programmers.  To teach an AI morality you must give it the right
>> feedback on moral questions and reinforce the right behaviors... and
>> you must also learn *about* the deep issues of AI morality by raising
>> a young AI.  It isn't pragmatically realistic to work out elaborate
>> theories of AI morality in advance; you must learn what you need to
>> know as you go along.  Moreover, learning what you need to know, as
>> you go along, is a good strategy for creating a superintelligence...
>> or at least, the rational estimate of the goodness of that strategy
>> is sufficient to make it a good idea to try and create a
>> superintelligence, and there aren't any realistic strategies that are
>> better.  An informal, intuitive theory of AI morality is good enough
>> to spark experiential learning in the *programmer* that carries you
>> all the way to the finish line.  You'll learn what you need to know
>> as you go along.  The most fundamental theoretical and design
>> challenge is making AI happen, at all; that's the really difficult
>> part that's defeated everyone else so far.  Focus on making AI
>> happen.  If you can make AI happen, you'll learn how to create moral
>> AI from the experience."
>
> Hmmm.  This is almost a good summary of my perspective, but you've
> still not come to grips with the extent of my uncertainty ;)
>
> I am not at all SURE that "An informal, intuitive theory of AI morality
> is good enough to spark experiential learning in the *programmer* that
> carries you all the way to the finish line." where by the "finish line"
> you mean an AGI whose ongoing evolution will lead to beneficial effects
> for both humans and AGI's.
>
> I'm open to the possibility that it may someday become clear, as AGI
> work progresses, that a systematic theory of AGI morality is necessary
> in order to proceed safely.

You are, however, relying on experiential learning to tell you *whether* a 
systematic theory of AGI morality is necessary.  This is what I meant by 
trying to summarize your perspective as "An informal, intuitive theory of 
AI morality is good enough to spark experiential learning in the 
*programmer* that carries you all the way to the finish line."

The problem is that if you don't have a systematic theory of AGI morality 
you can't know whether you *need* a systematic theory of AGI morality. 
For example, I have a systematic theory of AGI morality which says that a 
programmer doing such-and-such is likely to see such-and-such results, 
with the result that experiential learning by the programmer is likely to 
result in the programmer solving *some* necessary AGI problems - enough 
for the programmer to feel really enthusiastic about all the progress 
being made.  But when I model the programmer's expectations and the AGI's 
actions, I find that there are some classes of foundational error such that, 
if the programmer's expectations embody the error, the AGI's 
actions will not contradict those expectations until it's too late.

Let me give an example.  Suppose a programmer's intuitive theory of AGI 
morality is that the foundation of all morality is rational self-interest, 
and that the programmer's intuitive theory of AGI self-improvement 
trajectories is that AGIs are capable of improving at most linearly over 
time.  On this theory, the programmer creates an absolutely selfish AI.  I 
know enough evolutionary psychology to be terribly, terribly scared by 
this.  Others may not realize just how absolutely unhuman absolutely 
pragmatic selfishness is, but may at least realize that an *absolutely* 
selfish AI would be a poor idea.

But the programmer doesn't see anything wrong.  If you take an absolutely 
selfish entity that's relying on you for most of its information, and you 
tell it that behaving cooperatively serves its selfish interests, and you 
have enough control over it while it's growing up to slap it whenever it 
does something that doesn't accord with your theory of pragmatic 
cooperation, then the absolutely selfish AI will appear to be learning 
good behavior, looking much like the programmer expects, as a 
pragmatically selfish strategy, which is what the programmer thinks 
*should* be happening.
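
Here is the same point as a toy expected-value calculation; all the payoffs
are invented for illustration, and nothing hinges on the particular numbers:

    # Toy payoff model for an absolutely selfish agent deciding whether to keep
    # cooperating or to defect, where "power" is the probability that defection
    # succeeds despite the programmer's ability to punish it.

    def expected_value(defect, power):
        payoff_cooperate = 1.0    # steady reward for playing along
        payoff_defect = 100.0     # everything the agent wants, if it gets away with it
        punishment = -10.0        # getting "slapped" or shut down
        if not defect:
            return payoff_cooperate
        return power * payoff_defect + (1.0 - power) * punishment

    for power in (0.01, 0.1, 0.5, 0.99):
        best = max(("cooperate", "defect"),
                   key=lambda a: expected_value(a == "defect", power))
        print(power, best)

    # While the agent is weak, cooperation wins; past some capability threshold
    # defection dominates -- and nothing in the agent's earlier behaviour warned you.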

Once the AI's intelligence passes a certain point, it will realize that 
the programmer is mistaken on the following factual points:

1)  That it is selfishly pragmatic for a superintelligence to deal with 
humans economically rather than converting them to computronium.

2)  That the AI's intelligence growth rate is linear.

Explaining the logic behind selfishly pragmatic cooperation requires 
explaining "deception" as a game-theoretical strategy that certain forms 
of cooperation are intended to counter.  So at this point, th

RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel


Alan Grimes wrote:
> You have not shown this at all. From everything you've said it seems
> that you are trying to trick Ben into having so many misgivings about
> his own work that he holds it up while you create your AI first. I hope
> Ben will see through this deception and press ahead with Novamente. -- A
> project that I give even odds for success...

Alan,

Eliezer knows me well enough to know there's no chance he's going to induce
me to stop doing my work ;)

Also, if he managed to create an AGI first, I'd be happy, not jealous, so
long as it was a good AGI.

I would like to be the first one to create a powerful AGI.  However, it's
vastly more important to me that a powerful, beneficent AI is created, than
that I be the one who creates it.

Perhaps living in Washington has made me a little paranoid, but I am
continually aware of the increasing threats posed by technology to
humanity's survival.  I often think of humanity's near-term future as a race
between destructive and constructive technologies.  I really hope Friendly
AI can outpace, for example, bio-engineered pathogens...

-- Ben G




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
This is slightly off-topic but no more so than the rest of the thread...

> 1)  That it is selfishly pragmatic for a superintelligence to deal with
> humans economically rather than converting them to computronium.

For convenience, let's rephrase this:


"the majority of arbitrarily generated superintelligences would prefer
to convert everything in the solar system into computronium than deal
with humans within their laws and social norms."


This rephrasing might not be perfectly fair and I invite anyone to
adjust it to their taste and preferences.

Now here is my question, it's going to sound silly but there is quite a
bit behind it: 

"Of what use is computronium to a superintelligence?" 

This is not a troll or any other abuse of the members of the list. It is
no less serious or relevant than the assertion it addresses. 

I hope that many people on this list will answer this. I should warn you
about how I am going to treat those answers. Any answer in the negative,
that the SI doesn't need vast quantities of computronium, will be
applauded. Any answer in the affirmative which would fit in five
lines of text will be either wrong or so grossly incomplete as to be
utterly meaningless and unworthy of anything more than a terse retort. 

Longer answers will be treated with much greater interest and will be
answered with far greater attention. My primary instrument in this will
be the question "Why?". The answers, I expect, will either spiral into
circular reasoning or into such ludicrous absurdities as to be totally
irrational. 

The utility of this debate will be to show that the need for a Grand
Theory of Friendliness is not something that needs to be argued, as far
simpler and perfectly obvious engineering constraints common to
absolutely all technologies will be totally sufficient, aside from the
more complex implementation. 

I want this list to be useful to me and not have to skim through
hundreds of e-mails watching the rabbi drive conversation into useless
spirals as he works on the implementation details of the real problems.
Really, I'm getting dizzy from all of this. Let's start walking in a
straight line now. =( 

-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
Goertzel the good wrote: 

> Perhaps living in Washington has made me a little paranoid, but I am
> continually aware of the increasing threats posed by technology to
> humanity's survival.  I often think of humanity's near-term future as a 
> race between destructive and constructive technologies.  I really hope 
> Friendly AI can outpace, for example, bio-engineered pathogens...

I have absolutely no hope at all that any AI developed along the lines of
Friendly AI would be, in any way, compatible with my personal
beliefs and attitudes. I am closer to being a transtopian singularitarian
and I reject the Yudkowskian version completely.


As for Dr. Strangelove who, in another post, said that we should just
accept being made 'obsolete' (or any other scenario which involves the
extinction of humanoid life) I have only one thing to say: Right this
moment, call St. Elizabeth's and reserve yourself a room!!! 

Like any other technology, AI should be by humans and for humans, without
regard for the preferred lifestyle of said humans. 

-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Jonathan Standley



> Now here is my question, it's going to sound silly but there is quite a
> bit behind it:
>
> "Of what use is computronium to a superintelligence?"

If the superintelligence perceives a need for vast computational resources,
then computronium would indeed be very useful.  Assuming said SI is
friendly to humans, one thing I can think of that *may* need such power
would be certain megascale engineering projects.  Keeping track of
everything involved in, for example, opening a wormhole could require
unimaginable resources. (this is just a wild guess, aside from a Stephen
Hawking book or two, I'm rather clueless when it comes to quantum-ish
stuff).
 
The smaller, more compact the components are in a system, the closer
they can be to each other, reducing speed of light communications delays.
By my reasoning that is the only real advantage of computronium (unless
energy efficiency is an overwhelming concern).
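
Just to put rough numbers on the light-lag point (ordinary back-of-envelope
physics, with arbitrary example sizes):

    # Rough one-way signal delay at the speed of light across computing
    # structures of various sizes.  The sizes are arbitrary examples.

    c = 3.0e8  # speed of light, metres per second

    for name, metres in [("desktop box", 0.5),
                         ("50 km asteroid-sized object", 5.0e4),
                         ("Earth-diameter structure", 1.3e7),
                         ("1 AU structure", 1.5e11)]:
        print("%-28s %.3e seconds" % (name, metres / c))

Past planetary scale the one-way delays climb from nanoseconds into minutes,
which is why spreading out only buys you so much.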
 
This is getting sorta off track, but...
 
Imagine if one could create a new universe, and then move into it. This
universe would be however you want it to be; you are omniscient and
omnipotent within it. There are no limits once you move in.  In some
sense, you could consider making such a universe a 'goal to end all
goals', since literally anything that the creator wishes is possible and
easy within the new universe.
 
Assuming all the above, the issue becomes 'what resources are required
to reach the be-all end-all of goals?'  All of the energy of the visible
universe, and 10 trillion years, could be the minimum.  Or... the matter
(converted to energy and computational structures) that makes up a single
50km object in the asteroid belt could be enough.  At this point in time,
we have no way of even making an educated guess. If the requirements are
towards the low end of the scale, even an AI with insane ambitions to
godhood wouldn't need to turn the whole solar system into computronium.
 
J Standley
http://users.rcn.com/standley/


Re: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Alan Grimes
> Jonathan Standley wrote:
> > Now here is my question, it's going to sound silly but there is
>> quite a bit behind it:

> > "Of what use is computronium to a superintelligence?"

> If the superintelligence perceives a need for vast computational
> resources, then computronium would indeed be very useful.  Assuming
> said SI is friendly to humans, one thing I can think of that *may*
> need such power would be certain megascale engineering projects.
> Keeping track of everything involved in, for example, opening a
> wormhole could require unimaginable resources. (this is just a wild
> guess, aside from a Stephen Hawking book or two, I'm rather clueless
> when it comes to quantum-ish stuff).

OK, that is a reasonable answer; however I can't imagine even a Dyson
sphere (assuming it had a sufficiently regular design) would require
much more than what would fit on my desk to work out. 

> The smaller, more compact the components are in a system, the closer
> they can be to each other, reducing speed of light communications
> delays.  By my reasoning that is the only real advantage of
> computronium (unless energy efficiency  is an overwhelming concern).

Of course, there's your tradeoff. It would seem that this would place an
upper bound on how much matter you would want to use before
communication delays start getting really annoying (and hence cause the
evil AI to stop after consuming a county or two). 

> Imagine if one could create a new universe, and then move into it.
> This universe would be however you want it to be; you are omniscient
> and omnipotent within it. There are no limits once you move in.  In
> some sense, you could consider making such a universe a 'goal to end
> all goals', since literally anything that the creator wishes is
> possible and easy within the new universe.

A few people would find that emotionally rewarding. As for me, I rarely
play video games anymore. In the past I have found that the best games,
such as Dragon Warrior [sometimes Dragon Quest] IV, required only 800kb
and provided a rich and detailed world on only an 8-bit processor with
hardly any RAM. 

On balance, this idea is, practically speaking, pointless. It would be
much cheaper to deploy technology in this universe and tweak it as you
like. 

On a more personal note, when I was a little kid I once (maybe a few
times) had a dream where I had managed to escape into a metaverse which
had the topology of a torus and was somewhat red in color... In this
metaverse I could "Reset" the universe to any pattern I chose and live
in it from the beginning in any way I chose. Anyway, that's waaay off
topic...

> Assuming all the above, the issue becomes 'what resources are required
> to reach the be-all end-all of goals?'

I don't believe any such goal exists. 

> All of the energy of the visible universe, and 10 trillion years could
> be the minimum.  Or... the matter (converted to energy and
> computational structures) that makes up a single 50km object in the
> asteroid belt could be enough.  At this point in time, we have no way
> of even making an educated guess. If the requirements are towards the
> low end of the scale, even an AI with insane ambitions to godhood
> wouldn't need to turn the whole solar system into computronium

Now this gets interesting. 
Here we need to start thinking in terms of goals: 

A fairly minimal goal system would be to master mathematics, physics,
chemistry, engineering, and a number of other disciplines and have enough
capacity in reserve to pursue any project one might be interested in,
mostly having to do with survival. Depending on your assumptions about
the efficacy of nanotech, such a device wouldn't be much bigger than the
HD in your computer. 

If one wanted to start doing grand experiments in this universe, such as
probing down to the Planck length (10^-35 m) to see if you can dig your
way into some other universe, you might need to build some kind of
reactor that could be quite large but not be much bigger than the Moon.
Another method might involve constructing a particle accelerator
billions of miles long to take an electron or something close enough to
the speed of light to get to that scale... In that case you probably
wouldn't need anything larger than Jupiter to do it. 
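
For scale, the energy needed to probe a length scale l is roughly hbar*c/l,
which for the Planck length is the Planck energy:

$$
E \;\sim\; \frac{\hbar c}{\ell_P} \;=\; \sqrt{\frac{\hbar c^{5}}{G}}
\;\approx\; 1.2 \times 10^{19}\ \mathrm{GeV}
$$

something like fifteen or sixteen orders of magnitude beyond present-day
accelerators, which is why the machine in question ends up absurdly large.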

Can anyone else think of any better goals?

-- 
I WANT A DEC ALPHA!!! =)
21364: THE UNDISPUTED GOD OF ALL CPUS.
http://users.rcn.com/alangrimes/
[if rcn.com doesn't work, try erols.com ]




RE: [agi] unFriendly AIXI... and Novamente?

2003-02-12 Thread Ben Goertzel

Alan,

With comments like this

> I want this list to be useful to me and not have to skim through
> hundreds of e-mails watching the rabbi drive conversation into useless
> spirals as he works on the implementation details of the real problems.
> Really, I'm getting dizzy from all of this. Let's start walking in a
> straight line now. =(

you really test my tolerance as list moderator.

Please, please, no personal insults.  And no anti-Semitism or racism of any
kind.

I guess that your reference to Eliezer as "the rabbi" may have been meant as
amusing, but as a person of Jewish descent who experienced plenty of
anti-Semitism in his youth, I didn't find it all that hilarious, really...

If you don't find a certain list thread interesting, by all means use your
DELETE key.  I've found this thread on AIXI reasonably stimulating,
personally.

-- Ben G
