On Mon, Mar 10, 2008 at 3:04 AM, Mark Waser [EMAIL PROTECTED] wrote:
1) If I physically destroy every other intelligent thing, what is
going to threaten me?
Given the size of the universe, how can you possibly destroy every other
intelligent thing (and be sure that no others ever ...
Mark Waser wrote:
Part 4.
... Eventually, you're going to get down to "Don't mess with
anyone's goals", be forced to add the clause "unless absolutely
necessary", and then have to fight over what "absolutely necessary"
means. But what we've got here is what I would call the goal of a ...
It *might* get stuck in bad territory, but can you make an argument why
there is a *significant* chance of that happening?
Not off the top of my head. I'm just playing it "better safe than sorry"
since, as far as I can tell, there *may* be a significant chance of it
happening.
Also, I'm not ...
I find myself totally bemused by the recent discussion of AGI friendliness.
I am in sympathy with some aspects of Mark's position, but I also see a
serious problem running through the whole debate: everyone is making
statements based on unstated assumptions about the motivations of AGI
systems.
The three most common of these assumptions are:
1) That it will have the same motivations as humans, but with a
tendency toward the worst that we show.
2) That it will have some kind of "Gotta Optimize My Utility
Function" motivation.
3) That it will have an intrinsic urge to ...
On Mon, Mar 10, 2008 at 5:47 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
In the past I have argued strenuously that (a) you cannot divorce a
discussion of friendliness from a discussion of what design of AGI you
are talking about, and (b) some assumptions about AGI motivation are ...
I am in sympathy with some aspects of Mark's position, but I also see a
serious problem running through the whole debate: everyone is making
statements based on unstated assumptions about the motivations of AGI
systems.
Bummer. I thought that I had been clearer about my assumptions. Let me ...
On Mon, Mar 10, 2008 at 6:13 PM, Mark Waser [EMAIL PROTECTED] wrote:
I can destroy all Earth-originated life if I start early enough. If
there is something else out there, it can similarly be hostile and try to
destroy me if it can, without listening to any friendliness prayer.
All ...
For instance, a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be directly
oriented toward fulfilling this utility function.
Some of the system's activity will be spontaneous ... i.e. only
implicitly goal-oriented ... and as such ...
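As a rough illustration of that split (my own sketch, not Novamente's
actual design; every name here is hypothetical), an agent loop can devote
only a fraction of its cycles to explicit utility maximization and leave
the rest to spontaneous, only implicitly goal-oriented activity:

import random

class MixedControlAgent:
    """Hypothetical sketch: explicit utility-seeking mixed with
    spontaneous activity, per the description quoted above."""

    def __init__(self, utility_fn, actions, explicit_fraction=0.6):
        self.utility_fn = utility_fn                # explicit utility function
        self.actions = actions                      # available discrete actions
        self.explicit_fraction = explicit_fraction  # share of goal-directed cycles

    def step(self, state):
        if random.random() < self.explicit_fraction:
            # Explicitly goal-oriented: pick the action that maximizes
            # the declared utility function.
            return max(self.actions, key=lambda a: self.utility_fn(state, a))
        # Spontaneous activity: only implicitly goal-oriented
        # (modeled here, crudely, as random exploration).
        return random.choice(self.actions)

# Toy usage: utility prefers the action numerically closest to the state.
agent = MixedControlAgent(lambda s, a: -abs(s - a), actions=[0, 1, 2, 3])
print(agent.step(state=2))  # usually 2, sometimes a spontaneous choice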
Mark Waser wrote:
I am in sympathy with some aspects of Mark's position, but I also see
a serious problem running through the whole debate: everyone is
making statements based on unstated assumptions about the motivations
of AGI systems.
Bummer. I thought that I had been clearer about my assumptions. Let me ...
On Mon, Mar 10, 2008 at 8:10 PM, Mark Waser [EMAIL PROTECTED] wrote:
Information Theory is generally accepted as
correct and clearly indicates that you are wrong.
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think ...
First off -- yours was a really helpful post. Thank you!
I think that I need to add a word to my initial assumption . . . .
Assumption - The AGI will be an optimizing goal-seeking entity.
There are two main things.
One is that the statement "The AGI will be a goal-seeking entity" has
many ...
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that virtual environment can be *exactly* as capable as
physical environment?
No, I think that you're asserting that the virtual environment is close ...
On Mon, Mar 10, 2008 at 11:36 PM, Mark Waser [EMAIL PROTECTED] wrote:
Note that you are trying to use a technical term in a non-technical
way to fight a non-technical argument. Do you really think that I'm
asserting that virtual environment can be *exactly* as capable as
physical environment?
errata:
On Tue, Mar 11, 2008 at 12:13 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
"I'm sure that for computational efficiency it should be a very strict
limitation." should read: "it *shouldn't* be a very strict limitation."
--
Vladimir Nesov
[EMAIL PROTECTED]
---
On Tue, Mar 11, 2008 at 12:37 AM, Mark Waser [EMAIL PROTECTED] wrote:
How do we get from here to there? Without a provable path, it's all just
magical hand-waving to me. (I like it, but it's ultimately an unsatisfying
illusion.)
It's an independent statement.
No, it ...
Mark Waser wrote:
...
The motivation that is in the system is "I want to achieve *my* goals."
The goals that are in the system I deem to be entirely irrelevant
UNLESS they are deliberately and directly contrary to Friendliness. I
am contending that, unless the initial goals are deliberately ...
My second point, which you omitted from this response, doesn't require
there to be a universal substrate, which is what I mean. Ditto for
"significant resources."
I didn't omit your second point; I covered it as part of the difference
between our views.
You believe that certain tasks/options are ...
Part 5. "The nature of evil" or "The good, the bad, and the evil"
Since we've got the (slightly revised :-) goal of a Friendly individual and the
Friendly society -- "Don't act contrary to anyone's goals unless absolutely
necessary" -- we now can evaluate actions as good or bad in relation to that ...
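One way to make that evaluation concrete (my own toy formalization of the
rule as quoted, not Mark's; the helpers thwarts and is_absolutely_necessary
are hypothetical stand-ins for the genuinely hard judgments):

def evaluate(action, others_goals, is_absolutely_necessary, thwarts):
    """Toy scoring of an action under "don't act contrary to anyone's
    goals unless absolutely necessary". All names are illustrative.

    others_goals: goals held by everyone else.
    thwarts(action, goal): does the action act contrary to this goal?
    is_absolutely_necessary(action, thwarted): the contested judgment call.
    """
    thwarted = [g for g in others_goals if thwarts(action, g)]
    if not thwarted:
        return "good"  # contrary to no one's goals
    # Contrary to someone's goals: bad unless absolutely necessary.
    return "good" if is_absolutely_necessary(action, thwarted) else "bad"

# Toy usage with trivially simple stand-ins for the hard parts:
print(evaluate("take the last seat", ["sit down"],
               is_absolutely_necessary=lambda a, t: False,
               thwarts=lambda a, g: g == "sit down"))  # -> "bad"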
I think here we need to consider A. Maslow's hierarchy of needs. That an
AGI won't have the same needs as a human is, I suppose, obvious, but I
think it's still true that it will have a hierarchy (which isn't
strictly a hierarchy). I.e., it will have a large set of motives, and
which it is ...
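One way to read "a hierarchy which isn't strictly a hierarchy" (my own
sketch, with illustrative motive names): a set of motives whose effective
ranking mixes a rough Maslow-style level with situational urgency, so the
ordering shifts with context rather than staying fixed:

from dataclasses import dataclass

@dataclass
class Motive:
    name: str
    level: float    # rough Maslow-style base priority
    urgency: float  # varies with the current situation

def dominant(motives):
    # Effective priority mixes level and urgency, so a normally
    # lower-level motive can dominate when it becomes urgent enough.
    return max(motives, key=lambda m: m.level * m.urgency)

motives = [
    Motive("maintain power supply", level=1.0, urgency=0.2),
    Motive("acquire new knowledge", level=0.5, urgency=0.9),
]
print(dominant(motives).name)  # -> "acquire new knowledge"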
On 27/02/2008, a [EMAIL PROTECTED] wrote:
This causes real controversy in this discussion list, which pressures me
to build my own AGI.
How about joining efforts with one of the existing AGI projects?
--linas