I agree with Shane ... this approach suffers from the same sort of problem that AIXI suffers from, Friendliness-wise. When the system is smart enough, it will learn to outsmart the posited Control Code, and the ethics-monitor AGI. You might want to avoid this by making the ethics-monitor AGI
Ben Goertzel wrote:
However, the society approach does not prevent a whole society of AGI's from
drifting into evil. How good is our understanding of AGI sociodynamics???
;-) This approach just replaces one hard problem with another... which may
or may not be even harder...
Indeed; if one cannot
To me
the distinction is between
A)
"Explicit programming-in of ethical principles" (EPIP)
versus
B)
"Explicit programming-in of methods specially made for the learning of ethics
through experience and teaching"
versus
C)
"Acquisition of ethics through experience and teaching,
Ben,
would you rather have one person with an IQ of 200, or 4 people with IQs of 50? Ten computers of intelligence N, or one computer with intelligence 10*N? Sure, the ten computers of intelligence N, taken together, will be a little smarter than N, because of
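As a back-of-the-envelope illustration of why the answer isn't obvious, here is a toy model in Python where group capability grows with the individual level plus a diminishing-returns bonus for extra members. The functional form and the 0.2 bonus are assumptions for illustration only, not anything claimed in this thread.

import math

def group_power(individual, count, teamwork_bonus=0.2):
    # Toy assumption: each doubling of the group adds a fixed fractional bonus.
    return individual * (1 + teamwork_bonus * math.log2(count))

print(group_power(50, 4))     # 4 people at IQ 50    -> 70.0
print(group_power(200, 1))    # 1 person at IQ 200   -> 200.0
print(group_power(100, 10))   # 10 machines of N=100 -> ~166.4
print(group_power(1000, 1))   # 1 machine of 10*N    -> 1000.0

Under any diminishing-returns assumption like this, the group is "a little smarter than N, all together," but falls far short of the single 10*N machine.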
Ben said,
When the system is smart enough, it will learn to outsmart the posited
Control Code, and the ethics-monitor AGI
This isn't apparent at all, given that the Control Code could be pervasively embedded and keyed to things beyond the AGI's control. The idea is to limit the AGI and
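A minimal sketch of what "keyed to things beyond the AGI's control" could look like in practice: the AGI's run loop halts unless an external ethics monitor keeps renewing a signed token the AGI cannot forge. The token scheme, names, and renewal interval are illustrative assumptions, not a design anyone proposed here.

import hmac, hashlib, time

MONITOR_KEY = b"held only by the external monitor"  # in reality: separate hardware

def monitor_sign(epoch: int) -> bytes:
    # Only the monitor can produce this; the AGI never sees MONITOR_KEY.
    return hmac.new(MONITOR_KEY, str(epoch).encode(), hashlib.sha256).digest()

def token_valid(token: bytes, epoch: int) -> bool:
    return hmac.compare_digest(token, monitor_sign(epoch))

def run(agi_step, fetch_token):
    while True:
        epoch = int(time.time()) // 60        # token must be renewed each minute
        if not token_valid(fetch_token(epoch), epoch):
            break                             # approval withheld: hard stop
        agi_step()

Of course, whether a sufficiently smart system could route around any such check is exactly the point in dispute above.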
Hi,
I
don't see that you've made a convincing argument that a society of AI's is safer
than an individual AI. Certainly among human societies, the only analogue we have, society-level violence and madness seem even MORE common than individual-level violence and madness. Often societies
Hello all..
I was wondering what people thought the relative risks were
between a super-smart AGI that cannot yet self-modify (change its own source code), and an AGI that can self-modify?
Do we see inherently less risk in case 1? Perhaps some "hard-wired" ethics in case 1 would be much
It seems to me that communication and "thought
sharing" between various AGI's would be so intertwined that each one will become
indistinguishable from the other. So in essence you still have "one"
AGI..
Kevin
I have two new drafts for comments:
"Non-Axiomatic Logic", at http://www.cis.temple.edu/~pwang/drafts/NAL.pdf
This is a complete description of the logic I've been working on.
"A Term Logic for Cognitive Science", at http://www.cis.temple.edu/~pwang/drafts/TermLogic.pdf
This is a comparison
I would point out that our legal frameworks are designed under the
assumption that there is rough parity in intelligence between all
actors in the system. The system breaks badly when you have extreme
disparities in the intelligence of the actors because you are breaking
one of the
(1) Since we cannot accurately predict the future implications of our actions, almost all research can lead to deadly results --- just see what has been used as weapons in the current world. If we ask for a guarantee of safety before research, then we cannot do anything. I don't think
Kevin Copple wrote:
Ben said,
When the system is smart enough, it will learn to outsmart the posited Control Code, and the ethics-monitor AGI
This isn't apparent at all, given that the Control Code could be pervasively embedded and keyed to things beyond the AGI's control. The idea is to
Extra credit:
I've just read the Crichton novel PREY. Totally transparent movie script, but a perfect textbook on how to screw up really badly. Basically the formula is 'let the military finance it'. The general public will see this inevitable movie and we will be drawn towards the moral
One thing I should add:
It's the same hubris I mentioned in my previous message that prompted us to send out
satellites effectively bearing our home address and basic physiology on a plaque in
the hope that aliens would find it and come to us. Even NASA scientists seem to have
no fear of
Hi Pei / Colin,
Pei: This is the conclusion that I have been most afraid of from this Friendly AI discussion. Yes, AGI can be very dangerous, and I don't think any of the solutions proposed so far can eliminate the danger completely. However, I don't think this is a valid reason to
Well, that's one hell of a good reason to slow down the whole AGI
project. Doesn't it strike you that it's kind of reckless to create
something that could change society/the world drastically and bring it
on before society has had the time to develop some safeguards or
safety net?
This
Ben,
In reply to my para saying :
if the one AGI goes feral, the rest of us are going to need to access
the power of some pretty powerful AGIs to contain/manage the feral
one. Humans have the advantage of numbers but in the end we may not
have the intellectual power or speed to counter
Philip: I personally think humans as a society are capable of saving themselves from their own individual and collective stupidity. I've worked explicitly on this issue for 30 years and still retain some optimism on the subject.
Colin: I'm with Pei Wang. Let's explore and deal with it.
OK, if
Ben,
Ben: That paragraph gave one possible dynamic in a society of AGI's, but there are many, many other possible social dynamics
Of course. What you say is quite true. But so what?
Let's go back to that one possible dynamic. Can't you bring yourself to agree that if a one-and-only
Pei,
I also have a very low expectation of what the current Friendly AI discussion can contribute to AGI research.
OK - that's a good issue to focus on then.
In an earlier post Ben described three ways that ethical systems could
be facilitated:
A) Explicit programming-in of ethical
Alan,
I've asked you repeatedly not to make insulting or anti-Semitic comments on this list. Yet you keep referring to Eliezer as "the rabbi" and making other similar choice comments. This is not good!
As list moderator, I am hereby forbidding you to post to the AGI list
Ben,
I think Pei's point is related to the following point:
We're now working on aspects of
A) explicit programming-in of ideas and processes
B) explicit programming-in of methods specially made for the learning of ideas and processes through experience and teaching
and that until
Ben Goertzel wrote:
Yes, I see your point now.
If an AI has a percentage p chance of going feral, then in the case of
a society of AI's, only p percent of them will go feral, and the odds
are that other AI's will be able to stop it from doing anything bad.
But in the case of only one AI,
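The estimate above implicitly assumes each AI goes feral independently. A minimal simulation of that assumption, with illustrative numbers: the expected feral fraction comes out near p, and the chance of all of them going feral at once is p**n, effectively zero.

import random

def simulate(n=100, p=0.05, trials=10_000):
    feral_total = 0
    all_feral = 0
    for _ in range(trials):
        # Each of the n AIs independently goes feral with probability p.
        feral = sum(random.random() < p for _ in range(n))
        feral_total += feral
        all_feral += (feral == n)
    print(f"mean feral fraction: {feral_total / (trials * n):.3f}")  # ~= p
    print(f"P(all {n} feral):    {all_feral / trials:.6f}")          # ~= p**n ~ 0

simulate()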
Ben,
I can see some possible value in giving a system these goals, and giving it a strong motivation to figure out what the hell humans mean by the words care, living, etc. These rules are then really rule templates with instructions for filling them in...
Yes.
However, I view
Hi Eliezer,
This does not follow. If an AI has a P chance of going feral, then a society of AIs may have P chance of all simultaneously going feral.
I can see your point but I don't agree with it.
If General Motors churns out 100,000 identical cars with all the same characteristics
Ben Goertzel wrote:
Yes, I see your point now.
If an AI has a percentage p chance of going feral, then in the case of
a society of AI's, only p percent of them will go feral, and the odds
are that other AI's will be able to stop it from doing anything bad.
But in the case of only
Philip Sutton wrote:
Ben,
Ben: That paragraph gave one possible dynamic in a society of AGI's,
but there are many, many other possible social dynamics
Of course. What you say is quite true. But so what?
Let's go back to that one possible dynamic. Can't you bring yourself to
agree that if a
Philip Sutton wrote:
Hi Eliezer,
This does not follow. If an AI has a P chance of going feral, then a
society of AIs may have P chance of all simultaneously going feral
I can see your point but I don't agree with it.
If General Motors churns out 100,000 identical cars with all the same
Eliezer is certainly correct here -- your analogy ignores probabilistic
dependency, which is crucial.
Ben
Eliezer,
That's because your view of this problem has automatically factored
out all the common variables. All GM cars fail when dropped off a
cliff. All GM cars fail when crashed at 120 mph. All GM cars fail on
the moon, in space, underwater, in a five-dimensional universe. All
GM cars
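To make the common-variables point concrete, a minimal sketch: add one shared failure cause with probability q that trips every AI at once, on top of the independent per-AI failures. With q = 0 the all-feral probability is about p**n, effectively zero; with any appreciable q it collapses to about q, no matter how many AIs there are. The mixture model and numbers are illustrative assumptions.

import random

def p_all_feral(n=100, p=0.05, q=0.05, trials=100_000):
    hits = 0
    for _ in range(trials):
        if random.random() < q:       # shared cause: all n fail together
            hits += 1
        elif all(random.random() < p for _ in range(n)):
            hits += 1                 # independent failures happen to align
    return hits / trials

print(p_all_feral(q=0.0))   # independence: ~p**n, indistinguishable from 0
print(p_all_feral(q=0.05))  # shared cause: ~q, regardless of n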