is there an archive for this list?
thanx,
y
It strikes me that what many of the messages refer to as ethical stances
toward life, the earth, etc., are actually simply extensions of self-interest.
In fact, ethical systems of cooperation are really, on a very simplistic
level, ways of improving the lives of individuals. And this is not true
Even though this application appears to replicate the sounds birds make, it
does not appear to have any understanding as to *what* it is saying.
Perhaps through making various utterances and observing birds' behavior, they
will be able to infer certain meanings that are associated with certain
David Noziglia wrote:
In fact, ethical systems of cooperation are really, on a very simplistic
level, ways of improving the lives of individuals. And this is not true
because of strictures from on high, but for reasons of real-world
self-interest. Thus, the Nash Equilibrium, or the results
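The Nash Equilibrium point can be made concrete with the classic example. A minimal sketch in Python, using illustrative Prisoner's Dilemma payoffs (the numbers are my own, not from the original message):

```python
from itertools import product

# Illustrative Prisoner's Dilemma payoffs as (row player, column player).
# "C" = cooperate, "D" = defect.
payoff = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}
moves = ["C", "D"]

def is_nash(r, c):
    """A pure-strategy Nash equilibrium: neither player gains by deviating alone."""
    r_pay, c_pay = payoff[(r, c)]
    best_r = all(payoff[(alt, c)][0] <= r_pay for alt in moves)
    best_c = all(payoff[(r, alt)][1] <= c_pay for alt in moves)
    return best_r and best_c

equilibria = [(r, c) for r, c in product(moves, moves) if is_nash(r, c)]
print(equilibria)  # → [('D', 'D')]
```

Note the tension with the cooperation point above: individual self-interest alone drives both players to mutual defection, even though mutual cooperation pays each player more.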
Kevin is correct. You'd need a system that had birdlike perceptual organs,
and was able to gather data similar to the data birds gather. Then it could
learn to make the calls birds made in proper situated context. *This* would
constitute a beginning understanding of bird language.
-- Ben
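Ben's point, that understanding requires associating calls with their situated context, could be sketched as learning a context-to-call mapping. A toy sketch, assuming invented context and call labels and a simple count-based model:

```python
from collections import Counter, defaultdict

# Toy observations of (situated context, call emitted). The context and
# call labels are invented; real data would come from birdlike sensors.
observations = [
    ("predator_overhead", "alarm"),
    ("predator_overhead", "alarm"),
    ("predator_overhead", "chirp"),  # noisy observation
    ("food_found", "chirp"),
    ("food_found", "chirp"),
    ("mate_nearby", "song"),
]

# Count which call co-occurs most often with each context.
counts = defaultdict(Counter)
for context, call in observations:
    counts[context][call] += 1

def call_for(context):
    """Emit the call most strongly associated with the given context."""
    return counts[context].most_common(1)[0][0]

print(call_for("predator_overhead"))  # → alarm
```

The grounding lives entirely in the (context, call) pairs: a system with no access to the contexts could still mimic the sounds, but could never build this table.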
Rational self-interest does not stop us from knocking down forests to build
cities, in spite of all the ants and squirrels that are rendered homeless or
dead as a consequence.
My point being that maybe it should. Our destruction of the environment can
be seen as not just ethically
Hi,
I think that to suggest that evolutionary wiring is the root of our problems
is suspect at best. There are many great beings who have walked this earth
that were subject to the same evolution, yet were not at the whim of the
destructive emotions.
Causality is a very subtle notion
I agree with your ultimate objective; the big question is *how* to do it.
What is clear is that no one has any idea that seems to be guaranteed to
work in creating an AGI with these qualities. We are currently resigned to
"let's build it and see what happens," which is quite scary for some,
Ben,
These precautions seem prudent. I'm glad you have thought this through deeply.
The idea that the absence of evolutionary wiring will diminish the effect of
the negative afflictions is an interesting one. It will be interesting to see
whether that holds true. One could argue
3) an intention to implement a careful AGI sandbox that we won't release
our AGI from until we're convinced it is genuinely benevolent
-- Ben
Unfortunately, what one says and what one's intent is can be two completely
different things. It's unlikely, to my mind, that the sandbox restriction
Eliezer,
I certainly remember all those discussions on the SL4 list.
I did not mean to imply that the AGI sandbox would be a perfect mechanism.
Like everything else I mentioned, it is an imperfect mechanism.
Of course, there is a nonzero chance that the AGI will turn evil and escape
from the
Agreed, Tim, no sandbox environment can be sufficient for determining
benevolence.
Such an environment can only be a heuristic guide.
We will gather data about an AGI's benevolence from its behavior in the
sandbox, and from our knowledge of its internal state. And we will make our
best
maitri wrote:
I agree with your ultimate objective; the big question is *how* to do it.
What is clear is that no one has any idea that seems to be guaranteed to
work in creating an AGI with these qualities. We are currently resigned to
"let's build it and see what happens," which is quite scary
This type of training should be given to the AGI as early as it can
understand it, in order to ensure proper consideration of the welfare
of its creators.
Not so simple:
The human brain has evolved a special agent modeling circuit that
exists in the frontal lobe. (probably having a
Kevin et al.,
Fascinating set of observations, conjectures, and methodologies, well worth
considering. And it seems that you have ultimately touched on the kernel of
the dilemma of man v. machine en route to the so-called 'singularity'.
If I've understood you correctly vis-a-vis the
Boy!! How timely was this!!
Since I have been called out for displaying my Eastern thought,
here is a wonderful stanza from the *Catholic* mystic Thomas Merton that I
just received in my email...
The unitive knowledge of God in love is not the knowledge of an object
by a subject, but a far
Superior in intelligence doesn't necessarily mean superior in wisdom ...
there are plenty of examples of that in human history.
Intelligence in the wrong hands is the most dangerous thing... we are seeing
that right now in our govt, IMO.
And just WHERE do you see evidence of intelligence?
In stating that evil is the natural result of a strong sense of self, I
was hoping to avoid detailed discussion about good and evil, and instead
propose a possible direction by which a solution can be found. Namely, do
not instill a strong sense of self into the AGI...
This is a very
I still hold that *if* an AGI has a sense of self, without the concomitant
wisdom needed, it *will* develop the destructive emotions...
I agree that it will develop SOME destructive emotions, and I think that any
mind necessarily will develop SOME destructive emotions -- which it then
Hi all,
I find the friendliness issue fairly infertile ground tackled way too soon.
Go back to where we are: the beginning. I'm far more interested in the
conferring of a will to live. Our natural tendency is to ascribe this will
to live to our intelligent artifacts. This 'seed' is by far the
At some early point the AGI will have to learn to equate pleasure with
learning and acquiring new experience.
With a biological organism the stimuli are provided as pain and pleasure.
As we mature, many of our pleasure causers are increasingly subtle and
are actually learned pleasure generators
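The idea above, that the AGI should equate pleasure with learning, can be sketched as an intrinsic reward equal to learning progress: the drop in the agent's own prediction error. The error values below are illustrative, not from any real system:

```python
# Sketch of "pleasure = learning": intrinsic reward at each step is the
# reduction in prediction error, so the agent is rewarded only while it
# is actually improving its model of the world.

def learning_reward(errors):
    """Reward per step = previous error minus current error (learning progress)."""
    return [prev - cur for prev, cur in zip(errors, errors[1:])]

# A model whose prediction error shrinks as it learns, then plateaus:
errors = [10, 6, 4, 3, 3, 3]
rewards = learning_reward(errors)
print(rewards)  # → [4, 2, 1, 0, 0]
```

Once nothing new is being learned, the reward drops to zero, which pushes the agent toward novel experience, much as the paragraph above describes.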
On Thu, Jan 09, 2003 at 11:24:14AM -0500, Ben Goertzel wrote:
I think the issues that are problematic have to do with the emotional
baggage that humans attach to the self/other distinction. Which an AGI will
most likely *not* have, due to its lack of human evolutionary wiring...
Damien Sullivan wrote:
You _MIGHT_ be able to produce a proof of concept that way...
However, a practical working AI, such as the one which could help me
design my next body, would need to be quite a bit more. =\
Why? Why should such a thing require replacing the original
On Thu, Jan 09, 2003 at 10:57:41PM -0800, Alan Grimes wrote:
It would be a service-driven motivation system, but I would expect a much
more sophisticated implementation of agency beyond a Windows shell or
something.
Quite possibly. But my point is that the evolutionary root _and_ guiding
Alan Grimes wrote:
My position is that you don't really need friendly AI, you simply need
to neglect to include the take-over-the-world motivator...
I think that is a VERY bad approach !!!
I don't want a superhuman AGI to destroy us by accident or through
indifference... which are possibilities
I tend to agree with Damien.
I see no intrinsic reason why a service-driven AGI system could not become
as intelligent as humans and then more intelligent.
Suppose an AGI is given an initial motivational structure that rewards it for
* serving people effectively
* discovering and creating new
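Ben's proposed motivational structure could be sketched as a weighted combination of per-goal scores. The weights and the score values are invented placeholders, and only the two goals listed above are included (the excerpt cuts the list short):

```python
# Sketch of the motivational structure above as a weighted reward signal.
# The weights are illustrative placeholders, not a proposed design.

WEIGHTS = {"serving_people": 0.6, "discovery": 0.4}

def motivation(scores):
    """Combine per-goal scores (each in [0, 1]) into a single reward."""
    return sum(WEIGHTS[goal] * scores.get(goal, 0.0) for goal in WEIGHTS)

# An action that serves people well but discovers little still earns
# most of its reward from the "serving people" term:
reward = motivation({"serving_people": 1.0, "discovery": 0.2})
print(round(reward, 2))  # → 0.68
```

The point of such a structure is that no single term (including any self-preservation term) dominates; the agent's best strategy is to score well across all of its reward components at once.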
Ben Goertzel wrote:
I think that is a VERY bad approach !!!
I don't want a superhuman AGI to destroy us by accident or through
indifference... which are possibilities just as real as aggression.
Positive action requires positive motivation.
On Thu, Jan 09, 2003 at 11:18:36PM -0800, Alan Grimes wrote:
Damien Sullivan wrote:
Quite possibly. But my point is that the evolutionary root _and_
guiding principle would be that of a (Unix, ahem) shell.
Are you nuts?
Unix is the most user-hostile system still in common use! PUKE!!!
Colin said:
I find the friendliness issue fairly infertile ground tackled way too soon.
Go back to where we are: the beginning. I'm far more interested in the
conferring of a will to live. Our natural tendency is to ascribe this will
to live to our intelligent artifacts.
snip
My feeling at