Hi
Again I stress that I am not saying we should
try to stop development (I do not think we can).
But what is wrong with thinking about the
possible outcomes and trying to be prepared?
With trying to affect the development, steering it
in better directions, taking smaller steps to
wherever we are going?
http://www.memphisdailynews.com/Editorial/StoryLead.aspx?id=101671
On Wed, Mar 5, 2008 at 2:46 AM, rg [EMAIL PROTECTED] wrote:
Anthony: Don't sociopaths understand the
rules and the justice system?
Two responses come to mind. Both will probably be unsatisfactory, but oh
well...
1. There's a difference between understanding rules and the justice system
rg wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will, regardless of what we decide.
I am asking what we should do to
On 3/5/08, david cash [EMAIL PROTECTED] wrote:
In my opinion, instead of having to cherry-pick desirable and
undesirable traits in an unconscious AGI entity that we, of course, wish to
have consciousness and cognitive abilities like reasoning, deductive and
inductive logic comprehension skills,
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
Initially I believe that a distributed AGI will do what we want it to do
because it will evolve in a competitive, hostile environment that rewards
Matt Mahoney wrote:
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
Wrong.
*You* cannot define friendliness for reasons of your own. Others may
well be able to do so.
It would be fine to
OK, see my responses below...
Matt Mahoney wrote:
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we can't.
We could say behavior that is acceptable in our society, then...
In your mail you
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Friendliness, briefly, is a situation in which the motivations of the
AGI are locked into a state of empathy with the human race as a whole.
Which is fine as long as there is a sharp line dividing human from non-human.
When that line goes away,
On 3/4/08, Mark Waser [EMAIL PROTECTED] wrote:
But the question is whether the internal knowledge representation of
the AGI needs to allow ambiguities, or whether we should use an
ambiguity-free representation. It seems that the latter choice is better.
An excellent point. But what if the
Hi
You said friendliness was AGIs locked into empathy toward mankind.
How can you make them feel this?
How did we humans get empathy?
Is it not very likely that we have empathy because
it turned out to be an advantage during our evolution,
ensuring the survival of groups of humans?
So if an AGI
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Matt Mahoney wrote:
--- Richard Loosemore [EMAIL PROTECTED] wrote:
Friendliness, briefly, is a situation in which the motivations of the
AGI are locked into a state of empathy with the human race as a whole.
Which is fine as long as
--- rg [EMAIL PROTECTED] wrote:
OK, see my responses below...
Matt Mahoney wrote:
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we
can't.
We could say behavior that is
--- rg [EMAIL PROTECTED] wrote:
Matt: Why will an AGI be friendly?
The question only makes sense if you can define friendliness, which we
can't.
Why, Matt, thank you for such a wonderful opening... :-)
Friendliness *CAN* be defined. Furthermore, it is my contention that
On 3/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
Rather, I think the right goal is to create an AGI that, in each
context, can be as ambiguous as it wants/needs to be in its
representation of a given piece of information.
Ambiguity allows compactness, and can be very valuable in this regard.
On 03/05/2008 12:36 PM, Mark Waser wrote:
snip...
The obvious initial starting point is to explicitly recognize that the
point of Friendliness is that we wish to prevent the extinction of the
*human race* and/or to prevent many other horrible nasty things that
would make *us* unhappy.
rg wrote:
Hi
I made some responses below.
Richard Loosemore wrote:
rg wrote:
Hi
Is anyone discussing what to do in the future when we
have made AGIs? I thought that was part of why
the Singularity Institute was made?
Note that I am not saying we should not make them!
Because someone will
1. How will the AI determine what is in the set of horrible nasty
thing[s] that would make *us* unhappy? I guess this is related to how you
will define the attractor precisely.
2. Preventing the extinction of the human race is pretty clear today, but
*human race* will become increasingly