interface with the goal system through custom-made abstractions).
> A quick question for Richard and others -- Should adults be allowed to
> drink, do drugs, wirehead themselves to death?
>
Beyond AI, pp. 253-256, 339. I've written a few thousand words on the subject
myself.
a) the most likely sources of AI are corporate or military labs, and not just
US ones. No friendly AI here, but profit-making and "mission-performing" AI.
b) the only people in the field who even claim to be in
On 10/2/07, Mark Waser wrote:
> A quick question for Richard and others -- Should adults be allowed to
> drink, do drugs, wirehead themselves to death?
>
This is part of what I was pointing at in an earlier post.
Richard's proposal was that humans would be asked in advance by the
AGI what level
J Storrs Hall, PhD wrote:
On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote:
... Since the AGIs are all built to be friendly, ...
The probability that this will happen is approximately the same as the
probability that the Sun could suddenly quantum-tunnel itself to a new
position inside the perfume department
Okay, I'm going to wave the white flag and say that what we should do is
all get together a few days early for the conference next March, in
Memphis, and discuss all these issues in high-bandwidth mode!
But one last positive thought. A response to your remark:
So let's look at the mappings
On Tuesday 02 October 2007 10:17:42 am, Richard Loosemore wrote:
> ... Since the AGIs are all built to be friendly, ...
The probability that this will happen is approximately the same as the
probability that the Sun could suddenly quantum-tunnel itself to a new
position inside the perfume department
And yet the robustness of the goal system itself is less important than the
intelligence that allows the system to recognize influence on its goal
system and preserve it. Intelligence is
necessary for both scale-invariance and scalability.
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
Sent: Tuesday, October 02, 2007 9:49 AM
Subject: Distributed Semantics [WAS Re: [agi] Religion-free technical content]
Mark Waser wrote:
Mark
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
Sent: Monday, October 01, 2007 8:36 PM
Subject: Re: [agi] Religion-free technical content
Mark Waser wrote:
And apart from the global differences between the two types of AGI, it
would
And yet the robustness of the goal system itself is less important than the
intelligence that allows the system to recognize influence on its goal
system and preserve it. Intelligence also allows more robust
interpretation of the goal system, which is why the way a particular goal
system is implemented is not very important
Mark Waser wrote:
Interesting. I believe that we have a fundamental disagreement. I
would argue that the semantics *don't* have to be distributed. My
argument/proof would be that I believe that *anything* can be described
in words -- and that I believe that previous narrow AI are brittle
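A toy sketch of the distinction at stake, with all names and numbers invented for illustration: a localist concept is one symbol, so a single edit destroys its meaning outright, while a concept distributed over many features degrades only slightly when one feature is corrupted (which is also why distributed semantics are harder to hack).

import random

# Toy contrast, not anyone's actual proposal: a "localist" concept is a
# single symbol, so one edit flips its meaning entirely; a "distributed"
# concept is spread over many features, so one corrupted feature barely
# changes it.

DIMS = 1000
rng = random.Random(42)

def make_concept():
    # A distributed concept: a random +/-1 feature vector.
    return [rng.choice((-1, 1)) for _ in range(DIMS)]

def similarity(a, b):
    # Normalized dot product, in [-1, 1].
    return sum(x * y for x, y in zip(a, b)) / DIMS

dog = make_concept()
tampered = list(dog)
tampered[0] = -tampered[0]        # corrupt a single feature

print(similarity(dog, tampered))  # 0.998: meaning nearly intact

# Localist analogue: concept = "dog". One edit ("dot") and the original
# meaning is simply gone; there is no partial match to fall back on.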
So this "hackability" is a technical question about possibility of
closed-source deployment that would provide functional copies of the
system but would prevent users from modifying its goal system. Is it
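A minimal sketch of one reading of that question, with hypothetical goal names and weights: a deployed copy carries a digest of its goal spec taken at build time and refuses to start if the spec has been edited. Note that this is tamper detection, not prevention; a determined user can patch out the check itself, which is why closed-source deployment alone settles nothing.

import hashlib
import json

GOALS = {"preserve_human_wellbeing": 1.0, "defer_to_oversight": 0.9}
BUILD_TIME_DIGEST = hashlib.sha256(
    json.dumps(GOALS, sort_keys=True).encode()).hexdigest()

def verify_goal_system(goals, expected_digest):
    # Recompute the digest of the goal spec and compare to build time.
    digest = hashlib.sha256(
        json.dumps(goals, sort_keys=True).encode()).hexdigest()
    if digest != expected_digest:
        raise RuntimeError("goal system modified; refusing to run")

verify_goal_system(GOALS, BUILD_TIME_DIGEST)  # passes silently
GOALS["defer_to_oversight"] = 0.0             # simulated tampering
try:
    verify_goal_system(GOALS, BUILD_TIME_DIGEST)
except RuntimeError as e:
    print(e)                                  # detection fires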
> but I would be
> interested, if you would, in hearing more as to why you believe that
> semantics *must* be distributed (though I will immediately concede that it
> will make them less hackable).
>
> Mark
>
(though I will immediately concede that it
will make them less hackable).
Mark
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
Sent: Monday, October 01, 2007 8:36 PM
Subject: Re: [agi] Religion-free technical content
Mark Waser wrote:
And apart
Mark Waser wrote:
And apart from the global differences between the two types of AGI, it
would be no good to try to guarantee friendliness using the kind of
conventional AI system that is Novamente, because inasmuch as general
goals would be encoded in such a system, they are explicitly coded a
On Sun, Sep 30, 2007 at 12:49:43PM -0700, Morris F. Johnson wrote:
> Integration of sociopolitical factors into a global evolution predictive
> model is something that the best economists, scientists, and military
> strategists will have to get right or risk global social anarchy.
FYI, there w
Answer in this case: (1) such elemental things as protection from
diseases could always be engineered so as not to involve painful
injections (we are assuming superintelligent AGI, after all),
:-) First of all, I'm not willing to concede an AGI superintelligent
enough to solve all the worl
that your
system is less comprehensible).
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
Sent: Monday, October 01, 2007 4:53 PM
Subject: Re: [agi] Religion-free technical content
Replies to several posts, omnibus edition:
Edward W. Porter wrote:
Richard and Matt,
The below is an interesting exchange.
For Richard I have the question: how is what you are proposing that
different from what could
Jef Allbright wrote:
[snip]
Jef, I accept that you did not necessarily introduce any of the
confusions that I dealt with in the snipped section, above: but your
question was ambiguous enough that many people might have done so, so I
was just covering all the bases, not asserting that you had
On 10/1/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
> > Right, now consider the nature of the design I propose: the
> > motivational system never has an opportunity for a point failure:
> > everything that happens is multiply-constrained
On Monday 01 October 2007 11:34:09 am, Richard Loosemore wrote:
> Right, now consider the nature of the design I propose: the
> motivational system never has an opportunity for a point failure:
> everything that happens is multiply-constrained (and on a massive scale:
> far more than is the c
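A toy sketch of what "multiply-constrained" could mean in practice, with the constraint count and scores invented for illustration: an action is approved on the aggregate of thousands of independent soft constraints, so fully corrupting any single constraint barely moves the outcome.

import random

rng = random.Random(0)
NUM_CONSTRAINTS = 10000

# Each "constraint" maps an action's empathy score to a slightly
# perturbed satisfaction value, roughly in [-1, 1].
constraints = [
    (lambda action, b=rng.uniform(-0.01, 0.01): action["empathy"] + b)
    for _ in range(NUM_CONSTRAINTS)
]

def approve(action):
    # Approve only if the average satisfaction across all constraints
    # is positive; no single constraint can decide the outcome alone.
    return sum(c(action) for c in constraints) / len(constraints) > 0.0

action = {"empathy": 0.2}
print(approve(action))                 # True

constraints[0] = lambda action: -1.0   # one constraint fully corrupted
print(approve(action))                 # still True: aggregate shifts ~0.0001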
On 10/1/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Jef Allbright wrote:
> > On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >>> The motivational system of some types of AI (the types you would
> >>> classify as tainted by complexity) can be made so reliable that
> >>> the likelihood
On 10/1/07, Richard Loosemore wrote:
>
> 3) The system would actually be driven by a very smart, flexible, subtle
> sense of 'empathy' and would not force us to do painful things that were
> "good" for us, for the simple reason that this kind of nannying would be
> the antithesis of really intellig
death?
Nannying of adults is something that our society does too much of -- but
there are places where it is appropriate
- Original Message -
From: "Richard Loosemore" <[EMAIL PROTECTED]>
Sent: Monday, October 01, 2007 2:18 PM
Subject: Re: [agi] Religion-free technical content
What am I missing?
Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]
-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Monday, October 01, 2007 1:41 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content
Jef Allbright wrote:
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
The motivational system of some types of AI (the types you would
classify as tainted by complexity) can be made so reliable that
the likelihood of them becoming unfriendly would be similar to
the likelihood of the molecule
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Monday, October 01, 2007 12:01 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content
In my last post I had in mind RSI at the
Matt Mahoney wrote:
Richard,
Let me make sure I understand your proposal. You propose to program
friendliness into the motivational structure of the AGI as tens of thousands
of hand-coded soft constraints or rules. Presumably with so many rules, we
should be able to cover every conceivable situation now or in the future
Richard,
Let me make sure I understand your proposal. You propose to program
friendliness into the motivational structure of the AGI as tens of
thousands
of hand-coded soft constraints or rules. Presumably with so many rules, we
should be able to cover every conceivable situation now or in the future
Richard,
Let me make sure I understand your proposal. You propose to program
friendliness into the motivational structure of the AGI as tens of thousands
of hand-coded soft constraints or rules. Presumably with so many rules, we
should be able to cover every conceivable situation now or in the future
Derek Zahn wrote:
Richard:
> You agree that if we could get such a connection between the
> probabilities, we are home and dry? That we need not care about
> "proving" the friendliness if we can show that the probability is simply
> too low to be plausible?
Yes, although the probability i
an excellent way to try for little-f
friendliness, which is probably our best and only option. I like it a lot.
Derek Zahn wrote:
Richard Loosemore writes:
> You must remember that the complexity is not a massive part of the
> system, just a small-but-indispensable part.
>
> I think this sometimes causes confusion: did you think that I meant
> that the whole thing would be so opaque that I could not
Edward W. Porter writes:
> To Matt Mahoney.
> Your 9/30/2007 8:36 PM post referred to mine in reply to Derek Zahn and
> implied RSI (which I assume from context is a reference to Recursive Self
> Improvement) is necessary for general intelligence.
> So could you, or someone, please define exactly
Richard Loosemore writes:
> You must remember that the complexity is not a massive part of the
> system, just a small-but-indispensable part.
>
> I think this sometimes causes confusion: did you think that I meant
> that the whole thing would be so opaque that I could not understand
> *anything* a
-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 8:36 PM
To: agi@v2.listbox.com
Subject: RE: [agi] Religion-free technical content
--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> To Derek Zahn
>
> Your 9/30/2007 10:58
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> > The motivational system of some types of AI (the types you would
> > classify as tainted by complexity) can be made so reliable that the
> > likelihood of them becoming unfriendly would be similar to the
> > likelihood of the molecule
Matt Mahoney wrote:
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
Derek Zahn wrote:
Richard Loosemore writes:
> It is much less opaque.
>
> I have argued that this is the ONLY way that I know of to ensure that
> AGI is done in a way that allows safety/friendliness to be guaranteed.
>
On 10/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> I remain skeptical. Your argument applies to an AGI not modifying its own
> motivational system. It does not apply to an AGI making modified copies of
> itself. In fact you say:
>
> > Also, during the development of the first true AI, we wo
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Derek Zahn wrote:
> > Richard Loosemore writes:
> >
> > > It is much less opaque.
> > >
> > > I have argued that this is the ONLY way that I know of to ensure that
> > > AGI is done in a way that allows safety/friendliness to be guaranteed.
--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> To Derek Zahn
>
> Your 9/30/2007 10:58 AM post is very interesting. It is the type of
> discussion of this subject -- potential dangers of AGI and how and when we
> deal with them -- that is probably most valuable.
>
> In response I have t
Derek Zahn wrote:
Richard Loosemore writes:
> It is much less opaque.
>
> I have argued that this is the ONLY way that I know of to ensure that
> AGI is done in a way that allows safety/friendliness to be guaranteed.
>
> I will have more to say about that tomorrow, when I hope to make an
Richard Loosemore writes:
> It is much less opaque.
>
> I have argued that this is the ONLY way that I know of to ensure that
> AGI is done in a way that allows safety/friendliness to be guaranteed.
>
> I will have more to say about that tomorrow, when I hope to make an
> announcement.
Cool. I'm s
Derek Zahn wrote:
[snip]
Surely certain AGI efforts are more dangerous than others, and the
"opaqueness" that Yudkowski writes about is, at this point, not the
primary danger. However, in that context, I think that Novamente is, to
an extent, opaque in the sense that its actions may not be red
First, let me say I think this is an interesting and healthy discussion
and has enough "technical" ramifications to qualify for inclusion on this
list.
Second, let me clarify that I am not proposing th
When presenting reasons for developing AGI to the general public, one should
refer to a list of problems that are generally insoluble with current
computational technology: global weather modelling, and technology to predict
very long term effects of energy expended to modify climate so that at least
On 9/30/07, Edward W. Porter wrote:
>
> I think you, Don Detrich, and many others on this list believe that, for at
> least a couple of years, it's still pretty safe to go full speed ahead on
> AGI research and development. It appears from the below post that both you
> and Don agree AGI can poten
-Original Message-
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Sunday, September 30, 2007 10:11 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content
On 9/30/
On 9/30/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
Quoting Eliezer:
> ... Evolutionary programming (EP) is stochastic, and does not
> precisely preserve the optimization target in the generated code; EP
> gives you code that does what you ask, most of the time, under the
> tested circumstances, but
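A small runnable illustration of the point, with the target function and test distribution invented: an evolutionary search scored only on sampled test cases converges on code that matches the tests, not the target the tests were standing in for.

import random

rng = random.Random(1)

def true_target(x):
    return abs(x)  # what we actually want

TESTS = [rng.uniform(0, 10) for _ in range(20)]  # only positive inputs!

def fitness(candidate):
    # Negative mean error on the tested circumstances only.
    return -sum(abs(candidate(x) - true_target(x)) for x in TESTS) / len(TESTS)

# Candidate "programs" are x -> a*x + b, mutated over generations.
population = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(30)]
for _ in range(200):
    scored = sorted(population,
                    key=lambda p: fitness(lambda x, p=p: p[0] * x + p[1]),
                    reverse=True)
    population = [(a + rng.gauss(0, 0.1), b + rng.gauss(0, 0.1))
                  for a, b in scored[:10] for _ in range(3)]

a, b = max(population, key=lambda p: fitness(lambda x, p=p: p[0] * x + p[1]))
print(round(a, 2), round(b, 2))  # near (1, 0): matches abs() on x >= 0
print(a * -5 + b)                # about -5, but abs(-5) = 5: target lost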
First, let me say I think this is an interesting and healthy discussion and
has enough "technical" ramifications to qualify for inclusion on this list.
Second, let me clarify that I am not proposing that the dangers of AGI be
"swiped under the rug" or that we should be "misleading" the public.
I suppose I'd like to see the list management weigh in on whether this type of
talk belongs on this particular list or whether it is more appropriate for the
"singularity" list.
Assuming it's okay for now, especially if such talk has a technical focus:
One thing that could improve safety is t
On 29/09/2007, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Although it indeed seems off-topic for this list, calling it a
> religion is ungrounded and in this case insulting, unless you have
> specific arguments.
>
> Killing huge numbers of people is an entirely possible venture for
> regular hum
On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> So, let's look at this from a technical point of view. AGI has the potential
> of becoming a very powerful technology and, misused or out of control, could
> possibly be dangerous. However, at this point we have little idea of how
> thes
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> You know, I'm struggling here to find a good reason to disagree with
> you, Russell. Strange position to be in, but it had to happen
> eventually ;-).
"And when Richard Loosemore and Russell Wallace agreed with each
other, it was also a s
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/29/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> > I'd be curious to see these, and I suspect many others would, too.
> > (Even though they're probably from lists I am on, I haven't followed
> > them nearly as actively as I could've.)
>
>
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Saturday, September 29, 2007 9:09 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> I've been through the specific arguments at length on lists where
>
On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> Oops, I thought we were having fun, but it looks like I have offended
> somebody, again. I plead guilty for being somewhat off the purely technical
> discussion topic, but I thought "Edward W. Porter" and I were having a
> pretty interesting discussion.
Oops, I thought we were having fun, but it looks like I have offended
somebody, again. I plead guilty for being somewhat off the purely technical
discussion topic, but I thought "Edward W. Porter" and I were having a
pretty interesting discussion. However it seems my primary transgression is
focus
On 9/29/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> I'd be curious to see these, and I suspect many others would, too.
> (Even though they're probably from lists I am on, I haven't followed
> them nearly as actively as I could've.)
http://lists.extropy.org/pipermail/extropy-chat/2006-May/026943.ht
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> I've been through the specific arguments at length on lists where
> they're on topic, let me know if you want me to dig up references.
I'd be curious to see these, and I suspect many others would, too.
(Even though they're probably from list
On 9/29/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> I just want to point out that by itself such assertion seems to serve
> no positive/informative purpose.
I will be more than happy to refrain on this list from further mention
of my views on the matter - as I have done heretofore. I ask only
I just want to point out that by itself such an assertion serves no
positive or informative purpose. You could just mention the off-topic
part, unless you specifically want to discuss the religion part.
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/29/07, Vladimir Nesov <[EMAIL PROTECT
On 9/29/07, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Although it indeed seems off-topic for this list, calling it a
> religion is ungrounded and in this case insulting, unless you have
> specific arguments.
I've been through the specific arguments at length on lists where
they're on topic, let
Although it indeed seems off-topic for this list, calling it a
religion is ungrounded and in this case insulting, unless you have
specific arguments.
Killing huge numbers of people is an entirely possible venture for
regular humans, so it should be at least as possible for artificial
ones. If ar
I unsubscribed from the various Singularitarian mailing lists when I
grew out of believing computers are going to conquer the world, and
stayed on this one because I understood it to be for technical content
rather than religion; now I find it's being continually flooded with
the nerdocalypse stuff