On 30/09/2007, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> The real danger is this: a program intelligent enough to understand software
> would be intelligent enough to modify itself.
Well it would always have the potential. But you are assuming it is
implemented on standard hardware.
There are man
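For concreteness, here is a minimal Python sketch of what "a program that modifies itself" can mean on standard hardware. It is purely illustrative and not drawn from any post in this thread; the tuned constant, the toy evaluate() objective, and the output filename are assumptions made up for the example.

# Illustrative sketch only: a program that inspects and rewrites its own
# source on ordinary hardware. The THRESHOLD constant, the stand-in
# objective, and the successor filename are assumptions for the example.
import re
import sys

THRESHOLD = 10  # value the program tunes in its own source


def evaluate(threshold: int) -> float:
    # Stand-in "performance" measure; real self-improvement would need a
    # meaningful objective here.
    return -abs(threshold - 42)


def rewrite_self(new_threshold: int, out_path: str) -> None:
    # Read this file's source, substitute the THRESHOLD constant, and write
    # the modified copy out as a successor program.
    with open(__file__, "r") as f:
        source = f.read()
    source = re.sub(r"^THRESHOLD = \d+", f"THRESHOLD = {new_threshold}",
                    source, count=1, flags=re.MULTILINE)
    with open(out_path, "w") as f:
        f.write(source)


if __name__ == "__main__":
    candidate = THRESHOLD + 1
    if evaluate(candidate) > evaluate(THRESHOLD):
        out = sys.argv[1] if len(sys.argv) > 1 else "successor.py"
        rewrite_self(candidate, out)

The mechanics are trivial; the point in the posts above is about what happens when the rewriting is guided by real understanding of the software rather than a fixed rule like this one.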
--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Derek Zahn wrote:
> > Richard Loosemore writes:
> >
> > > It is much less opaque.
> > >
> > > I have argued that this is the ONLY way that I know of to ensure that
> > > AGI is done in a way that allows safety/friendliness to be guaranteed.
--- "J Storrs Hall, PhD" <[EMAIL PROTECTED]> wrote:
> The simple intuition from evolution in the wild doesn't apply here, though.
> If
> I'm a creature in most of life's history with a superior mutation, the fact
> that there are lots of others of my kind with inferior ones doesn't hurt
> me -
--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> To Derek Zahn
>
> Your 9/30/2007 10:58 AM post is very interesting. It is the type of
> discussion of this subject -- the potential dangers of AGI and how and when
> we should deal with them -- that is probably most valuable.
>
> In response I have t
Derek Zahn wrote:
Richard Loosemore writes:
> It is much less opaque.
>
> I have argued that this is the ONLY way that I know of to ensure that
> AGI is done in a way that allows safety/friendliness to be guaranteed.
>
> I will have more to say about that tomorrow, when I hope to make an
Richard Loosemore writes:
> It is much less opaque.
>
> I have argued that this is the ONLY way that I know of to ensure that
> AGI is done in a way that allows safety/friendliness to be guaranteed.
>
> I will have more to say about that tomorrow, when I hope to make an
> announcement.
Cool. I'm s
Derek Zahn wrote:
[snip]
Surely certain AGI efforts are more dangerous than others, and the
"opaqueness" that Yudkowski writes about is, at this point, not the
primary danger. However, in that context, I think that Novamente is, to
an extent, opaque in the sense that its actions may not be red
Don,
I think we agree on the basic issues.
The difference is one of emphasis. Because I believe AGI can be so very
powerful -- starting in perhaps only five years if the right people got
serious funding -- I place much more emphasis on trying to stay way ahead
of the curve with regard to avoid
On 9/30/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> What would be the simplest system capable of recursive self improvement, not
> necessarily with human level intelligence? What are the time and memory
> costs? What would be its algorithmic complexity?
Depends on what metric you use to judge
The simple intuition from evolution in the wild doesn't apply here, though. If
I'm a creature in most of life's history with a superior mutation, the fact
that there are lots of others of my kind with inferior ones doesn't hurt
me -- in fact it helps, since they make worse competitors. But on th
Matt Mahoney wrote:
What would be the simplest system capable of recursive self improvement, not
necessarily with human level intelligence? What are the time and memory
costs? What would be its algorithmic complexity?
In the space of all possible Turing machines, I bet the answers are
"ridic
What would be the simplest system capable of recursive self improvement, not
necessarily with human level intelligence? What are the time and memory
costs? What would be its algorithmic complexity?
One could imagine environments that simplify the problem, e.g. "Core Wars" as
a competitive evolut
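In the spirit of the simplified-environment suggestion above, here is a minimal Python sketch of a mutate-and-keep-improvements loop. The integer-list "program" representation and the fitness function are arbitrary assumptions, and the sketch says nothing about the time, memory, or algorithmic-complexity questions themselves.

# Cartoon of self-improvement in a deliberately simplified environment.
# The program representation and fitness function are arbitrary assumptions.
import random

random.seed(0)

# A "program" is just a list of integers; fitness rewards programs whose
# running sum tracks a target trajectory.
TARGET = [1, 3, 6, 10, 15, 21, 28, 36]


def fitness(program):
    total, score = 0, 0.0
    for step, goal in zip(program, TARGET):
        total += step
        score -= abs(total - goal)
    return score


def mutate(program):
    child = list(program)
    i = random.randrange(len(child))
    child[i] += random.choice([-1, 1])
    return child


current = [0] * len(TARGET)
for generation in range(2000):
    candidate = mutate(current)
    # Keep the modification only if measured performance does not get worse --
    # the "improvement" step, applied to the program's own description.
    if fitness(candidate) >= fitness(current):
        current = candidate

print(current, fitness(current))

The loop converges here only because the environment is trivial; the original questions about the simplest such system and its costs are left open.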
When presenting reasons for developing AGI to the general public, one should
refer to a list of problems that are generally insoluble with current
computational technology. Global weather modelling and technology to predict
very long-term effects of energy expended to modify climate so that at least
On 9/30/07, Edward W. Porter wrote:
>
> I think you, Don Detrich, and many others on this list believe that, for at
> least a couple of years, it's still pretty safe to go full speed ahead on
> AGI research and development. It appears from the below post that both you
> and Don agree AGI can poten
Kaj,
Another solid post.
I think you, Don Detrich, and many others on this list believe that, for
at least a couple of years, it's still pretty safe to go full speed ahead
on AGI research and development. It appears from the below post that both
you and Don agree AGI can potentially present grav
On 9/30/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
Quoting Eliezer:
> ... Evolutionary programming (EP) is stochastic, and does not
> precisely preserve the optimization target in the generated code; EP
> gives you code that does what you ask, most of the time, under the
> tested circumstances, bu
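A minimal Python sketch of the point being quoted: a stochastic search only preserves the optimization target on the tested circumstances. The toy task, sample ranges, and thresholds below are assumptions chosen for illustration, not anything from Eliezer's text.

# Sketch: evolutionary/stochastic search optimizes performance on the
# *tested* cases, not the intended target itself. The toy task, sample
# ranges, and iteration counts are assumptions for illustration.
import random

random.seed(1)

# Intended target: classify x as positive exactly when x > 0, over [-10, 10].
def intended_label(x):
    return x > 0

# But fitness is only ever measured on a limited sample of test cases,
# drawn well away from the intended boundary.
test_cases = ([random.uniform(0.5, 10.0) for _ in range(20)] +
              [random.uniform(-10.0, -3.0) for _ in range(20)])


def fitness(threshold):
    return sum(1 for x in test_cases if (x > threshold) == intended_label(x))


# Simple (1+1) evolutionary search over the threshold parameter.
threshold = random.uniform(-10, 10)
for _ in range(500):
    candidate = threshold + random.gauss(0, 0.5)
    if fitness(candidate) >= fitness(threshold):
        threshold = candidate

print("evolved threshold:", threshold)
print("fitness on tested cases:", fitness(threshold), "/", len(test_cases))

Any threshold roughly between -3 and 0.5 scores perfectly on the tested cases, yet only a threshold near 0 matches the intended target; inputs in the untested gap are exactly where the generated behaviour is unconstrained.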
First, let me say I think this is an interesting and healthy discussion and
has enough "technical" ramifications to qualify for inclusion on this list.
Second, let me clarify that I am not proposing that the dangers of AGI be
"swiped under the rug" or that we should be "misleading" the public.
I suppose I'd like to see the list management weigh in on whether this type of
talk belongs on this particular list or whether it is more appropriate for the
"singularity" list.
Assuming it's okay for now, especially if such talk has a technical focus:
One thing that could improve safety is t
On 29/09/2007, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> Although it indeed seems off-topic for this list, calling it a
> religion is ungrounded and in this case insulting, unless you have
> specific arguments.
>
> Killing huge numbers of people is pretty much a possible venture for
> regular hum
On 9/30/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> So, let's look at this from a technical point of view. AGI has the potential
> of becoming a very powerful technology and, misused or out of control, could
> possibly be dangerous. However, at this point we have little idea of how
> thes
On 9/30/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> You know, I'm struggling here to find a good reason to disagree with
> you, Russell. Strange position to be in, but it had to happen
> eventually ;-).
"And when Richard Loosemore and Russell Wallace agreed with each
other, it was also a s
On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> On 9/29/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:
> > I'd be curious to see these, and I suspect many others would, too.
> > (Even though they're probably from lists I am on, I haven't followed
> > them nearly as actively as I could've.)
>
>