Re: Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-28 Thread Ben Goertzel

Hi,


> Do most in the field believe that only a war can advance technology
> to the point of singularity-level events?
> Any opinions would be helpful.


My view is that for technologies involving large investment in
manufacturing infrastructure, the US military is one very likely
source of funds.  But not the only one.  For instance, suppose that
computer manufacturers decide they need powerful nanotech in order to
build better and better processors: that would be a convincing
non-military source of massive nanotech R&D funds.

OTOH, for technologies like AGI, where the main need is innovation
rather than expensive infrastructure, I think a key role for the
military is less likely.  I would expect the US military to be among
the leaders in robotics, because robotics is
costly-infrastructure-centric.  But not necessarily in robot
*cognition* (as opposed to hardware), because cognition R&D is more
innovation-centric.

Not that I'm saying the US military is incapable of innovation, just
that it seems to be more reliable as a source of development $$ for
technologies not yet mature enough to attract commercial investment
than as a source of innovative ideas.

-- Ben



Re: Re: [singularity] Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Ben Goertzel

Hi,


> The problem, Ben, is that your response amounts to "I don't see why
> that would work," but without any details.


The problem, Richard, is that you did not give any details as to why
you think your proposal will work (in the sense of delivering a
system whose Friendliness can be very confidently known).


> The central claim was that because the behavior of the system is
> constrained by a large number of connections that go from motivational
> mechanism to thinking mechanism, the latter is tightly governed.


But this claim, as stated, seems not to be true...  The existence of
a large number of constraints does not intrinsically imply tight
governance.
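
To make the point concrete, here is a toy sketch (Python; all the
numbers are made up, purely for illustration).  A "behavior" is a
point in [0,1]^10, subject to a thousand constraints; but because
each constraint is loose, nearly the whole behavior space remains
permitted:

    # Toy illustration: many constraints need not imply tight
    # governance.  Each of the 1,000 constraints bounds one coordinate
    # of a behavior vector, but only loosely (at 0.999), so jointly
    # they rule out almost nothing.
    import random

    DIM = 10
    N_CONSTRAINTS = 1000

    random.seed(0)
    constraints = [(random.randrange(DIM), 0.999)
                   for _ in range(N_CONSTRAINTS)]

    def allowed(behavior):
        # A behavior is permitted iff it satisfies every constraint.
        return all(behavior[i] <= bound for i, bound in constraints)

    samples = [[random.random() for _ in range(DIM)]
               for _ in range(10_000)]
    frac = sum(allowed(b) for b in samples) / len(samples)
    print(f"1,000 constraints, yet {frac:.1%} of behaviors allowed")

Running this prints something like "99.0% of behaviors allowed": the
*number* of constraints tells you nothing; what matters is how much
of the behavior space the constraints jointly rule out.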

Of course, though, one can posit a large set of constraints that
DOES provide tight governance.

But the question then becomes whether this set of constraints can
simultaneously provide

a) the tightness of governance needed to guarantee Friendliness

b) the flexibility of governance needed to permit general, broad-based learning

You don't present any argument as to why this is going to be the case.

I just wonder whether, in the sort of architecture you describe, it
is really possible to guarantee Friendliness without hampering
creative learning.  Maybe it is possible, but you don't give an
argument on this point.

Actually, I suspect that it probably **is** possible to make a
reasonably benevolent AGI according to the sort of NN architecture you
suggest ... (as well as according to a bunch of other sorts of
architectures)

However, your whole argument seems to assume an AGI with a fixed level
of intelligence, rather than a constantly self-modifying and improving
AGI.  If an AGI is rapidly increasing its hardware infrastructure and
its intelligence, then I maintain that guaranteeing its Friendliness
is probably impossible ... and your argument gives no way of getting
around this.

In a radically self-improving AGI built according to your
architecture, the set of constraints would constantly be increasing
in number and complexity ... in a pattern based on stimuli from the
environment as well as internal stimuli ... and it seems to me you
have no way to guarantee, based on the smaller **initial** set of
constraints, that the eventual larger set of constraints is going to
preserve Friendliness or any other criterion.
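
A toy sketch of the worry (Python again; SAFE_MAX, the step size and
so forth are invented for illustration, not features of your
proposal): a single constraint starts out safely inside the
"Friendly" region, but each learning step nudges it in response to
stimuli, and nothing about the initial value prevents it from
eventually wandering outside:

    # Toy sketch: a constraint rewritten during self-modification can
    # drift.  "Friendly" here just means: permitted actions stay at
    # or below SAFE_MAX.
    import random

    SAFE_MAX = 0.5   # actions above this count as unFriendly
    bound = 0.4      # initial constraint: actions must stay <= bound

    random.seed(0)
    for step in range(100_000):
        # Each learning step nudges the constraint based on stimuli.
        bound = max(0.0, bound + random.uniform(-0.01, 0.01))
        if bound > SAFE_MAX:
            print(f"step {step}: bound = {bound:.3f}, "
                  "unFriendly actions now permitted")
            break
    else:
        print("bound stayed safe this run, but nothing guaranteed it")

The initial configuration was safe; the guarantee simply did not
survive the process that rewrites the constraints.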

-- Ben
