--- "Sergey A. Novitsky" <[EMAIL PROTECTED]>
wrote:

> Dear all,
> 
> Perhaps the questions below were already touched on numerous times
> in the past.
> 
> Could someone kindly point to discussion threads and/or articles
> where these concerns were addressed or discussed?
> 
>  
> 
> Kind regards,
> 
> Serge
> 
> 
> ----------------------------------------------------------------------------
> 
>  
> 
> *     If AI is going to be super-intelligent, it may be treated by
> governments as some sort of super-weapon.

Anyone intelligent enough to realize the power of AGI
is going to be intelligent enough to realize that this
power won't be on the scale of an ordinary human
"super-weapon". The danger is that someone who
realizes the power of AGI may talk about it with
someone else, who will then form his own conception of
AGI totally unrelated to what the first person is
trying to say. Thus, it is perhaps a good thing that
politicians do not listen to us.

> *     As has already happened with nuclear weapons, there may be
> treaties constraining AI development.

Which will be promptly ignored. Even if such a treaty
did come into being, there's no way it could ever be
enforced.  The primary requirement for designing AGI
is a large group of intelligent researchers, not
political power or piles of money, and so getting all
the groups with political power and piles of money to
stop development won't eliminate more than a small
fraction of the possible AGI-builders.

> *     As may be the case with nanotechnology, AI may be used in
> reconnaissance or for complex conflict simulations, so it becomes
> the number one target in a potential war, especially if it's tied
> to one location and if this location is known.

Nanotechnology research projects are easy to locate
and destroy. Nanotechnology itself is not, because
once you build it, you can carry it around in your
coat pocket.

> Besides, it becomes the number one target for possible terrorist
> activities.

Even if terrorists did break into a nanotech facility,
they couldn't do much. Any hostile use of nanotech by
terrorists would require widely available,
user-friendly nanotech, as terrorists are not widely
renowned for their technical skill.

> *     Because of the reasons above, governments and corporations
> may soon start heavy investments in AI research,

Non sequitur. Even if AGI actually has a huge impact,
that doesn't mean anyone has to realize it. The
airplane was a war-winning instrument only twelve
years after its design, yet was the government pouring
billions into airplane research in 1900? Why do you
think the Wright Brothers had day jobs as bicycle
mechanics?

> and as a result of this, the rules of ethics and friendliness may
> get tweaked to suit the purposes of those governments or big
> companies.

If the government manages to develop an AGI that won't
turn the planet into computronium, that's probably 90%
of the work right there. Figuring out what to do with
a stable AGI once you have one is the easy part.

> *     If AI makes an invention (e.g. a new medicine), the invention
> will automatically become the property of the investing party
> (government or corporation), get patented, etc.

Er, so?

> *     If AI is supposed to acquire free will, it may become (unless
> corrupted) the enemy number one of certain governments and/or big
> companies (if there is a super-cheap cure for cancer, AIDS, or
> whatever, it means big profit losses to some players).

The AI does not think like you do. It does not
automatically import all these nice, neat human
concepts of "corruption", "free will", and "big
company" into its head. As far as
99.99999999999999999999999999999% of AIs are
concerned, humans are indistinguishable from oddly
shaped blocks of CNHO. The AI is not a new human being
that has to join up with one human side or another;
the AI is on its own side, in the same manner as a
force of nature such as a volcano.

> *     If a super-intelligent AI is going to increase transparency
> and interconnectedness in the world, it may also not be in the
> interests of some powers whose very survival depends on secrecy.

Human power *does not matter* once AGI is created.
Political and corporate bosses will almost certainly
have *no clue whatsoever* what AGI is or how it will
change the world unless someone they trust gives them
a good, simple explanation they can understand.

> 
> *     Based on the ideas above, it seems probable that if some sort
> of super-intelligent AI is created, it will be:
> 
> *     Created with large investments from companies/governments.

Companies/governments are run by bureaucrats, and
bureaucrats must make safe decisions that can be
justified. AGI can't be justified in a simple,
obvious, convincing manner until after it is already
built.

> *     Tailored to suit specific purposes of its creators.

The creators will *try* to tailor it for a purpose;
this doesn't mean they will actually succeed, or come
anywhere within a parsec of success.

> *     Be subject to all sorts of attacks.

By whom? If the likelihood of one government realizing
the power of AGI is low, the likelihood of two
realizing it is low^2. Even if a government does start
a big research project, other governments will
probably dismiss it as quackery, like the CIA's
experiments with mind control.

> *     Be deprived of free will or be given limited free will (if
> such a concept is applicable to AI).

An excellent analogy to a superintelligent AGI is a
really good chess-playing computer program. The
computer program doesn't realize you're there, it
doesn't know you're human, it doesn't even know what
the heck a human is, and it would gladly pump you full
of gamma radiation if it made you a worse player.
Nevertheless, it is still intelligent, more so than
you are: it can foresee everything you try to do, and
can invent new strategies and use them to come out of
nowhere and beat you by surprise. Trying to deprive a
superintelligent AI of free will is as absurd as Garry
Kasparov trying to deny Deep Blue free will within the
context of the gameboard.


 - Tom


 

