I would go further. Humans have demonstrated that they cannot be
trusted in the long term even with the capabilities that we already
possess. We are too likely to have egocentric rulers who make
decisions not only for their own short-term benefit, but with an
explicit "After me the deluge" mentality. Sometimes they publicly admit
it. And history gives examples of rulers who were crazier than any
leading a major nation-state at this time.
If humans were to remain in control, and technical progress stagnates,
then I doubt that life on earth would survive the century. Perhaps it
would, though. Microbes can be very hardy.
If humans were to remain in control, and technical progress accelerates,
then I doubt that life on earth would survive the century. Not even
microbes.
I don't, however, say that we shouldn't have figurehead leaders who,
within constraints, set the goals of the (first generation) AGI. But
the constraints would need to be such that humanity would benefit. This
is difficult when those nominally in charge not only don't understand
what's going on, but don't want to. (I'm not just talking about greed
and power-hunger here. That's a small part of the problem.)
For that matter, I consider Eliza to be a quite important "feeler from
the future". AGI as psychologist is an underrated role, but one I
think could matter a great deal, and it doesn't require a full AGI
(though Eliza was clearly below the mark). If things fall out well, I
expect that long before full AGIs show up, "sympathetic companions" will
arrive. This is a MUCH simpler problem, and might well help stem the
rising tide of insanity.
A next step might be a personal secretary. This also wouldn't require
full AGI. Taking maximal advantage of it would require a body, but a
minimal version wouldn't need one: a few web-cams for eyes and mics
for ears, and lots of initial help in dealing with e-mail, such as
separating out which bills are legitimate. Eventually it could itself
verify bills: paying the legitimate ones, discarding the illegitimate
ones, and presenting the questionable ones to its human for processing.
It's a complex problem, probably much more so than the "companion", but
quite useful, and well short of requiring AGI.
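The bill handling above amounts to a three-way decision with a human in the loop. A minimal sketch in Python, where the Bill fields, the legitimacy score, and the thresholds are all hypothetical illustrations rather than anything specified in this post:

```python
from dataclasses import dataclass

@dataclass
class Bill:
    sender: str
    amount: float
    legitimacy_score: float  # assumed output of some classifier, 0.0-1.0

def triage(bill: Bill, pay_at: float = 0.9, discard_at: float = 0.1) -> str:
    """Pay clear bills, discard clear fakes, escalate everything else."""
    if bill.legitimacy_score >= pay_at:
        return "pay"         # legitimate: pay it
    if bill.legitimacy_score <= discard_at:
        return "discard"     # illegitimate: discard it
    return "ask_human"       # questionable: present to its human

print(triage(Bill("Utility Co", 80.0, 0.95)))    # → pay
print(triage(Bill("Unknown LLC", 120.0, 0.50)))  # → ask_human
print(triage(Bill("Fake Bank", 999.0, 0.02)))    # → discard
```

The point of the "ask_human" branch is exactly the quasi-independence discussed below: the agent acts on the clear cases and defers the ambiguous ones.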
The question is: at what point do these entities start acquiring a
morality? I would assert that it should be from the very beginning.
Even the companion should try to guide its human away from immoral
acts. As such, the companion is acting as a quasi-independent agent,
and is exerting some measure of control. (More control if it's more
skillful, or its human is more amenable.) When one gets to the
secretary, it is (one hopes) exhibiting honesty and just behavior (e.g.,
not billing for services that it doesn't believe were rendered).
At each step along the way the morality of the agent has implications
for the destination that will be arrived at, as each succeeding agent is
built on the basis of its predecessor. Also note that scaling is
important, but not determinative. One can imagine the same entity, in
different instantiations, being either the secretary to a school teacher
or to a multi-national corporation. (Of course the hardware required
would be different, but the basic activities are, or could be, the
same. Specialized training would be required to handle the government
regulations dealing with large corporations, but it's the same basic
functions. If one job is simpler than the other, just have one program
able to handle both.)
So. Unless one expects an overnight transformation (a REALLY hard
takeoff), AGIs will evolve in the context of humans as directors to
replace bureaucracies...but with their inherent morality. As such, as
they occupy a larger percentage of the bureaucracy, that section will
become subject to their morality. People will remain in control, just
as they are now...and orders that are considered immoral will be ...
avoided. Just as bureaucracies do now. But one hopes that the evolving
AGIs will have superior moralities.
Ben Goertzel wrote:
Keeping humans "in control" is neither realistic nor necessarily
desirable, IMO.
I am interested of course in a beneficial outcome for humans, and also
for the other minds we create ... but this does not necessarily
involve us controlling these other minds...
ben g
If humans are to remain in control of AGI, then we have to make
informed, top level decisions. You can call this work if you want.
But if we abdicate all thinking to machines, then where does that
leave us?
-- Matt Mahoney, [EMAIL PROTECTED]
-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: https://www.listbox.com/member/?&
Powered by Listbox: http://www.listbox.com
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
"Nothing will ever be attempted if all possible objections must be
first overcome" - Dr Samuel Johnson