Jiri,

The point that you apparently missed is that substantially all problems fall
cleanly into two categories:

1.  The solution is known (somewhere in the world and hopefully to the AGI),
in which case, as far as the user is concerned, this is an issue of
ignorance that is best cured by educating the user, or

2.  The solution is NOT known, whereupon research, not action, is needed to
understand the world before acting upon it. New research into reality
incognita will probably take a LONG time, so action is really no issue at
all. Of course, once the research has been completed, the problem reduces to
#1 above.

Hence, where an AGI *acting* badly is a potential issue (see #1 above), the
REAL issue is ignorance on the part of the user. Were you actually proposing
that AGIs act while leaving their users in ignorance?! I think not, since
you discussed "supervised" systems. While (as you pointed out) AGIs doing
things other than educating may be technologically possible, I fail to see
any value in such solutions, except possibly in fast-reacting systems, e.g.,
military fire control systems.

Dr. Eliza is built on the assumption that all of the "problems" that
are made up of known parts can best be solved through education. So far, I
have failed to find a counterexample. Do you know of one?

Some of these issues are explored in the last two books of the Colossus
trilogy, which ends with Colossus stopping an attack on an alien invader, to
the consternation of the humans in attendance. This, of course, was an
illustration of the military fire control issue.

Am I missing something here?

Steve Richfield
=================
On 6/12/08, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>
> On Thu, Jun 12, 2008 at 3:36 AM, Steve Richfield
> <[EMAIL PROTECTED]> wrote:
> > ... and here we have the makings of AGI run amok...
> > My point... it is usually possible to make EVERYONE happy with the
> > results, but only with a process that roots out the commonly held invalid
> > assumptions. Like Gort (the very first movie AGI?) in The Day The Earth
> > Stood Still, the goal is peace, but NOT through any particular set of
> > detailed goals.
>
> I think it's important to distinguish between supervised and
> unsupervised AGIs. For the supervised, top-level goals as well as the
> sub-goal restrictions can be volatile - basically whatever the guy in
> charge wants ATM (not necessarily trying to make EVERYONE happy). In
> that case, the AGI should IMO just attempt to find the simplest solution
> to a given problem while following the given rules, without exercising
> its own sense of morality (assuming it even has one). The guy
> (/subject) in charge is the god who should use his own sense of
> good/bad/safe/unsafe, produce the rules to follow during the AGI's
> solution search, and judge/approve/reject the solution, so he is the one
> who bears responsibility for the outcome. He also maintains the rules
> for what the AGI can/cannot do for "lower-level" users (if any). Such
> AGIs will IMO be around for a while. *Much* later, we might go for
> human-unsupervised AGIs. I suspect that at that time (if it ever
> happens), people's goals/needs/desires will be a lot more
> unified/compatible (so putting together some grand schema for
> goals/rules/morality will be more straightforward) and the AGIs (as
> well as their multi-layer and probably highly-redundant security
> controls) will be extremely well tested = highly unlikely to "run
> amok" and probably much safer than the previous human-factor-plagued
> problem-solving hybrid solutions. People are more interested in
> pleasure than in messing with terribly complicated problems.
>
> Regards,
> Jiri Jelinek
> *** Problems for AIs, work for robots, feelings for us. ***