Hello,

On Mon, May 21, 2007 5:48 am, Matt Mahoney wrote:
> I wonder if we will figure out how to program a computer to wonder?  And if
> we do, should we?
[....]
> First, there is no need to duplicate human weaknesses.  A replica of a human
> brain would perform worse at simple arithmetic problems than your calculator.
IMHO, if you want a true AGI, you need to give it the chance to judge your
question, interpret it, and possibly do anything with it, even something you
didn't expect. When you tell it "this sentence is false" you expect your
AGI to do something other than:
a) return true
b) return false
c) loop forever
Truly intelligent behavior is to realize that "there's something special
about this!". This might raise a "smile" exception, a "do you have other
questions like this, I'm interested, I want to know more" interruption,
whatever.
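
To make it concrete, here is a tiny Python sketch of what I mean (toy names,
nobody's actual architecture): rather than grinding on the statement forever,
the system notices it is going in circles and converts that into a signal of
curiosity.

    class CuriosityInterrupt(Exception):
        """Raised when input looks "special" and deserves attention, not evaluation."""
        pass

    def evaluate(statement, depth=0, max_depth=100):
        # Toy truth-evaluator; a real system would parse the statement properly.
        if depth > max_depth:
            # Instead of looping forever on self-reference, bail out with interest.
            raise CuriosityInterrupt("there's something special about this!")
        if statement == "this sentence is false":
            # Naive evaluation recurses on itself and hits the depth guard above.
            return not evaluate(statement, depth + 1, max_depth)
        return True  # placeholder verdict for ordinary statements

    try:
        evaluate("this sentence is false")
    except CuriosityInterrupt as why:
        print("smile:", why, "- do you have other questions like this?")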

Thinking of your calculator objection, I'd say that the advantage of a
computerized AGI, even one that mimics the human brain on a large scale, is
that it could probably use a good old calculator much more elegantly and
efficiently than we do. This AGI would acknowledge after some time that using
its AGI-ish brain for arithmetic is stupid, and that it should use a plain
old calculator instead. The point is that it seems easier to wire a
one-million-PC network to one more PC for the sole purpose of calculation
than to wire ourselves to a computer. You could imagine that extra PC for
calculation as part of the body, not the brain. An AGI would delegate some
tasks, the same way we delegate tasks to various tools (hammer, car,
computer...).
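
As a rough illustration (a sketch only, with made-up names), the delegation
could be as dumb as routing anything that looks like plain arithmetic to an
exact calculator process, and keeping the expensive "thinking" for everything
else:

    import ast
    import operator

    # The "extra PC": an exact, boring calculator for +, -, *, / on numbers.
    OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
           ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calculator(expression):
        """Evaluate a plain arithmetic expression exactly, no 'thinking' involved."""
        def walk(node):
            if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                return node.value
            raise ValueError("not plain arithmetic")
        return walk(ast.parse(expression, mode="eval").body)

    def agi_answer(question):
        """The 'brain': delegates arithmetic to the calculator, keeps the rest."""
        try:
            return calculator(question)          # tool use: body, not brain
        except (ValueError, SyntaxError):
            return "let me actually think about that..."  # fall back to slow reasoning

    print(agi_answer("12345 * 6789"))        # exact, instant, boring
    print(agi_answer("why is the sky blue?"))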

But, of course, in this model there's always a slight chance that the AGI
does something "wrong". Perfection and intelligence are mutually exclusive
(my opinion).

> Second, do you really want a machine with human emotions?  We want machines
> that obey our commands.  But this is controversial.  Should a machine obey a
> command to destroy itself or harm others?  Do you want a gun that fires when
> you squeeze the trigger, or a gun that makes moral judgments and refuses to
> fire when aimed at another person?
Probably you don't want an intelligent gun, just as most of the time you
don't want an intelligent soldier. The problem with soldiers is that they
are intelligent, so some of them make decisions. Usually, when a soldier
makes a decision on his own, it's bad news. You want him to obey, not to
play the philosopher and weigh the pros and cons of war.

I suspect that to push intelligence beyond the "expert system" limit - that
is, to create an AGI capable of solving, or at least thinking about and
imagining solutions for, *any* problem - you need to allow this AGI to
refuse your own conception of the field you want its intelligence to
explore. Otherwise you build "yet another expert system", which is something
pretty useful, and which may well help us build the real AGI, but it's not
the real AGI.

Have a nice day,

Christian.

-- 
Christian Mauduit <[EMAIL PROTECTED]>     __/\__ ___
                                        \~/ ~/(`_ \   ___
http://www.ufoot.org/                   /_o _\   \ \_/ _ \_
http://www.ufoot.org/ufoot.pub (GnuPG)    \/      \___/ \__)
