Joshua Fox wrote:
> Abram,
>
> Let's say that the builders want to keep things safe and simple for
> starters, and concentrate on the best possible AGI theorem-prover,
> rather than some complex do-gooding machine.
>
> The best way for the machine to achieve its assigned goal is to
> improve not only its own software but also its hardware. So, by hook
> or by crook, with trickiness and wile (remember, this is an Artificial
> _General_ Intelligence, not just a glorified Deep Blue; if necessary,
> it improves its own wiliness), it converts Planet Earth into silicon
> chips (or rather, into whatever better-than-silicon hardware it
> invents along the way; call it "computronium").
>
> Of course, the AGI builder would put in safeguards to keep this from
> happening, but when you start trying to figure out what safeguards
> would work on something that is _smarter_than_you_, you find yourself
> deep in full-fledged Friendliness research before you know it.
>
> (The above is just my modest effort to summarize Yudkowsky's writings,
> which express all this better than I do.)
Joshua,
I am ... speechless.
This is a characterization of the structure of an AI so divorced from
reality that I hardly know where to begin addressing its problems. It
is a grotesque parody of the way an intelligent system might work.
Can you explain how an AGI driven by a goal stack, in which goals are
represented as statements in a high-level language, could interpret
those high-level statements while it is still in the process of
learning about the world and does not yet have the high-level concepts
needed to interpret them in a meaningful way? And if a system tried to
act on such high-level statements without the ability to fully
understand what they mean, could you explain to me how that system's
behavior would constitute "intelligence"? Could you demonstrate that
the behavior would converge on something intelligent at some later
time?
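
(To make the problem concrete: below is a minimal toy sketch, in
Python, of the kind of architecture the scenario assumes. Every name
and structure in it is my own hypothetical illustration, not anyone's
actual design. The point it captures is that a goal stored as a
high-level statement is nothing but an opaque string until the system
has acquired the concepts needed to interpret it.)

class GoalStackAgent:
    def __init__(self, top_goal):
        self.goal_stack = [top_goal]  # goals held as raw high-level statements
        self.concepts = {}            # learned concepts; empty at startup

    def interpret(self, statement):
        """Try to ground a goal statement in already-learned concepts."""
        missing = [w for w in statement.split() if w not in self.concepts]
        if missing:
            # The crux: until these concepts have been learned, the
            # "goal" is an opaque token string, not a thing to pursue.
            return None, missing
        return "grounded", []

agent = GoalStackAgent("prove interesting mathematical theorems")
meaning, missing = agent.interpret(agent.goal_stack[-1])
print(meaning, missing)   # None, with every word an ungrounded token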
Could you please explain how a machine that proves theorems would come
to have some kind of sensorimotor connection to the real world: a
connection that would allow it to build things? A connection that
would allow it to sense that there is anything in the real world at
all?
Could you explain how a system would acquire "trickiness and wile"
when it is motivated by a goal stack so primitive that it could not
understand anything except proving theorems?
Could you explain the difference between a goal-stack motivation
system and other kinds of motivation systems, well enough to make it
clear that anything in the above scenario is even feasible?
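
(Again, purely as a hypothetical sketch of the contrast I have in
mind: an explicit stack of declarative goals on one side, and on the
other a motivation system made of many weighted drives, where no
single statement "is" the goal at all. The names and numbers below are
invented for illustration only.)

def goal_stack_step(stack):
    # Behavior is dictated by whatever statement sits on top of the stack.
    return "pursue: " + stack[-1]

def drive_based_step(drives, actions):
    # Each candidate action is scored against every weighted drive at
    # once; no drive is a statement that must first be interpreted.
    def score(effects):
        return sum(drives.get(d, 0.0) * v for d, v in effects.items())
    return max(actions, key=lambda a: score(actions[a]))

print(goal_stack_step(["prove theorems"]))
print(drive_based_step(
    {"curiosity": 0.7, "social": 0.3},
    {"explore a proof": {"curiosity": 1.0},
     "ask for help":    {"curiosity": 0.2, "social": 1.0}}))

Note that in the second sketch there is not even a slot for a
statement like "convert the Earth to computronium" to occupy.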
That said, your statement does probably "summarize Yudkowsky's writings"
quite well. But why are you even trying to summarize the writings of a
raving narcissist who does not have any qualifications in the AI field?
Someone who explodes into titanic outbursts of uncontrollable,
embarrassing rage when someone with real knowledge of this area dares to
disagree with him?
Richard Loosemore.