On 9/30/07, Kaj Sotala <[EMAIL PROTECTED]> wrote:

Quoting Eliezer:

> ... Evolutionary programming (EP) is stochastic, and does not
> precisely preserve the optimization target in the generated code; EP
> gives you code that does what you ask, most of the time, under the
> tested circumstances, but the code may also do something else on the
> side. EP is a powerful, still maturing technique

Yes...
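(For readers unfamiliar with the failure mode being described: the toy below is a minimal sketch, a stochastic hill climb over cubic coefficients, illustrative only and not any particular EP system. Fitness is scored solely on the tested inputs, so the winning candidate satisfies the tests while remaining free to behave arbitrarily elsewhere.)

```python
import random

random.seed(0)

# What we *ask* for: match f(x) = x*x, but only on the tested inputs.
TESTED = [0, 1, 2]
target = lambda x: x * x

def behavior(coeffs, x):
    # Candidate "program": a cubic a + b*x + c*x^2 + d*x^3.
    a, b, c, d = coeffs
    return a + b * x + c * x * x + d * x ** 3

def fitness(coeffs):
    # Fitness is measured ONLY under the tested circumstances.
    return -sum((behavior(coeffs, x) - target(x)) ** 2 for x in TESTED)

# Simple stochastic hill climb: mutate, keep the child if it scores better.
best = [random.uniform(-1, 1) for _ in range(4)]
for _ in range(5000):
    child = [g + random.gauss(0, 0.1) for g in best]
    if fitness(child) > fitness(best):
        best = child

print([round(behavior(best, x), 2) for x in TESTED])  # should be close to [0, 1, 4]
print(round(behavior(best, 10), 2))  # untested input: typically far from 100
```

Three constraints on four coefficients leave a whole line of cubics that pass the tests, and the search is indifferent among them; the surviving behavior at untested inputs is "something else on the side."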

> that is intrinsically unsuited to the demands of Friendly AI.

... as long as one persists in framing the problem of (capital-F)
Friendly machine intelligence in terms of an effective infinity of
hypothetically unbounded recursive self improvement.

More realistically, uncertainty (and meta-uncertainty) is intrinsic to
subjective agency and growth, and essential to the dynamics of any
system of value.

We co-exist in an inherently dangerous world, and we will do well to
invest in (lower-case) friendly machine intelligence to assist us with
this phase of the Red Queen's Race rather than staying in a room of
our own construction, trying to sketch a vision of what amounts to a
finish line on the walls.

> Friendly AI, as I have proposed it, requires repeated cycles of recursive
> self-improvement that precisely preserve a stable optimization target.

While the statement above uses technical terms, it is not a technical
problem statement in the very sense Eliezer himself has criticized --
it lacks a coherent referent in a game where not only the players,
but the game itself, is evolving.

Vitally lacking, in my opinion, is informed consideration of the
critical role of constraints in any system of growth, and the limits
of **effective** intelligence starved for relevant sources of novelty
in the environment of adaptation. Lacking meaningful constraints on
its trajectory, an AI, however vast its computational capacity, will
cease to gain relevance as it explores the far vaster space of
possibility.

Notwithstanding the above, I am pleased that SIAI, and not only
Eliezer, is making some progress in raising the level of thinking
about the very significant and unprecedented risks of self-improving
machine intelligence.

- Jef

-----
This list is sponsored by AGIRI: http://www.agiri.org/email