Kaj: I'm not sure if I'd agree with a sentiment saying that it is
always /impossible/ to control an agent's interpretations. Obviously,
if you merely create a reasoning system and then use natural language
to feed it goals you're bound to be screwed. But I wonder if one
couldn't take the very networks of the agent that /create/ the
interpretations and tune them so that they work in the intended
fashion.
If you want your agent to have any conceptual system like language - one that
is adaptive and adapted to the real world - then yes, it is impossible - and
you don't WANT it to be possible.
I started to explain this in my reply to Vladimir. I'm not sure whether
anyone has offered this explanation before, so I'd welcome comments.
A real-world conceptual system (like language) has to be dynamic and
evolving in its meanings and senses, in order to a) capture a dynamic,
evolving world and b) order our dynamic, evolving knowledge of that
world.
Concepts exist primarily to capture:
1) individuals and groups in the real world, from inanimate objects to living
organisms;
2) the movements and behaviour of those individuals and groups.
Those individuals and groups and their behaviour are liable to keep changing,
and even if they don't, the more I know of them, the more my generalisations
about them are liable to keep changing.
"Kaj Sotala" like every other human being keeps changing in physique,
personality and many other respects. So do "Russians" and "Americans" and
"houses" and "computers" and every artefact and machine. So does "the
weather" ... and you get the idea.
So does every kind of behaviour ... "sex," "communication."
And, of course, even our knowledge of stable entities keeps changing - I and
everyone else may think Kaj is a bastard, and then we learn he contributes
billions to charity... and so on.
Conceptual systems like language have in fact evolved to be open-ended, not
closed-ended, in meaning and reference. Both AI researchers and linguistic
purists who want their meanings to be precise, and who complain when meanings
change, are, intellectually, not living in the real world.
What you are expressing in the above, what Vladimir was expressing - what
everyone concerned in any way with AGI is experiencing - is what you could
call the "AGI identity crisis."
You still want a machine that can be controlled, however subtly, and that is
basically predictable. Classical AI is about machines that produce
controlled, predictable results. (Even if the computations are so complex
that their human minders have little or no idea how they will turn out, they
can still be described as controlled and basically predictable.)
The main point of an AGI machine is that it is going to be fundamentally
surprising and unpredictable. What we really want, practically, is a machine
that can - like an intelligent human being - be given a general
instruction - "order my office," say - and come up with a new, surprising
interpretation - a new filing system - that is as good as, or better than,
any we have thought of. That kind of adaptivity depends on having a
conceptual system which is open-ended, in which "order," for example, can
continually be interpreted in new ways.
Higher adaptivity - the essential requirement for AGI - is by definition
surprising and unpredictable.
The inevitable price of that adaptivity is that the machine will be able to
interpret concepts in new ways that you don't like, and turn against you -
just as every human employee can turn against you. That doesn't stop us
employing humans - the positive potentials outweigh the negative ones. (Can
you think of a guaranteed way of controlling human employees' behaviour?)
The "AGI identity crisis" is that everyone currently in AGI - AFAIK,
including Ben, Pei, Marvin et al. - is still caught between the
"psychological totalitarianism" of classical AI, with its need for total
control, and the "psychological freedom & democracy" that is necessary for
true, successful AGI - and that includes an acceptance of the open-ended
nature of language and conceptual systems.
-----
This list is sponsored by AGIRI: http://www.agiri.org/email