On 7/12/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
CW: A problem is that I do not quite grasp the concept of "general
conceptual goals".

Yes, this is interesting, because I suddenly wondered whether any AI systems
currently can be said to have goals in the true sense.

Even though I said this was a problem, what I stated in my previous
posts applies to arbitrary agents.


Goals are general - examples are: "eat/ get food", "drink," "sleep,"
"kill," "build a shelter."

A goal can be variously and open-endedly instantiated/particularised - so
an agent can only have a goal of "food" if it is capable of eating various
kinds of food. A car or any machine that can use only one kind of fuel can't
have such a goal. Likewise, an agent has general goals only if it can drink,
sleep, kill, build a shelter, etc. in various ways.


To be precise, you may need to state that such goals contain one state
and one state only, since one may for instance consume fuel with varying
parameters (minor changes in density, temperature, position). Otherwise
you are probably drawing an artificial distinction on a purely intuitive
basis. I agree that having exactly one possible state, as opposed to one
or more, can have somewhat important implications. Still, one cannot take
two states and say "they are really just x and should be counted as one"
in relations where general goals are used, for instance by declaring
certain parameters external/internal/core, or by treating one concept (a
state-set in this case) as common while another is dismissed as made up.
One may choose to discuss on one basis or the other, but the other view
still exists and is equivalent. I am merely noting this to give an idea
of what I am trying to say in the larger discussion.
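
To make the state-set point concrete, here is a rough Python sketch of a
goal modelled simply as a set of acceptable states; the framing, names,
and example states are my own illustration, not anything established in
this thread.

    # Toy sketch (my own framing): a goal as a set of acceptable states.
    # The distinction under discussion is between a goal satisfied by
    # exactly one state and a goal satisfied by one or more states.
    from typing import FrozenSet

    State = str  # placeholder; a state could be any hashable description

    def is_satisfied(goal: FrozenSet[State], state: State) -> bool:
        """A goal is satisfied by any state in its acceptable set."""
        return state in goal

    # A "single-state" goal: only one exact fuelling state counts.
    single_state_goal = frozenset({"fuel(density=0.75, temp=20C)"})

    # A "general" goal: many distinct instantiations count as success.
    general_goal = frozenset({
        "fuel(density=0.75, temp=20C)",
        "fuel(density=0.74, temp=25C)",
        "eat(apple)",
        "eat(bread)",
    })

    print(is_satisfied(general_goal, "eat(bread)"))       # True
    print(is_satisfied(single_state_goal, "eat(bread)"))  # False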

All animals have general goals. The open-ended generality of their goals is
crucial to their being adaptive and able to find new and different ways of
satisfying those goals - and to developing altogether new goals: finding new
kinds of food and drink, new ways of resting, attacking or killing, and
developing new habitats - both reactively, when current paths are blocked,
and proactively, by way of curious exploration.

Once more, all behaviour can be described by a policy or a norm, and if
an agent acquires a new goal, that goal must have been at least implicitly
contained in the policy or norm. I think it is a bit easier if one focuses
on the largest goal (i.e. the goal involving all present, past, and
future goals, i.e. a history-dependent goal).
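
For concreteness, here is a minimal Python sketch of what I mean by
behaviour described by a policy over histories, and by the "largest",
history-dependent goal; the types and the toy policy are illustrative
assumptions of mine, nothing more.

    # Sketch only: one way to make "behaviour described by a policy over
    # histories" concrete. Names and types are my own illustration.
    from typing import Callable, List, Tuple

    Observation = str
    Action = str
    History = List[Tuple[Observation, Action]]  # all seen and done so far

    # A deterministic policy maps the history so far plus the current
    # observation to the next action. Any goal the agent will ever
    # "acquire" is already implicit in this mapping.
    Policy = Callable[[History, Observation], Action]

    def example_policy(history: History, obs: Observation) -> Action:
        # History-dependent behaviour: once it has eaten, it explores.
        if any(act == "eat" for _, act in history):
            return "explore"
        return "eat" if obs == "food" else "search"

    # The "largest goal" can then be phrased as a predicate over whole
    # histories rather than over single states.
    def largest_goal(history: History) -> bool:
        return any(act == "eat" for _, act in history)

    print(example_policy([], "food"))              # eat
    print(example_policy([("food", "eat")], "x"))  # explore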


What we are interested in here, surely, is the development of an AGI that
has "General" intelligence and can pursue general goals and be adaptive.

I'm saying that if you want an agent with general goals and adaptivity, then
you won't be able to control it deterministically - as you can with AI
programs.


Indeterminism and history-dependence are two different things. A
policy or norm may be stationary in that it is defined over all time
(all possible histories). I have assumed that the problem in question
is well-posed, or rather, possible to pose well in some system, which
would then be a counter-example to our original hypothesis (h1): that
there is no agent with general conceptual goals for which one can
assure that the goals are always interpreted in the way they were
intended. If we are instead dealing with a problem where we assume
that, upon the design of the agents, there is ill-formedness in the
languages/interpretations in a way not yet specified, then it is a
different question. Certainly we do not know the parameters of natural
language, but that does not mean we can reach fundamental theoretical
results such as h1 for all languages and interpretations. For instance,
we may design an agent using a well-defined language in a well-defined
environment for "general" goals, such as the previously mentioned
dark/light robot. A problem with examples is the loaded term
"generality", which you may keep altering or specifying until we reach
very specific cases.
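
As a toy illustration only: the dark/light robot was described earlier in
the thread, so the details below are my guessed minimal version rather
than the original. It shows a stationary policy (defined for every
possible input, hence for all histories) pursuing a "general" goal in a
fully specified environment.

    # Hypothetical dark/light robot; details are my own assumptions.
    from typing import List

    def light_seeking_policy(light_levels: List[float]) -> int:
        """Stationary policy: given the light reading in each direction,
        move toward the brightest one. It is defined for every possible
        input, so its behaviour over all histories is fixed at design
        time."""
        return max(range(len(light_levels)), key=lambda i: light_levels[i])

    # The goal "be in the light" is general in the sense that many
    # distinct world states (any sufficiently bright cell) satisfy it.
    def goal_satisfied(light_at_robot: float, threshold: float = 0.8) -> bool:
        return light_at_robot >= threshold

    print(light_seeking_policy([0.1, 0.9, 0.4]))  # 1
    print(goal_satisfied(0.95))                   # True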

You will be able to constrain it heavily, but it will (and must if
it is to survive) have the capacity to break or amend any rules you may give
it, and also develop altogether new ones that may not be to your liking -
just like all animals and human beings.

You need to be more precise and strict. What do you call rules here?

2."An AI programmed only to help humanity will only help humanity."  Really?
George Bush along with every other leader is programmed to "help humanity."
---
Our definitions and connotations of "human" have to keep changing - to deal
with new social and cultural realities like these strange transhumanists and
genetic engineering and cloning etc.

Once the definition has been given and the interpretation designed, the
possible meanings over all histories, and their probabilities, have been
decided. If we state that the agent fulfills certain properties [over all
histories], then the agent's policy must fit this, or we are lying. It
is a simple logical contradiction.
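
As a small sketch of this contradiction point, assuming a toy finite
environment of my own invention: if we assert a property over all
histories, we can enumerate the histories and check the policy against
the assertion. Keeping the assertion while the check finds violations is
exactly the contradiction.

    # Toy check of a claimed property over all histories of a small,
    # finite environment. Everything here is illustrative only.
    from itertools import product
    from typing import Tuple

    Observation = str
    Action = str

    def policy(history: Tuple[Observation, ...], obs: Observation) -> Action:
        return "help" if obs == "human" else "wait"

    def property_holds(act: Action) -> bool:
        # Claimed property: the agent never takes the action "harm".
        return act != "harm"

    observations = ["human", "robot"]
    horizon = 3

    violations = []
    for hist in product(observations, repeat=horizon):
        for t in range(horizon):
            act = policy(hist[:t], hist[t])
            if not property_holds(act):
                violations.append((hist[:t + 1], act))

    # Asserting the property "over all histories" while violations is
    # non-empty would be a plain logical contradiction.
    print("property holds over all histories:", not violations)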
