CW: A problem is that I do not quite grasp the concept of "general
conceptual goals".

Yes, this is interesting, because I suddenly wondered whether any AI systems currently can be said to have goals in the true sense.

Goals are general - examples are: "eat/get food," "drink," "sleep," "kill," "build a shelter."

A goal can be variously and open-endedly instantiated/particularised - so an agent can only have a goal of "food" if it is capable of eating various kinds of food. A car, or any machine that can use only one kind of fuel, cannot have such a goal. Likewise, an agent has general goals only if it can drink, sleep, kill, build a shelter, etc. in various ways.

All animals have general goals. The open-ended generality of their goals is crucial to their being adaptive: able to find new and different ways of satisfying those goals, and to develop altogether new goals - new kinds of food and drink, new ways of resting, attacking, or killing, and new habitats - both reactively, when current paths are blocked, and proactively, through curious exploration.

What we are surely interested in here is the development of an AGI that has "general" intelligence and can pursue general goals and be adaptive.

I'm saying that if you want an agent with general goals and adaptivity, then you won't be able to control it deterministically, as you can with current AI programs. You will be able to constrain it heavily, but it will - and must, if it is to survive - have the capacity to break or amend any rules you give it, and also to develop altogether new ones that may not be to your liking, just like all animals and human beings.

If you disagree, and think deterministically controllable general goals are possible, you must give an example.

-----
This list is sponsored by AGIRI: http://www.agiri.org/email