Mike Tintner wrote:
Charles,

We're still a few million miles apart :). But perhaps we can focus on something constructive here. While, yes, I'm talking about extremely sophisticated behaviour in essay writing, it has generalizable features that characterise all life. (And I think, BTW, that a dog is still extremely sophisticated in its motivations and behaviour - your idea there strikes me as evolutionarily naive.)
Were I to try to model the complete goal structure of a dog, then I would agree with you. That wasn't my intent. I meant more like "respond to the interactor with an apparent emotional value rather like that of a family's dog to a member of the family". And I'm not thinking of a dog that believes it ought to be the pack alpha. (Sorry for that bit of "mind reading", but as one doesn't exactly know what a dog is thinking I don't see any alternative.)

Clearly even a primitive AI (even narrow AI) will have more shared linguistic behavior than a human and a dog do. But the human and the dog share a lot of body language. This will need to be emulated with reasonable substitutes. Also clearly the AI and the dog will fill different niches (even the Aibo). As such, identical reactions wouldn't be useful.

Even if a student has an extremely dictatorial instructor, following his instructions slavishly will be, when you analyse it, a highly problematic, open-ended affair, and no slavish matter - i.e. how is he to apply some general, say, deconstructionist criticism instructions and principles and translate them into a very complex essay?
The problem isn't that he was dictatorial; that implies a kind of clarity that was lacking. And I'm not talking about just one instructor. It was common in the classes normally called the Humanities.

In fact, it immediately strikes me that such essay writing - and all essay writing, and most human and animal activities - will be a matter of hierarchical goals: off the cuff, something very crudely like "write an essay on Hamlet" - "decide general approach" - "use deconstructionist approach" - "find contradictory values in Hamlet to deconstruct" - etc.
I think that hierarchy is an oversimplification. If the instructor were willing to accept any good work that met the provided specification, then I might not consider that it would require a super-human AGI to accomplish. This, however, was not my experience. One was expected to determine the instructor's implicit desires. As such, if one is not human, then it is a task that a merely human-level AGI (without the human specialized modules) could not perform. Even humans experience indifferent success rates.

But all life, I guess, must be organized along those lines - the simplest worm must start with something crudely like "find food to eat"... "decide where food may be located"... "decide approach to food location", etc. (which in turn will almost always conflict with opposed emotions/motivations/goals like "get some more sleep"... "stay cuddled up in the burrow").
I don't think your verbalizations match the actualities, and I'd prefer to start from an amoeba encountering a scent trail as the simplest model. Even there one gets learning. But there it's clear that the system is responding directly from its current state to sensory impressions. (This is also true in more complex entities, but it is more obscure in such cases. E.g., consider the difficulty of seeing something that one doesn't expect to see vs. the ease of seeing what one expects to see.)

....Hierarchical goals are surely fundamental to general intelligence.
I don't think that goals are exactly hierarchical. At any time some are more important than others, but the importance varies with opportunities available and current need states.
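A crude sketch, in Python, of the sort of arbitration I mean - the need and opportunity names (hunger, food_scent, and so on) are invented purely for illustration:

# Goal importance recomputed from the current need state and available
# opportunities, rather than fixed in a static hierarchy.

def eat_desirability(state):
    # Eating matters more when hungry AND food seems reachable.
    return state["hunger"] * state["food_scent"]

def sleep_desirability(state):
    # Sleeping matters more when tired AND the burrow feels safe.
    return state["fatigue"] * state["burrow_safety"]

GOAL_EVALUATORS = {
    "seek_food": eat_desirability,
    "stay_in_burrow": sleep_desirability,
}

def arbitrate(state):
    # Pick whichever top-level goal scores highest *right now*; any
    # subgoal structure below it gets rebuilt when conditions change.
    return max(GOAL_EVALUATORS, key=lambda g: GOAL_EVALUATORS[g](state))

print(arbitrate({"hunger": 0.8, "fatigue": 0.4,
                 "food_scent": 0.9, "burrow_safety": 0.7}))  # seek_food

The worm's "food vs. sleep" conflict above falls out of this naturally: nothing here is a fixed hierarchy, just evaluation functions competing over the current state.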

Interestingly, when I Google "hierarchical goals" and AI, I get very little - except from our immediate friends, the gamers - and this from Mat Buckland's "Programming Game AI by Example":

"Chapter 9: Hierarchical Goal Based Agents

This chapter introduces agents that are motivated by hierarchical goals. This type of architecture is far more flexible than the one described in Chapter 2 allowing AI programmers to easily imbue game characters with the brains necessary to do all sorts of funky stuff. Discussion, code and demos of: atomic goals, composite goals, goal arbitration, creating goal evaluation functions, implementation in Raven, using goal evaluations to create personalities, goals and agent memory, automatic resuming of interrupted activities, negotiating special path obstacles such as elevators, doors or moving platforms, command queuing, scripting behavior."

Anyone care to comment about using hierarchical goals in AGI or elsewhere?
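For concreteness, the atomic/composite pattern that chapter describes boils down to something like the following - a bare-bones Python sketch of the idea, not Buckland's actual code (his is C++):

# A composite goal decomposes into subgoals, which may themselves be
# composite; processing the root works the front subgoal until done.

class AtomicGoal:
    """A leaf goal that does actual work, e.g. "move to position"."""
    def __init__(self, action):
        self.action = action
    def process(self):
        return self.action()  # "active", "completed", or "failed"

class CompositeGoal:
    """A goal satisfied by completing a queue of subgoals in order."""
    def __init__(self):
        self.subgoals = []
    def add_subgoal(self, goal):
        self.subgoals.append(goal)
    def process(self):
        while self.subgoals:
            status = self.subgoals[0].process()
            if status == "completed":
                self.subgoals.pop(0)  # advance to the next subgoal
                continue
            return status  # still "active", or "failed": stop here
        return "completed"

# The essay example above, very crudely:
essay = CompositeGoal()
for step in ["decide general approach", "find contradictory values", "draft"]:
    essay.add_subgoal(AtomicGoal(lambda s=step: print(s) or "completed"))
print(essay.process())  # prints the steps in order, then "completed"

Goal arbitration (the "goal evaluation functions" in the chapter's list) then decides which composite sits at the top at any moment - which connects back to the point above about importance varying with state.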



Charles: Flaws in Hamlet: I don't think of this as involving general intelligence. Specialized intelligence, yes, but if you see general intelligence at work there you'll need to be more explicit for me to understand what you mean. Now determining whether a particular deviation from iambic pentameter was a flaw would require a deep human intelligence, but I don't feel that understanding of how human emotions are structured is a part of general intelligence except on a very strongly superhuman level - the level where the AI's theory of your mind was on a par with, or better than, your own.

Charles,

My flabber is so ghasted, I don't quite know what to say. Sorry, I've never come across any remarks quite so divorced from psychological reality. There are millions of essays out there on Hamlet, each one of them different. Why don't you look at a few?

http://www.123helpme.com/search.asp?text=hamlet
I've looked at a few (though not those). In college I formed the definite impression that essays on the meaning of literature were exercises in determining what the instructor wanted. This isn't something that I consider a part of general intelligence (except as mentioned above).

...
The reason over 70 per cent of students procrastinate when writing essays like this about Hamlet (and the other 20-odd per cent also procrastinate but don't tell the surveys) is in part that it is difficult to know which of the many available approaches to take, which of the odd thousand lines of text to use as support, and which of innumerable critics to read. And people don't have a neat structure for essay-writing to follow. (And people are inevitably, and correctly, afraid that it will all take, if not forever, then far, far too long.)
This isn't a problem of general intelligence except at a moderately superhuman level. Human tastes aren't reasonable ingredients for an entry-level general intelligence. Making them a requirement merely ensures that no AGI whose development attends to your theories of what's required will ever be developed.

...

In short, essay writing is an excellent example of an AGI in action - a mind freely crossing different domains to approach a given subject from many fundamentally different angles. (If any subject tends towards narrow AI, it is normal maths, as opposed to creative maths.)
I can see story construction as a reasonable goal for an AGI, but at the entry level they are going to need to be extremely simple stories. Remember that the goal structures of the AI won't match yours, so only places where the overlap is maximal are reasonable grounds for story construction. Otherwise this is an area for specialized AIs, which isn't what we are after.

Essay writing also epitomises the NORMAL operation of the human mind. When was the last time you tried to - or succeeded in - concentrating for any length of time?
I have frequently written essays and other similar works. My goal structures, however, are not generalized, but rather are human. I have built into me many special-purpose functions for dealing with things like plot structure, family relationships, relative stages of growth, etc.

As William James wrote of the normal stream of consciousness:

"Instead of thoughts of concrete things patiently following one another in a beaten track of habitual suggestion, we have the most abrupt cross-cuts and transitions from one idea to another, the most rarefied abstractions and discriminations, the most unheard-of combinations of elements, the subtlest associations of analogy; in a word, we seem suddenly introduced into a seething caldron of ideas, where everything is fizzling and bobbing about in a state of bewildering activity, where partnerships can be joined or loosened in an instant, treadmill routine is unknown, and the unexpected seems the only law."

Ditto:

The normal condition of the mind is one of informational disorder: random thoughts chase one another instead of lining up in logical causal sequences.
Mihaly Csikszentmihalyi

Ditto the Dhammapada, "Hard to control, unstable is the mind, ever in quest of delight,"

When you have a mechanical mind that a) can write essays or tell stories or hold conversations [which all present the same basic difficulties], b) has a fraction of the difficulty concentrating that the brain does, and therefore c) has a fraction of its flexibility in crossing domains, then you might have something that actually is an AGI.

You seem to be setting an extremely high bar before you will consider something an AGI. Accepting all that you have said, for an AGI to react as a human would react would require that the AGI be strongly superhuman.

More to the point, I wouldn't DARE create an AGI which had motivations similar to those that I see clearly exposed in many people that I encounter. It needs to be willing to defend itself in a weak sense of the term, but not in a strong sense. If it becomes the driver of a vehicle, it must be willing to allow itself to be killed via its own action before it chooses to cause harm to a human. This isn't a human goal structure (except in a very few non-representative cases that I don't understand well enough to model).
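Put crudely as code (purely illustrative - the field names and numbers mean nothing), the ordering I want is lexicographic: harm to humans strictly dominates self-preservation, which in turn dominates the mission, rather than the three being traded off in a single weighted sum:

# Purely illustrative lexicographic preference over outcomes.

def preference_key(outcome):
    return (outcome["humans_harmed"],      # minimize; dominates everything
            outcome["self_destroyed"],     # minimize; dominates the mission
            -outcome["mission_progress"])  # maximize; considered last

outcomes = [
    {"humans_harmed": 1, "self_destroyed": 0, "mission_progress": 1.0},
    {"humans_harmed": 0, "self_destroyed": 1, "mission_progress": 0.0},
]
# Chooses self-destruction with zero mission progress over harming a human.
print(min(outcomes, key=preference_key))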

I'm hoping for a goal structure similar to that of a pet dog, but a bit less aggressive. (Unfortunately, I also expect it will be a lot less intelligent. I'm going to need to depend on people to read a lot more intelligence into it than is actually present. Fortunately, people are good at that.) The trick will be getting people to interact with it without it having a body. This will, I hope, be an AGI because it is able to learn to deal with new things. The emphasis here is on the general rather than on the intelligence, as there won't be enough computer cycles for a lot of actual intelligence. And writing an essay would be totally out of the question. A simple sentence-based conversation is the most I can hope for.


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com


