Mike Tintner wrote:
Bob: > As a roboticist I can say that a physical body resembling that of a
human isn't really all that important.  You can build the most
sophisticated humanoid possible, but the problems still boil down to
how such a machine should be intelligently directed by its software.

What embodiment does provide are *instruments of causation* and
closed-loop control.  The muscles or actuators cause events to occur, and
sensors then observe the results.  Both actuation and sensing are
subject to a good deal of uncertainty, so an embodied system needs to
be able to cope with this adequately, at least maintaining some kind
of homeostatic regime.  Note that "actuator" and "sensor" could be
broadly interpreted, and might not necessarily operate within a
physical domain.

The main problem with non-embodied systems from the past is that they
tended to be open-loop (non-reflective) and often assumed crisp logic.
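
To make the open-loop/closed-loop distinction concrete, here is a
minimal sketch in Python (the set-point, noise levels and gain are
invented for illustration, not drawn from any particular system).  An
open-loop regulator computes one command and never checks the result;
a closed-loop regulator feeds a noisy sensor reading back and
corrects, which is about all a homeostatic regime requires.

import random

SETPOINT = 37.0          # value to regulate (think body temperature)
ACTUATOR_NOISE = 0.2     # uncertainty in what the actuator actually does
SENSOR_NOISE = 0.3       # uncertainty in what the sensor reports
DRIFT = -0.1             # the environment constantly pulls the state down

def step(state, command):
    """Apply a noisy actuation plus environmental drift to the state."""
    return state + command + random.gauss(0.0, ACTUATOR_NOISE) + DRIFT

def sense(state):
    """Return a noisy observation of the true state."""
    return state + random.gauss(0.0, SENSOR_NOISE)

def open_loop(state, steps=50):
    """Issue a fixed command computed once, never observing the result."""
    command = 0.1  # would cancel DRIFT exactly in a noise-free world
    for _ in range(steps):
        state = step(state, command)
    return state

def closed_loop(state, steps=50, gain=0.5):
    """Proportional feedback: sense, compare with the set-point, correct."""
    for _ in range(steps):
        error = SETPOINT - sense(state)    # noisy estimate of the error
        state = step(state, gain * error)  # command proportional to error
    return state

random.seed(1)
print("open loop:  ", round(open_loop(SETPOINT), 2))
print("closed loop:", round(closed_loop(SETPOINT), 2))

The particular numbers don't matter.  The point is that feedback lets
the system absorb uncertainty it cannot model exactly: the open-loop
controller slowly random-walks away from the target, while the
closed-loop one keeps pulling itself back.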

From a marketing perspective - if you're trying to promote a
particular line of research - humanoid embodiment certainly helps
people to identify with what's going on.  And if you're trying to
understand human cognition by attempting to reproduce results from
developmental psychology, a humanoid form may be highly desirable.


Bob,

I think you are very seriously wrong - and, what's more, I suspect, robotically as well as humanly wrong. You are, in a sense, missing literally "the whole point."

What mirror neurons are showing is that our ability to understand humans - as portrayed, say, in The Dancers:

http://www.csudh.edu/dearhabermas/matisse_dance_moma.jpg

comes from our capacity to simulate them with our whole body-and-brain all at once. Note that our brain does not just simulate their particular movement at the given point in time on that canvas - it simulates and understands their *manner* of movement. You can get up and dance like them, continue their dance, and produce/predict *further* movements that will be a reasonable likeness of how those dancers might dance - all from that one captured pose.

Our ability to understand animals - how they will move and emote and generally respond - similarly comes from our ability to simulate them with our whole body-and-brain all at once. Hence we can go still further and liken humans to almost every animal under the sun - "he's a snake/lizard/angry bear/slug/busy bee" and so on.

Not only animals: we also understand inanimate matter and its movements or non-movements with our whole body. Hence we see a book as "lying" on the table, and a wardrobe as "standing" in a room. This capacity is often valuable for inventors, who use it to imagine, for example, how liquids will flow through a machine, or for scientists like Einstein, who imagined himself riding a beam of light, or Kekulé, who imagined the atoms of a benzene molecule coiling like a snake.

We can only understand the entire world and how it behaves by embodying it within ourselves... or embodying ourselves within it.

This capacity shows that our self is a whole-brain-and-body unit. If I ask you to "change your self" - and please try this mentally - to simulate/imagine yourself walking as, say, a flaming diva... John Wayne... John Travolta... Madonna... you should find that you immediately/instinctively start to do this with your whole body and brain at once. As one integral unit.

Now my very garbled understanding (& please comment) is that those Cornell starfish robots show that such an integrated whole self is both possible - and perhaps vital - for robots too. You need a whole-body-self not just to understand/embody the outside world and predict its movements, but to understand your inner body/world: how it's "holding up", how "together" or "falling apart" it is, and whether you will or won't be able to execute different movements and think thoughts. You see, I hope, why I say you are missing the "whole" point.
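
If I've understood the reports correctly, the principle might be
sketched like this - a toy illustration in Python, where the limbs,
the damage model and the gaits are all invented for the sake of the
example, not taken from the actual starfish work. The robot keeps an
internal model of its own body, notices when the model's predictions
stop matching what actually happens, and uses the repaired model to
work out what it can still do.

# Toy illustration (hypothetical): a robot that maintains a self-model
# and uses prediction error to detect damage and re-assess its abilities.

class BodyModel:
    """The robot's internal picture of which limbs still work."""
    def __init__(self, limbs):
        self.working = {limb: True for limb in limbs}

    def predict_displacement(self, gait):
        # Predicted progress of a gait (a list of limbs to push with),
        # assuming each working limb contributes one unit of movement.
        return sum(1 for limb in gait if self.working[limb])

class Robot:
    def __init__(self, limbs, broken):
        self.model = BodyModel(limbs)
        self.broken = set(broken)  # ground truth, unknown to the model

    def execute(self, gait):
        # Actual progress: broken limbs contribute nothing.
        return sum(1 for limb in gait if limb not in self.broken)

    def try_gait(self, gait):
        predicted = self.model.predict_displacement(gait)
        actual = self.execute(gait)
        if actual < predicted:
            # Prediction error: the body no longer matches the self-model.
            # Probe limbs one at a time to localise the damage.
            for limb in gait:
                if self.execute([limb]) == 0:
                    self.model.working[limb] = False
        return actual

limbs = ["arm1", "arm2", "arm3", "arm4", "arm5"]
robot = Robot(limbs, broken=["arm3"])
robot.try_gait(limbs)  # mismatch detected; self-model updated
usable = [l for l in limbs if robot.model.working[l]]
print("usable limbs according to the self-model:", usable)

The real work (continuous self-modelling) is vastly more
sophisticated, of course, but even this toy shows the point: a
self-model lets the system notice when its body no longer matches its
expectations, and work out what it can and cannot still do.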


Mike,

Sigh. Your point of view is heavily biased by the unspoken assumption that AGI must be Turing-indistinguishable from humans. That it must be AGHI. This is not necessarily a bad idea; it's just the wrong idea given our (lack of) understanding of general intelligence. Turing was a genius, no doubt about that. But many geniuses have been wrong. Turing was tragically wrong in proposing (and AI researchers/engineers terribly naive in accepting) his infamous "imitation game", a simple test that has, almost single-handedly, kept AGI from becoming a reality for over fifty years. The idea that "AGI won't be real AGI unless it is embodied" is a natural extension of Turing's imitation game and, therefore, inherits all of its wrongness.

I believe the time, effort and money spent attempting to develop an embodied AGI (one with simulated human sensorimotor capabilities) would be much better spent, at this point in time, building a *human-compatible AGI*. A human-compatible AGI is an AGI capable of empathizing with humans, but that needn't think exactly like a human nor act exactly like a human. Indeed, a human-compatible AGI should, in general, be able to cogitate *better* than a human and neither need nor want to act exactly (or at all) like a human.

I am not alone in rejecting the Turing test (and its embodiment extension) as a true measure of successful AI/AGI:

"This focus of AI research on imitation of human performance has at least three unfortunate consequences. First it does not seem to have been very productive. Second, as I have argued at length elsewhere (Whitby 1988) it is unlikely to lead to profitable or safe applications of AI. New technology is generally taken up quickly where there is a clear deficiency in existing technologies and very slowly, if at all, where it offers only a marginal improvement over existing technologies. Even an amateur salesman of AI should be able to see that researchers should be steered away from programs that imitate human beings. The old quip about there being no shortage of natural intelligence contains an important truth. There are many safe, profitable applications for AI, but programs inspired by the imitation game are unlikely to lead towards them. This sort of research is more likely to produce interesting curiosities such as ELIZA than working AI applications." (http://www.cogs.susx.ac.uk/users/blayw/tt.html)

And there are dozens (perhaps hundreds) of other documents on the Internet that are critical of Turing's test and which go into great detail describing the damage it (and its various interpretations/extensions) has done (and continues to do) to the AI field.

As Bob Mottram noted, an AGHI would make a rock-the-house demo (we humans do tend to be quite full of ourselves). But the resources consumed creating it will almost certainly further delay the achievement of beneficial (to humans) AGI. Truth be told, I doubt that such a demo is even possible until we first design and build a human-compatible AGI.

If we succeed at building a human-compatible AGI, we will have built an AGI that can help us attain AGHI (if humans believe it is advantageous at that time) a lot faster than we could ever hope to do by trying to get there from where we are right now.

Cheers,

Brad


