Re: [agi] Tommy

2007-05-14 Thread J Storrs Hall, PhD
On Sunday 13 May 2007 08:14:43 am Kingma, D.P. wrote:
 John, as I wrote earlier, I'm very interested in learning more about your
 particular approach to:
  - Concept and pattern representation. (i.e. types of concept, patterns,
 relations?)

As I mentioned in another note (about the tennis ball), a concept is a set of 
programs that embody your abilities to recognize, manipulate, and predict 
some thing, and to make inferences about its past. 

  - Concept creation. (searching for statistically significant spatiotemporal
 correlations, genetic programming, neural networks, ?)

I think that the brain has a lot of what Ben calls pattern mining in the 
hardware; and for our purposes, conventional (including some recent) work in 
ML/PR seems to be adequate.

The main key to the process, though, is to have what programming language 
theorists call a reflective tower: for the language to be able to represent 
itself, and thus reason about its own programs, and thus about the programs 
it uses to reason about programs, etc., etc. (BTW, this is where standard 
programming language theory fails us: its practitioners are very enamored of 
provably complete logics below the Gödel line. This is all very well for 
writing provably correct red-black tree implementations, but they are playing 
with pebbles on the beach, as it were, with a great ocean of truth lying 
undiscovered before them.)
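
(A minimal sketch of the programs-as-data property at issue here, in Python 
rather than any particular AI language; the toy s-expression evaluator and its 
'eval' form are illustrative assumptions, not a description of any actual system:)

    # Programs are ordinary data (nested lists), so the system can inspect,
    # construct, and run its own code -- the base of a reflective tower.
    def evaluate(expr, env):
        if isinstance(expr, str):          # variable reference
            return env[expr]
        if not isinstance(expr, list):     # literal (number, etc.)
            return expr
        op, *args = expr
        if op == 'quote':                  # reflection: return code as data
            return args[0]
        if op == 'if':
            test, then, alt = args
            return evaluate(then if evaluate(test, env) else alt, env)
        if op == 'lambda':                 # (params, body) closed over env
            params, body = args
            return lambda *vals: evaluate(body, {**env, **dict(zip(params, vals))})
        if op == 'eval':                   # the tower: run data as code
            return evaluate(evaluate(args[0], env), env)
        fn = evaluate(op, env)
        return fn(*[evaluate(a, env) for a in args])

    env = {'add': lambda a, b: a + b}
    program = ['quote', ['add', 1, 2]]       # a program held as data
    print(evaluate(['eval', program], env))  # the evaluator runs it: 3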

  - Concept revision/optimization. You mention you use search techniques,
 could you be a little more specific?

I try to represent the data as numeric vectors as much as possible so I can 
use standard scientific regression methods to create the functions that 
predict it. Discrete stuff is harder -- AI has tackled it for 50 years with 
modest results. However, it's my intuition and hope that continuous models 
underneath (and lots of brute force processing) will provide the traction 
to conquer the combinatorial explosion. (Consider backprop: the finished 
neural net is not unlike the same function implemented in perceptrons, except 
that with a step function instead of a sigmoid you have no traction 
whatsoever for hill-climbing towards an optimum.)
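
(A minimal illustration of the numeric-vectors-plus-regression approach, 
assuming numpy and a made-up noisy ball trajectory in place of real sensor data:)

    # Represent observations as numeric vectors, then use standard
    # least-squares regression to build the function that predicts them.
    import numpy as np

    t = np.linspace(0.0, 1.0, 50)                      # observation times
    y = 2.0 + 3.0 * t - 4.9 * t**2                     # "true" vertical position
    y_obs = y + np.random.normal(0.0, 0.01, t.shape)   # noisy sensor readings

    coeffs = np.polyfit(t, y_obs, deg=2)               # standard regression
    predict = np.poly1d(coeffs)
    print(predict(1.2))   # extrapolate: where will the ball be at t = 1.2?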

 Unfortunately you did not go into specifics yet. Since you wrote that your
 ideas are firm enough to start doing experiments, I was hoping you could
 give a glimpse of the underlying ideas. I.e., what exactly is this
 high-level functional reactive programming language you're speaking of?

"If I were going to spend 8 hours chopping down a tree," said Abraham 
Lincoln, "I'd spend the first 6 sharpening my axe."  When you're trying to 
write an AI, you don't need the distractions and extra work of worrying about 
storage allocation, process migration and communication, data formats, and so 
forth -- those wheels have already been invented (as have statistical 
analysis, regression, and so forth). Build a system with those easily 
available and interoperable, and worry about AI things thereafter.

The easiest languages to reflect about are the functional ones -- ones where 
the semantics resemble math more than assembly language. This is all well and 
good, except that the pure functional paradigm leaves out the ability to deal 
with time. (You can't write X=X+1 in a functional language, because it isn't 
true!) 

There are 4 main ways people have tried to extend functional languages to deal 
with time. The first is ad hoc, as in Lisp: mix a functional language with an 
imperative one. Second, in what were called applicative state transition 
systems, model a big state machine and use the functional language to write 
the transition function. Third, the current favorite in the PLT crowd, is 
category theoretic monads: write a function that computes a list that is a 
trace of the behavior you want the program to enact. And finally, in 
functional reactive programming, you write a function that is interpreted as a 
circuit, where each value is actually a signal that varies in time. This, it 
turns out, is how the physicists and engineers have done it all along: a 
systems-and-signals circuit in control theory or cybernetics is isomorphic to 
an FR program.
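
(A toy rendering of that fourth approach in Python: each value is a signal, 
i.e. a function of time, and combinators wire signals into a circuit. The 
names and the crude numeric integrator are illustrative assumptions:)

    import math

    def lift(f):
        """Turn an ordinary function into one that combines signals."""
        def combinator(*signals):
            return lambda t: f(*(s(t) for s in signals))
        return combinator

    add = lift(lambda a, b: a + b)

    def integral(signal, t0=0.0, dt=0.01):
        """Crude running integral -- the stateful part of the circuit."""
        def integrated(t):
            steps = int((t - t0) / dt)
            return sum(signal(t0 + i * dt) * dt for i in range(steps))
        return integrated

    gravity = lambda t: -9.8              # constant signal
    velocity = integral(gravity)          # wire the circuit: v = integral of a
    fall = integral(velocity)
    height = add(lambda t: 2.0, fall)     # start two meters up
    print(height(1.0))                    # roughly 2 - 4.9 = -2.9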

 One reason I'm interested is that there are many approaches to unsupervised
 learning of physical (or just graphical) concepts, and I'm thinking of Serre
 et al, Hawkins, neural network specialists (Geoffrey Hinton) and there could
 be many more. But none of these theories I know of are strong enough to
 extract high-level concepts such as 'gravity'.

Neither is the average human being -- Newton was one of the great geniuses. 
What the average human does is expect things to fall down. Experiments with 
people who haven't studied physics show that they have a very poor ability to 
predict what will happen in simple experiments. (For example, more than half 
of undergrads and non-science faculty, when asked to draw the path that would 
be taken by a ball rolling off the edge of a table, don't draw anything 
resembling a parabola.)
 

Re: [agi] Tommy

2007-05-14 Thread J Storrs Hall, PhD
On Saturday 12 May 2007 10:24:03 pm Lukasz Stafiniak wrote:

 Do you have some interesting links about imitation? I've found these,
 not all of them interesting, I'm just showing what I have:

Thanks -- some of those look interesting. I don't have any good links, but I'd 
recommend Hurley & Chater, eds., Perspectives on Imitation (in 2 vols).

Also anything you can find on case-based reasoning, tho it is woefully rare.

Josh



Re: [agi] Tommy

2007-05-13 Thread Kingma, D.P.

John, as I wrote earlier, I'm very interested in learning more about your
particular approach to:
- Concept and pattern representation. (i.e. types of concept, patterns,
relations?)
- Concept creation. (searching for statistically significant spatiotemporal
correlations, genetic programming, neural networks, ?)
- Concept revision/optimization. You mention you use search techniques,
could you be a little more specific?

Unfortunately you did not go into specifics yet. Since you wrote that your
ideas are firm enough to start doing experiments, I was hoping you could
give a glimpse of the underlying ideas. I.e., what exactly is this
high-level functional reactive programming language you're speaking of?

One reason I'm interested is that there are many approaches to unsupervised
learning of physical (or just graphical) concepts, and I'm thinking of Serre
et al, Hawkins, neural network specialists (Geoffrey Hinton) and there could
be many more. But none of these theories I know of are strong enough to
extract high-level concepts such as 'gravity'.

Now when I imagine a hypothetical system capable of extracting such a thing
as 'gravity', it would have to go through a process of many stages: one of the
first is to learn about the general spatial phenomenon 'object', later the
temporal phenomenon 'velocity', and it would eventually find out that the
vertical element of the velocity vector is ever decreasing by a constant
amount. Now, a system that is capable of doing this without any prior
knowledge is pretty damn interesting, and you're promising it.

I've been doing some vaguely similar experiments (minus motor feedback) with
a multilayer spatio-temporal pattern classifier network and concluded that
it's not easy (quite the contrary) to let a system extract concepts like 'gravity'.

Kind regards,
Durk Kingma

On 5/12/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:
On Friday 11 May 2007 08:26:03 pm Pei Wang wrote:


As you can see from my comment and paper, I agree with your idea in
 its basic spirit. However, I think your above presentation is too
 vague, and far from enough for semantic analysis.

True enough -- my ideas tend to form like planets, a la the nebular
hypothesis :-)  But at this point, having gnawed on them for a few years, I
think they're firm enough to start doing experiments.




On 5/13/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:


On Saturday 12 May 2007 09:00:46 am Pei Wang wrote:
 I see --- it is fine to stress the procedural aspect of concept given
 your context. However, to make your design flexible and general, even
 in that case you will still need some language to specify your
 concepts, rather than in a pin-ball-specific manner, right?

Sure -- tho in my case it looks more like a very high-level functional
reactive programming language than FOPL.

Josh



Re: [agi] Tommy

2007-05-12 Thread Pei Wang

Mike,

I just wonder: whenever you find "there's one thing so screamingly
obvious that you guys don't seem to be taking it into account", has
it ever occurred to you that there may be a valid reason?

For the current issue, whether there is still a human in the loop has
little to do with the machine's intelligence, as long as the human is
not responsible for specifying the machine's operation step-by-step.
In the long run, machines will surely become more and more autonomous,
but we probably still want to stay in the loop, even though
technically it won't be necessary. Anyway, taking humans out of the
loop doesn't sound like a good choice for AGI development at the
moment, unless you have a concrete design to show us otherwise.

Pei

On 5/11/07, Mike Tintner [EMAIL PROTECTED] wrote:

 Josh,

 Since the 90s there has been a strand in AI research that claims that
 robotics is necessary to the enterprise, based on the notion that
 having a body is necessary to intelligence. Symbols, it is said, must
 be grounded in physical experience to have meaning. Without such
 grounding AI practitioners are deceiving themselves by calling their
 Lisp atoms things like MOTHER-IN-LAW when they really mean no more
 than G2250.

Pei:  I think these people correctly recognized a problem in traditional AI,
 though they attributed it to a wrong cause. Every implemented system
 already has a body --- the hardware, and
as long as the system has input and output, it has experience that
comes from its body. Of course, since the body is not human body, the
experience is not human experience. However, as far as this discussion
is concerned, it doesn't matter, since this kind of experience is
genuine experience that can be used to ground meaning of concepts.

Er, there's one thing so screamingly obvious that you guys don't seem to be
taking it  into account here. All these machines you are talking about are
basically inert lumps of metal and don't exist without human beings to
switch them on, feed them & interpret them. Humans are still, pace Rodney B,
in the loop. Try taking humans out of the loop and then see what these
standalone computers do and don't understand - or what they do, period.








Re: [agi] Tommy

2007-05-12 Thread J Storrs Hall, PhD
On Friday 11 May 2007 08:55:12 pm Mike Tintner wrote:

...All these machines you are talking about are 
 basically inert lumps of metal and don't exist without human beings to 
 switch them on, feed them & interpret them. 

Same is true of a baby, except for the part where you can turn it off and get 
a full night's sleep.

Josh



Re: [agi] Tommy

2007-05-12 Thread J Storrs Hall, PhD
On Friday 11 May 2007 09:15:56 pm Mike Tintner wrote:

 I'm saying the last 400 years have been framed by Descartes' and science's 
 mind VERSUS body dichotomy. That in turn has been expressed in a whole 
 variety of subsidiary dichotomies and cultural battles:

 ... mind vs body
 ... reason vs emotion ...

I think all your comparisons are actually different axes in reality, and they 
don't have enough in common to conflate usefully. For example, in my reading 
of the Scientific/Industrial Revolution, body and reason got accelerated 
while mind and emotion got left behind. Couldn't happen if they were the same 
dichotomy.

Josh




Re: [agi] Tommy

2007-05-12 Thread J Storrs Hall, PhD
On Friday 11 May 2007 08:26:03 pm Pei Wang wrote:

 *. Meaning comes from experience, and is grounded in experience.

I agree with this in practice but I don't think it's necessarily, 
definitionally true. In practice, experience is the only good way we know of 
to build the models that provide us the ability to predict the world. AI 
tried it by hand-building models throughout the 80s (the expert system era) 
and mostly failed.

However, if I have a new robot, I can copy the bits from an old one and its 
mind will have just as much meaning as the old one. Thus in theory, any other 
way I could have come up with the same string of bits will also give me 
meaning.

 A more detailed discussion and a proposed solution can be found in
 http://nars.wang.googlepages.com/wang.semantics.pdf

Model-theoretic semantics in logic has a meaning more or less opposite that of 
the use of model in AI -- in the former case the world is a model for the 
logical system, in the latter the logical system is a model of the world.

To avoid any confusion, let me point out that I always use the word in the AI 
sense.

 As you can see from my comment and paper, I agree with your idea in
 its basic spirit. However, I think your above presentation is too
 vague, and far from enough for semantic analysis.

True enough -- my ideas tend to form like planets, a la the nebular 
hypothesis :-)  But at this point, having gnawed on them for a few years, I 
think they're firm enough to start doing experiments.


2. The hard part is learning: the AI has to build its own world
   model. My instinct and experience to date tell me that this is
   computationally expensive, involving search and the solution of
   tough optimization problems.
 
 Agree, though I've been avoiding the phrase world model, because of the
 intuitive picture it provides: there is an objective world out there,
 and an AI is building an internal model of it, where the concepts
 represent objects, and beliefs represent factual relations among
 objects --- this is a picture you don't subscribe to, I guess.

World model has a very well established meaning in AI (50 years old by now) 
and I find the basic idea sound. I DON'T think that one should assume at the 
outset there are objects and relations -- I'm using a representation where 
objects can be represented if experience indicates it's a useful category, 
but other ways of representing the world are equally accessible.

 A good idea. As I said above: input/output is necessary for AGI, but
 any concrete form of them is not, in principle. An AGI doesn't have to
 be able to move itself around in the physical world (though it must
 somehow change its environment), and doesn't have to have a certain
 human sensor (though it must somehow sense its environment).

Agreed.

 I'd suggest adding the muscle in as soon as possible to get a
 complete sensor-motor cycle.

Help from anyone on this list with experience with the GNU toolchain on 
ARM-based microcontrollers will be gratefully accepted :-)

 I fully agree with your focus. I guess your concepts are patterns or
 structures formed from certain semantic primitives by a fixed set of
 operators or connectors. I'm very interested in your choice.

My major hobby-horse in this area is that a concept has to be an active 
machine, capable of recognition, generation, inference, and prediction. Of 
course we know that any machine can be represented by a program and thus 
given a declarative representation, but for practical purposes, I'm fairly 
far over toward the "procedural embedding of knowledge" end of the spectrum.
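
(One possible reading of that claim, sketched as a Python interface; the class 
and method names are this sketch's assumptions, not a specification of the 
actual representation:)

    class Concept:
        """A concept as an active machine: a bundle of programs embodying
        the abilities to recognize, generate, infer, and predict."""

        def recognize(self, percept) -> float:
            """Return a degree of match between a percept and this concept."""
            raise NotImplementedError

        def generate(self, constraints):
            """Construct or manipulate an instance satisfying constraints."""
            raise NotImplementedError

        def predict(self, state, dt):
            """Predict the state of a recognized instance dt seconds ahead."""
            raise NotImplementedError

        def infer_past(self, state, dt):
            """Run the predictive model backwards to reconstruct history."""
            raise NotImplementedError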

  I claim that most current AI experiments that try to mine meaning out
  of experience are making an odd mistake: looking at sources that are
  too rich, such as natural language text found on the Internet. The
  reason is that text is already a highly compressed form of data; it
  takes a very sophisticated system to produce or interpret. Watching a
  ball roll around a blank tabletop and realizing that it always moves
  in parabolas is the opposite: the input channel is very low-entropy
  (in actual information compared to nominal bits), and thus there is
  lots of elbow room for even poor, early, suboptimal interpretations to
  get some traction.
 
 I don't think you have convinced me that this kind of experiment is
 better than the others (such as those in NLP) , but you get a good
 idea and it is worth a try.

“Two roads diverged in a yellow wood and I, 
I took the path less travelled by, 
and that has made all the difference.”

Josh


Re: [agi] Tommy

2007-05-12 Thread Mike Tintner
The point re the computer-human nexus is simply that it's rather like the 
horse doing the trick of counting for its master - if the master is there, 
you can't be sure that the horse really is doing counting or understands 
what it's doing, or that it could do the trick without its master.


Let me give you another analogy if it's of any help. The whole of science in 
applying its current mechanistic paradigm to the world has also forgotten 
that machines don't exist without humans. So cognitive psychology treats the 
human mind like a computer. Actually it would be much truer and more 
productive to treat the human mind like a human-computer hybrid, i.e. a 
human using a computer. That opens new dimensions on human thinking - you 
realise that human intelligent performance may not be so much a case of 
some people having, and others not having, certain faculties, but of some 
people using, and others not using, their faculties - as some do and some 
don't use their computer's faculties (something that scientific psychology 
rarely considers).


So I disagree - you have to look at the totality of how your computer is 
used by a human, before you can be sure of how intelligent it is or isn't. 
And I'm not trying to be difficult or patronising - if the whole of science 
and major scientific minds can leave out the human factor, so can Pei.


- Original Message - 
From: Pei Wang [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, May 12, 2007 12:10 PM
Subject: Re: [agi] Tommy



Mike,

I just wonder: whenever you find "there's one thing so screamingly
obvious that you guys don't seem to be taking it into account", has
it ever occurred to you that there may be a valid reason?

For the current issue, whether there is still a human in the loop has
little to do with the machine's intelligence, as long as the human is
not responsible for specifying the machine's operation step-by-step.
In the long run, machines will surely become more and more autonomous,
but we probably still want to stay in the loop, even though
technically it won't be necessary. Anyway, taking humans out of the
loop doesn't sound like a good choice for AGI development at the
moment, unless you have a concrete design to show us otherwise.

Pei

On 5/11/07, Mike Tintner [EMAIL PROTECTED] wrote:

 Josh,

 Since the 90s there has been a strand in AI research that claims that
 robotics is necessary to the enterprise, based on the notion that
 having a body is necessary to intelligence. Symbols, it is said, must
 be grounded in physical experience to have meaning. Without such
 grounding AI practitioners are deceiving themselves by calling their
 Lisp atoms things like MOTHER-IN-LAW when they really mean no more
 than G2250.

Pei:  I think these people correctly recognized a problem in traditional AI,
though they attributed it to a wrong cause. Every implemented system
already has a body --- the hardware, and
as long as the system has input and output, it has experience that
comes from its body. Of course, since the body is not human body, the
experience is not human experience. However, as far as this discussion
is concerned, it doesn't matter, since this kind of experience is
genuine experience that can be used to ground meaning of concepts.

Er, there's one thing so screamingly obvious that you guys don't seem to be
taking it into account here. All these machines you are talking about are
basically inert lumps of metal and don't exist without human beings to
switch them on, feed them & interpret them. Humans are still, pace Rodney B,
in the loop. Try taking humans out of the loop and then see what these
standalone computers do and don't understand - or what they do, period.

















Re: [agi] Tommy

2007-05-12 Thread Mike Tintner
That's a cute comparison, but pretty insulting both to infants and 
developmental psychology, which continues to paint an ever more detailed 
picture of what restless, exploratory scientists infants are. Damn noisy 
too, I agree.  Surely a major challenge for AGI/robotics is to build an 
agent/robot that is only fractionally as exploratory.


- Original Message - 
From: J Storrs Hall, PhD [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Saturday, May 12, 2007 12:41 PM
Subject: Re: [agi] Tommy



On Friday 11 May 2007 08:55:12 pm Mike Tintner wrote:


...All these machines you are talking about are
basically inert lumps of metal and don't exist without human beings to
switch them on, feed them & interpret them.


Same is true of a baby, except for the part where you can turn it off and get
a full night's sleep.

Josh












Re: [agi] Tommy

2007-05-12 Thread Pei Wang

On 5/12/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

On Friday 11 May 2007 08:26:03 pm Pei Wang wrote:

 *. Meaning comes from experience, and is grounded in experience.

I agree with this in practice but I don't think it's necessarily,
definitionally true. In practice, experience is the only good way we know of
to build the models that provide us the ability to predict the world. AI
tried it by hand-building models throughout the 80s (the expert system era)
and mostly failed.

However, if I have a new robot, I can copy the bits from an old one and its
mind will have just as much meaning as the old one. Thus in theory, any other
way I could have come up with the same string of bits will also give me
meaning.


That is also my plan. Experience is not restricted to direct,
personal experience. When I'm reading, I'm getting other people's
experience. The key difference is that whether the meaning of a
concept is determined by its experienced relation with others
concepts, or by its denotation in the world.


Model-theoretic semantics in logic has a meaning more or less opposite that of
the use of model in AI -- in the former case the world is a model for the
logical system, in the latter the logical system is a model of the world.


In that sense, yes, but even in AI, meaning is still traditionally
treated as denotation, that is, the outside object/event referred to
by a symbol. If you want your robot to build a world model to
describe the world as it is, it will run into the same trouble as
model-theoretic semantics. My understanding is that this is not what
you mean. Instead, your world model is, in essence, a bunch of "if I
do this, I'll observe that", which is a summary of experience, or
interactions between the system and its environment, rather than the
environment by itself.
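
(A bare-bones illustration of that reading of a world model, as a table of 
experienced (situation, action) -> outcome counts; purely an assumption-laden 
sketch, taken neither from NARS nor from Tommy:)

    from collections import Counter, defaultdict

    class ExperienceModel:
        def __init__(self):
            # (situation, action) -> counts of observed outcomes
            self.outcomes = defaultdict(Counter)

        def record(self, situation, action, outcome):
            """Store one interaction with the environment."""
            self.outcomes[(situation, action)][outcome] += 1

        def predict(self, situation, action):
            """Most frequently experienced outcome, or None if never tried."""
            seen = self.outcomes[(situation, action)]
            return seen.most_common(1)[0][0] if seen else None

    m = ExperienceModel()
    m.record('ball at flipper', 'flip', 'ball goes up')
    print(m.predict('ball at flipper', 'flip'))    # 'ball goes up'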


 I fully agree with your focus. I guess your concepts are patterns or
 structures formed from certain semantic primitives by a fixed set of
 operators or connectors. I'm very interested in your choice.

My major hobby-horse in this area is that a concept has to be an active
machine, capable of recognition, generation, inference, and prediction. Of
course we know that any machine can be represented by a program and thus
given a declarative representation, but for practical purposes, I'm fairly
far over toward the "procedural embedding of knowledge" end of the spectrum.


I see --- it is fine to stress the procedural aspect of concept given
your context. However, to make your design flexible and general, even
in that case you will still need some language to specify your
concepts, rather than in a pin-ball-specific manner, right?

Pei



Re: [agi] Tommy

2007-05-12 Thread Mike Tintner

Josh: My major hobby-horse in this area is that a concept has to be an active
machine, capable of recognition, generation, inference, and prediction.

This sounds very like Jeff Hawkins (just reading On Intelligence now). Do 
you see your position as generally accepted, or at the forefront of changing 
AI attitudes to concepts?


And if it's not too much to ask (and it may be), would you care to give a 
particular concept example of what you mean? 





RE: [agi] Tommy

2007-05-12 Thread Derek Zahn




 [EMAIL PROTECTED] writes:
 Help from anyone on this list with experience with the GNU toolchain on  
 ARM-based microcontrollers will be gratefully accepted :-)
 
I have a lot of such experience and would be happy to help out with whatever 
you need.  Post more details here if you think they are of general interest or 
otherwise just contact me directly:  [EMAIL PROTECTED] .
 


Re: [agi] Tommy

2007-05-12 Thread J Storrs Hall, PhD
On Saturday 12 May 2007 09:18:16 am Mike Tintner wrote:
 Josh: My major hobby-horse in this area is that a concept has to be an active
 machine, capable of recognition, generation, inference, and prediction.
 
 This sounds very like Jeff Hawkins, (just reading On Intelligence now). Do 
 you see your position as generally accepted, or at the forefront of changing 
 AI attitudes to concepts?

Procedural embedding of knowledge was used, and the phrase introduced, by 
Winograd in the 1970's. It became passé in the 80s when people tried to pack 
lots of knowledge into the expert systems but essentially traded quality 
for quantity.  

BTW, Hawkins has been discussed at length here -- some of his ideas are 
valuable, but none is particularly original, and in many places where he says 
things like "nobody has tried or is doing X" he's often speaking from 
ignorance.

 And if it's not too much to ask (and it may be), would you care to give a 
 particular concept example of what you mean? 

Consider my concept of a tennis ball. I have circuitry in my brain -- a neural 
FPGA is closer to the way I think about it than a sequential program is -- to 
recognize it when I see it, when I feel it, when I hear it bounce, when I put 
my foot down on it without having seen it. I have similar machinery for 
throwing it, and for predicting what it's going to do when it's thrown by 
someone else. Indeed the circuitry is good enough to control a racquet within 
the  seconds of arc angle and milliseconds of time to volley it to a chosen 
spot when it's hit at me at over 50 FPS -- these circuits are specialized 
enough that you can tell from an EEG trace whether a tennis pro is playing 
with natural or synthetic strings in his racquet, so it's pretty clear that 
they are specific to tennis balls as the projectile. My concept of a tennis 
ball includes the ability to squeeze one and vary my predictions of its 
trajectory after a bounce depending on the feel. It includes the motor 
circuitry to shape the hand to hold 2, 3, or 4 of them (not easy) and to know 
how many I have in my pocket by the pressure on the leg and the stretch of 
the pants fabric. Of course it also includes declarative stuff like the facts 
that they are yellow (but were typically white 30 years ago), round, 2.25 in 
diameter, and cost about a dollar. 

Josh



Re: [agi] Tommy

2007-05-12 Thread Lukasz Stafiniak

On 5/13/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

On Saturday 12 May 2007 09:00:46 am Pei Wang wrote:

 ...My understanding is that ..., your world model is, in essence, a bunch
of if I
 do this, I'll observe that, which is a summary of experience, or
 interactions between the system and its environment, rather than the
 environment by itself.

It's both. As Yogi Berra said, "You can observe a lot by just watching." So
the model is a bunch of "this happened, and thus that happened", where "I did
this" is a particularly important special case of "this happened". But
watching someone else do something is key to imitation, which is key to
learning.


Do you have some interesting links about imitation? I've found these,
not all of them interesting, I'm just showing what I have:

* [[Learning How to Do Things with Imitation -
http://citeseer.ist.psu.edu/339624.html]]
* [[Reinforcement Learning with Imitation in Heterogeneous Multi-Agent
Systems - http://citeseer.ist.psu.edu/35684.html]]
* [[http://citeseer.ist.psu.edu/jenkins00primitivebased.html |
Primitive-Based Movement Classification for Humanoid Imitation]]
* [[Self-Segmentation of Sequences: Automatic Formation of Hierarchies
of Sequential Behaviors - http://citeseer.ist.psu.edu/286643.html]]
* [[Imitation as a First Step to Social Learning in Synthetic
Characters: A Graph-based Approach -
http://alumni.media.mit.edu/~daphna/sca_final_electronic.pdf]]
* [[Human's Meta-cognitive Capacities and Endogenization of Mimetic
Rules in Multi-Agents Models -
http://www.uni-koblenz.de/~essa/ESSA2003/ChavalariasESSA03.pdf]]
* [[http://girardianlectionary.net/covr2004/Chavalariasabst.pdf |
Metareflexive Mimetism: The prisoner free of the dilemma]]
* [[http://www.aisb.org.uk/publications/proceedings/aisb05/3_Imitation_Final.pdf
| Proceedings of the Third International Symposium on Imitation in
Animals and Artifacts]]
* [[http://ecagents.istc.cnr.it/dllink.php?id=214type=Document | The
progress drive hypothesis: an interpretation of early imitation]]



Re: [agi] Tommy

2007-05-12 Thread J Storrs Hall, PhD
Thanks!  I'll be in touch.

Josh

On Saturday 12 May 2007 10:08:26 am Derek Zahn wrote:
 
  [EMAIL PROTECTED] writes:
  Help from anyone on this list with experience with the GNU toolchain on  
ARM-based microcontrollers will be gratefully accepted :-)
  
 I have a lot of such experience and would be happy to help out with whatever 
you need.  Post more details here if you think they are of general interest 
or otherwise just contact me directly:  [EMAIL PROTECTED] .



RE: [agi] Tommy

2007-05-11 Thread Derek Zahn




J. Storrs Hall writes:
 
 Tommy, the scientific experiment and engineering project, is almost all 
 about concept formation.
 
Great project!  While I'm not quite sure about the "meaning in the concept of 
price-theoretical market equilibria" thing, I really like your idea and it's 
similar in broad concept to my as yet very early noodling.  A couple of 
comments:
 
* To the casual observer Tommy implies that your AI is blind, deaf, and dumb, 
which might not quite be the idea you are trying to convey.
 
* It would seem more robust, easier, and cooler to pick up a real used pinball 
machine and use it instead of the abstract idealized pinball machine.
 
I look forward to seeing some results and asking: "How do you think he does it? 
I don't know!  What makes him so good?"
 


Re: [agi] Tommy

2007-05-11 Thread Bob Mottram

In order to differentiate this from the rest of the robotics crowd you
need to avoid building a specialised pinball playing robot.  If the
machine can learn and form concepts based upon its experiences it
should be able to do so with any kind of game, provided that suitable
actuators are attached.  It is very easy to fall into the trap of
building something which is just a physical expert system.


From long experience of trying to do things like that I think there is
no getting around the fact that in order to be truly general you have
to build world models upon which reasoning systems can act, which
means getting into the tricky business of modelling sensors and
probabilistic interactions.  It is possible to take much simpler
Brooksian approaches, but in these cases what you always end up with
is a brittle expert system.  This might be ok if all you're trying to
do is model insect-like intelligence operating within some well
defined niche, but ideally we want our robots to be smarter than
cockroaches.




On 11/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

Since the 90s there has been a strand in AI research that claims that
robotics is necessary to the enterprise, based on the notion that
having a body is necessary to intelligence. Symbols, it is said, must
be grounded in physical experience to have meaning. Without such
grounding AI practitioners are deceiving themselves by calling their
Lisp atoms things like MOTHER-IN-LAW when they really mean no more
than G2250.

This has given rise to a plethora of silly little robots (in Minsky's
view, anyway) that scurry around the floor picking up coffeecups and
like activities.

My view lies somewhere between the extremes on this issue:

a) Meaning does not lie in a physical connection. I find meaning in
the concept of price-theoretical market equilibria; I've never seen,
felt, or smelled one. Meaning lies in working computational models,
and true meaning lies in ones that can make correct
predictions.

b) On the other hand, the following are true:

  1. Without some connection to external constraints, there is a
 strong temptation on the part of researchers to define away the
 hard parts of the AI problem. Even with the best will in the
 world, this happens subconsciously.

  2. The hard part is learning: the AI has to build its own world
 model. My instinct and experience to date tell me that this is
 computationally expensive, involving search and the solution of
 tough optimization problems.


"That deaf, dumb, and blind kid sure plays a mean pinball."


Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous. This not only saves a lot of construction but
allows me to run the AI on the biggest system I can afford (currently
ten processors) rather than having to shoehorn code and data into
something run off a battery.

Tommy, the pinball wizard kid, was chosen as a name for the system
because of a compelling, to me anyway, parallel between a pinball game
and William James' famous description of a baby's world as a
"blooming, buzzing confusion". The pinball player is in the same
position as a baby in that he has a firehose input stream of sensation
from the lights and bells of the game, but can do little but wave his
arms and legs (flip the flippers), which very rarely has any effect at
all.

Tommy, the robot, consists at the moment of a pair of Firewire cameras
and the ability to display messages on the screen and receive keyboard
input -- ironically almost the exact opposite of the rock opera Tommy.
Planned for the relatively near future is exactly one muscle: a
single flipper. Tommy's world will not be a full-fledged pinball game,
but simply a tilted table with the flipper at the bottom.


Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.

The scientific rationale for this is that visual and motor skills
arrive before verbal ones both in ontogeny and phylogeny. Thus I
assume they are more basic and the substrate on which the higher
cognitive abilities are based.  Furthermore I have a good idea what
concepts need to be formed for competence in this area, and so I'll
have a decent chance of being able to tell if the system is going in
the right direction.

I claim that most current AI experiments that try to mine meaning out
of experience are making an odd mistake: looking at sources that are
too rich, such as natural language text found on the Internet. The
reason is that text is already a highly compressed form of data; it
takes a very sophisticated system to produce or interpret. Watching a
ball roll around a blank tabletop and realizing that it always moves
in parabolas is the opposite: the input channel is very low-entropy
(in actual information compared to nominal bits), and thus there is
lots of elbow room for even poor, early, suboptimal interpretations to
get some traction.
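
(To make the nominal-vs-actual-bits point concrete: compressed size is a 
crude upper bound on actual information content. The synthetic blank-tabletop 
frame below is a stand-in for real camera data:)

    import zlib
    import numpy as np

    frame = np.zeros((240, 320), dtype=np.uint8)   # blank tabletop
    frame[100:105, 150:155] = 255                  # a single bright ball

    raw = frame.tobytes()
    packed = zlib.compress(raw, 9)
    print(len(raw) * 8, "nominal bits")            # 614400
    print(len(packed) * 8, "compressed bits")      # a few thousand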

RE: [agi] Tommy

2007-05-11 Thread Derek Zahn
Bob Mottram writes: In order to differentiate this from the rest of the 
robotics crowd you need to avoid building a specialised pinball playing robot. 
 
I can't speak for JoSH, but I got the impression that playing pinball or 
anything similar was not the object; the object was to provide real sensor data 
in a somewhat limited domain to experiment with and observe concept formation.  
You'd like to see it develop object permanence, ball motion, gravity, 
bouncing, and so on.  The goal not being so much to impress people with 
performance on a vertical task but rather to use the task environment as a 
somewhat rich sandbox in which general purpose capabilities can be studied.
 


Re: [agi] Tommy

2007-05-11 Thread Mike Tintner

Josh: Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous

As Daniel Wolpert will tell you, the sea squirt devours its brain as soon as 
it stops moving. In the final and the first analysis, the brain is a device 
for controlling movement:


Movement is the only way we have of interacting with the world, whether 
foraging for food or attracting a waiter's attention. Indeed, all 
communication, including speech, sign language, gestures and writing, is 
mediated via the motor system. Taking this viewpoint, the purpose of the 
human brain is to use sensory signals to determine future actions. The goal 
of our lab is to understand the computational principles underlying human 
sensorimotor control.


http://learning.eng.cam.ac.uk/wolpert/

Computational and Biological Learning Lab

Having written the book, so to speak, aren't you best placed to know that 
this is the age of autonomous MOBILE robots?


P.S.  The other interesting thing here is that evolutionarily, touch 
precedes vision, no? And the two, I suggest, are intertwined in a brain that 
works by "common sense" rather than isolated senses. Michael Tye, I think 
(I forget the name of his theory), has pointed out that we have the illusion 
that we can isolate our senses - just see things, for example - whereas in 
fact our sensory perception of the world is always a common-sense one.


(And, thinking aloud as I write, ALL senses are moving senses. Animals, 
including the simplest one-celled organisms, move their sensors around to 
perceive the world. Touch too, of course  - you have to move your body to 
get a hold of things ).


P.P.S. Can't resist this - set your robot free::
I'm Free
[TOMMY]

I'm free -- I'm free,
And freedom tastes of reality!
I'm free -- I'm free,
And I'm waiting for you to follow me.
If I told you what it takes
To reach the highest high,
You'd laugh and say "Nothing's that simple."
But you've been told many times before
Messiahs pointed to the door
And no one had the guts to leave the temple!
I'm free -- I'm free,
And I'm waiting for you to follow me.
I'm free -- I'm free,
And I'm waiting for you to follow me. 





Re: [agi] Tommy

2007-05-11 Thread Shane Legg

Josh,

Interesting work, and I like the nature of your approach.

We have essentially a kind of pinball machine at IDSIA
and some of the guys were going to work on watching this
and trying to learn simple concepts from the observations.
I don't work on it so I'm not sure what the current state of
their work is.

When you publish something on this please let the list know!

thanks
Shane


Re: [agi] Tommy

2007-05-11 Thread J Storrs Hall, PhD
On Friday 11 May 2007 02:01:09 pm Mike Tintner wrote:
...
 As Daniel Wolpert will tell you, the sea squirt devours its brain as soon as 
 it stops moving. 

As Dan Dennett has pointed out, this resembles what happens when one gets 
tenure...

 In the final and the first analysis, the brain is a device  
 for controlling movement:

Only half, even in a hunter/gatherer context. The other half is participation 
in the social process, which in its essence is pure communication. 
Manipulation of the physical world remains important but has declined 
relative to communication significantly in the modern world.

 (And, thinking aloud as I write, ALL senses are moving senses. Animals, 
 including the simplest one-celled organisms, move their sensors around to 
 perceive the world. Touch too, of course  - you have to move your body to 
 get a hold of things ).

Ultimately I'm thinking in terms of a Cog-like torso with hands -- but that's 
many years off.

 P.P.S. Can't resist this - set your robot free::
  I'm Free
 [TOMMY]

Nice. Maybe someday...

Josh



Re: [agi] Tommy

2007-05-11 Thread Vladimir Nesov
Friday, May 11, 2007, J Storrs Hall, PhD wrote:

JSHP   2. The hard part is learning: the AI has to build its own world
JSHP  model.

And for this it requires a complex enough world to model. Information
about the world can be given by static description (which also includes action-reaction
pairs, but doesn't depend on the system's actions), or dynamically,
providing data on complex requests from the system.
providing data on complex requests of the system.

Physical embodiment provides means to access world by
interaction (dynamic description).

Static description of the physical world (as it can be accessed
in 'natural' ways through vision, hearing, etc.) is not dense in
interesting patterns and is extremely expensive to analyze.

If you limit interaction with the world to that single flipper, it won't
change the situation dramatically from a static description.

And a static description can be given in a much more dense way using some
form of NL-based code.

-- 
 Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Tommy

2007-05-11 Thread J Storrs Hall, PhD
Right. The key issue is autogeny in the mental architecture. Learning will be 
unsupervised to start, with internal feedback from how well the system is 
expecting what it sees next. Then we move into a mode where imitation is the 
key, with the system trying to do what a person just did (e.g. catching the 
ball on the flipper, hitting some certain spot on the table, etc.; note the 
flipper control is a full 1-dof signal, not just a 1-bit button). I can catch 
a ping-pong ball on a paddle in 3-d -- Tommy should be able to learn it in 
2-d with an effective 0.1 G field! To do this he'll have to develop concepts 
to describe what it is I'm trying to do.

There's a LOT you can do with 1 DOF output -- you could even imagine Tommy 
passing the Turing Test by sending Morse code with the flipper :-)
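
(A schematic of the unsupervised phase described above, where the only 
feedback is how well the system expects what it sees next; the scalar "world" 
and running-average predictor are toy stand-ins, not Tommy's actual learner:)

    import random

    def sense():
        return 10.0 + random.gauss(0.0, 1.0)       # toy sensor reading

    estimate, rate = 0.0, 0.1                      # running-average "predictor"
    for _ in range(1000):
        expected = estimate                        # imagine what comes next
        actual = sense()                           # then actually look
        error = actual - expected                  # internal feedback signal
        estimate += rate * error                   # adjust the world model
    print(round(estimate, 1))                      # converges near 10.0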

Josh

On Friday 11 May 2007 01:52:31 pm Derek Zahn wrote:
 Bob Mottram writes: In order to differentiate this from the rest of the 
robotics crowd you need to avoid building a specialised pinball playing 
robot. 
  
 I can't speak for JoSH, but I got the impression that playing pinball or 
anything similar was not the object, the object was to provide real sensor 
data in a somewhat limited domain to experiment with and observe concept 
formation.  You'd like to see it develop object permanence, ball motion, 
gravity, bouncing, and so on.  The goal not being so much to impress people 
with performance on a vertical task but rather to use the task environment as 
a somewhat rich sandbox in which general purpose capabilities can be studied.



Re: [agi] Tommy

2007-05-11 Thread Kingma, D.P.

Yes, thank you, a meaningful and very interesting project. I discussed this
kind of system with a friend of mine half an hour ago.

On 5/11/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:



  2. The hard part is learning: the AI has to build its own world
 model. My instinct and experience to date tell me that this is
 computationally expensive, involving search and the solution of
 tough optimization problems.



This must be the central part of your project. I'm very interested in how
you approach the following problems:
- Concept and pattern representation. If you use some sort of graphical
model, what types of edges, nodes, relations? Something like Ben's SMEPH?
- Concept creation. Do you have single method in mind or multiple methods,
maybe working simultaneously? Data mining methods, statistical methods,
genetic programming, NN's (e.g. Boltzmann machines), ...?
- Concept revision/optimization. You mention you use search techniques;
could you be a little more specific (or give references)? Will there be something
like a wake/sleep cycle, or is optimization done in real-time?

Also, why did you choose a physical implementation and not a virtual one?
Simply because it's more interesting or are there other motives?

These kinds of projects are, of course, very complex and multi-faceted, but
worth it because they force you to think about extremely essential
things like model creation, concept formation, model optimization.

(BTW I ordered your new book Beyond AI this week, and am looking forward to
reading it.)

Please keep us updated on your project.

Kind regards,
Durk Kingma




"That deaf, dumb, and blind kid sure plays a mean pinball."



Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous. This not only saves a lot of construction but
allows me to run the AI on the biggest system I can afford (currently
ten processors) rather than having to shoehorn code and data into
something run off a battery.

Tommy, the pinball wizard kid, was chosen as a name for the system
because of a compelling, to me anyway, parallel between a pinball game
and William James' famous description of a baby's world as a
"blooming, buzzing confusion". The pinball player is in the same
position as a baby in that he has a firehose input stream of sensation
from the lights and bells of the game, but can do little but wave his
arms and legs (flip the flippers), which very rarely has any effect at
all.

Tommy, the robot, consists at the moment of a pair of Firewire cameras
and the ability to display messages on the screen and receive keyboard
input -- ironically almost the exact opposite of the rock opera Tommy.
Planned for the relatively near future is exactly one muscle: a
single flipper. Tommy's world will not be a full-fledged pinball game,
but simply a tilted table with the flipper at the bottom.


Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.

The scientific rationale for this is that visual and motor skills
arrive before verbal ones both in ontogeny and phylogeny. Thus I
assume they are more basic and the substrate on which the higher
cognitive abilities are based.  Furthermore I have a good idea what
concepts need to be formed for competence in this area, and so I'll
have a decent chance of being able to tell if the system is going in
the right direction.

I claim that most current AI experiments that try to mine meaning out
of experience are making an odd mistake: looking at sources that are
too rich, such as natural language text found on the Internet. The
reason is that text is already a highly compressed form of data; it
takes a very sophisticated system to produce or interpret. Watching a
ball roll around a blank tabletop and realizing that it always moves
in parabolas is the opposite: the input channel is very low-entropy
(in actual information compared to nominal bits), and thus there is
lots of elbow room for even poor, early, suboptimal interpretations to
get some traction.

Josh





Re: [agi] Tommy

2007-05-11 Thread William Pearson

On 11/05/07, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:


Tommy, the scientific experiment and engineering project, is almost
all about concept formation. He gets a voluminous input stream but is
required to parse it into coherent concepts (e.g. objects, positions,
velocities, etc). None of these concepts is he given originally. Tommy
1.0 will simply watch the world and try to imagine what happens next.



Interesting. This is somewhat similar to one of the projects that I am
interested in. Assuming sufficient or the correct hardware, I'm
interested in body mounted robotics for Intelligence Augmentation,
using what people would think of as AI.

An example of the robot if not the software
http://www.robots.ox.ac.uk/ActiveVision/Projects/Wear/wear.03/index.html

I would start off with it annotating its visual streams to be passed
to a head-mounted display on the user. Things like tracking objects the
user has pointed at, so the user could see things not directly in
front of him, or highlighting important objects to the user, would be
some of the things it would be initially taught. I would also give it
a controlled, low-power laser pointer so it could visually mark things
for other people apart from its user.

I think this sort of system is a worthy one to study, as it allows the
user and the robot to inhabit the same world (so concepts developed by
the computer should not be too alien to the user, and thus languages
may be shared between them), it also allows for long periods of time
for the researcher to be present with the computer if such time scales
as a baby's development are required for the teaching of human-level
intelligence. It also tries to minimise the amount of
processing/robotics required to share the similar world, meaning more
projects could possibly be attempted at once.

While user and computer do share the same world in your experimental
setup, there may be some concepts that would be hard for it to learn,
such as translation of its PoV. Whether that would be a fatal flaw in
its developed mental model of the world (and would limit its ability to
communicate as its hardware and capabilities developed), I'm not
sure. More experimentation and better theories required, as ever.

 Will Pearson



Re: [agi] Tommy

2007-05-11 Thread Pei Wang

Josh,

This is an interesting idea that deserves detailed discussion.


Since the 90s there has been a strand in AI research that claims that
robotics is necessary to the enterprise, based on the notion that
having a body is necessary to intelligence. Symbols, it is said, must
be grounded in physical experience to have meaning. Without such
grounding AI practitioners are deceiving themselves by calling their
Lisp atoms things like MOTHER-IN-LAW when they really mean no more
than G2250.


I think these people correctly recognized a problem in traditional AI,
though they attributed it to a wrong cause.

My opinion on this issue can be summarized as the following:

*. Meaning comes from experience, and is grounded in experience.

*. However, for AGI, this experience doesn't have to be human experience.

*. Every implemented system already has a body --- the hardware, and
as long as the system has input and output, it has experience that
comes from its body. Of course, since the body is not human body, the
experience is not human experience. However, as far as this discussion
is concerned, it doesn't matter, since this kind of experience is
genuine experience that can be used to ground meaning of concepts.

*. The failure of traditional AI is not that it used standard computer
hardware rather than special hardware (i.e., robots), but that it ignored the
experience of the system when handling the meaning of concepts.

A more detailed discussion and a proposed solution can be found in
http://nars.wang.googlepages.com/wang.semantics.pdf


This has given rise to a plethora of silly little robots (in Minsky's
view, anyway) that scurry around the floor picking up coffeecups and
like activities.


I also think it is not a fruitful direction for AI to move in.


My view lies somewhere between the extremes on this issue:

a) Meaning does not lie in a physical connection. I find meaning in
the concept of price-theoretical market equilibria; I've never seen,
felt, or smelled one. Meaning lies in working computational models,
and true meaning lies in ones that can make correct
predictions.


As you can see from my comment and paper, I agree with your idea in
its basic spirit. However, I think your above presentation is too
vague, and falls far short of what a semantic analysis needs.
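
For example, one way to make "meaning lies in models that predict"
concrete enough to analyze is to score each candidate model by its
predictive success on the system's own history. A toy Python sketch
(my own illustration; nothing here is from your actual system):

def score(model, history):
    # A model "means" something to the extent it predicts observations.
    correct = sum(1 for past, nxt in history if model(past) == nxt)
    return correct / len(history)

# Two candidate "meanings" for a simple sequence concept:
always_same = lambda past: past[-1]
alternating = lambda past: 1 - past[-1]

history = [((0,), 1), ((1,), 0), ((0,), 1)]  # observed (context, next) pairs
print(score(always_same, history))  # 0.0
print(score(alternating, history))  # 1.0 -- the better "meaning"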


b) On the other hand, the following are true:

  1. Without some connection to external constraints, there is a
 strong temptation on the part of researchers to define away the
 hard parts of the AI problem. Even with the best will in the
 world, this happens subconsciously.


Agree.


  2. The hard part is learning: the AI has to build its own world
 model. My instinct and experience to date tell me that this is
 computationally expensive, involving search and the solution of
 tough optimization problems.


Agree, though I've been avoiding the phrase "world model", because of
the intuitive picture it suggests: there is an objective world out
there, and an AI is building an internal model of it, in which concepts
represent objects and beliefs represent factual relations among
objects --- a picture you don't subscribe to, I guess.


That deaf, dumb, and blind kid sure plays a mean pinball.

Thus Tommy. My robotics project discards a major component of robotics
that is apparently dear to the embodiment crowd: Tommy is stationary
and not autonomous. This not only saves a lot of construction but
allows me to run the AI on the biggest system I can afford (currently
ten processors) rather than having to shoehorn code and data into
something run off a battery.


A good idea. As I said above: input/output is necessary for AGI, but no
concrete form of it is, in principle. An AGI doesn't have to be able to
move itself around in the physical world (though it must somehow change
its environment), and doesn't have to have any particular human sense
(though it must somehow sense its environment).


Tommy, the pinball wizard kid, was chosen as a name for the system
because of a compelling, to me anyway, parallel between a pinball game
and William James' famous description of a baby's world as a
"blooming, buzzing confusion". The pinball player is in the same
position as a baby in that he has a firehose input stream of sensation
from the lights and bells of the game, but can do little but wave his
arms and legs (flip the flippers), which very rarely has any effect at
all.


Makes sense.


Tommy, the robot, consists at the moment of a pair of Firewire cameras
and the ability to display messages on the screen and receive keyboard
input -- ironically almost the exact opposite of the rock opera Tommy.
Planned for the relatively near future is exactly one "muscle": a
single flipper. Tommy's world will not be a full-fledged pinball game,
but simply a tilted table with the flipper at the bottom.


I'd suggest adding the "muscle" as soon as possible, to get a
complete sensorimotor cycle.
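
A complete cycle need not be elaborate; even a stub like this closes
the loop from sensing to acting (a minimal Python sketch with invented
stand-in interfaces, not Tommy's actual code):

import random
import time

def read_frame():
    # Stand-in for a Firewire camera grab: a fake brightness reading.
    return random.random()

def flip(flipper_up):
    # Stand-in for the single planned "muscle".
    print("flipper", "UP" if flipper_up else "down")

def policy(frame, state):
    # The learner would live here; for now, a trivial reflex.
    return frame > 0.8, state

state = None
for _ in range(5):
    frame = read_frame()                  # sense
    action, state = policy(frame, state)  # decide
    flip(action)                          # act, closing the cycle
    time.sleep(0.1)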


Tommy, the scientific experiment and engineering project, is almost
all 

Re: [agi] Tommy

2007-05-11 Thread Mike Tintner

Josh,


Since the 90s there has been a strand in AI research that claims that

robotics is necessary to the enterprise, based on the notion that
having a body is necessary to intelligence. Symbols, it is said, must
be grounded in physical experience to have meaning. Without such
grounding AI practitioners are deceiving themselves by calling their
Lisp atoms things like MOTHER-IN-LAW when they really mean no more
than G2250.



Pei: I think these people correctly recognized a problem in traditional
AI, though they attributed it to a wrong cause... Every implemented
system already has a body --- the hardware --- and as long as the
system has input and output, it has experience that comes from its
body. Of course, since the body is not a human body, the experience is
not human experience. However, as far as this discussion is concerned,
that doesn't matter, since this kind of experience is genuine
experience that can be used to ground the meaning of concepts.

Er, there's one thing so screamingly obvious that you guys don't seem
to be taking it into account here. All these machines you are talking
about are basically inert lumps of metal and don't exist without human
beings to switch them on, feed them and interpret them. Humans are
still, pace Rodney B, in the loop. Try taking humans out of the loop
and then see what these standalone computers do and don't understand -
or what they do, period.








Re: [agi] Tommy

2007-05-11 Thread Mike Tintner

Josh,


I'm not quite sure what your angle is here, but I don't seem to be
communicating (please correct me). If, BTW, you and/or others aren't
interested in this whole cultural-history area, please ignore.

I'm saying the last 400 years have been framed by Descartes' and
science's mind VERSUS body dichotomy. That in turn has been expressed
in a whole variety of subsidiary dichotomies and cultural battles:

HUMAN SYSTEM:

mind vs body

Self vs body

reason vs emotion

rationality vs imagination

intelligence vs creativity
(convergent intelligence vs divergent intelligence)

logic vs analogy

intellect vs athleticism

SIGN SYSTEMS / MEDIA:

literacy vs artistic education
(symbolic media vs image media)
(language, maths vs painting, photography, video etc)

ORGANIZED KNOWLEDGE:

cognitive psychology, cognitive sciences vs physical (physiological)
psychology, embodied cognition

sciences vs arts
(general, abstract vs particular, concrete)

[science vs religion]

other-worldly, thought-experimental philosophy vs naturalistic,
science-based philosophy

AI:

Computational AI/AGI vs robotics
(symbolic AI vs situated, embodied, evolutionary robotics)


What has been happening over the last decade or so is that all these
dichotomies have been dissolving. It's arguably a consensus now that
you can't have reason without emotion, but the other dichotomies and
battles are still raging, including throughout AI. Very soon now, I'm
arguing, there will be a consensus about all these things - and in
every case it will be recognised that you can't have the left side,
the pure, rational, disembodied, symbolic side, WITHOUT the right
side, the imaginative, emotional, imagistic side - can't have mind
without body, or AGI without a robotic body.


You will have a corporate science and AI that sees them all as
inseparable sides of a whole. All this is happening now - dualism may
have been bankrupt a while ago, but Dennett has been, and still is,
spending a massive amount of energy arguing against it, because its
influence is still playing out, including in the current battles of AI.

Josh: On Friday 11 May 2007 03:06:52 pm Mike Tintner wrote:

... the mind/body era inaugurated by Descartes (and the first
scientific revolution) is coming to an end right across our culture?


Dualism was intellectually bankrupt by 1950, with the spate of
mechanized logic results from Gödel, Turing, Church, Kleene, etc., and
Shannon's information theory, and Wiener and Rosenblueth's Teleology
paper that was one of the foundations of cybernetics.


The illusion of pure, ethereal, rational mind, which
takes so many forms, is fast fading.


The scientific revolution was informed by the notion that the physical
world
was mechanistic and worked by laws that could be written down and
understood.
Descartes' dualism was a step TOWARD that from the earlier assumption
that both mind and body were moved by mystical life forces. Dualism
said that only the mind was; the body was mechanistic.

The intellectual revolution of the 20th century merely dropped the other
shoe, saying that both mind and body are mechanistic. The revolution in
physical movers was back that-a-way -- somewhere in the Midlands they
should
be celebrating the 300th anniversary of the beginning of the Industrial
Revolution right about now.

It's been a while since putting a motor in a car earned it the name
"auto-mobile" -- nowadays we take for granted that it can move by
itself, and use "auto" to mean things that control themselves as well.
The era of mind -- mechanical rather than ethereal -- is just beginning.

Josh











Re: [agi] Tommy

2007-05-11 Thread Benjamin Goertzel




Computational AI/AGI vs robotics
(symbolic AI vs situated, embodied, evolutionary robotics)


What has been happening over the last decade or so is that all these
dichotomies have been dissolving. It's arguably a consensus now that
you can't have reason without emotion, but the other dichotomies and
battles are still raging, including throughout AI.




It is true that these dichotomies are still a subject of debate among
AI academics, but I actually think a significant percentage of people
on this list agree that they are misleading dichotomies, and that you
can potentially have your cake and eat it too by integrating symbolic
methods with low-level perception/action stuff, as occurs in real and
simulated robotics. Certainly, Novamente incorporates this kind of
integration in its design principles... and it is not the only AGI
design to do so...
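
As a crude sketch of what that kind of integration can look like (a
toy Python illustration of my own, not the actual Novamente design): a
symbolic node can carry a learned perceptual prototype alongside its
logical links, so inference and perception operate on one atom.

class GroundedAtom:
    def __init__(self, name, dim=4):
        self.name = name
        self.links = set()            # symbolic side: relations to other atoms
        self.prototype = [0.0] * dim  # perceptual side: running feature average
        self.count = 0

    def add_link(self, relation, other):
        self.links.add((relation, other))

    def ground(self, features):
        # Fold a new percept into the prototype (incremental mean).
        self.count += 1
        self.prototype = [p + (f - p) / self.count
                          for p, f in zip(self.prototype, features)]

ball = GroundedAtom("ball")
ball.add_link("isa", "toy")            # usable by symbolic inference
ball.ground([0.9, 0.1, 0.5, 0.2])      # and grounded in perception at once
ball.ground([0.7, 0.3, 0.5, 0.4])
print(ball.links, ball.prototype)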

-- Ben
