[agi] Grounding

2002-12-09 Thread Kevin Copple
Okay, I am bored, or maybe just lazy today, so please let me weigh in and
ramble a bit:

Vectors and scalars are great, and may be the best route to learning for a
given system, but it hardly seems obvious that they are a prerequisite to
learning for an AI that exceeds general human intellectual capacity.  I was
a chemical engineer in one of my former lives, and I can say that vectors
are definitely more lovable than the criminal defendants I was appointed to
represent in my former life as an attorney.  The defendants were mostly
interested in the rather binary guilty vs. not guilty.

Retinas have pixels, don't they?  Perhaps our perception of scalars is
actually recognition of patterns in discrete points.  You could readily make
an image people recognize as a circle, using only pawns as discrete points
on a chessboard.
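The chessboard-circle point is easy to make concrete. A toy sketch (illustrative only; the board size and radius are arbitrary choices, not anything from the thread) that marks "pawn" squares lying near a circle's rim on an 8x8 grid:

```python
# Place "pawns" on an 8x8 board so the discrete points read as a circle.
import math

SIZE = 8
center = (SIZE - 1) / 2.0   # 3.5 -> geometric center of the board
radius = 3.0

board = [["." for _ in range(SIZE)] for _ in range(SIZE)]
for row in range(SIZE):
    for col in range(SIZE):
        # Mark squares whose distance from the center is close to the radius.
        dist = math.hypot(row - center, col - center)
        if abs(dist - radius) < 0.5:
            board[row][col] = "P"

for row in board:
    print(" ".join(row))
```

Sixteen discrete pawns, and the eye still reads a circle, which is the point about scalars emerging from patterns over discrete points.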

Wouldn't chess be a domain where an AGI could learn and excel, with no
vectors or scalars in sight?  Much of what is fundamental is binary: on/off,
dead/alive, male/female, married/single, smile/frown, and so on.

A miss is as good as a mile.

 . . . Kevin C.

P.S. To me a key fundamental is "Artificial Motivation."  Give an entity the
desire to accomplish goals, plus tools to use, then the ability to learn.

Example:  I was hungry, but now am full.  I wanted to reproduce, and
satisfied that urge.  Now I am tired of thinking, and want to consume more
of that wet fermented grain to stop the process for a while.  Ahh,
cultivating barley to make beer is good.  Oops, inadvertently founded
civilization.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] AI on TV

2002-12-09 Thread James Rogers
On 12/9/02 7:13 PM, "Pei Wang" <[EMAIL PROTECTED]> wrote:
> On this issue, we can distinguish 4 approaches:
> 
> (1) let symbols get their meaning through "interpretation" (provided in
> another language) --- this is the approach used in traditional symbolic AI.
> 
> (2) let symbols get their meaning by grounding on textual experience ---
> this is what I and Kevin suggested.
> 
> (3) let symbols get their meaning by grounding on simplified perceptual
> experience  --- this is what Ben and Shane suggested.
> 
> (4) let symbols get their meaning by grounding on human-level perceptual
> experience --- this is what Brooks (the robotics researcher at MIT) and
> Harnad (who raised the "symbol grounding" issue in the first place)
> proposed.


I can be put pretty much in the (2) camp.  This is adequate for proving the
basic capability of the system and you can incrementally add (3+) later.  I
mostly view this as a pragmatic engineering issue though; no need to
unnecessarily complicate the test environment until you can prove the system
is capable of handling the simplest environment.  It is a much easier
development trajectory unless you believe that (3) or (4) are an absolute
minimum for the system to work at all (obviously I don't).

Cheers,

-James Rogers
 [EMAIL PROTECTED]




Re: [agi] general patterns induction

2002-12-09 Thread James Rogers
On 12/9/02 7:33 PM, "Pablo" <[EMAIL PROTECTED]> wrote:
> 
> If Solomonoff is powerful enough, I hope it "realizes" by itself that
> data is grouped in "words" when it happens so - haha. I'm not working
> with human language, but who knows, maybe I'll get in a similar way -
> I'll tell you if that happens =)


Words will in fact become emergent entities using adaptations of this
construct with some time/training, and with even more time/training,
sentence structure will also start to emerge.  It doesn't even matter what
language you are using.  Interactive training isn't even necessary unless
you want to reduce the number of iterations.

I will warn you ahead of time though, that the real trick is the design of
the actual software.  All data structures/algorithms described in the literature
thus far are naive and don't scale at all to even vaguely interesting levels
of complexity.  It can be done, but you'll have to do some original work to
make it viable. :-)

BTW, although the Li/Vitanyi book is expensive (I get all my Springer-Verlag
stuff for free), it is heavily referenced in other papers on the subject and
is designed as a graduate-level textbook, which makes it both accessible and
a good reference.  If you are serious about pursuing this line of thought,
it is about as good a book as you'll find on that area of mathematics.  It
isn't deeply speculative, but it gives you the foundations for everything
you'll need to know so that YOU can be deeply speculative. ;-)  I don't use
my copy much any more, but it is a good bible to have around.

Cheers,

-James Rogers
 [EMAIL PROTECTED]




RE: [agi] Grounding

2002-12-09 Thread Peter Voss
True. The more fundamental point is that symbols representing entities and
concepts need to be grounded with (scalar) attributes of some sort.

How this is *implemented* is a practical matter. One important consideration
for AGI is that data is easily retrievable by vector distance (similarity)
and that new patterns can be learned (and unlearned) incrementally.

Peter

http://adaptiveai.com/



-Original Message-
Behalf Of Ben Goertzel

Well, the fact that clustering requires vectors for A2I2, is a property of
your particular AI algorithms...

Our Novamente clustering MindAgent is based on the Bioclust clustering
algorithm, which does not act on vectors:

...

Translating textual experience directly into weighted graphs is often more
natural than translating it into vectors.  A lot of NLP frameworks use graph
representations




RE: [agi] Grounding

2002-12-09 Thread Ben Goertzel

Well, the fact that clustering requires vectors for A2I2, is a property of
your particular AI algorithms...

Our Novamente clustering MindAgent is based on the Bioclust clustering
algorithm, which does not act on vectors:

http://www.math.tau.ac.il/~rshamir/algmb/00/scribe00/html/lec12/node1.html

Rather, it acts on (undirected) weighted graphs [which exist as subsets of
Novamente's directed weighted hypergraph knowledge representation].  You can
always turn a set of vectors into a weighted graph, or vice versa, but the
transformation can be very impractical sometimes...

Translating textual experience directly into weighted graphs is often more
natural than translating it into vectors.  A lot of NLP frameworks use graph
representations

-- Ben


> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Peter Voss
> Sent: Monday, December 09, 2002 10:04 PM
> To: [EMAIL PROTECTED]
> Subject: [agi] Grounding
>
>
> I think it's more than a matter of 'pragmatics': In order to do
> unsupervised
> learning (clustering) of grounded entities and concepts, they *must* be
> derived from vector-encodable input data. Obviously, not all
> inputs need to
> represent continuous attributes/ features, but foundational ones do.
>
> Peter
>
> http://adaptiveai.com/
>
>
>
>
> -Original Message-
> Behalf Of Ben Goertzel
>
> Kevin,
>
> I'm sure you're right in a theoretical sense, but in practice, I have a
> strong feeling it will be a lot easier to teach an AGI stuff if one has a
> nonlinguistic world to communicate to it about.
>
> Rather than just communicating in math and English, I think
> teaching will be
> much easier if the system can at least perceive 2D pixel
> patterns.  It'll be
> a lot nicer to be able to tell it "There's a circle" when there's a circle
> on the screen [that you and it both see] -- to tell it "the
> circle is moving
> fast", "You stopped the circle", etc. etc.  Then to have it see a
> whole lot
> of circles so that, in an unsupervised way, it gets used to perceiving
> them
>
> This is not a matter of principle, it's a matter of
> pragmatics  I think
> that a perceptual-motor domain in which a variety of cognitively simple
> patterns are simply expressed, will make world-grounded early language
> learning much easier...
>
> -- Ben
>




RE: [agi] Tony's 2d World

2002-12-09 Thread Ben Goertzel

Tony's 2D training world is a lot simpler than A2I2's, for now.  [He is
quite free to share details with you or this list, though.]

For one thing, his initial shape-world is perception only, involving no
action!  The simple stuff that we're going to test with it right now, does
not involve action, and no advanced perception either; mostly just some
aspects of cognition.  We're going to meet in January to discuss the details
of incorporating action into the shape-world (among other things)...

I think that we'll spend some months playing around with very simple
prototype worlds.  Then we will want to have a really high-quality flexible
shape-world framework, and at that point, it would be really grand to
coordinate efforts with A2I2 and possibly other AGI projects as well.  If
Tony proceeds rapidly, that may be around mid-2003

I think that it would be great to have a common testing framework for AGI
systems, involving 2D shape perception & manipulation, and the option of
simultaneous NL communication  This would save AGI developers work, and
would also provide a nice way to compare AGI systems with each other, and to
allow various baby AGI systems to interact.

-- Ben G




> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of Peter Voss
> Sent: Monday, December 09, 2002 10:15 PM
> To: [EMAIL PROTECTED]
> Subject: [agi] Tony's 2d World
>
>
> Hey Tony - are you on this list? How are you doing? Can we have a look at
> your world (or spec)? Perhaps we can co-ordinate our efforts somehow.
>
>
> Peter
>
> http://adaptiveai.com/
>
>
>
> -Original Message-
> Behalf Of Ben Goertzel
>
>
> ... [Although, in fact, Tony Lofthouse is coding up a simple 2D
> training-world right now, just to test
> some of the current Novamente cognitive functions in isolation,
> even though
> the system is not yet ready for real experiential learning]
>




RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel


> On this issue, we can distinguish 4 approaches:
>
> (1) let symbols get their meaning through "interpretation" (provided in
> another language) --- this is the approach used in traditional
> symbolic AI.
>
> (2) let symbols get their meaning by grounding on textual experience ---
> this is what I and Kevin suggested.
>
> (3) let symbols get their meaning by grounding on simplified perceptual
> experience  --- this is what Ben and Shane suggested.
>
> (4) let symbols get their meaning by grounding on human-level perceptual
> experience --- this is what Brooks (the robotics researcher at MIT) and
> Harnad (who raised the "symbol grounding" issue in the first place)
> proposed.

In Novamente, we plan to start with 3 but to fairly quickly move to a
combination of 3 and 2

I think that Peter Voss plans to stay with 3 alone for a longer period...

-- Ben




RE: [agi] general patterns induction

2002-12-09 Thread Pablo
Alan, thanks for the support!!! Your kind words really encourage me on
my work =) From now I'll say "pattern discovery"

James: I've just placed an order for that book at amazon =) I'll se what
I can get from it (I hope a lot, because the book is quite expensive!!)

Gary: your book looks interesting too, but I feel the "Solomonoff" thing
seems more generic. I'll try your book after that one.

> Your project probably involves doing the prediction on a character
> rather than a word basis but if you happen to be thinking along the
> line of words instead of characters, I would be interested in hearing
> more about your work.

If Solomonoff is powerful enough, I hope it "realizes" by itself that
data is grouped in "words" when it happens so - haha. I'm not working
with human language, but who knows, maybe I'll get in a similar way -
I'll tell you if that happens =)

Cliff: what you told us about the languages was quite interesting.
Unfortunately for me, trigrams are a given pattern, and what I want is for
the machine to discover patterns by itself. Anyway it's a good approach.
It's always good to have handy simple tools when everything gets
fuzzy...
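For contrast with discovered patterns, a trigram predictor of the kind Cliff brought up looks like this (toy code, not anyone's actual system; the context length of 2 characters is fixed in advance, which is exactly the "given pattern" limitation):

```python
# Trigram model: predict the next character from the previous two.
from collections import Counter, defaultdict

def train_trigrams(text):
    counts = defaultdict(Counter)
    for i in range(len(text) - 2):
        counts[text[i:i+2]][text[i+2]] += 1
    return counts

def predict(counts, context):
    """Most frequent character seen after the 2-char context, or None."""
    followers = counts[context]
    return followers.most_common(1)[0][0] if followers else None

model = train_trigrams("abcabcabcabd")
print(predict(model, "ab"))   # 'c' follows "ab" three times, 'd' once
```

The machine never chooses the window size or the representation; a general pattern-discovery system would have to induce that structure itself.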

Thank you all again

Kind Regards
Pablo Carbonell




-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On behalf
of Alan Grimes
Sent: Monday, December 09, 2002 02:56 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] general patterns induction

Congrats!

You have just earned a spot in my list of the world's top-ten AI
researchers. 

Compared to the NLPers, you are the one who is _REALLY_ working on AI. I
don't know what the NLPers are smoking but you are on the right track.

I generally frown on the term "pattern recognition" but, perhaps,
pattern discovery would work...

I wrote an article on my thinking on this recently that I would have to
dig out of the archives of [EMAIL PROTECTED] to send you. 

Please consider me at your disposal for any additional help on the
subject. 

There are no books on the subject, you are writing your own... Welcome
to the cutting edge! =)


Pablo wrote:
> I'm looking for information about "pattern induction" or "general
> patterns" or anything that sounds like that...
> 
> What I want to do is, having a stream of data, predict what may come.
> (yes, and then take over the world... sorry if it sounds like Pinky and
> The Brain!!)
> 
> I guess general patterns induction is related to data compression,
> because if we find a pattern in a string, then we don't have to write
> all the characters every time the pattern appears. Surely someone has
> already been working on that (who?)
> 
> Anyone would please give me a clue? Is there any book I should read?? Is
> there any book like "AI basics", "introduction to AI", or "AI for
> dummies" that may help before?
> 
> Thanks a lot!
> 
> Pablo Carbonell
> 
> PS: thanks Ben, Kevin and Eliezer for the previous help
> 

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/


 




[agi] Grounding

2002-12-09 Thread Peter Voss
I think it's more than a matter of 'pragmatics': In order to do unsupervised
learning (clustering) of grounded entities and concepts, they *must* be
derived from vector-encodable input data. Obviously, not all inputs need to
represent continuous attributes/features, but foundational ones do.
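A minimal sketch of the kind of unsupervised clustering over vector-encodable input Peter has in mind (a tiny k-means in pure Python; the data, cluster count, and iteration count are all invented for illustration, and A2I2's actual method is surely different):

```python
# Tiny k-means: assign points to nearest center, recompute centers, repeat.
import math

def kmeans(points, k=2, iters=10):
    centers = points[:k]                      # naive init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # Assign each point to the nearest center (Euclidean distance).
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            groups[i].append(p)
        # New center = coordinate-wise mean of its group (keep old if empty).
        centers = [tuple(sum(x) / len(g) for x in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

pts = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
centers, groups = kmeans(pts)
print(centers)
```

The whole algorithm leans on vector distance and vector averaging, which is the sense in which clustering of this sort presupposes vector-encodable input.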

Peter

http://adaptiveai.com/




-Original Message-
Behalf Of Ben Goertzel

Kevin,

I'm sure you're right in a theoretical sense, but in practice, I have a
strong feeling it will be a lot easier to teach an AGI stuff if one has a
nonlinguistic world to communicate to it about.

Rather than just communicating in math and English, I think teaching will be
much easier if the system can at least perceive 2D pixel patterns.  It'll be
a lot nicer to be able to tell it "There's a circle" when there's a circle
on the screen [that you and it both see] -- to tell it "the circle is moving
fast", "You stopped the circle", etc. etc.  Then to have it see a whole lot
of circles so that, in an unsupervised way, it gets used to perceiving
them

This is not a matter of principle, it's a matter of pragmatics  I think
that a perceptual-motor domain in which a variety of cognitively simple
patterns are simply expressed, will make world-grounded early language
learning much easier...

-- Ben




[agi] Tony's 2d World

2002-12-09 Thread Peter Voss
Hey Tony - are you on this list? How are you doing? Can we have a look at
your world (or spec)? Perhaps we can co-ordinate our efforts somehow.


Peter

http://adaptiveai.com/



-Original Message-
Behalf Of Ben Goertzel


... [Although, in fact, Tony Lofthouse is coding up a simple 2D
training-world right now, just to test
some of the current Novamente cognitive functions in isolation, even though
the system is not yet ready for real experiential learning]




Re: [agi] AI on TV

2002-12-09 Thread Pei Wang
On this issue, we can distinguish 4 approaches:

(1) let symbols get their meaning through "interpretation" (provided in
another language) --- this is the approach used in traditional symbolic AI.

(2) let symbols get their meaning by grounding on textual experience ---
this is what I and Kevin suggested.

(3) let symbols get their meaning by grounding on simplified perceptual
experience  --- this is what Ben and Shane suggested.

(4) let symbols get their meaning by grounding on human-level perceptual
experience --- this is what Brooks (the robotics researcher at MIT) and
Harnad (who raised the "symbol grounding" issue in the first place)
proposed.

My opinion is: in principle, the approach (1) doesn't work well for AI,
while the last 3 approaches are in the same category.  Of course, the richer
the experience is, the more capable the system will be.  However, to
actually develop an AGI theory/system, I'd rather start with (2), and leave
(3) for the next step, and (4) for the future.   Therefore, though I
basically agree with what Ben and Shane said, I won't do that in NARS very
soon.

Pei

- Original Message -
From: "Shane Legg" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, December 09, 2002 9:44 PM
Subject: Re: [agi] AI on TV


>
> I think my position is similar to Ben's; it's not really what you
> ground things in, but rather that you don't expose your limited
> little computer brain to an environment that is too complex --
> at least not to start with.  Language, even reasonably simple
> context free languages, could well be too rich for a baby AI.
> Trying to process 3D input is far too complex.  Better then to
> start with something simple like 2D pixel patterns as Ben suggests.
> The A2I2 project by Peter Voss is taking a similar approach.
>
> Once very simple concepts and relations have been formed at this
> level then I would expect an AI to be better able to start dealing
> with richer things like basic language using what it learned
> previously as a starting point.  For example, relating simple
> patterns of language that have an immediate and direct relation
> to the visual environment to start with and slowly building up
> from there.
>
> Shane
>





RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel

> I think my position is similar to Ben's; it's not really what you
> ground things in, but rather that you don't expose your limited
> little computer brain to an environment that is too complex --
> at least not to start with.  Language, even reasonably simple
> context free languages, could well be too rich for a baby AI.
> Trying to process 3D input is far too complex.  Better then to
> start with something simple like 2D pixel patterns as Ben suggests.
> The A2I2 project by Peter Voss is taking a similar approach.
>
> Once very simple concepts and relations have been formed at this
> level then I would expect an AI to be better able to start dealing
> with richer things like basic language using what it learned
> previously as a starting point.  For example, relating simple
> patterns of language that have an immediate and direct relation
> to the visual environment to start with and slowly building up
> from there.
>
> Shane

As Shane and I know, but everyone on this list may not, so I'll say it
anyway: Peter Voss and I discussed this a fair bit before he started the
A2I2 project  I think we each influenced each other's ideas about AI
teaching/training a bit, although we came into the dialogue with some fairly
similar ideas on the topic in the first place.

The big differences between the A2I2 approach and the Novamente approach
are:

1) A2I2 is much closer to a neural net approach [involving neural-gas like
stuff, and other NN methods as well, some of them innovative], whereas
Novamente occupies a middle ground between subsymbolic & symbolic approaches

2) In the A2I2 project, they're starting off right away with trying to teach
the system based on perceptual-motor experience in a simple 2D domain.  In
Novamente, we are deferring this until we have our (more complex) cognitive
infrastructure more fully implemented and tested.  [Although, in fact, Tony
Lofthouse is coding up a simple 2D training-world right now, just to test
some of the current Novamente cognitive functions in isolation, even though
the system is not yet ready for real experiential learning]


-- Ben G




Re: [agi] AI on TV

2002-12-09 Thread Shane Legg

I think my position is similar to Ben's; it's not really what you
ground things in, but rather that you don't expose your limited
little computer brain to an environment that is too complex --
at least not to start with.  Language, even reasonably simple
context free languages, could well be too rich for a baby AI.
Trying to process 3D input is far too complex.  Better then to
start with something simple like 2D pixel patterns as Ben suggests.
The A2I2 project by Peter Voss is taking a similar approach.

Once very simple concepts and relations have been formed at this
level then I would expect an AI to be better able to start dealing
with richer things like basic language using what it learned
previously as a starting point.  For example, relating simple
patterns of language that have an immediate and direct relation
to the visual environment to start with and slowly building up
from there.

Shane



Re: [agi] AI on TV

2002-12-09 Thread Alan Grimes
Ben Goertzel wrote: 
> This is not a matter of principle, it's a matter of pragmatics  I 
> think that a perceptual-motor domain in which a variety of cognitively 
> simple patterns are simply expressed, will make world-grounded early 
> language learning much easier...

If anyone has the software for this, please tell me! =)

-- 
pain (n): see Linux.
http://users.rcn.com/alangrimes/




RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel

Kevin,

I'm sure you're right in a theoretical sense, but in practice, I have a
strong feeling it will be a lot easier to teach an AGI stuff if one has a
nonlinguistic world to communicate to it about.

Rather than just communicating in math and English, I think teaching will be
much easier if the system can at least perceive 2D pixel patterns.  It'll be
a lot nicer to be able to tell it "There's a circle" when there's a circle
on the screen [that you and it both see] -- to tell it "the circle is moving
fast", "You stopped the circle", etc. etc.  Then to have it see a whole lot
of circles so that, in an unsupervised way, it gets used to perceiving
them

This is not a matter of principle, it's a matter of pragmatics  I think
that a perceptual-motor domain in which a variety of cognitively simple
patterns are simply expressed, will make world-grounded early language
learning much easier...

-- Ben

> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
> Behalf Of maitri
> Sent: Monday, December 09, 2002 5:52 PM
> To: [EMAIL PROTECTED]
> Subject: Re: [agi] AI on TV
>
>
> I don't want to underestimate the value of embodiment for an AI system,
> especially for the development of consciousness.  But this is just my
> opinion...
>
> As far as a very useful AGI, I don't see the necessity of a body
> or sensory
> inputs beyond textual input.  Almost any form can be represented as
> mathematical models that can easily be input to the system in that manner.
> I'm sure there are others on this list that have thought a lot more about
> this than I have..
>
> Kevin
>
> - Original Message -
> From: "Shane Legg" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, December 09, 2002 4:18 PM
> Subject: Re: [agi] AI on TV
>
>
> > Gary Miller wrote:
> > > On Dec. 9 Kevin said:
> > >
> > > "It seems to me that building a strictly "black box" AGI that
> only uses
> > > text or graphical input\output can have tremendous
> implications for our
> > > society, even without arms and eyes and ears, etc.  Almost
> anything can
> > > be designed or contemplated within a computer, so the need for dealing
> > > with analog input seems unnecessary to me.  Eventually, these will be
> > > needed to have a complete, human like AI.  It may even be better that
> > > these first AGI systems will not have vision and hearing
> because it will
> > > make it more palatable and less threatening to the masses"
> >
> > My understanding is that this current trend came about as follows:
> >
> > Classical AI systems were either largely disconnected from the physical
> > world or lived strictly in artificial micro worlds.  This led to a
> > number of problems including the famous "symbol grounding problem" where
> > the agent's symbols lacked any grounding in an external reality.  As a
> > reaction to these problems many decided that AI agents needed to be
> > more grounded in the physical world, "embodiment" as they call it.
> >
> > Some now take this to an extreme and think that you should start with
> > robotic and sensory and control stuff and forget about logic and what
> > thinking is and all that sort of thing.  This is what you see now in
> > many areas of AI research, Brooks and the Cog project at MIT being
> > one such example.
> >
> > Shane
> >
> >
>
>




Re: [agi] AI on TV

2002-12-09 Thread Pei Wang
I have a paper
(http://www.cogsci.indiana.edu/farg/peiwang/PUBLICATION/#semantics) on this
topic, which is mostly in agreement with what Kevin said.

For an intelligent system, it is important for its concepts and beliefs to
be grounded on the system's experience, but such experience can be textual.
Of course, sensorimotor experience is richer, but it is not fundamentally
different from textual experience.

Pei

- Original Message -
From: "maitri" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Monday, December 09, 2002 5:52 PM
Subject: Re: [agi] AI on TV


> I don't want to underestimate the value of embodiment for an AI system,
> especially for the development of consciousness.  But this is just my
> opinion...
>
> As far as a very useful AGI, I don't see the necessity of a body or sensory
> inputs beyond textual input.  Almost any form can be represented as
> mathematical models that can easily be input to the system in that manner.
> I'm sure there are others on this list that have thought a lot more about
> this than I have..
>
> Kevin
>
> - Original Message -
> From: "Shane Legg" <[EMAIL PROTECTED]>
> To: <[EMAIL PROTECTED]>
> Sent: Monday, December 09, 2002 4:18 PM
> Subject: Re: [agi] AI on TV
>
>
> > Gary Miller wrote:
> > > On Dec. 9 Kevin said:
> > >
> > > "It seems to me that building a strictly "black box" AGI that only uses
> > > text or graphical input\output can have tremendous implications for our
> > > society, even without arms and eyes and ears, etc.  Almost anything can
> > > be designed or contemplated within a computer, so the need for dealing
> > > with analog input seems unnecessary to me.  Eventually, these will be
> > > needed to have a complete, human like AI.  It may even be better that
> > > these first AGI systems will not have vision and hearing because it will
> > > make it more palatable and less threatening to the masses"
> >
> > My understanding is that this current trend came about as follows:
> >
> > Classical AI systems were either largely disconnected from the physical
> > world or lived strictly in artificial micro worlds.  This led to a
> > number of problems including the famous "symbol grounding problem" where
> > the agent's symbols lacked any grounding in an external reality.  As a
> > reaction to these problems many decided that AI agents needed to be
> > more grounded in the physical world, "embodiment" as they call it.
> >
> > Some now take this to an extreme and think that you should start with
> > robotic and sensory and control stuff and forget about logic and what
> > thinking is and all that sort of thing.  This is what you see now in
> > many areas of AI research, Brooks and the Cog project at MIT being
> > one such example.
> >
> > Shane
> >
> >
>
>





Re: [agi] AI on TV

2002-12-09 Thread maitri
I don't want to underestimate the value of embodiment for an AI system,
especially for the development of consciousness.  But this is just my
opinion...

As far as a very useful AGI goes, I don't see the necessity of a body or
sensory inputs beyond textual input.  Almost any form can be represented as
a mathematical model that can easily be input to the system in that manner.
I'm sure there are others on this list who have thought a lot more about
this than I have...

Kevin




Re: [agi] AI on TV

2002-12-09 Thread maitri
that's him...






Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
Gary Miller wrote:

On Dec. 9 Kevin said:
 
"It seems to me that building a strictly "black box" AGI that only uses 
text or graphical input\output can have tremendous implications for our 
society, even without arms and eyes and ears, etc.  Almost anything can 
be designed or contemplated within a computer, so the need for dealing 
with analog input seems unnecessary to me.  Eventually, these will be 
needed to have a complete, human like AI.  It may even be better that 
these first AGI systems will not have vision and hearing because it will 
make it more palatable and less threatening to the masses"

My understanding is that this current trend came about as follows:

Classical AI systems were either largely disconnected from the physical
world or lived strictly in artificial micro worlds.  This led to a
number of problems, including the famous "symbol grounding problem", where
the agent's symbols lacked any grounding in an external reality.  As a 
reaction to these problems many decided that AI agents needed to be
more grounded in the physical world, "embodiment" as they call it.

Some now take this to an extreme and think that you should start with
robotic and sensory and control stuff and forget about logic and what
thinking is and all that sort of thing.  This is what you see now in
many areas of AI research, Brooks and the Cog project at MIT being
one such example.

Shane




Re: [agi] AI on TV

2002-12-09 Thread Shane Legg
maitri wrote:

 
The second guy was from either England or the states, not sure.  He was 
working out of his garage with his wife.  He was trying to develop robot 
AI including vision, speech, hearing and movement.

This one's a bit more difficult, Steve Grand perhaps?

http://www.cyberlife-research.com/people/steve/

Shane



RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel



 
I was at Starlab one week after it folded.  Hugo was the only one left
there -- he was living in an apartment in the building.  It was a huge,
beautiful, ancient building, formerly the Czech Embassy to Brussels.  I
saw the CAM-Brain machine (CBM) there, disabled by Korkin (the maker) due
to non-payment...

There is a CBM in use at ATR in Japan [where Hugo used to work], but it's
mostly being used for simple hardware-type experiments, not advanced
learning...  There was one at Lernout-Hauspie, but I don't know what
became of it when that firm went under...

Hugo is currently designing the CBM-2, and I've given him some possibly
useful ideas in that regard...

I can sympathize somewhat with Korkin: he spent his own $$ on the
hardware, and then Starlab did not pay him, breaking its contractual
obligations.  He is struggling financially.  And Hugo was not at all
politic or sympathetic in dealing with him, because Hugo is always so
wrapped up in his own problems.  Well, such is human life...  I tried
briefly to help smooth things over w/ Korkin, but Hugo's attitude was
sufficiently out-there that it was not possible...

-- Ben



RE: [agi] AI on TV

2002-12-09 Thread Gary Miller



On Dec. 9 Kevin said:

"It seems to me that building a strictly "black box" AGI that only uses
text or graphical input\output can have tremendous implications for our
society, even without arms and eyes and ears, etc.  Almost anything can
be designed or contemplated within a computer, so the need for dealing
with analog input seems unnecessary to me.  Eventually, these will be
needed to have a complete, human like AI.  It may even be better that
these first AGI systems will not have vision and hearing because it will
make it more palatable and less threatening to the masses"

I agree wholeheartedly.  Sony and Honda, as well as several military
contractors, are spending tens, perhaps hundreds, of millions of dollars
on R&D robotics programs which incorporate the vision, analog control,
and data acquisition for industry, the military, and yes, even the toy
companies.

Once AGIs are ready to fly, they will be able to interface with these
systems through software APIs (Application Programming Interfaces) and
will not even care about the low-level programs that enable them to move
about and visually survey their environments.

Too often those who seek the spotlight are really sincere, but either
need recognition for their own self-reassurance or a method of attracting
potential funding.

There seems to be an unwritten law in the universe which says all major
inventions will involve major sacrifice and loss for those who dare to
tackle what has been deemed impossible by others.  From Galileo to
Edison, to Tesla, to maybe one of us.  Before we succeed, if we succeed,
the universe will exact its toll.  For nature will not give up her
secrets willingly, and intelligence may be her most closely guarded
secret of all!

Don't forget that genius and madness sometimes walk arm in arm!

And as the man says, if you weren't crazy when you got in, you probably
will be before you get out!

  

Re: [agi] AI on TV

2002-12-09 Thread maitri



Ben,
 
I just read the bio.  You gave a lot more play to his ideas than the show
did.  You probably know this, but Starlab has folded and I think he was
off to the States...

The show seemed to indicate that nothing of note ever came out of the
project.  In fact, it appeared to not generate one new network.  What
they didn't detail was the cause of this.  It could have been hardware
related, I don't know.  They were also having serious contract problems
with the Russian fellow who built it.  He had effectively disabled the
machine from the US until he got some more money, which eventually killed
the whole thing.  What a waste.  Maybe you can buy the machine off eBay
now.  They said it would be auctioned...

They did give a lot of play to his seemingly contrarian ideas about the
implications of his work.  It was a rather dismal outlook on society's
lack of general acceptance of AI and\or enhancement.  I hope he was off
base in this area, but I wouldn't be surprised if a small group of
radical anti-AI people emerge with hostile intent.  Another good reason
not to be so visible!!
 
Kevin



Re: [agi] AI on TV

2002-12-09 Thread maitri



Indeed it was... I'll read the bio with interest...
 
 



RE: [agi] Hello from Kevin Copple

2002-12-09 Thread Ben Goertzel

Gary Miller wrote:
> I also agree that the AGI approach of modeling and creating a self
> learning system is a valid bottom up approach to AGI.  But it is much
> harder for me with my limited mathematical and conceptual knowledge of
> the research to grasp how and when these systems will be able jumpstart
> themselves and evolve to the point of communicating in English.

Sure.

In my view, the path involves teaching an AGI to carry out simple tasks in
an environment (physical or digital) and then teaching it to communicate
about these tasks and related entities in its environment...

> While it is true that most bots today generate a reflexive response
> based only on the user's input, it is possible to extend bot technology
> by generating the response based upon the following additional internal
> stimuli not provided in the current input they are responding to.  These
> stimuli provide at least a portion of the grounding I think you are
> referring to.

Hm...

Actually, I think you're getting at a deep point here.

Potentially, *conversational pragmatics* and *inferred psychology* can be
used to ground *semantics*, for a chat bot...

For example, suppose there's a pattern of word usage, sentence length, etc.,
which correlates with humans being angry.

The bot can learn to correlate this pattern with the word "angry."

It is thus grounding the word "angry" with a nonlinguistic pattern...

It may then learn different patterns corresponding to "very angry" versus
"slightly angry" ..

Suppose there's also a pattern of word usage, sentence length, punctuation
use, etc., that corresponds to the emotion of "happy"  ... and "very happy"
vs. "slightly happy"

If it also understands "very long sentence" vs. "slightly long sentence" vs.
"not long sentence" [via grounding these in sentence lengths], then it may
be able to extrapolate from these examples to form an abstract model of
"very"-ness in general...

Based on this line of thinking, I have to modify and partially retract my
previous statement.

If a chat bot is given the ability to study patterns in language usage, such
as the ones mentioned above, then it may use these patterns as a
"nonlinguistic" domain in which to ground its linguistic knowledge...

So, I think that truly intelligent language usage COULD potentially be
learned by a chat bot...

I still think this is trickier than learning it via a more
physical-world-ish grounding domain, but it's far from impossible...
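The pattern-correlation idea above can be sketched in a few lines. This is only a toy illustration: the surface features (word count, exclamation marks, shouted words), the labeled examples, and the nearest-centroid rule are all invented here, not anything from an actual chat bot.

```python
# Toy sketch: ground an emotion word like "angry" in nonlinguistic
# surface patterns of text, as discussed above. All features and data
# are invented for illustration.

def features(utterance):
    """Map an utterance to a small surface-pattern feature vector."""
    words = utterance.split()
    return (
        len(words),                                    # length in words
        utterance.count("!"),                          # exclamation marks
        sum(w.isupper() for w in words if len(w) > 1), # shouted words
    )

# Invented examples: utterances a human has labeled angry or happy.
EXAMPLES = {
    "angry": ["WHY would you DO that!!", "This is USELESS! Fix it NOW!"],
    "happy": ["thanks, that worked nicely", "great, see you tomorrow"],
}

def centroids(examples):
    """Average feature vector per emotion label."""
    out = {}
    for label, utts in examples.items():
        vecs = [features(u) for u in utts]
        out[label] = tuple(sum(col) / len(vecs) for col in zip(*vecs))
    return out

def ground(utterance, cents):
    """Label a new utterance by its nearest centroid (squared distance)."""
    f = features(utterance)
    return min(cents,
               key=lambda lab: sum((a - b) ** 2
                                   for a, b in zip(f, cents[lab])))

if __name__ == "__main__":
    c = centroids(EXAMPLES)
    print(ground("WHY are you SO SLOW!!", c))  # prints: angry
```

The same machinery would extend to "very angry" vs. "slightly angry" by regressing feature intensity rather than picking the nearest label.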

Very interesting point, Gary, thanks!!

-- Ben








RE: [agi] AI on TV

2002-12-09 Thread Ben Goertzel



 

  There was a show on the tube last night on 
  TechTV.  It was part of their weekly Secret, Strange and True 
  series.  They chronicled three guys who are working on creating advanced 
  AI. 
   
  One guy was from Belgium.  My 
  apologies to him if he reads this list, but he was a rather quirky and 
  stressed character.  He had designed a computer that was basically a 
  collection of chips.  He raised a million and had it built on spec.  
  I gather he was expecting something to miraculously emerge from this 
  collection, but alas, nothing did.  It was really stressful watching his 
  stress.  He had very high visibility in the country and the pressure was 
  immense as he promised a lot.  I have real doubts about his approach, 
  even though I am a lay-AI person.  Also, its clear from watching him that 
  its sometimes good to have shoestring budgets and low visibility.  Less 
  stress and more forced creativity in your approach... 
   
  Kevin:  Was the guy from Belgium perhaps Hugo de 
  Garis??  [Who is not in Belgium anymore, but who designed a radical 
  hardware based approach to AGI, and who is a bit of a quirky guy?? 
  ...]
   
  I visited Hugo at Starlab [when it existed] in Brussels 
  in mid-2001
   
  See my brief bio of Hugo at
   
   http://www.goertzel.org/benzine/deGaris.htm
   
   
  -- Ben 
G


[agi] AI on TV

2002-12-09 Thread maitri



There was a show on the tube last night on 
TechTV.  It was part of their weekly Secret, Strange and True series.  
They chronicled three guys who are working on creating advanced 
AI. 
 
One guy was from Belgium.  My apologies to him 
if he reads this list, but he was a rather quirky and stressed character.  
He had designed a computer that was basically a collection of chips.  He 
raised a million and had it built on spec.  I gather he was expecting 
something to miraculously emerge from this collection, but alas, nothing 
did.  It was really stressful watching his stress.  He had very high 
visibility in the country and the pressure was immense as he promised a 
lot.  I have real doubts about his approach, even though I am a lay-AI 
person.  Also, it's clear from watching him that it's sometimes good to have
shoestring budgets and low visibility.  Less stress and more forced 
creativity in your approach...
 
The second guy was from either England or the 
states, not sure.  He was working out of his garage with his wife.  He 
was trying to develop robot AI including vision, speech, hearing and 
movement.  He was clearly floundering as he radically redesigned what he 
was doing probably a dozen times during the 1-hour show.  I think this
experimentation has value.  But I really wonder if large scale trial and 
error will result in AGI.  I don't think so.  I think trial and error 
will, of course, be essential during development, but T and E of the entire 
underlying architecture seems a folly to me.  Since the problem is SO 
immense, I believe one must start with a very sound and detailed game plan that 
can be tweaked as things move along.
 
The last guy was Brooks at MIT.  They were
developing a robot with enhanced vision capabilities.  They also failed
miserably.  I am rather glad that they did.  They're funded by DOD, and are
basically trying to build a robotic killing machine.  Just what we
need.
 
It seems to me that trying to tackle the vision 
problem is too big of a place to start.  While all this work will have 
value down the line, is it essential to AGI?  It seems to me that building 
a strictly "black box" AGI that only uses text or graphical input\output can 
have tremendous implications for our society, even without arms and eyes and 
ears, etc.  Almost anything can be designed or contemplated within a 
computer, so the need for dealing with analog input seems unnecessary to 
me.  Eventually, these will be needed to have a complete, human like 
AI.  It may even be better that these first AGI systems will not have 
vision and hearing because it will make it more palatable and less threatening 
to the masses.
 
The show was rather discouraging, especially if one
considers that these three folks are leading the way towards AGI.  As for
me, I think others in the field are a lot further along...  Nonetheless, I'm
sure the show will be rerun and may be a worthwhile watch for those
here...
 
Kevin


RE: [agi] Hello from Kevin Copple

2002-12-09 Thread Gary Miller
Ben you said:



RE: [agi] EllaZ systems

2002-12-09 Thread Kevin Copple
Hey Ben,

It seems that recent college IT grads here hope to earn about 3000rmb
(375usd) a month, but often must settle for less.  This is based on my
rather limited knowledge.  Hopefully I will know more in the near future,
since I have been getting the word out and have a local headhunter looking
for some candidates.  One prospect who is not willing to leave his job for
short term work responded, "you are offering too much."

>I guess the important thing is to store as much data as possible, in a
>clearly structured way.

>People can always postprocess the data using their own scripts, so long as
>the information is there and is clearly structured...

Yes, I agree with this sentiment.  I am thinking along the lines of full
conventional citation plus other data such as location and original date of
creation.  We may indulge in a little overkill, since I have already
experienced remorse at not recording more detail in some of the early
stages.  Trial and error remains a great teacher.

>XML or RDF type syntax is generally easy for people to work with...

XML may be the way to go.  Perhaps XML files can largely replace DBs, and a
translation from XML to a DB should be straightforward.  A relational DB
could allow associating one convun to another, thus illustrating a joke or
poem, for example.  Those types of relationships may be difficult with XML,
but could be done programmatically, at least to some extent.  This AI
business sure could consume a lot of "gurus."
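As a rough illustration of what such an XML "convun" record might look like -- element names, IDs, and the cross-reference scheme below are all invented for this sketch, not EllaZ's actual format -- a unit could carry its citation data plus links both to a related unit and to the conversations it appeared in:

```python
# Hypothetical XML record for one conversational unit ("convun"),
# with citation metadata, a relational-style cross-reference, and a
# link back to a conversation. All names here are invented.
import xml.etree.ElementTree as ET

convun = ET.Element("convun", id="joke-0042", type="joke")
ET.SubElement(convun, "text").text = "Why did the robot cross the road?"

cite = ET.SubElement(convun, "citation")
ET.SubElement(cite, "author").text = "Anonymous"
ET.SubElement(cite, "source").text = "EllaZ collection"
ET.SubElement(cite, "date").text = "2002-12-09"
ET.SubElement(cite, "location").text = "Tianjin, China"

# One convun illustrating another (the joke-illustrates-poem case).
ET.SubElement(convun, "illustrates", ref="poem-0017")
# Link to a conversation the unit was used in, for later context mining.
ET.SubElement(convun, "used-in", conversation="conv-20021209-031")

xml_text = ET.tostring(convun, encoding="unicode")
print(xml_text)
```

Loading such records into a relational DB later is then a mechanical walk over the elements, which is why the XML-first approach loses little.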

>I would definitely want each conversational unit linked to each conversation
>it was embodied in -- the full conversational history... so that the context
>could be determined...  One of the interesting things to mine from this
>dataset is how people respond to context...

I will add "Ben" to my WordNet gloss for "ambitious" :-)  . . . good point
though.  We are now able to conveniently store mind-boggling amounts of text
data.  Ella will display the entire text of Kant's Critique of Pure Reason
in a single window of your browser (it's amazing that those scrollbars never
wear out).  The one-microprocessor bottleneck is the big limitation (for me
anyway).

>On a different topic: If you plan to involve statistical NLP technology in
>the next phase of your project, that could be an interesting thing to talk
>about ... it's not something I'm working on now, but we played around with
>it a lot at Webmind Inc. ...

Thanks for the idea.  I have been meaning to take a closer look at what has
gone on at Webmind Inc.

Later . . . Kevin




[agi] Reverse engineering

2002-12-09 Thread maitri



Here's a good writeup on a team working to reverse 
engineer the brain...
 
http://www.discover.com/dec_02/feattech.html
 
Kevin


RE: [agi] EllaZ systems

2002-12-09 Thread Ben Goertzel

Hi Kevin,

> Since wages are so low
> here, even for well-educated people, I am in the process of hiring a few
> people for a year or so to move our project along faster.  Please let me
> know if you have any leads or suggestions.

I am curious: How  much does it cost, roughly, to hire a good programmer
there with the ability to understand AI concepts?

[Not that I plan to expand Novamente's software operations to China at the
moment, I'm just curious ;) ]

> Ben, one of the challenges it seems is how best to structure the Convun
> database so as to maximize its use for intelligent systems.
> There is likely
> no clear correct approach, so we will just do our best.  I will
> try soon to
> submit a description of where we are headed to this mailing list
> and ask for
> comments.

I guess the important thing is to store as much data as possible, in a
clearly structured way.

People can always postprocess the data using their own scripts, so long as
the information is there and is clearly structured...

XML or RDF type syntax is generally easy for people to work with...

I would definitely want each conversational unit linked to each conversation
it was embodied in -- the full conversational history... so that the context
could be determined...  One of the interesting things to mine from this
dataset is how people respond to context...
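The linking idea above is easy to sketch: if each unit records which conversation it came from and its turn number, the full history preceding any unit can be recovered for context mining. The record layout and data below are invented for illustration.

```python
# Sketch: recover the conversational context preceding each unit,
# given units tagged with (conversation_id, turn_number, speaker, text).
# Field layout and sample data are invented.
from collections import defaultdict

UNITS = [
    ("conv-1", 0, "user", "tell me a joke"),
    ("conv-1", 1, "bot",  "Why did the robot cross the road?"),
    ("conv-1", 2, "user", "why?"),
    ("conv-2", 0, "user", "what is AGI?"),
]

def context_index(units):
    """Map each (conversation, turn) to the texts that preceded it."""
    by_conv = defaultdict(list)
    for rec in sorted(units, key=lambda r: (r[0], r[1])):
        by_conv[rec[0]].append(rec)
    ctx = {}
    for conv, recs in by_conv.items():
        for i, rec in enumerate(recs):
            ctx[(conv, rec[1])] = [r[3] for r in recs[:i]]  # prior texts
    return ctx

ctx = context_index(UNITS)
print(ctx[("conv-1", 2)])  # the two turns before "why?" in conv-1
```

With an index like this, "how people respond to context" becomes a query over (context, response) pairs rather than over isolated utterances.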

On a different topic: If you plan to involve statistical NLP technology in
the next phase of your project, that could be an interesting thing to talk
about ... it's not something I'm working on now, but we played around with
it a lot at Webmind Inc. ...

-- Ben
