Re: [agi] hello

2008-08-15 Thread rick the ponderer
On 8/13/08, Jim Bromer [EMAIL PROTECTED] wrote:



For example, the method might be used to combine fragments of surface
features observed in the IO data environment. Combinatoric search can
also be used in the creation and consideration of conjectures about
possible explanations of observed data events. One of the most
important aspects of these kinds of searches is that they can be used
in serendipitous ways to detect combinations or conjectures that
might be useful in some other problem, even when they don't solve the
current search goal they were created for.

Is that any different to clustering?



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244id_secret=111637683-c8fa51
Powered by Listbox: http://www.listbox.com


Re: [agi] hello

2008-08-15 Thread rick the ponderer
On 8/15/08, rick the ponderer [EMAIL PROTECTED] wrote:



 
 Is that any different to clustering?

where you talk about discovering new categories from IO data.





Re: [agi] hello

2008-08-15 Thread Joel Pitt
On Wed, Aug 13, 2008 at 6:31 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 To use Thornton's example, he demonstrated that a checkerboard pattern can
 be learned easily using logic, but it will drive an NN learner crazy.

Note that neural networks are a broad field that includes not only
perceptrons but also self-organising maps and other connectionist
setups.

In particular, Hopfield networks are an associative memory system that
would have no problem learning/memorising a checkerboard pattern (or
any other pattern; the only problem occurs when memorised patterns
begin to overlap).
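The memorisation claim is easy to check: below is a minimal pure-Python Hopfield sketch (the pattern size, noise level, and update count are arbitrary illustrative choices) that stores a 4x4 checkerboard with the Hebbian outer-product rule and recalls it from a corrupted cue.

```python
def train(patterns, n):
    # Hebbian outer-product rule; zero diagonal
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, s, steps=5):
    # synchronous sign updates until (here, well past) convergence
    n = len(s)
    s = list(s)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(n)) >= 0 else -1
             for i in range(n)]
    return s

# 4x4 checkerboard as a +/-1 vector
n = 16
checker = [1 if (i // 4 + i % 4) % 2 == 0 else -1 for i in range(n)]
w = train([checker], n)

noisy = list(checker)
for i in (0, 5, 10):        # corrupt three cells
    noisy[i] = -noisy[i]

print(recall(w, noisy) == checker)  # True: the stored pattern is recovered
```

With a single stored pattern the corrupted cue snaps back to the checkerboard in one update; overlap problems only appear once several patterns are stored.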

A logic system would be a lot more efficient, though.

J




Re: [agi] hello

2008-08-13 Thread YKY (Yan King Yin)
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:

 Reading this, I get the view of AI as basically neural networks, where
each individual perceptron could be any of a number of algorithms
(decision tree, random forest, SVM, etc.).
 I also get the view that academics such as Hinton are trying to find ways
of automatically learning the network, whereas there could also be a
parallel track of engineering the network, manually creating it perceptron
by perceptron, in the way Rodney Brooks advocates a bottom-up subsumption
architecture.

 How does OpenCog relate to the above viewpoint? Is there something
fundamentally flawed in the above as an approach to achieving AGI?

NN *may* be inadequate for AGI, because logic-based learning seems to be, at
least for some datasets, more efficient than NN learning (that includes
variants such as SVMs).  This has been my intuition for some time, and
recently I've found a book that explores this issue in more detail.  See
Chris Thornton, 2000, Truth from Trash: How Learning Makes Sense, MIT
Press, or some of his papers on his website.

To use Thornton's example, he demonstrated that a checkerboard pattern can
be learned easily using logic, but it will drive an NN learner crazy.

It doesn't mean that the NN approach is hopeless, but it faces some
challenges.  Or maybe this intuition is wrong (i.e., do such heavily
logical datasets occur in real life?).
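Thornton's point can be sketched in a few lines (a toy illustration, not his actual experiment): a one-line parity rule fits a checkerboard exactly, while a linear perceptron, which cannot represent the XOR-like structure, never reaches perfect accuracy.

```python
# Checkerboard on an 8x8 grid: label = parity of (x + y).
pts = [(x, y) for x in range(8) for y in range(8)]
labels = {p: 1 if (p[0] + p[1]) % 2 == 0 else -1 for p in pts}

# "Logic" learner: a single parity rule is exact.
rule_acc = sum((1 if (x + y) % 2 == 0 else -1) == labels[(x, y)]
               for x, y in pts) / len(pts)

# Linear perceptron: the checkerboard is not linearly separable,
# so no weight vector can classify it perfectly.
w = [0.0, 0.0]
b = 0.0
for _ in range(200):                      # perceptron update epochs
    for (x, y) in pts:
        pred = 1 if w[0] * x + w[1] * y + b >= 0 else -1
        if pred != labels[(x, y)]:
            t = labels[(x, y)]
            w[0] += t * x
            w[1] += t * y
            b += t
perc_acc = sum((1 if w[0] * x + w[1] * y + b >= 0 else -1) == labels[(x, y)]
               for x, y in pts) / len(pts)

print(rule_acc, perc_acc)   # 1.0 vs. well below 1.0
```

A multilayer network can of course learn this too, but the single linear unit makes the representational gap between "one logical rule" and "one linear boundary" concrete.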

YKY





Re: [agi] hello

2008-08-13 Thread rick the ponderer

Thanks for replying YKY.
Is the logic learning you are talking about inductive logic programming? If
so, isn't ILP basically a search through the space of logic programs (I may
be way off the mark here!)? Wouldn't it be too large a search space to
explore if you're trying to reach AGI?

And if you're determined to learn a symbolic representation, wouldn't
genetic programming be a better choice, since it won't get stuck in local
minima?

Would neural networks be better in that case, because they have
mechanisms, as in Geoff Hinton's papers, that improve on random searching?

Also, if you did manage to learn a giant logic program that represented
AI, could it be easily parallelized the way a neural network can be (so
that it can run in real time)?





Re: [agi] hello

2008-08-13 Thread YKY (Yan King Yin)
On 8/13/08, rick the ponderer [EMAIL PROTECTED] wrote:
 Thanks for replying YKY.
 Is the logic learning you are talking about inductive logic programming?
If so, isn't ILP basically a search through the space of logic programs (I
may be way off the mark here!)? Wouldn't it be too large a search space
to explore if you're trying to reach AGI?
Yes, and I guess the search space would be huge no matter what kind of
learning substrate we use.  At least one redeeming trick (for symbolic AI)
is that we can limit the depth of the search over programs, and my intuition
is that commonsense reasoning is mostly shallow (i.e., involving few
inference steps).
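That depth-limiting trick can be sketched roughly as follows (the predicates and examples are invented for illustration; real ILP systems such as Progol or FOIL do far more): enumerate clause bodies shallowest-first, up to a literal bound, and return the first one covering all positives and no negatives.

```python
from itertools import combinations

# Toy ILP-style search: find a conjunction of background predicates,
# limited to `max_literals`, that covers the positives and no negatives.
preds = {
    "has_wings": lambda x: x in {"sparrow", "bat", "penguin"},
    "lays_eggs": lambda x: x in {"sparrow", "penguin", "crocodile"},
    "can_fly":   lambda x: x in {"sparrow", "bat"},
}
pos = ["sparrow"]                            # target concept: flying bird
neg = ["bat", "penguin", "crocodile"]

def search(max_literals=2):
    for k in range(1, max_literals + 1):     # shallowest hypotheses first
        for body in combinations(preds, k):
            def covers(x):
                return all(preds[p](x) for p in body)
            if all(covers(x) for x in pos) and not any(covers(x) for x in neg):
                return body
    return None

print(search())   # ('lays_eggs', 'can_fly')
```

The bound keeps the enumeration tractable at the price of missing any concept that genuinely needs a deeper clause, which is exactly the bet about commonsense reasoning being shallow.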

 And if you're determined to learn a symbolic representation, wouldn't
genetic programming be a better choice, since it won't get stuck in local
minima?
It is possible to use a GA to search the ILP space; there is research in
that area.  I may use that too.

One interesting question is to compare ILP search in the space of logic
programs vs. genetic programming (i.e., search in program spaces such as
Lisp, combinator logic, or lambda calculus).  Unfortunately I'm unfamiliar
with the latter, so I need some time to study it.
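A GA over such a hypothesis space might look like this sketch (the data is an invented toy problem; a bitmask encodes which predicates appear in the clause body, and fitness counts covered positives minus covered negatives):

```python
import random

random.seed(0)
PRED = ["has_wings", "lays_eggs", "can_fly"]
TRUTH = {                        # which predicates hold for which example
    "sparrow":   [1, 1, 1],
    "bat":       [1, 0, 1],
    "penguin":   [1, 1, 0],
    "crocodile": [0, 1, 0],
}
POS, NEG = ["sparrow"], ["bat", "penguin", "crocodile"]

def covers(mask, ex):
    return all(TRUTH[ex][i] for i in range(len(PRED)) if mask[i])

def fitness(mask):               # +1 per positive covered, -1 per negative
    return (sum(covers(mask, e) for e in POS)
            - sum(covers(mask, e) for e in NEG))

pop = [[random.randint(0, 1) for _ in PRED] for _ in range(6)]
for _ in range(1000):            # mutate the best, replace the worst
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 1:     # perfect score on this toy problem
        break
    child = pop[0][:]
    child[random.randrange(len(PRED))] ^= 1
    pop[-1] = child

best = max(pop, key=fitness)
```

Mutation here is a single bit-flip; a real GP system over Lisp-like program trees would use subtree crossover and mutation instead, but the select-mutate-replace loop is the same shape.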

 Would neural networks be better in that case because they have the
mechanisms, as in Geoff Hinton's papers, that improve on random searching?

This is just the age-old debate of symbolic AI vs. connectionism, given a
new twist in the context of machine learning.  Note that the first debate
was never really settled.  So my bet is that we need NN-style learning at
the low levels and symbolic-style learning at the high levels.  I tend to
focus on the symbolic side.  I'm very skeptical whether NN learning can
solve high-level symbolic problems.

 Also, if you did manage to learn a giant logic program that represented
AI, could it be easily parallelized the way a neural network can be (so
that it can run in real time)?


Yes, logical inference can be parallelized.  I have a book about it, but I
haven't bothered to study it yet -- design first, optimize later.
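As a rough illustration of how such inference parallelizes (invented facts, not any particular prover): one common scheme is data parallelism, where the fact base is partitioned and each partition is joined against the full relation in parallel workers.

```python
from concurrent.futures import ThreadPoolExecutor

# Data-parallel forward chaining sketch for one rule:
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
parent = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]

def derive(chunk):
    # join each parent(X, Y) in this chunk against the full parent relation
    return [(x, z) for (x, y) in chunk for (y2, z) in parent if y == y2]

chunks = [parent[:2], parent[2:]]          # two independent partitions
with ThreadPoolExecutor(max_workers=2) as ex:
    grandparent = [fact for part in ex.map(derive, chunks) for fact in part]

print(sorted(grandparent))   # [('a', 'c'), ('b', 'd'), ('c', 'e')]
```

The partitions share nothing but read-only facts, so the derivations are embarrassingly parallel; the harder engineering problems (shared working memory, iterating to a fixpoint) are exactly the ones a "design first, optimize later" plan defers.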

YKY





Re: [agi] hello

2008-08-13 Thread Jim Bromer
On Wed, Aug 13, 2008 at 4:14 AM, rick the ponderer [EMAIL PROTECTED] wrote:

 Thanks for replying YKY.
 Is the logic learning you are talking about inductive logic programming? If
 so, isn't ILP basically a search through the space of logic programs (I may
 be way off the mark here!)? Wouldn't it be too large a search space to
 explore if you're trying to reach AGI?

 And if you're determined to learn a symbolic representation, wouldn't
 genetic programming be a better choice, since it won't get stuck in local
 minima?

There is no reason why symbolic reasoning could not incorporate some
kind of random combinatoric search method, like those used in GA
searches. Categorical imagination can be used to examine the possible
creation of new categories; the method does not have to be limited to
examining new combinations of previously derived categories, and it
does not have to be limited to incremental methods either.

For example, the method might be used to combine fragments of surface
features observed in the IO data environment. Combinatoric search can
also be used in the creation and consideration of conjectures about
possible explanations of observed data events.  One of the most
important aspects of these kinds of searches is that they can be used
in serendipitous ways to detect combinations or conjectures that
might be useful in some other problem, even when they don't solve the
current search goal they were created for.

While discussions about these subjects must utilize some traditional
frames of reference, the conventions of their use in conversation
should not be considered absolute limits on their possible
modifications.  They can be used as starting points for further
conversation.  YKY's and Ben Goertzel's recent comments sound as if
they are referring to strictly predefined categories when they talk
about symbolic methods, but I would be amazed if that represents their
ultimate goals in AI research.

Similarly, other unconventional methods can be considered when
thinking about ANNs and GAs, but I think that novel approaches to
symbolic methods offer the best bet, for some of the same reasons
that YKY mentioned.

Jim Bromer




RE: [agi] Hello from Kevin Copple

2002-12-09 Thread Ben Goertzel

Gary Miller wrote:
 I also agree that the AGI approach of modeling and creating a
 self-learning system is a valid bottom-up approach to AGI.  But it is
 much harder for me, with my limited mathematical and conceptual knowledge
 of the research, to grasp how and when these systems will be able to
 jumpstart themselves and evolve to the point of communicating in English.

Sure.

In my view, the path involves teaching an AGI to carry out simple tasks in
an environment (physical or digital) and then teaching it to communicate
about these tasks and related entities in its environment...

 While it is true that most bots today generate a reflexive response
 based only on the user's input, it is possible to extend bot technology
 by generating the response based upon the following additional internal
 stimuli not provided in the current input they are responding to.  These
 stimuli provide at least a portion of the grounding I think you are
 referring to.

Hm...

Actually, I think you're getting at a deep point here.

Potentially, *conversational pragmatics* and *inferred psychology* can be
used to ground *semantics*, for a chat bot...

For example, suppose there's a pattern of word usage, sentence length, etc.,
which correlates with humans being angry.

The bot can learn to correlate this pattern with the word "angry".

It is thus grounding the word "angry" with a nonlinguistic pattern...

It may then learn different patterns corresponding to "very angry" versus
"slightly angry"...

Suppose there's also a pattern of word usage, sentence length, punctuation
use, etc., that corresponds to the emotion of "happy"... and "very happy"
vs. "slightly happy"...

If it also understands "very long sentence" vs. "slightly long sentence" vs.
"not long sentence" [via grounding these in sentence lengths], then it may
be able to extrapolate from these examples to form an abstract model of
"very"-ness in general...
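A toy version of that feature-grounding idea (the sample texts and thresholds are invented): correlate surface features of text -- shouting caps and exclamation marks -- with the label "angry".

```python
# Toy sketch: ground the word "angry" in nonlinguistic surface
# features of text, here all-caps words and exclamation marks.
samples = [
    ("WHY IS THIS BROKEN AGAIN!!", True),
    ("I TOLD YOU TO STOP!!!", True),
    ("thanks, that fixed it", False),
    ("sounds good, see you tomorrow", False),
]

def features(text):
    words = text.split()
    caps = sum(w.isupper() for w in words) / max(len(words), 1)
    bangs = text.count("!")
    return caps, bangs

def looks_angry(text, caps_thresh=0.5, bang_thresh=1):
    caps, bangs = features(text)
    return caps >= caps_thresh or bangs > bang_thresh

print(all(looks_angry(t) == label for t, label in samples))  # True
```

The thresholds are hand-set here; the interesting step, as described above, is learning them from correlations with observed reactions, and then comparing feature magnitudes across words to get at "very"-ness.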

Based on this line of thinking, I have to modify and partially retract my
previous statement.

If a chat bot is given the ability to study patterns in language usage, such
as the ones mentioned above, then it may use these patterns as a
nonlinguistic domain in which to ground its linguistic knowledge...

So, I think that truly intelligent language usage COULD potentially be
learned by a chat bot...

I still think this is trickier than learning it via a more
physical-world-ish grounding domain, but it's far from impossible...

Very interesting point, Gary, thanks!!

-- Ben





---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/?[EMAIL PROTECTED]



Re: [agi] Hello from Kevin Copple

2002-12-08 Thread Pei Wang
From: Kevin Copple [EMAIL PROTECTED]


 It seems to me that rote memorization is an aspect of human learning, so
 why not include a variety of jokes, poems, trivia, images, and so on as
 part of an AI knowledge base?  In the EllaZ system we refer to these
 chunks of data as Convuns (conversational units).

This is an important issue.  One extreme approach in AI and CogSci is to
reduce the meaning of linguistic chunks (phrases, sentences, paragraphs, and
texts) into basic components (examples: Schank's CD theory, Wierzbicka's
primes, and Frege's principle of compositionality).  I think such an
approach is fine for certain formal languages, but definitely not OK for
some others, and especially won't work well for any natural language.

However, I feel many statistical NLP approaches are going to the other
extreme, that is, taking a linguistic chunk as a whole, without analysing
its semantic relation to its components.  I don't think we can go very far
in this direction either.  I hope Ella doesn't fall into this category.

 Ella was lucky enough to win the 2002 Loebner Prize Contest, which can be
 somewhat arbitrary with the limited number of judges and limited length of
 conversations.  She has a number of functional features that I suspect the
 engineering students selected as judges were more likely to test and
 appreciate.

I didn't find any documentation about the system on the website.  Is there
any that you can share with us?

Though I'm also interested in the I Ching, your claim that "The I Ching (Yi
Jing), dating as far back as 2000 B.C., can be considered to be the first
computational AI, and the first binary computer" is still way too strong
for me to agree with.  ;-)

 I am currently living in Tianjin, China, having sold my import/export
 chemicals business to a competitor.  My wife, Zhang Ying, is a local girl
 who doesn't care for the food in the US and doesn't like being away from
 her friends and family.  So, I am between jobs and working on
 www.EllaZ.com for the next year or so.

Tianjin is my hometown, and I've been back there every summer in recent years.
I hope you enjoy your life there.

Pei

 We are always on the lookout for collaborators and ideas we can borrow
:-)

 Cheers . . . Kevin Copple









RE: [agi] Hello from Kevin Copple

2002-12-08 Thread Ben Goertzel

Hi Kevin,

I know something about the Loebner prize from following the AI career of
Jason Hutchens, whose chatbot won the contest in the late 90's, and whom I
knew slightly when I was living in Perth (Western Australia) in the
mid-90's.

Your approach, on the surface, seems fairly similar to Jason's.  I'm
guessing you're familiar with his work, but others may not be so I'll post
some links here.

An old but good paper of his on chatbots and the Loebner contest:
http://ciips.ee.uwa.edu.au/Papers/Technical_Reports/1997/05/

His company a-i.com, which essentially shut down about a year ago:
http://www.a-i.com/

My own approach is quite the opposite.  While I do aim to make my Novamente
AI system (www.realai.net) chat eventually, I've detailed a complex design
for integrative cognition, and I think we need a lot of that implemented
before we can have the system chat in a meaningful way [i.e., chat while
understanding what it's talking about].

I have a lot of skepticism about any approach to AGI that is primarily or
entirely language-focused.  I doubt it's going to be possible to get a
system to have any significant general intelligence unless it has access
not only to language, but also to a nonlinguistic domain in which some of
its linguistic experience can be "grounded" or "anchored" [to use two
related terms from the cog sci literature].

I agree that a significant part of human conversation consists of rote
memory and reflexive responses according to habitual communication
patterns.  To me, however, these are the least interesting parts of human
conversation.  And I'm not sure how far mimicking these parts of human
conversation gets you, in terms of emulating the other parts, which
involve deeper thought.

One thing is sure though: the database of conversational units you're
compiling could be *very useful* to a Novamente system [or other AGI system]
that was trying to learn to chat [though we're not ready for that quite
yet].  It will be a great source of information on conversational
pragmatics...  Please continue maintaining that DB, and maintain it
carefully!!!  One of these days I'll be wanting to talk to you about an
arrangement for sharing it ;)

-- Ben Goertzel







 -Original Message-
 From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On
 Behalf Of Kevin Copple
 Sent: Sunday, December 08, 2002 4:11 AM
 To: [EMAIL PROTECTED]
 Subject: [agi] Hello from Kevin Copple


 I just recently joined this e-mail list after following some
 links posted by
 Tony Lofthouse in the Generation5 forum. I am working on a
 natural language
 project that can be seen at www.EllaZ.com, and am interested in
 what you all
 are up to.  The e-mails I have received from this list in the
 last day or so
 have been interesting and informative.  Thanks!

 My approach to doing something in the AI field is to start with basic
 interface, knowledge, and functional features that can be implemented and
 demonstrated.  Now that a basic framework is in place, the system can be
 expanded and built upon as various techniques are identified as useful and
 incorporated.

 It seems to me that rote memorization is an aspect of human
 learning, so why
 not include a variety of jokes, poems, trivia, images, and so on
 as part of
 an AI knowledge base?  In the EllaZ system we refer to these
 chunks of data
 as Convuns (conversational units).  One plan is for the system to log
 interactions with users and identify patterns of interest.  The
 system would
 then be able to predict which Convuns a user would most likely be
 interested
 in, and also be able to evaluate the interest in a particular Convun.
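That Convun-selection plan might be sketched like so (the log format and scoring are entirely hypothetical, not the EllaZ system's actual design): score each Convun by the fraction of positive reactions in the interaction log, and rank.

```python
# Hypothetical interaction log: (user, convun_id, liked).
logs = [
    ("u1", "joke_42", True), ("u1", "poem_7", False),
    ("u2", "joke_42", True), ("u2", "trivia_3", True),
    ("u3", "poem_7", True),
]

def interest(convun):
    # fraction of logged reactions to this Convun that were positive
    votes = [liked for _, c, liked in logs if c == convun]
    return sum(votes) / len(votes) if votes else 0.0

# alphabetical pre-sort makes tie-breaking deterministic
ranked = sorted(sorted({c for _, c, _ in logs}), key=interest, reverse=True)
print(ranked)   # ['joke_42', 'trivia_3', 'poem_7']
```

A real system would also condition on the individual user (which Convuns did *similar* users enjoy?), but even this global score separates the well-received units from the duds.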

 Ella was lucky enough to win the 2002 Loebner Prize Contest, which can be
 somewhat arbitrary with the limited number of judges and limited length of
 conversations.  She has a number of functional features that I suspect the
 engineering students selected as judges were more likely to test and
 appreciate.

 I am currently living in Tianjin, China, having sold my
 import/export chemicals
 business to a competitor.  My wife, Zhang Ying, is a local girl
 who doesn't
 care for the food in the US and doesn't like being away from her
 friends and
 family.  So, I am between jobs and working on www.EllaZ.com for the next
 year or so.

 We are always on the lookout for collaborators and ideas we can
 borrow :-)

 Cheers . . . Kevin Copple




