FW: [agi] Early Apps.

2002-12-29 Thread Ben Goertzel

This message from James Rogers seems to have gone to SL4 instead of AGI ...


-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of James Rogers
Sent: Sunday, December 29, 2002 8:39 PM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Early Apps.


On 12/29/02 4:22 PM, Gary Miller [EMAIL PROTECTED] wrote:

 Each web client would need to run a separate instance of the application,
 also due to the bot's need to maintain context within each conversation.


I don't know anything about your application, but couldn't a properly
threaded app handle this with a single process image plus a little context
memory?  It certainly sounds like it could.
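
For concreteness, here is a minimal Python sketch of the threaded design
being suggested: one shared bot engine, a small per-session context object,
and locks so concurrent clients don't interleave.  All names here are
illustrative; nothing is taken from Gary's actual application.

    import threading

    class Conversation:
        """Per-client context the bot keeps between turns."""
        def __init__(self):
            self.history = []            # prior utterances for this client
            self.lock = threading.Lock()

    class BotServer:
        """One process image serving many conversations."""
        def __init__(self):
            self.sessions = {}           # session_id -> Conversation
            self.sessions_lock = threading.Lock()

        def handle(self, session_id, utterance):
            with self.sessions_lock:
                convo = self.sessions.setdefault(session_id, Conversation())
            with convo.lock:             # serialize turns per client only
                convo.history.append(utterance)
                return self.respond(utterance, convo.history)

        def respond(self, utterance, history):
            # stand-in for the real bot engine
            return f"echo ({len(history)} turns): {utterance}"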


 I have some college students giving me a bid on a Web interface and trying
 to get me an inexpensive host site, but the last estimate I received from
 a national ISP was about $750 a month.


[ OFFTOPIC ]

$750/month for what?  That much money will get you a few Mbps (not even
oversubscribed) on a GigE fiber backbone plus rack space around here (and
in many other parts of the country).  And these are standard commercial
rates for a dedicated circuit straight to the core, not preferential rates
or rates with looser SLAs.  It sounds like you are getting ripped off on
price.

What are you looking for, precisely?  Go ahead and respond offline, but I
might be able to help you out.  I am an original principal at one of the
fastest-growing top-tier network providers in the world (one attracting a
good amount of buzz, both for its network performance and for the fact
that it is making money and growing while everyone else goes bankrupt).
This is one of the investments I am actively involved in on a weekly
basis, so I can personally see what they could do for you.  They continue
to be exceptionally price-competitive for what they offer.

[ /OFFTOPIC ]


 Note the mam(m){a|e}l in the pattern allows mammal to be spelled mamel,
 mamal, mammal, or mammel, to allow for common misspellings.  An additional
 Levenshtein distance calculation is also used if a less common misspelling
 is encountered and the input does not match pattern 1.


This seems like a very brittle way of doing things.  A smart system should
be able to read through misspellings automatically, based on the context
of the entire sentence.  That makes for vastly smaller memory and
computational requirements than trying to exhaustively search some
arbitrary parameter space.  Statistical methods would be better here, but
you'd need a pretty clever data structure to make such searches efficient.
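
To make the two-stage scheme concrete, here is an illustrative Python
translation of what Gary describes -- his pattern notation mam(m){a|e}l
rendered as a standard regex, with an edit-distance fallback for rarer
misspellings.  The distance threshold of 2 is a guess, not a figure from
his system.

    import re

    MAMMAL = re.compile(r"\bmam(m)?[ae]l\b")  # mamal, mamel, mammal, mammel

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def matches_mammal(word):
        if MAMMAL.search(word):
            return True                          # stage 1: anticipated variants
        return levenshtein(word, "mammal") <= 2  # stage 2: rare misspellings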


 But some, such as the pattern for greetings, can run 2400 characters in
 length to accommodate all of the various ways of greeting a person.


I'm guessing that if you efficiently compressed all those greetings you
are testing for, the result would take less than 2400 characters (or 4800
bytes, since you are probably storing Unicode strings in .NET), probably
by a fair margin.  Not that this is particularly important to you, but
there is a theoretical notion buried in there.  I realize that what you
are doing is mostly chatbot-type stuff, but it seems incredibly expensive,
in both time and resources, to try to scale this.
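
James's claim is easy to test empirically: concatenate the greeting
variants and deflate them.  The Python snippet below uses a made-up
greeting list for illustration; running the same few lines on the real
2400-character pattern would give the actual ratio.

    import zlib

    greetings = "|".join([
        "hello", "hi", "hey", "howdy", "good morning", "good afternoon",
        "good evening", "greetings", "what's up", "how are you",
    ])
    raw = greetings.encode("utf-16-le")  # 2 bytes per char, as in .NET strings
    packed = zlib.compress(raw, 9)       # deflate at maximum compression
    print(len(raw), "->", len(packed), "bytes")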


Cheers,

-James Rogers
 [EMAIL PROTECTED]




RE: [agi] Early Apps.

2002-12-29 Thread Ben Goertzel


Gary Miller wrote:
 I agree that as humans we bring a lot of general knowledge with us when
 we learn a new domain.  That is why I started off with the general
 conversational domain and am now branching into science, philosophy,
 mathematics, and history.  And of course the AI cannot make all the
 connections without being extensively interviewed on a subject and
 having a human help clarify its areas of confusion, just as a parent
 answers questions for a child or a teacher for a student.  I am not in
 fact trying to take the exhaustive, one-domain-at-a-time approach, but
 rather to teach it the most commonly known and requested information
 first.  My last email just used that description to identify my thoughts
 on grounding.  I am hoping that by doing this, and by repeating the
 interviewing process in an iterative development cycle, the bot will
 eventually be able to discuss many different subjects at a somewhat
 superficial level, much the same as most humans can.  This is a lot
 different from the exhaustive definition that Cyc provides for each
 concept.

Gary, I respect the hypothesis you're making here: it is a scientific
hypothesis in the sense of Karl Popper, i.e. it is pragmatically
falsifiable.  You can try this approach and see how it works.  It is not
identical to the expert-systems approach, though it has some
commonalities.

My own intuition is that this approach will not succeed -- that conversing
with humans is not going to get across enough of the tacit, implicit
knowledge that a mind needs to have to really converse intelligently in any
nontrivial subject area.  I think that even if the implicit knowledge seems
to *us* to be there in the conversations, it won't be there *for the system*
unless the system has had some experience gaining implicit knowledge of its
own via nonlinguistic world-interaction.

 I don't think AI lacks sufficient theory, just sufficient execution.

Well, here I profoundly disagree with you.  I think that the
generally-accepted AI theories are profoundly wrong, and extremely limited
in their view of how intelligence must operate.  I think AI's failure to
execute is directly based on the failure of its theories to accept and
encompass the full complexity of the mind.

 I feel like the Cyc Project's heart was in the right place and the level
 of effort was certainly great, but perhaps the purity of their vision
 took priority over usability of the end result.  Is any company actually
 using Cyc as anything other than a search engine yet?

 That being said, other than Cyc I am at a loss to name any serious AI
 efforts which are over a few years in duration and have more than five
 man-years' worth of effort (not counting promotion and fundraising).

My Novamente project certainly fits this description.  The Webmind AI
project had about 70 man-years of effort go into it between 1997 and
early 2001.
Novamente is Webmind's successor -- different code, different mathematics,
different software architecture, but the same spirit, and building on
Webmind's successes and mistakes.  Novamente has had maybe 7 man-years of
effort go into it since mid-2001.

-- Ben G




RE: [agi] Thinking may be overrated.

2002-12-29 Thread Ben Goertzel

Kevin Copple wrote:
 Thinking in humans, much like genetic evolution, seems to involve
 predominately trial and error.  Even the logic we like to use is more
 often than not faulty, but can lead us to try something different.  And
 example of popular logic that is invariably faulty is reasoning
 by analogy.
 It is attractive, but always breaks down on close examination.  But this
 type of reasoning will lead to a trial that may succeed, possibly
 because of
 the attractive similarities, but more likely in spite of them.

I don't agree with this paragraph, although I see some truth in it.

I think that trial-and-error-based idea evolution is one important aspect
of human cognition, but not the *predominant* aspect.  It may predominate
in some circumstances, but those would be unusual ones, where there was
little pertinent background knowledge.

Analogical inference can be formulated rigorously in probabilistic terms.
It does have a guesswork aspect to it, but it's a well-organized way of
managing conditional probabilities... in my view ;-)
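
Ben doesn't spell out Novamente's formulation here, but one generic,
textbook-style way to cast analogy as management of conditional
probabilities is a similarity-weighted estimate over known cases.  The
sketch below is offered purely as an illustration of the idea, not as
Novamente's actual method:

    def analogical_probability(target, cases, similarity):
        """Estimate P(target has property) from analogous known cases.

        cases: list of (entity, has_property) pairs; similarity returns a
        non-negative weight for how analogous two entities are.
        """
        weights = [similarity(target, entity) for entity, _ in cases]
        total = sum(weights)
        if total == 0:
            return 0.5  # no analogous evidence: fall back to an even prior
        return sum(w for w, (_, has) in zip(weights, cases) if has) / total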

In Novamente, we have an EvolutionaryConceptCreation MindAgent which
explicitly uses trial and error to create new ideas.  But it is intended
for use together with other MindAgents, including those implementing
probabilistic inference.  If you set the parameters of the system so that
evolutionary concept creation predominated, I think you'd find a system
with far below optimal functionality.
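
For readers unfamiliar with the idea, a generic evolutionary loop of the
kind such a MindAgent might run looks like the sketch below.  The scoring
function stands in for the other MindAgents (e.g. probabilistic inference)
that judge candidates; none of this is Novamente's actual code or data
structures.

    import random

    def evolve_concepts(seeds, mutate, score, generations=50, pop_size=20):
        """Trial and error: mutate candidates, keep the best scorers."""
        population = list(seeds)
        for _ in range(generations):
            trials = [mutate(random.choice(population))
                      for _ in range(pop_size)]                    # trial
            population = sorted(population + trials,
                                key=score, reverse=True)[:pop_size]  # error
        return population[0]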

Traditional logic-based AI has badly underemphasized the role of trial and
error, but I'm afraid you're swinging to the opposite extreme!

-- Ben




RE: [agi] Thinking may be overrated.

2002-12-29 Thread Kevin Copple
Ben Goertzel wrote:
 Traditional logic-based AI has badly underemphasized the role of trial and
 error, but I'm afraid you're swinging to the opposite extreme!

It has been said that it is easier to bring a wild idea under control than
to breathe life into a lame one, so considering an extreme position may
not be a bad tactic.

In further defense of trial and error, I would point out that much or most
of human knowledge and progress has been the result of countless random
trials and errors.  If the pre-Columbian Native Americans had placed a
strong value on seeking advancement through trial and error, I imagine
they would have discovered much better archery techniques, which could
have dramatically altered human history.  Would those countless archers
have met the criteria for AGI?  Surely they would have.  But they
apparently lacked respect for random trial and error in the pursuit of
progress.  Clearly they WANTED their arrows to have three times the range,
speed, and power.  This seems an obvious case of an AGI (minus the
artificial) that desperately needed the random trial-and-error
problem-solving method.

In my life, I have found that various forms of negative feedback often
taught me an effective lesson, even though I intellectually KNEW the
lesson beforehand.  As in: I knew that was a bad idea, tried it anyway,
and will never do it again.  I have seen this behavior many times in
others as well.  This is the type of observation that makes me wonder to
what extent emotion is the real driver of our intelligent behavior.
WANTING to succeed often seems to be the real factor in success at
solving problems.

What is the pattern matching that occurs in our biological neural nets?
Is it not simple trial and error, with more dimensions?  To me, seeing a
pattern in a series of words, images, or numbers on an IQ test is a type
of trial and error.  I am getting beyond my ability to express myself, at
least without more energy and time than I have at the moment, but it
occurs to me that what we perceive as logic in our brains is actually a
massively parallel trial-and-error process, with emotional reinforcement
for success or failure.

I do not want to say that random trial and error is the ultimate form of
intelligent thought.  Far from it.  But given what nature and humankind have
achieved with it to date, and that we may not even recognize the extent to
which it is involved in our own thought, it seems to be an intriguing
ingredient.  Perhaps artificial trial and error systems can lead us to pure
intelligence.  That is, if pure intelligence is not an illusion, a mirage,
an unachievable holy grail.

Cheers,

Kevin Copple
