[agi] AI Bush 1.0

2003-12-17 Thread Kevin Copple
Please take a look at www.AIBush.com where AI Bush 1.0 is now available on
CD-ROM or by download.  We believe we are breaking new ground with this type
of program, which features a wide variety of functions and entertainment
through a natural language interface.  Of particular interest is the
strategy game "Reelect Bush?" where the user participates and affects the
outcome of a presidential campaign.

One of the observations from this project is that we continue to add feature
after feature, but they don't seem to interfere with each other much.  The
large range of English words and phrases makes it easy to allow access to a
large number of functions from a single input text box.

Another observation is that English commands have the potential to be more
efficient than point and click.  The query "please convert half a dozen Yen
into Canadian money" is answered in one step with AI Bush.  Doing the same
thing starting with Google would take an order of magnitude longer.  First type
in a search request, then select a link to open, use a couple of dropdown
lists, and type in an amount (if you picked the right site) or grab your
calculator.  Finally, you'd better write it down, whereas AI Bush has a handy
date-stamped note feature ("take a note") and also keeps a transcript for you.
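
For illustration only, here is a rough Python sketch of how a single text box
can route a phrase like the Yen example to a conversion function.  The number
words, the exchange rate, and every name in it are my own placeholders, not
AI Bush internals:

    import re

    # Illustrative only: number phrases and exchange rates are made-up
    # assumptions, not AI Bush internals.
    NUMBER_PHRASES = [("half a dozen", 6), ("a dozen", 12), ("a couple of", 2)]
    RATES_TO_CAD = {"yen": 0.0117, "dollar": 1.56}   # sample rates, not live data

    def convert_query(text: str) -> str:
        """Answer queries like 'please convert half a dozen Yen into Canadian money'."""
        t = text.lower()
        amount = None
        for phrase, value in NUMBER_PHRASES:      # spelled-out quantities
            if phrase in t:
                amount = float(value)
                break
        if amount is None:                        # otherwise look for digits, e.g. "500 yen"
            m = re.search(r"\d+(?:\.\d+)?", t)
            amount = float(m.group()) if m else None
        currency = next((c for c in RATES_TO_CAD if c in t), None)
        if amount is None or currency is None:
            return "Sorry, I did not understand that conversion."
        return f"{amount:g} {currency} is about {amount * RATES_TO_CAD[currency]:.2f} CAD."

    print(convert_query("please convert half a dozen Yen into Canadian money"))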

Of course AI Bush is just a glorified chatbot using natural language to
activate program features.  But as more things are fitted together, perhaps
the whole will become more than the sum of the parts.  Chess, Blackjack,
WordNet, World Fact Book, Convuns, web services, I Ching, library, "Reelect
Bush?" and so on are turning into quite a collection.

Recent additions to the website include a press release, generous affiliate
program, and updated order page.  You may want to join in the affiliate
program to help promote our development and earn a 35% commission on each
sale that originates from your site.

We plan to soon add a page listing new features that are currently under
development, as well as a "wish list" for things to work on later.  If you
have any suggestions, please let us know!

Best regards,

Kevin Copple





RE: [agi] Dog-Level Intelligence

2003-03-21 Thread Kevin Copple
I was not able to connect to the URL, or the subsequent one either, but I'll
make a few comments anyway :-)

It seems to me that "language, math, logic, long chains of reasoning, etc."
may be the easier types of intelligence to emulate.  I am thinking of a dog
running across a field, jumping into the air, and catching a Frisbee.  Some
of these acrobatic feats are amazing (to me anyway).  And when I think of
all the "thinking" that would be required to duplicate the task, "dog
intelligence" seems to be a huge challenge.

There are a number of factors, of which this is a partial outline:
   body language of the owner
   past history of owner behavior
   observed flight of the Frisbee
   various perceived distances
   planning an intercept trajectory
   balance of the leap
   understanding body momentum
   understanding and using gravity
   control of hundreds of muscles
   emotional urgency of success
   stereoscopic depth perception
   awareness of obstacles
   timing of bite
   filtering distractions
   follow-through and landing

Of course, all this thinking would happen very fast.  Offhand, this type of
intelligent play is more impressive than geometry proofs or playing chess.
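
Just to make the "planning an intercept trajectory" item above concrete, here
is a toy Python sketch that treats the Frisbee as a simple projectile and asks
whether the dog can reach the landing spot in time.  Real Frisbees generate
lift, and every number here is invented:

    import math

    def landing_point(v0: float, angle_deg: float, height: float = 1.5, g: float = 9.8):
        """Range and flight time of an ideal projectile thrown from `height` metres.
        Real Frisbees generate lift, so this badly understates what the dog solves."""
        a = math.radians(angle_deg)
        vx, vy = v0 * math.cos(a), v0 * math.sin(a)
        t_flight = (vy + math.sqrt(vy**2 + 2 * g * height)) / g
        return vx * t_flight, t_flight

    def dog_can_intercept(dog_distance: float, dog_speed: float,
                          v0: float, angle_deg: float) -> bool:
        """Can a dog `dog_distance` metres from the landing spot arrive before touchdown?"""
        _, t_flight = landing_point(v0, angle_deg)
        return dog_distance / dog_speed <= t_flight

    x, t = landing_point(v0=14.0, angle_deg=20.0)          # invented throw
    print(f"lands {x:.1f} m away after {t:.2f} s")
    print("catchable:", dog_can_intercept(dog_distance=10.0, dog_speed=9.0,
                                          v0=14.0, angle_deg=20.0))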

Cheers . . . Kevin Copple


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of
Moshe Looks
Sent: Friday, March 21, 2003 1:20 PM
To: [EMAIL PROTECTED]
Subject: [agi] Dog-Level Intelligence

Hi All,

Inspired by a discussion with Peter Voss, I have given some thought to the
problem of creating dog-level intelligence.  What I've come up with is a
"shopping list" of features that a dog-level AI should have, called
"Legitimate Challenges of Dog-Level Intelligence":

http://www.republicofheaven.org/doglevel.html

Calling this "dog-level" is somewhat arbitrary, but should give a fair idea
what I'm getting at; language, math, logic, long chains of reasoning, etc.,
are all out of bounds.  The list is probably incomplete; I'd like to hear
what other people think should be on it.  And of course what you think
should be taken off... ;->

Cheers,
Moshe




RE: [agi] Hard Wired Switch

2003-03-03 Thread Kevin Copple
Ben said,

>When the system is smart enough, it will learn to outsmart the posited
>Control Code, and the ethics-monitor AGI

This isn't apparent at all, given that the Control Code could be pervasively
embedded and keyed to things beyond the AGI's control.  The idea is to limit
the AGI and control its progress as we wish.  I just don't see the risk that
the AGI will suddenly become so intelligent that it is able to "jump out of
the box" in a near-supernatural fashion, as some seem to fear.

Someone once said that a cave can trap and control a man, even though the
cave is dumb rock.  We are considerably more intelligent than granite, so I
would not hesitate to believe that we can control an AGI that we create.

Of course, the details of a sophisticated "kill switch" would depend on the
architecture of the system, and be beyond the scope of this casual conversation.
But to dismiss it out of hand as conceptually ineffectual is rather
puzzling.

Kevin Copple



[agi] Hard Wired Switch

2003-03-02 Thread Kevin Copple
Here's an idea (if it hasn't already been considered before):

Each executing component of an AGI has Control Code.  This Control Code
monitors a Big Red Switch.  If the switch is turned on, execution proceeds
normally.  If not, all execution by the component stops.

The Big Red Switch could be a heart monitor on Ben's chest.  The AGI had better
keep Ben alive!  And if Ben gets angry with the AGI, he can remove the heart
monitor, so the AGI had better keep Ben happy as well.

Several other features could be thrown in:
1. Components will check that calling components have the active Control
Code that every component is expected to have.
2. The Control Code checks that certain parameters are not exceeded, such as
AGI memory storage and computational resources.
3. The Big Red Switch monitors not only Ben's heart, but other control
parameters as well.
4. The Big Red Switch monitors another AGI, which has a focused purpose of
looking for trouble with the first AGI.

I imagine that the Control Code would require only a tiny slice of the AGI's
resources, yet still be effective.  Implementation details would naturally
require a lot of thought and effort, but I think the concept of a built-in
"virus" is sound.  Maybe "immune system" would be a more appealing
term.
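
Purely as an illustration of the Control Code idea, not a hardened design, a
minimal sketch might look like the following: every component calls a shared
check before doing any work, and the check consults the Big Red Switch, faked
here as a heartbeat file that an external monitor keeps touching.  The file
path, timeout, and function names are all assumptions of mine:

    import os
    import time

    HEARTBEAT_FILE = "/var/run/big_red_switch"   # assumed stand-in for the real monitor
    MAX_SILENCE_SECONDS = 5.0

    class ControlCodeViolation(RuntimeError):
        """Raised when the Big Red Switch is off; callers must halt, not carry on."""

    def switch_is_on() -> bool:
        """The switch is 'on' while the external monitor keeps touching the heartbeat file."""
        try:
            return time.time() - os.path.getmtime(HEARTBEAT_FILE) < MAX_SILENCE_SECONDS
        except OSError:          # file missing counts as switch off
            return False

    def control_checked(component_fn):
        """Decorator: the Control Code embedded in every executing component."""
        def wrapper(*args, **kwargs):
            if not switch_is_on():
                raise ControlCodeViolation(f"{component_fn.__name__}: Big Red Switch is off")
            return component_fn(*args, **kwargs)
        return wrapper

    @control_checked
    def plan_next_action(goal: str) -> str:
        # ...an ordinary AGI component; it never runs unless the switch is on...
        return f"working toward: {goal}"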

Kevin Copple





RE: [agi] Turing Tournament

2003-01-22 Thread Kevin Copple
I spent a few minutes looking at the CalTech Turing Tournament website,
http://turing.ssel.caltech.edu/index.html, and came away rather puzzled.  This
seems to be a number guessing game.  Sure, it includes both emulator and
detector algorithms, but such a specialized domain seems less interesting
than algorithms that play chess, bridge, go, or whatever.

Does anyone here have any idea what the value of this Tournament is?  Other
than having fun and spending taxpayer dollars, that is.

From the website:

"The Human Behavior to be Emulated.

"An even number (16) of human subjects are matched in pairs: subject 1 is
matched with 2, subject 3 is matched with 4, etc. For each pair of subjects,
the odd player is the row player, and the even player is the column player,
and they then play a repeated normal form game, whose stage game looks as
follows:

[matrix table omitted]

"The game is played for 50 rounds. In each round, the row player chooses the
row, say i, and the column player chooses a column, say j, resulting in a
cell (i, j) that is chosen in that round. The first entry aij in each cell
represents the payoff, in cents, to the row player if that cell is chosen,
and the second entry bij in the cell represents the payoff to the column
player if the cell is chosen. Before the experiment begins, the subjects in
each pair are shown the payoff matrix, they know that they will play exactly
50 rounds of the game with the same partner, and they observe, after each
round, the row and column choices that were made by themselves and their
partner, and the payoff that each of them received in that round. At the end
of the experiment, the subjects are paid the total amount that they have
earned over the course of the 50 rounds. The normal form game used as the
stage game in the experiment is recorded in a file in the format of a stage
game. The output from the experiment consists of a 16×50 matrix of integers,
whose (i, t)th entry is the strategy selected by player i in round t of the
match. This is written to a file in the form of a dataset file."
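
For concreteness, here is a minimal Python simulation of the setup quoted
above: 16 players in fixed pairs, 50 rounds, and the 16x50 output matrix.  The
2x2 payoff matrix and the random choices are placeholders of mine, since the
actual stage game and the human data are not reproduced here:

    import random

    # Placeholder 2x2 stage game: (row payoff, column payoff) in cents.
    PAYOFFS = [[(30, 30), (10, 50)],
               [(50, 10), (20, 20)]]
    ROUNDS, PLAYERS = 50, 16

    def play_round():
        """Random stand-in for a human (or emulator) choice; a real entry goes here."""
        return random.randrange(2)

    # history[i][t] = strategy chosen by player i in round t (the 16x50 output matrix)
    history = [[0] * ROUNDS for _ in range(PLAYERS)]
    earnings = [0] * PLAYERS

    # pair (subject 1, subject 2), (3, 4), ... using 0-based indices
    for row, col in zip(range(0, PLAYERS, 2), range(1, PLAYERS, 2)):
        for t in range(ROUNDS):
            i, j = play_round(), play_round()
            history[row][t], history[col][t] = i, j
            a, b = PAYOFFS[i][j]
            earnings[row] += a
            earnings[col] += b

    print("player 0 earned", earnings[0], "cents over", ROUNDS, "rounds")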

Kevin Copple






-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of
Ed Heflin
Sent: Thursday, January 23, 2003 2:37 AM
To: [EMAIL PROTECTED]
Subject: Re: [agi] Turing Tournament

Damien,

This is great stuff...and I wouldn't expect less from CalTech, although I
might have expected MIT to be the first to formalize this ;-).

Just kidding...I became aware of the sometimes intense rivalry between the
two institutions during my post-doc work in experimental gravitational-wave
detector development some years ago...imagine trying to observe a
fluctuation the size of a hydrogen atom over the earth-sun distance...a
proposition in 'quantum fluctuation measurement insanity' that both
institutions took on with real scientific fervor and seemed to make some
progress in!

And, as you point out, the new Turing Tournament at CalTech comes from the
unlikely source of the Division of the Humanities and Social Sciences.  Now
that has got to be a first!  I do think that they add something entirely new
and necessary, that isn't captured by "The First Turing Test" for the
Loebner Prize.

I think the new element of automated testing is a real plus for the
competition and will tend to objectify the evaluation process
and enhance the standards of the competition.  Furthermore, it is an
enhancement that probably even Turing himself might not have expected... the
use of both an algorithmic detector and an emulator.

But more importantly, I think this sets the stage for ultimately asking what
the real test for mimetic behaviors should be, especially given the larger
goals of AGI.  When I stop and ask myself exactly what is being tested in a
'Turing test for mimetic behavior', I know my answer is different for an AI
system, say a chatbot, vs. an AGI system, say a cognitive virtual player.

Since I expect that the behaviors of an AGI system are of greater (1) breadth
and (2) depth than those of an AI system, I would seek a different forum to
test the greater mimetic behaviors appropriately.  Over the summer, I came to
the conclusion that a more complete Turing test of an AGI system would involve
a richer I/O environment, say a virtual world, and the ability to endow
virtual characters in this I/O environment with full cognitive abilities.

In this sense, the Turing test of an AGI system becomes more of a test
grounded in game play, whereby everything from speech interaction to
strategic interaction is tested in a human player vs. virtual player
(computer) setting.  The game play then becomes the basis for judging the
virtual player's ability to mimic human players.

Just my $0.02 worth.  EGHeflin

PS: Please excuse the slight out of sequence submission...but, I'm just
getting caught up with things after a bit of R&R in the sunshine state!

- Original Message -
From: "D

RE: [agi] META: Alan, please cool it...

2003-01-20 Thread Kevin Copple
Ben said,

>Alan,
>
>Several people, whose opinions I respect, have asked me to unsub
>you from this e-mail list, because they perceive your recent e-mails
>as having a very low signal to noise ratio.

Wow!  This tells me things about the members of this e-mail list that I
missed from the posts I have seen in my two months of participation here.  I
certainly hope that these "several people" don't represent the patience and
open mindedness of most of the folks here.

If you unsub Alan, please do me a favor and unsub me at the same time.

Kevin Copple





[agi] Test Space

2003-01-12 Thread Kevin Copple
Phillip Sutton said:

>I think it is possible to lodge very abstract concepts into an entity, and
>use hard wiring to assist the AGI to rapidly and easily recognise examples of
>the 'hard' abstract concepts - thus giving some life to each abstract concept.

Another class of test for the AGI Test Space!

Kevin Copple

RE: [agi] AI is a trivial problem

2003-01-12 Thread Kevin Copple
Shane Legg wrote:

>A better idea, I think, would be to test the system on *all* problems
>that can be described in n bits or less (or use a large random sample
>from this space).  Then your system is guaranteed to be completely
>general in a computational sense.

Sounds good to me.  Perhaps my motivation in thinking about test problems is
to give advance notice to those who in the future may claim they have
developed, or may claim that they are on the path to developing, human level
AI or AGI.  But given my lack of credentials in this arena, I would feel a
little sheepish professing to be a judge (I doubt many here would put a lot
of weight on a fresh Loebner bronze medal).  Still, it seems there may be
some benefit in developing Shane's list.  The Loebner-type Turing test is
fraught with difficulties, but is the only defined milestone that I am aware
of (except for lesser solved problems such as playing grandmaster level
chess).

A collection of tests that serve as milestones may be useful for guiding,
gauging, and judging.  Various types and difficulties of test could occupy
the space.  If the space could be coherently defined and populated by people
respected in the field, we would have a sophisticated means by which to
discuss progress.  Of course, it would not hurt to give each one a
substantial cash prize value :-)

On the subject of whether an AGI is a Turing Machine, it struck me that an
AGI will change based upon interaction with the physical universe.  So, its
internal state will be continuously changing due to input from the vastly
complex real world, making it unknowable to the extent that we don't know
everything about that which it interacts with.  We could only predict its
behavior if we knew its complete history right up to the very instant of
action, which may not be any easier than knowing what a bored human will do
in the next five seconds.

Kevin Copple




RE: [agi] AI is a trivial problem

2003-01-11 Thread Kevin Copple
Ben said regarding an AI that could learn and play, without specific coding,
high-level chess, go, etc.

>Such a software system would be VASTLY superior to any existing AI software
>system.  But if it could do nothing else, it would still be terribly
>overspecialized and narrow compared to a human...

>I would see such a system as halfway between Deep Blue and humans, in terms
>of general intelligence.

Agreed.  But I would then confidently conclude that we were on the steep
part of the curve toward AGI, and that the challenge of creating
intelligence had been met.  This would be a difficult AGI test that is also
clear and simple to administer.  Much more telling than the Turing imitation
game test I think.

Hmmm, here is another test idea:  given a $10,000 budget, conceive and
execute a plan for a web-based software services business that will legally
return $30,000 profit within a year.  Closer to a true AGI test?

Later . . . Kevin C.




[agi] AI is a trivial problem

2003-01-11 Thread Kevin Copple
Ben said, "Given unbounded space and time resource bounds, AI is a trivial
problem."

To me, this is the major attraction of the field.  It is self-evident to
many that contemplate the challenge.  Our increasingly cheap and powerful
computers make the temptation even greater.

Also self-evident is that there are different ways to achieve AGI.

Of course, we hope that we can do great things with only a tiny slice of
"unbounded space and time."  We are on various paths to achieving AGI.  We
just don't know if we are on the steep part of the curves yet.

My personal litmus test is whether an AGI can learn and play high-level
chess, go, bridge, or other similar games without being coded specifically
for these games.  The advantage of this test is that it requires no physical
instantiation, and the results are easily quantifiable.

Kevin Copple




RE: [agi] The Next Wave

2003-01-10 Thread Kevin Copple
Well, my "The Next Wave" post was intended to be humorous.  I'm not that much
of a comedian, so I may have weighed in too heavily on the "apparently serious"
side.  Let me apologize to the extent it was a feebly frivolous failure.

Perhaps I am wrong, but my impression is that the talk here about AGI sense
of self, AGI friendliness, and so on is quite premature.  "The Next Wave"
post was intended to illustrate this by way of even more off-the-deep-end
topics.  Heck, the last I heard, we haven't been able to write an algorithm
that can beat an accomplished 9-year-old at Go.

However, I think it is quite conceivable for a future AGI to learn all
non-trivial details of the universe, arriving at a point where there is
effectively nothing left to learn and nothing left to do.  Why do something
if you know the result in advance?  Even random outcomes, such as the nearly
infinite number of snowflake patterns, will get boring and pointless after,
say, 8,941,204,723,493,808 images.

But if we find an infinite number of parallel universes . . . ah, forget it.

Kevin Copple





[agi] The Next Wave

2003-01-10 Thread Kevin Copple
It seems clear that AGI will be obtained in the foreseeable future.  It also
seems that it will be done with adequate safeguards against a runaway entity
that will exterminate us humans.  Likely it will remain under our control
also.

HOWEVER, this brings up another wave of issues we must debate.  An AGI will
naturally begin building and programming itself, and quickly develop
abilities that our human minds cannot hope to achieve.  We need a consensus
on limits for humans using the AGI abilities, perhaps leading to some
programmed directives for the AGI's.  Here is my effort to start a list:

>> TIME TRAVEL <<

Likely the AGI will quickly learn how to travel through time.  Should we
develop rules of conduct in advance?  Sure, it's tempting to think of giving
folks like Usama bin Laden and Kim Jong Il visits in their youth from an
agitated Baby Face Nelson, but where do the "adjustments" stop?

>> PARALLEL UNIVERSES <<

The AGI may allow passage to an infinite number of parallel universes, each
slightly different than the next.  Do we really want to go mucking about,
changing things willy-nilly just for entertainment?

>> GENETIC ENGINEERING <<

The AGI will make genetic engineering and body adjustments a snap.  But when
we are all beautiful, strong, talented, and smart, are any of us?  Can there
be Yin without Yang?

>> ULTIMATE KNOWLEDGE <<

Our AGI will come to know everything.  Every single flap of every butterfly
wing in all of history.  If it has emotions like ours, it may become rather
depressed and realize that it is all pointless.  Maybe we will understand
and agree with the AGI's explanation. What happens then?

While I shudder at the enormity of the responsibility, I am in the process
of forming committees to address the challenges of each category.  For those
of you that feel the burden of the future upon your shoulders, please let me
know which committees you feel compelled to serve on.

Kevin Copple

P.S.  I also need a name for the website, the foundation, and a good slogan.
Any suggestions?





RE: [agi] Thinking may be overrated.

2002-12-29 Thread Kevin Copple
Ben Goertzel wrote:
> Traditional logic-based AI has badly underemphasized the role of trial and
>error, but I'm afraid you're swinging to the opposite extreme !!

It has been said that it is easier to bring a wild idea under control than
to breathe life into a lame idea, so considering an extreme position may not be
a bad tactic.

In further defense of trial and error, I would point out that much or most
of our human knowledge and progress has been the result of countless random
trials and errors of others.  If the pre-Columbian Native Americans had placed
a strong value on seeking advancement through trial and error, I imagine they
would have discovered much better archery techniques that would have
dramatically altered human history.  Would those countless archers have met
the criteria for AGI?  Surely they would have.  But they apparently lacked
respect for random trial and error in the pursuit of progress.  Clearly they
WANTED their arrows to have three times the range, speed and power.  Seems
this is an obvious case of an AGI (minus the "artificial") that desperately
needed the random trial and error problem solving method.

In my life, I have found that various forms of negative feedback often
taught me an effective lesson, even though I intellectually KNEW the lesson
beforehand.  As in, "I knew that was a bad idea, tried it anyway, and will
never again."  I have seen this behavior many times in others as well.  This
is the type of observation that makes me wonder the extent to which emotion
is the real driver in our intelligent behavior.  WANTING to succeed often
seems to be the real factor in success at solving problems.

What is the pattern matching that occurs in our biological neural nets?  Is
it not a simple "trial and error," with more dimensions?  To me, seeing a
pattern in a series of words, images, or numbers in an IQ test is a type of
trial and error.   I am getting beyond my ability to express myself, at
least without more energy and time than I have at the moment, but it occurs
to me that what we perceive as logic in our brains is actually a massively
parallel trial-and-error process with emotional reinforcement for success
or failure.
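
As a toy version of that "massively parallel trial and error with
reinforcement" picture, the Python sketch below tries blind random variations
and keeps whatever the reward signal likes best.  The target function and all
parameters are invented for illustration:

    import random

    def reward(x: float) -> float:
        """Stand-in 'emotional' feedback: higher is better; the trier never sees the formula."""
        return -(x - 3.7) ** 2

    def trial_and_error(trials: int = 10_000, step: float = 0.5) -> float:
        best_x = random.uniform(-10, 10)
        best_r = reward(best_x)
        for _ in range(trials):
            candidate = best_x + random.uniform(-step, step)   # blind variation
            r = reward(candidate)
            if r > best_r:                                     # selective retention
                best_x, best_r = candidate, r
        return best_x

    print(f"found x = {trial_and_error():.3f}  (no 'logic' used, only trial and error)")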

I do not want to say that random trial and error is the ultimate form of
intelligent thought.  Far from it.  But given what nature and humankind have
achieved with it to date, and that we may not even recognize the extent to
which it is involved in our own thought, it seems to be an intriguing
ingredient.  Perhaps artificial trial and error systems can lead us to "pure
intelligence."  That is, if pure intelligence is not an illusion, a mirage,
an unachievable holy grail.

Cheers,

Kevin Copple




[agi] Thinking may be overrated.

2002-12-28 Thread Kevin Copple
Perhaps thinking is overrated.  It sometimes seems the way progress is made,
and lessons learned, is predominately by trial and error.  Thomas Edison's
light bulb is a good example, especially since it is the very symbol of
"idea."  From what I know, Edison's contribution was his desire to make the
light bulb, previously invented by others, into a commercially successful
product.  His approach was to try this and try that until he finally
succeeded.

Benjamin Franklin invented the rocking chair.  Why had no one invented it
before?  Surely ancient Chinese, Egyptian, and Sumerian civilizations would
have loved this bit of easy low-tech entertainment.  Perhaps we think a
little too highly of our intellectual ability.  Native Americans did not
discover the three-finger (index, middle, ring) method of archery, even
though they spent dozens of generations developing their archery skills.
The more natural thumb and index finger method reduces the effective range
by a factor of three.  Lucky thing for the Pilgrims I guess.

Random evolution resulted in our fantastic technology-using brains.  No
planned design using calculus or any other type of logic seems to have been
needed.  Nervous systems developed for one purpose randomly morphed to
perform others.  Some of the more complex organisms had evolutionary
advantages that allowed them to propagate.   But evolution largely failed to
take advantage of basic technologies like fire, wheels, and metallurgy.  It
is ironic that we have succeeded in developing a lot of technology that the
evolutionary computer failed to develop, but we are struggling to duplicate
much of the technology it did.

"Thinking" in humans, much like genetic evolution, seems to involve
predominately trial and error.  Even the "logic" we like to use is more
often than not faulty, but can lead us to try something different.  An
example of popular logic that is invariably faulty is reasoning by analogy.
It is attractive, but always breaks down on close examination.  But this
type of reasoning will lead to a trial that may succeed, possibly because of
the attractive similarities, but more likely in spite of them.

When the Wright brothers made the first airplane, they used a lot of
different technologies.  There was no single silver bullet, except for a
determination to accomplish their goal.  Like any technological advancement,
the road to AGI will be paved with a variety of techniques, technologies,
trials, and errors.  This seems doubly true since thinking as we know it is
apparently a hodgepodge of methods.

Catch you all later . . . Kevin C.




[agi] Natural Language DB's and AI

2002-12-26 Thread Kevin Copple
I claim to be an intelligent entity (if not a real AI programmer), and one
of my more valuable tools is a common dictionary, whether paper or
electronic.  There are many words I know, learned from the dictionary and/or
from context, that I cannot pronounce since I did not learn them verbally.
This occasionally comes to light when one of my English students here in
Tianjin asks me for an English word and I have to reply, "I know the word,
but I don't know how to pronounce it."

This intelligent entity (me) HAS an essential natural language DB in its
system.  I also WANT to have a better, more complete natural language DB.
It would be great to have the contents of a good dictionary implanted in my
brain if it were possible.

It would seem to me that words are a good starting place for symbols that
represent things.  Why reinvent the wheel?  Especially since we would like
to communicate with the AI entities we create, if only to hear or see
"king's pawn to king four."  Certainly natural language is rather messy with
its synonyms, homonyms, and multiple meanings per word.  The WordNet
mappings to synonym sets seem to be a good way to start approaching the
problem, especially with its inclusion of small word groups such as "part of
speech."
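
As an aside, the synonym-set mapping can be poked at directly.  The following
sketch uses the NLTK interface to WordNet, which is my choice of tool here
rather than anything the EllaZ system actually uses:

    # Requires: pip install nltk, then nltk.download('wordnet') once.
    from nltk.corpus import wordnet as wn

    for synset in wn.synsets("pawn"):
        # Each synset is one sense: a small group of interchangeable words plus a gloss.
        print(synset.pos(), synset.lemma_names(), "-", synset.definition())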

I realize that there is a big conceptual difference between data and the
ability to process it.  But one cannot be useful without the other.
Sometimes data and processing ability can be closely combined.  Perhaps word
definitions can be considered data.  Then grammar and parts of speech that
construct sentences may be considered a type of functionality.  Several
words in a definition map to a single word (or synonym set).  Words are
linked with grammar rules to form a sentence.  One sentence, or perhaps a
few in a paragraph map to a "thought."  This is a system that works well for
us humans, so it has potential application to an AGI.

Of course, "grounding" the words used to define other words is a big
challenge.  But this difficulty does not lead to the conclusion that natural
language DB's should not be a very important part of an AGI.  Perhaps the
"grounding" and "real understanding" we instinctively strive for is a bit of
an illusion.  Maybe intertwined but ultimately circular and imperfect
definitions are all we need.  Sensory grounding would be nice, but perhaps
not necessary.  I can certainly understand things with very little sensory
grounding, such as the general theory of relativity (well, a little
understanding anyway).  And does it really matter that I perceive red the
same way as everyone else, so long as the perception is consistent?

Things like mathematics and chemistry have their own "languages" that at
least to some extent can be approached and explained using the "basic"
natural languages.  While this type of career plan is not to be recommended,
I hope to draw on my varied background as practical engineer, research
engineer, lawyer, patent lawyer, and import/export businessman (not to mention
a variety of hobbies) to guide my thinking about thinking.

We often intelligently use things we do not understand.  Computers,
automobiles, our brains, quarks, and so on.  Why can't an AGI use words it
does not actually understand, so long as it uses the word properly and
accomplishes the desired result?  I have seen expert systems and databases
do truly amazing things in my various experiences.  But nothing so amazing
as seeing EllaZ deal blackjack, or place the entire contents of Kant's
Critique of Pure Reason into a single scrolling browser textbox :-)

Catch you all later . . . Kevin Copple

P.S.  I have one of the reduced Loebner contest versions of the EllaZ system
(a/k/a Kip) running the Oddcast animated head now at www.EllaZ.com.  The
implementation is still a bit rough, but the AT&T Natural Voices TTS is
quite good.

[EMAIL PROTECTED]




[agi] Grounding

2002-12-09 Thread Kevin Copple
Okay, I am bored, or maybe just lazy today, so please let me weigh in and
ramble a bit:

Vectors and scalars are great, and may be the best route to learning for a
given system, but it hardly seems obvious that they are a prerequisite to
learning for an AI that exceeds general human intellectual capacity.  I was
a chemical engineer in one of my former lives, and I can say that vectors
are definitely more lovable than the criminal defendants I was appointed to
represent in my former life as an attorney.  The defendants were mostly
interested in the rather binary guilty vs. not guilty.

Retinas have pixels, don't they?  Perhaps our perception of scalars is
actually recognition of patterns in discrete points.  You could readily make
an image people recognize as a circle, using only pawns as discrete points
on a chessboard.

Wouldn't chess be a domain where an AGI could learn and excel, with no
vectors or scalars in sight?  Much of what is fundamental is binary: on/off,
dead/alive, male/female, married/single, smile/frown, and so on.

A miss is as good as a mile.

 . . . Kevin C.

P.S. To me a key fundamental is "Artificial Motivation."  Give an entity the
desire to accomplish goals, plus tools to use, then the ability to learn.

Example:  I was hungry, but now am full.  I wanted to reproduce, and
satisfied that urge.  Now I am tired of thinking, and want to consume more
of that wet fermented grain to stop the process for a while.  Ahh,
cultivating barely to make beer is good.  Oops, inadvertently founded
civilization.




RE: [agi] EllaZ systems

2002-12-09 Thread Kevin Copple
Hey Ben,

It seems that recent college IT grads here hope to earn about 3000rmb
(375usd) a month, but often must settle for less.  This is based on my
rather limited knowledge.  Hopefully I will know more in the near future,
since I have been getting the word out and have a local headhunter looking
for some candidates.  One prospect who is not willing to leave his job for
short term work responded, "you are offering too much."

>I guess the important thing is to store as much data as possible, in a
>clearly structured way.

>People can always postprocess the data using their own scripts, so long as
>the information is there and is clearly structured...

Yes, I agree with this sentiment.  I am thinking along the lines of a full
conventional citation plus other data such as location and original date of
creation.  We may indulge in a little overkill, since I have already
experienced remorse at not recording more detail in some of the early
stages.  Trial and error remains a great teacher.

>XML or RDF type syntax is generally easy for people to work with...

XML may be the way to go.  Perhaps XML files can largely replace DB's, and a
translation from XML to a DB should be straightforward.  A relational DB
could allow associating one convun to another, thus illustrating a joke or
poem, for example.  Those types of relationships may be difficult with XML,
but could be done programmatically, at least to some extent.  This AI
business sure could consume a lot of "gurus."
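
To illustrate the XML direction, a Convun record might look something like the
following.  The element and attribute names are guesses for illustration, not
the actual EllaZ schema:

    import xml.etree.ElementTree as ET

    # Hypothetical Convun record; the real EllaZ fields may differ.
    convun_xml = """
    <convun id="1042" type="joke">
      <text>Why did the chicken cross the road? To get to the other side.</text>
      <source>public domain</source>
      <date_collected>2002-12-09</date_collected>
      <related_convun ref="87"/>   <!-- e.g. an image that illustrates the joke -->
    </convun>
    """

    convun = ET.fromstring(convun_xml)
    print(convun.get("type"), "-", convun.findtext("text"))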

>I would definitely want each conversational unit linked to each conversation
>it was embodied in -- the full conversational history... so that the context
>could be determined  One of the interesting things to mine from this
>dataset is how people respond to context...

I will add "Ben" to my WordNet gloss for "ambitious" :-)  . . . good point
though.  We are now able to conveniently store mind-boggling amounts of text
data.  Ella will display the entire text of Kant's Critique of Pure Reason
in a single window of your browser (it's amazing that those scrollbars never
wear out).  The one-microprocessor bottleneck is the big limitation (for me
anyway).

>On a different topic: If you plan to involve statistical NLP technology in
>the next phase of your project, that could be an interesting thing to talk
>about ... it's not something I'm working on now, but we played around with
>it a lot at Webmind Inc. ...

Thanks for the idea.  I have been meaning to take a closer look at what has
gone on at Webmind Inc.

Later . . . Kevin




[agi] gibberish/foreign language detector

2002-12-08 Thread Kevin Copple
Thanks for the idea and references, Cliff.  It is common for a visitor to
test a chatterbot by pounding on the keyboard, speaking a foreign language,
typing binary code, etc.

Currently in EllaZ systems, if the input exceeds a minimum length (I don't
recall the number of characters off-hand) we check that there are at least
two words from an English Scrabble dictionary ("Enable" is the name as I
recall).  The advantage of a scrabble dictionary is that it contains plural
and tense variations in a simple word list DB.  I imagine that we could add
lists of proper names and place names also without bogging down the on-line
program too much.  A similar technique could be used to ID which foreign
language, but the database could start to snowball.  The trigram method
could be a more elegant and efficient way to do the same thing.
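
A bare-bones Python sketch of that kind of check follows; the length threshold,
the punctuation cleanup, and the tiny sample word list are placeholders of mine,
whereas a real run would load the full ENABLE list from a file:

    def looks_like_english(text: str, wordlist: set, min_length: int = 20,
                           required_hits: int = 2) -> bool:
        """Flag keyboard-pounding or non-English input: longer texts must contain
        at least `required_hits` words found in an English word list (e.g. ENABLE)."""
        if len(text) < min_length:
            return True                       # too short to judge; let it through
        words = [w.strip(".,!?;:'\"").lower() for w in text.split()]
        hits = sum(1 for w in words if w in wordlist)
        return hits >= required_hits

    # Placeholder word list; in practice load the full Scrabble dictionary from a file.
    sample_words = {"please", "convert", "half", "dozen", "the", "note", "take"}
    print(looks_like_english("asdf qwer zxcv uiop hjkl vbnm asdf qwer", sample_words))  # False
    print(looks_like_english("please take a note about the dozen eggs", sample_words)) # True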

Later . . . Kevin

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]On Behalf Of
Cliff Stabbert
Sent: Monday, December 09, 2002 11:50 AM
To: Gary Miller
Subject: Re: [agi] general patterns induction
. . . .
As a quick and dirty method for checking language, counting trigram
frequencies might work.  A trigram is a specific sequence of three
letters; just as "e" is the most common letter in the english
language, certain trigrams occur more often, others less, and the
specific distribution varies from language to language.  E.g.,
"cht" is more common in german (than in english), "cce" in italian and
"eau" in french.  Some public domain dictionaries, a few probability
formulas and you're on your way.  Google around a bit for "trigram
frequencies" and the like, it's often used in cryptography;
http://web.mit.edu/craighea/www/ldetect/ might help.
. . .





[agi] EllaZ systems

2002-12-08 Thread Kevin Copple
Pei Wang, it is neat to hear you're from Tianjin.  We are living in the Wan
Chun Garden apartments about a mile northeast of the main train station.  I
teach English on the weekends to get me out a little.  Teaching English as a
second language also helps me to keep in mind the basics of vocabulary and
grammar when it comes to contemplating AI ideas.  Since wages are so low
here, even for well-educated people, I am in the process of hiring a few
people for a year or so to move our project along faster.  Please let me
know if you have any leads or suggestions.

I plan for the Convun database to be a resource for intelligence engines,
not the intelligence per se.  Hopefully, it could be used in a variety of
ways, from simple reflexive responses to various types of more sophisticated
uses.  Currently EllaZ systems is simply a conventional chatterbot with a
number of special purpose program features such as I Ching, word math,
WordNet, Blackjack, and so on.   These additional program features are all
accessible using a natural language interface, as opposed to button clicks
or hyperlinks.  Let me know if you have any questions about any of the
features.

Ben, one of the challenges, it seems, is how best to structure the Convun
database so as to maximize its use for intelligent systems.  There is likely
no clear correct approach, so we will just do our best.  I will soon submit to
this mailing list a description of where we are headed and ask for
comments.

Hopefully the convun database will become something that will be of use to
you all.  There is a lot of public domain material that any intelligent
entity should know, and also much copyright protected material that is
available with a little persuasion and acknowledgment of authorship.
Getting the data collected, organized, and formatted in a way that it will
be available for processing is the challenge.

I like to imagine an AGI entity saying, "Hmm, this situation is similar to
that in Act 2 of Shakespeare's Much Ado About Nothing, and the result there
was . . . "

Catch you all later . . . Kevin C.






[agi] Hello from Kevin Copple

2002-12-08 Thread Kevin Copple
I just recently joined this e-mail list after following some links posted by
Tony Lofthouse in the Generation5 forum. I am working on a natural language
project that can be seen at www.EllaZ.com, and am interested in what you all
are up to.  The e-mails I have received from this list in the last day or so
have been interesting and informative.  Thanks!

My approach to doing something in the AI field is to start with basic
interface, knowledge, and functional features that can be implemented and
demonstrated.  Now that a basic framework is in place, the system can be
expanded and built upon as various techniques are identified as useful and
incorporated.

It seems to me that rote memorization is an aspect of human learning, so why
not include a variety of jokes, poems, trivia, images, and so on as part of
an AI knowledge base?  In the EllaZ system we refer to these chunks of data
as Convuns (conversational units).  One plan is for the system to log
interactions with users and identify patterns of interest.  The system would
then be able to predict which Convuns a user would most likely be interested
in, and also be able to evaluate the interest in a particular Convun.
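
As a very rough sketch of how that logging-and-prediction loop might begin
(invented data and names, not how EllaZ actually works): tally reactions per
Convun type for each user and prefer the types with the best observed ratio.

    from collections import defaultdict

    # Interaction log: (user, convun_type, liked) -- in practice this would be persisted.
    log = [("amy", "joke", True), ("amy", "poem", False),
           ("amy", "joke", True), ("amy", "trivia", True)]

    def preferred_types(user: str):
        """Rank Convun types for a user by the fraction of positive reactions."""
        shown = defaultdict(int)
        liked = defaultdict(int)
        for who, ctype, good in log:
            if who == user:
                shown[ctype] += 1
                liked[ctype] += int(good)
        return sorted(shown, key=lambda c: liked[c] / shown[c], reverse=True)

    print(preferred_types("amy"))   # ['joke', 'trivia', 'poem']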

Ella was lucky enough to win the 2002 Loebner Prize Contest, which can be
somewhat arbitrary with the limited number of judges and limited length of
conversations.  She has a number of functional features that I suspect the
engineering students selected as judges were more likely to test and
appreciate.

I am currently living in Tianjin, China, having sold my import/export chemicals
business to a competitor.  My wife, Zhang Ying, is a local girl who doesn't
care for the food in the US and doesn't like being away from her friends and
family.  So, I am between jobs and working on www.EllaZ.com for the next
year or so.

We are always on the lookout for collaborators and ideas we can "borrow" :-)

Cheers . . . Kevin Copple


