RE: [agi] A point of philosophy, rather than engineering

2002-11-13 Thread Ben Goertzel


Arthur Murray wrote:
 If Ben Goertzel and the rest of the Novamente team build up
 an AI that mathematically comprehends mountains of data,
 they may miss the AI boat by not creating persistent concepts
 that accrete and auto-prune over time as the basis of NLP.

No, even before the Novamente system understands natural language, it will
still have persistent concepts accreting and auto-pruning over time.  In
fact, we're right now doing some primitive testing of accretion 
auto-pruning processes.

Accretion  auto-pruning are part of a chimp's mind-brain, they're prior to
language...

 The Mentifex Mind-1.1 AI, primitive as it may be, has since 1993
 (nine years ago) gradually built up a sophisticated group of
 about 34 mind-modules now barely beginning to achieve NLP results.

 I enter these thoughts here not confrontationally but from a
 point-of-view that NLP is not otherwise being sufficiently
 represented among all these mathematicians and computationalists.

NLP is obviously very important.  Historically, it has often been associated
with an overly rigid rule-based approach to AI, which is perhaps why it's
not so fashionable among AGI people.

I agree that amenability-to-NLP should be an important consideration of any
AGI design process.  We've designed Novamente specifically so that it will
be able to learn language when the time comes.  Our experience working with
computational linguistics at Webmind allowed us to do that.

On the other hand, Peter Voss has often put forth the following view (this
is a paraphrase, not a quote): Our brains are a lot like chimps' brains.
If someone designed an AGI with chimp level intelligence, making the
modifications to turn this chimp-AGI into a human-level AGI with linguistic
ability would be a *relatively* small trick compared to the original trick
of designing the chimp-AGI.

I think Peter has a certain point, but it's not the approach I'm taking.  My
approach is more linguistic than his but less than yours.

Your approach puts linguistics at the center, it seems; but I don't think
you can FOUND an AGI system on linguistics.  I think linguistic ability has
to mostly emerge from more generic cognitive functionality, in order to be
full and genuine linguistic ability that involves deep semantic
understanding

-- Ben G

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



Re: [agi] A point of philosophy, rather than engineering

2002-11-12 Thread Charles Hixson
Ben Goertzel wrote:


Hi,

 

Personally, I believe that the most effective AI will have a core
general intelligence, that may be rather primitive, and a huge number of
specialized intelligence modules.  The tricky part of this architecture
is designing the various modules so that they can communicate.  It isn't
clear that this is always reasonable (consider the interfaces between
chess and cooking), but if the problem can be handled in a general
manner (there's that word again!), then one of the intelligences could
be specialized for message passing.  In this model the core general
intelligence will be for use when none of the hueristics fit the
problem.  And it's attempts will be watched by another module whose
specialty is generating new hueristics.

Plausible?  I don't really know.  Possibly to complicated to actually
build.  It might need to be evolved from some simpler precursor.
   


It's clear that the human brain does something like what you're suggesting.
Much of the brain is specialized for things like vision, motion control,
linguistic analysis, time perception, etc. etc.  The portion of the human
brain devoted to general abstract thinking is very small.

Novamente is based on an integrative approach sorta like you suggest.  But
it's not quite as rigidly modular as you suggest.   Rather, we think one
needs to

-- create a flexible knowledge representation (KR) useful for representing
all forms of knowledge (declarative, procedural, perceptual, abstract,
linguistic, explicit, implicit, etc. etc.)


This probably won't work.  Thinking of the brain as a model, we have 
something called the synesthetic gearbox which is used to relate 
information in one modality of senstation with another modality.  This 
is a part of the reason that I suggested that one of the hueristic 
modules be specialized for message passing (and translation).


-- create a number of specialized mind agents acting on the KR, carrying
out specialized forms of intelligent processes

-- create an appropriate set of integrative mind agents acting on the KR,
oriented toward creating general intelligence based largely on the activity
specialized mindagents


Again the term general intelligence.  I would like to suggest that the 
intelligence needed to repair an auto engine is different from that 
needed to solve a calculus equation.  I see the General Intelligence as 
being the primarily to handle problems for which no hueristic can be 
found, and would suggest that nearly any even slightly tuned hueristic 
is better than the general intellligence for almost all problems.  E.g., 
if one is repairing an auto engine, one hueristic would be to remember 
the shapes of all the pieces you have seen, and to remember where they 
were when you first saw them.  Just think how that one hueristic would 
assist reassembling the engine.



Set up a knowledge base involving all these mind agents.. hook it up to
sensors  actuators  give it a basic goal relating to its environment...

Of course, this general framework and 89 cents will get you a McDonald's
Junior Burger.  All the work is in designing and implementing the KR and the
MindAgents!!  That's what we've spent (and are spending) all our time on...


May I suggest that if you are even close to what you are attempting, 
that you have the start of a dandy personal secretary.  With so much 
correspondence coming via e-mail these days, this would create a very 
simplified environment in which the entity would need to operate.  In 
this limited environment you wouldn't need full meanings for most words, 
only categories and valuations.

I have a project which I am aiming at that area, but it is barely 
getting started.

-- Ben

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/

--
-- Charles Hixson
Gnu software that is free,
The best is yet to be.


---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/


RE: [agi] A point of philosophy, rather than engineering

2002-11-12 Thread Ben Goertzel


Charles Hixson wrote (in response to me):
 -- create a flexible knowledge representation (KR) useful for
 representing
 all forms of knowledge (declarative, procedural, perceptual, abstract,
 linguistic, explicit, implicit, etc. etc.)
 
 This probably won't work.  Thinking of the brain as a model, we have
 something called the synesthetic gearbox which is used to relate
 information in one modality of senstation with another modality.  This
 is a part of the reason that I suggested that one of the hueristic
 modules be specialized for message passing (and translation).

There are both significant differences, and significant similarities,
between the representations used by different parts of the human brain.
They all use neurons and synapses, frequencies of neural firing,
neurotransmitter chemistry, etc., in fairly similar ways.  Of course there
are also some major differences in neural architecture btw brain regions --
different types of neurons, different neurotransmitter concentrations,
different connective arrangementc, etc.

Similarly there are significant similarities  differences btw the
representations used by different parts of Novamente.  They all use
Novamente Nodes and Links, and all use similar quantitative parameters of
Nodes and Links, and there's a lot of overlap in the MindAgents (dynamical
processes) they use.  But there are also significant differences, in the
frequency of different node and link types, the parameters of the different
MindAgents, etc.

 Again the term general intelligence.  I would like to suggest that the
 intelligence needed to repair an auto engine is different from that
 needed to solve a calculus equation.

Of course it is different in many ways.  It's also similar in many ways.

I believe that those two forms of intelligence consist of basically the same
set of processes, acting on the same basic sort of knowledge.  But the two
cases have very different underlying parameter settings.  In the brain
case, different types of neural connectivity patterns, perhaps different
concentrations of neurotransmitters in different brain regions, perhaps even
different amounts of different types of neurons -- all of which leads to
different emergent structures/dynamics.

 I see the General Intelligence as
 being the primarily to handle problems for which no hueristic can be
 found, and would suggest that nearly any even slightly tuned hueristic
 is better than the general intellligence for almost all problems.  E.g.,
 if one is repairing an auto engine, one hueristic would be to remember
 the shapes of all the pieces you have seen, and to remember where they
 were when you first saw them.  Just think how that one hueristic would
 assist reassembling the engine.

Yes, but what allows a human mind to learn that heuristic?

Our general (reasonably general, but far from absolutely general)
intelligence.

 Set up a knowledge base involving all these mind agents.. hook it up to
 sensors  actuators  give it a basic goal relating to its environment...
 
 Of course, this general framework and 89 cents will get you a McDonald's
 Junior Burger.  All the work is in designing and implementing
 the KR and the
 MindAgents!!  That's what we've spent (and are spending) all our
 time on...
 
 May I suggest that if you are even close to what you are attempting,
 that you have the start of a dandy personal secretary.  With so much
 correspondence coming via e-mail these days, this would create a very
 simplified environment in which the entity would need to operate.  In
 this limited environment you wouldn't need full meanings for most words,
 only categories and valuations.

As I said in a recent post, I prefer to stay away from natural language
processing at this stage, until the system has acquired a rudimentary
understanding of natural language thru its own experience.  We're not quite
there yet ;)

ben

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



RE: [agi] A point of philosophy, rather than engineering

2002-11-12 Thread Arthur T. Murray

On Tue, 12 Nov 2002, Ben Goertzel wrote: 
 Charles Hixson wrote (in response to me):
[...]
  May I suggest that if you are even close to what you are attempting,
  that you have the start of a dandy personal secretary.  With so much
  correspondence coming via e-mail these days, this would create a very
  simplified environment in which the entity would need to operate.  In
  this limited environment you wouldn't need full meanings for most words,
  only categories and valuations.

BenG: 
 As I said in a recent post, I prefer to stay away from natural language
 processing at this stage, until the system has acquired a rudimentary
 understanding of natural language thru its own experience.  We're not quite
 there yet ;)
 
That's where the Mentifex AI and Novamente differ (and probably also
where A.T. Murray the linguist and Ben Goertzel the mathematician differ).

If you're not aiming for language, you're aiming for a smart animal.

A.T. Murray
-- 
http://www.scn.org/~mentifex/aisource.html is the cluster of Mind
programs described in the AI textbook AI4U based on AI Mind-1.1
by Arthur T. Murray which may be pre-ordered from bookstores with
hardcover ISBN 0-595-65437-1 and ODP softcover ISBN 0-595-25922-7.

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/



Re: [agi] A point of philosophy, rather than engineering

2002-11-11 Thread Charles Hixson
The problem with a truly general intelligence is that the search spaces 
are too large.  So one uses specializing hueristics to cut down the 
amount of search space.  This does, however, inevitably remove a piece 
of the generality.  The benefit is that you can answer more 
complicated questions quickly enough to be useful.  I don't see any way 
around this, short of quantum computers, and I'm not sure about them (I 
have this vague suspicion that there will be exponentially increasing 
probabilities of error, which require hugely increased error recovery 
systems, etc.).

This doesn't mean that we have currently reached the limits of agi.  It 
means that whatever those limits are, there will always be hueristicly 
tuned intelligences that will be more efficient in most problem domains.

Of course, here I am taking a strict interpretation of general, as in 
General Relativity vs. Special Relativity.  Notice that while Special 
Relativity has many uses, General Relativity is (or at least was until 
quite recently) mainly of theoretical interest.  Be prepared for a 
similar result with General Intelligence vs. Special Intelligence.  (The 
difference here is that Special Intelligence comes in lots of modules 
adapted for lots of special circumstances.)

Personally, I believe that the most effective AI will have a core 
general intelligence, that may be rather primitive, and a huge number of 
specialized intelligence modules.  The tricky part of this architecture 
is designing the various modules so that they can communicate.  It isn't 
clear that this is always reasonable (consider the interfaces between 
chess and cooking), but if the problem can be handled in a general 
manner (there's that word again!), then one of the intelligences could 
be specialized for message passing.  In this model the core general 
intelligence will be for use when none of the hueristics fit the 
problem.  And it's attempts will be watched by another module whose 
specialty is generating new hueristics.

Plausible?  I don't really know.  Possibly to complicated to actually 
build.  It might need to be evolved from some simpler precursor.  



---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/


Re: [agi] A point of philosophy, rather than engineering

2002-11-11 Thread James Rogers
On Mon, 2002-11-11 at 14:11, Charles Hixson wrote:
 
 Personally, I believe that the most effective AI will have a core 
 general intelligence, that may be rather primitive, and a huge number of 
 specialized intelligence modules.  The tricky part of this architecture 
 is designing the various modules so that they can communicate.  It isn't 
 clear that this is always reasonable (consider the interfaces between 
 chess and cooking), but if the problem can be handled in a general 
 manner (there's that word again!), then one of the intelligences could 
 be specialized for message passing.  In this model the core general 
 intelligence will be for use when none of the hueristics fit the 
 problem.  And it's attempts will be watched by another module whose 
 specialty is generating new hueristics.


This is essentially what we do, but it works a little differently than
you are suggesting.  The machinery and representation underneath the
modules is identical, where each module is its own machine which has
become optimized for its task.  In other words, if you were making a
module on chess and a module on cooking you would start with the same
blank module machinery and they would be trained for their respective
tasks.

If you looked at the internals of the module machinery after the
training period, you would notice marked macro-level structural
differences between the two that relate to how the machinery
self-optimizes for its task. The computational machines, which are
really just generic Turing virtual machines that you could program any
type of software on, use a pretty foreign notion of
computation/processing -- the processor model looks nothing like a von
Neumann-variant architecture.  Despite notable differences in structure,
it is really just two modules of the same machine that have
automatically conformed structurally to their data environment.

The interesting part is the integration of the modules.  There are
actually a number of ways to do it, all of which have advantages and
disadvantages.  One advantage of having simple underlying machinery
controlling the representation of data is that all modules already
deeply understand the data of any other module.  You COULD do a hard
merge of the cooking module with the chess module into one module, and
automatically discover the relations and abstract similarities between
the two (whatever those might be) without any special code, but there
are lots of reasons why this is bad in practice.  In implementation, we
typically do what we would call a soft merge, where the machines are
fully integrated for most purposes and can use each others space, but
where external data feeds are localized to specific modules within the
cluster (even though these modules have access to every other module for
the purposes of processing the data feed).  From the perspective of
external data streams it looks like a bunch of independent machines
working together, but from the perspective of the machine the entire
cluster is a single machine image.  There are good theoretical reasons
for doing things this way which I won't go into here.

In short, we mostly do what you are talking about, but you've actually
over-estimated the difficulty of integration of domain-specific modules
(using our architecture, at least).  Actually building modules is more
difficult, mostly because the computing architecture uses assumptions
that are very strange; I think my programmer's mind works against me
some days, and teaching/training modules by example is easier than
programming them directly most times.  Once they are done, you pretty
much can do plug-n-play on-the-fly integration, even on a hot/active
cluster of modules  (resource permitting, of course).  An analogy would
be how they learned new skills on-the-fly in The Matrix.  The
integration is a freebie that comes with the underlying architecture,
not something that I spent much effort designing.

Cheers,

-James Rogers
 [EMAIL PROTECTED]  

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/