Re: [agi] would anyone want to use a commonsense KB?

2008-02-20 Thread Ben Goertzel
> And I seriously doubt that a general SMT solver +
>  prob. theory is going to beat a custom probabilistic logic solver.

My feeling is that an SMT solver plus appropriate subsets of prob theory
can be a very powerful component of a general probabilistic inference
framework...

I can back this up with some details but that would get too thorny
for this list...

ben



Re: [agi] A Follow-Up Question re Vision.. P.S.

2008-02-21 Thread Ben Goertzel
Mike,

>  I'm disappointed that you guys, especially Bob M, aren't responding to
>  this. It just might be important to how the brain succeeds in perceiving
>  images, while computers are such a failure.

This is all well-known information!!!

Tomaso Poggio and many others are working on making
detailed computer simulations of how the brain does vision processing.

It's a worthy line of research, but unlike you I am not impelled to consider
it AGI-critical ... anyway that line of research appears to be proceeding
steadily and successfully... though like everything in science, not as fast
as we'd like...

ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-24 Thread Ben Goertzel
Hi,

There is no good overview of SMT so far as I know, just some technical
papers... but SAT solvers are not that deep and are well reviewed in
this book...

http://www.sls-book.net/
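
To give a concrete flavor of what those solvers do, here is a toy
WalkSAT-style sketch in Python (illustrative only -- real solvers add
restarts, clause weighting, and far better data structures):

import random

def walksat(clauses, n_vars, max_flips=100000, p_random=0.5):
    # clauses: list of clauses; literal k means variable |k| is True if k > 0
    assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}

    def satisfied(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign  # all clauses satisfied
        clause = random.choice(unsat)  # focus on one violated constraint
        if random.random() < p_random:
            var = abs(random.choice(clause))  # random-walk step
        else:
            # greedy step: flip whichever variable in the clause leaves
            # the fewest clauses unsatisfied
            def breakage(v):
                assign[v] = not assign[v]
                n = sum(1 for c in clauses if not satisfied(c))
                assign[v] = not assign[v]
                return n
            var = min((abs(lit) for lit in clause), key=breakage)
        assign[var] = not assign[var]
    return None

# e.g. (x1 or not x2) and (x2 or x3)
print(walksat([[1, -2], [2, 3]], n_vars=3))

The relevance to Ed's question below: the search flips one variable at a
time, guided by local constraint violations (real implementations maintain
the break-counts incrementally), so even with 100,000 variables each step
is cheap and the 10^300 states are never enumerated.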

-- Ben

On Sun, Feb 24, 2008 at 4:38 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Ben or anyone,
>
>  Do you know of an explanation or reference that gives a for-Dummies account
>  of how SAT (or SMT) handles computations in spaces with 100,000
>  variables and/or 10^300 states in practically computable time?
>
>  I assume it is by focusing only on that part of the space through which
>  relevant and/or relatively short inference paths pass, or something like
>  that.
>
>  Ed Porter
>
>
>  -Original Message-
>  From: Ben Goertzel [mailto:[EMAIL PROTECTED]
>  Sent: Wednesday, February 20, 2008 5:54 PM
>  To: agi@v2.listbox.com
>
> Subject: Re: [agi] would anyone want to use a commonsense KB?
>
>
>
> > And I seriously doubt that a general SMT solver +
>  >  prob. theory is going to beat a custom probabilistic logic solver.
>
>  My feeling is that an SMT solver plus appropriate subsets of prob theory
>  can be a very powerful component of a general probabilistic inference
>  framework...
>
>  I can back this up with some details but that would get too thorny
>  for this list...
>
>  ben
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
YKY,

I'm with Pei on this one...

Decades of trying to do procedure learning using logic have led only
to some very
brittle planners that are useful under very special and restrictive
assumptions...

Some of that work is useful but it doesn't seem to me to be pointing in an AGI
direction.

OTOH for instance evolutionary learning and NN's have been more successful
at learning simple procedures for embodied action.

Within NM we have done (and published) experiments using probabilistic logic
for procedure learning, so I'm well aware it can be done.  But I don't
think it's a
scalable approach.

There appears to be a solid information-theoretic reason that the human brain
represents and manipulates declarative, procedural and episodic knowledge
separately.

It's more complex, but I believe it's a better idea to have separate methods for
representing and learning/adapting procedural vs declarative knowledge
--- and then
have routines for converting btw the two forms of knowledge.

One advantage AGIs will have over humans is better methods for translating
procedural to declarative knowledge, and vice versa.

For us to translate "knowing how to do X" into
"knowing how we do X" can be really difficult (I play piano
improvisationally and by
ear, and I have a hard time figuring out what the hell my fingers are
doing, even though
they do the same complex things repeatedly each time I play the same
song..).  This is
not a trivial problem for AGIs either but it won't be as hard as for humans...

-- Ben G

On Tue, Feb 26, 2008 at 8:00 AM, Pei Wang <[EMAIL PROTECTED]> wrote:
> On Tue, Feb 26, 2008 at 7:03 AM, YKY (Yan King Yin)
>  <[EMAIL PROTECTED]> wrote:
>  >
>  > On 2/15/08, Pei Wang <[EMAIL PROTECTED]> wrote:
>  > >
>  > > To me, the following two questions are independent of each other:
>  > >
>  >  > *. What type of reasoning is needed for AI? The major answers are:
>  > > (A): deduction only, (B) multiple types, including deduction,
>  > > induction, abduction, analogy, etc.
>  > >
>  > > *. What type of knowledge should be reasoned upon? The major answers
>  >  > are: (1) declarative only, (2) declarative and procedural.
>  > >
>  > > All four combinations of the two answers are possible. Cyc is mainly
>  > > A1; you seem to suggest A2; in NARS it is B2.
>  >
>  >
>  > My current approach is "B1".  I'm wondering what is your argument for
>  > including procedural knowledge, in addition to declarative?
>
>  You have mentioned the reason in the following: some important
>  knowledge is procedural by nature.
>
>
>  > There is the idea of "deductive planning" which allows us to plan actions
>  > using a solely declarative KB.  So procedural knowledge is not needed for
>  > acting.
>
>  I haven't seen any non-trivial result supporting this claim.
>
>
>  > Also, if you include procedural knowledge, things may be learned doubly in
>  > your KB.  For example, you may learn some declarative knowledge about the
>  > concept of "reverse" and also procedural knowledge of how to reverse
>  > sequences.
>
>  The knowledge about "how to do ..." can either be in procedural form,
>  as "programs", or in declarative, as descriptions of the programs.
>  There is overlapping/redundancy information in the two, but very often
>  both are needed, and the redundancy is tolerated.
>
>
>  > Even worse, in some cases you may only have procedural knowledge, without
>  > anything declarative.  That'd be like the intelligence of a calculator,
>  > without true understanding of maths.
>
>  Yes, but that is exactly the reason to reason directly on
>  procedural knowledge, right?
>
>  Pei
>
>
>  > YKY



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>  Knowing how to carry out inference can itself be procedural knowledge,
>  in which case no explicit distinction between the two is required.
>
>  --
>  Vladimir Nesov

Representationally, the same formalisms can of course be used for both
procedural and declarative knowledge.

The slightly subtler point, however, is that it seems that **given finite space
and time resources**, it's far better to use specialized
reasoning/learning methods
for handling knowledge that pertains to carrying out coordinated sets of action
in space and time.

Thus, "procedure learning" as a separate module from general inference.

The brain works this way and, on this very general level, I think we'll do
best to emulate the brain in our AGI designs (not necessarily in the
specific representations/algorithms the brain uses, but rather in the
simple fact of the pragmatic declarative/procedural distinction...)

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread Ben Goertzel
Obviously, extracting knowledge from the Web using a simplistic SAT
approach is infeasible

However, I don't think it follows from this that extracting rich
knowledge from the Web is infeasible

It would require a complex system involving at least

1)
An NLP engine that maps each sentence into a menu of probabilistically
weighted logical interpretations of the sentence (including links into
other sentences built using anaphor resolution heuristics).  This
involves a dozen conceptually distinct components and is not at all
trivial to design, build or tune.

2)
Use of probabilistic inference rules to create implication links
between the different interpretations of the different sentences

3)
Use of an optimization algorithm (which could be a clever use of SAT
or SMT, or something else) to utilize the links formed in step 2, to
select the right interpretation(s) for each sentence


The job of the optimization algorithm is hard but not THAT hard
because the choice of the interpretation of one sentence is only
tightly linked to the choice of interpretation of a relatively small
set of other sentences (ones that are closely related syntactically,
semantically, or in terms of proximity in the same document, etc.).
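
To make step 3 concrete, here's a toy sketch (my own illustrative framing,
with made-up data -- not NM code) of choosing one interpretation per
sentence so as to maximize prior probability plus the weight of satisfied
cross-sentence links:

import math
import random

# Made-up example: two sentences, two candidate interpretations each (step 1),
# and weighted compatibility links between interpretations (step 2).
priors = {
    ("s1", "a"): 0.6, ("s1", "b"): 0.4,
    ("s2", "a"): 0.5, ("s2", "b"): 0.5,
}
links = {  # ((sentence, interp), (sentence, interp)) -> compatibility weight
    (("s1", "a"), ("s2", "b")): 2.0,
    (("s1", "b"), ("s2", "a")): 0.3,
}

def score(choice):
    # total log-prior of the chosen interpretations...
    s = sum(math.log(priors[(sent, interp)]) for sent, interp in choice.items())
    # ...plus the weight of every link both of whose ends were chosen
    for ((sa, ia), (sb, ib)), w in links.items():
        if choice.get(sa) == ia and choice.get(sb) == ib:
            s += w
    return s

def local_search(sentences, interps, steps=200):
    choice = {s: random.choice(interps[s]) for s in sentences}
    for _ in range(steps):  # coordinate ascent, one sentence at a time
        s = random.choice(sentences)
        choice[s] = max(interps[s], key=lambda i: score({**choice, s: i}))
    return choice

interps = {"s1": ["a", "b"], "s2": ["a", "b"]}
print(local_search(["s1", "s2"], interps))  # -> {'s1': 'a', 's2': 'b'}

Because each sentence is tightly linked to only a few others, the score
factorizes and local moves are cheap -- that's the structure a MaxSAT-style
encoding would exploit too.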

I don't know any way to tell how well this would work, except to try.

My own approach, cast in these terms, would be to

-- use virtual-world grounding to help with the probabilistic
weighting in step 1 and the link building in step 2

-- use other heuristics besides SAT/SMT in step 3 ... though using these
techniques within NM/OpenCog is also a possibility down the road; I've
been studying it...


-- Ben





On Tue, Feb 26, 2008 at 6:56 AM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
>
> On 2/25/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > There is no good overview of SMT so far as I know, just some technical
>  > papers... but SAT solvers are not that deep and are well reviewed in
> > this book...
> >
> > http://www.sls-book.net/
>
>
> But that's *propositional* satisfiability; the results may not extend to
> first-order SAT -- I've no idea.
>
> Secondly, the learning of an entire KB from text corpus is much, much harder
> than SAT.  Even the learning of a single hypothesis from examples with
> background knowledge (ie the problem of inductive logic programming) is
> harder than SAT.  Now you're talking about inducing the entire KB, and
> possibly involving "theory revision" -- this is VERY impractical.
>
> I guess I'd focus on learning simple rules, one at a time, from NL
> instructions.  IMO this is one of the most feasible ways of acquiring the
> AGI KB.  But it also involves the AGI itself in the acquisition process, not
> just a passive collection of facts like MindPixel...
>
> YKY



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] would anyone want to use a commonsense KB?

2008-02-26 Thread Ben Goertzel
YKY,

I thought you were talking about the extraction of information that
is explicitly stated in online text.

Of course, inference is a separate process (though it may also play a
role in direct information extraction).

I don't think the rules of inference per se need to be learned.  In
our book on PLN we outline a complete set of probabilistic logic
inference rules, for example.

What needs to be learned via experience is how to appropriately bias
inference control -- how to sensibly prune the inference tree.

So, one needs an inference engine that can adaptively learn better and
better inference control as it carries out inferences.  We designed
and partially implemented this feature in the NCE but never completed
the work due to other priorities ... but I hope this can get done in
NM or OpenCog sometime in late 2008..
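
To illustrate the flavor of that (a generic best-first sketch with a
pluggable step-scorer, not the NCE design -- in an adaptive system the
scorer would be updated from past successful proofs):

import heapq
import itertools

def best_first_inference(axioms, goal, expand, score, budget=1000):
    # expand(fact) -> iterable of (new_fact, rule_id) candidate conclusions
    # score(fact, rule_id) -> priority; higher means expanded sooner
    counter = itertools.count()  # tie-breaker so the heap never compares facts
    seen = set(axioms)
    frontier = [(-score(f, None), next(counter), f) for f in axioms]
    heapq.heapify(frontier)
    while frontier and budget > 0:
        _, _, fact = heapq.heappop(frontier)
        if fact == goal:
            return True
        budget -= 1
        for new_fact, rule in expand(fact):
            if new_fact not in seen:
                seen.add(new_fact)
                heapq.heappush(frontier,
                               (-score(new_fact, rule), next(counter), new_fact))
    return False

# Toy usage: derive 10 from 1 with rules +1 and *2; the "learned" heuristic
# here is just distance-to-goal
print(best_first_inference(
    axioms=[1], goal=10,
    expand=lambda n: [(n + 1, "inc"), (n * 2, "dbl")],
    score=lambda n, rule: -abs(10 - n),
    budget=100))  # True

Everything interesting is hidden in score(): learn a better scorer and you
prune exponentially more of the inference tree.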

-- Ben

On Tue, Feb 26, 2008 at 3:02 PM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
>
> On 2/26/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > Obviously, extracting knowledge from the Web using a simplistic SAT
> > approach is infeasible
>  >
> > However, I don't think it follows from this that extracting rich
> > knowledge from the Web is infeasible
> >
> > It would require a complex system involving at least
> >
> > 1)
>  > An NLP engine that maps each sentence into a menu of probabilistically
> > weighted logical interpretations of the sentence (including links into
> > other sentences built using anaphor resolution heuristics).  This
>  > involves a dozen conceptually distinct components and is not at all
> > trivial to design, build or tune.
> >
> > 2)
> > Use of probabilistic inference rules to create implication links
> > between the different interpretations of the different sentences
>  >
> > 3)
> > Use of an optimization algorithm (which could be a clever use of SAT
> > or SMT, or something else) to utilize the links formed in step 2, to
> > select the right interpretation(s) for each sentence
>
>
> Gosh, I think you've missed something of critical importance...
>
> The problem you stated above is about choosing the correct interpretation of
> a bunch of sentences.  The problem we should tackle instead, is learning the
> "rules" that make up the KB.
>
> To see the difference, let's consider this example:
>
> Suppose I solve a problem (eg a programming exercise), and to illustrate my
> train of thoughts I clearly write down all the steps.  So I have, in
> English, a bunch of sentences A,B,C,...,Z where Z is the final conclusion
> sentence.
>
> Now the AGI can translate sentences A-Z into logical form.  You claim that
> this problem is hard because of multiple interpretations.  But I think
> that's relatively unimportant compared to the real problem we face.  So
> let's assume that we successfully -- correctly -- translate the NL sentences
> into logic.
>
> Now let's imagine that the AGI is doing the exercise, not me.  Then it
> should have a train of inference that goes from A to B to C ... and so on...
> to Z.  But, the AGI would NOT be able to make such a train of thoughts.  All
> it has is just a bunch of *static* sentences from A-Z.
>
> What is missing?  What would allow the AGI to actually conduct the inference
> from A-Z?
>
> The missing ingredient is a bunch of rules.  These are the "invisible glue"
> that links the thoughts "between the lines".  This is the knowledge that I
> think should be learned, and would be very difficult to learn.
>
> You know what I'm talking about??
>
>
>
> YKY



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>  Your piano example is a good one.
>
>  What it illustrates, I suggest, is:
>
>  your knowledge of, and thinking about, how to play the piano, and perform
>  the many movements involved, is overwhelmingly imaginative and body
>  knowledge/thinking (contained in images and the motor parts of the brain and
>  body as distinct from any kind of symbols)
>
>  The percentage of that knowledge that can be expressed in symbolic form -
>  logical, mathematical, verbal etc. - i.e. the details of those movements that
>  can be named or measured - is only A TINY FRACTION of the total.

Wrong...

This knowledge CAN be expressed in logical, symbolic form... just as can the
positions of all the particles in my brain ... but for these cases, the logical,
symbolic representation is highly awkward and inefficient...


>Our
>  cultural let alone your personal vocabulary (both linguistic and of  any
>  other symbolic form) for all the different finger movements you will
>  perform, can only name a tiny percentage of the details involved.

That is true, but in principle one could give a formal logical description of
them, boiling things all the way down to logical atoms corresponding to the
signals sent along the nerves to and from my fingers...

>  Such imaginative and body knowledge (which takes declarative,
>  procedural and episodic forms) isn't, I suggest, - when considered as
>  corpuses or corpora of knowledge - MEANT to be put into explicit, symbolic,
>  verbal, logico-mathematical form.

Correct

> It would be utterly impossible to name all
>  the details of that knowledge.

Infeasible, not impossible

> One imaginative picture: an infinity of
>  words and other symbols. Any attempt to symbolise our imaginative/body
>  knowledge as a whole, would simply overwhelm our brain, or indeed any brain.

The concept of infinity is better handled in formal logic than anywhere else!!!

>  The idea that an AGI can symbolically encode all the knowledge, and perform
>  all the thinking, necessary to produce, say, a golf swing, let alone play a
>  symphony,  is a pure fantasy. Our system keeps that knowledge and thinking
>  largely in the motor areas of the brain and body, because that's where it
>  HAS to be.

Again you seem to be playing with different meanings of the word "symbolic."

I don't think that formal logic is a suitably convenient language for describing
motor movements or dealing with motor learning.

But still, I strongly suspect one can produce software programs that do handle
motor movement and learning effectively.  They are symbolic at the level of
the programming language, but not symbolic at the level of the deliberative,
reflective component of the artificial mind doing the learning.

A symbol is a symbol **to some system**.  Just because a hunk of program
code contains symbols to the programmer, doesn't mean it contains symbols
to the mind it helps implement.  Any more than a neuron being a symbol to a
neuroscientist, implies that neuron is a symbol to the mind it helps implement.

Anyway, I agree with you that formal logical rules and inference are not the
end-all of AGI and are not the right tool for handling visual imagination or
motor learning.  But I do think they have an important role to play even so.

-- Ben G



Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>
> No one in AGI is aiming for common sense consciousness, are they?
>

The OpenCog and NM architectures are in principle supportive of this kind
of multisensory integrative consciousness, but not a lot of thought has gone
into exactly how to support it ...

In one approach, one would want to have

-- a large DB of embodied experiences (complete with the sensorial and
action data from the experiences)

-- a number of dimensional spaces, into which experiences are embedded
(a spatiotemporal region corresponds to a point in a dimensional space).
Each dimensional space would be organized according to a different principle,
e.g. melody, rhythm, overall visual similarity, similarity of shape, similarity
of color, etc.

-- an internal simulation world in which concrete remembered experiences,
blended experiences, or abstracted experiences could be enacted and
"internally simulated"

-- conceptual blending operations implemented on the dimensional spaces
and directly in the internal sim world

-- methods for measuring similarity, inheritance and other logical relationships
in the dimensional spaces and the internal sim world

-- methods for enacting learned procedures in the internal sim world,
and learning
new procedures based on simulating what they would do in the internal sim world
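
As a very rough sketch of the dimensional-space idea (hypothetical feature
vectors, nothing like the real representation): each experience is embedded
in several spaces, and similarity queries become nearest-neighbor lookups,
with different neighbors in different spaces:

import math

# Hypothetical embeddings: each episode gets a vector per organizing principle
experiences = {
    "ep1": {"rhythm": [0.9, 0.1], "color": [0.2, 0.8]},
    "ep2": {"rhythm": [0.8, 0.2], "color": [0.9, 0.1]},
    "ep3": {"rhythm": [0.1, 0.9], "color": [0.3, 0.7]},
}

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_similar(query, space):
    # nearest neighbor of the query episode within one dimensional space
    return min((e for e in experiences if e != query),
               key=lambda e: dist(experiences[e][space],
                                  experiences[query][space]))

print(most_similar("ep1", "rhythm"))  # ep2: rhythmically closest
print(most_similar("ep1", "color"))   # ep3: closest in color

The same episode has different neighbors under different organizing
principles, which is exactly what multiple spaces buy you.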


This is all do-able according to mechanisms that exist in the OpenCog and NM
designs, but it's an aspect we haven't focused on so far in NM... though we're
moving in that direction due to our work w/ embodiment in simulation
worlds...

We have built a sketchy internal sim world for NM but haven't experimented with
it much yet due to other priorities...

-- Ben



Re: [agi] reasoning & knowledge

2008-02-26 Thread Ben Goertzel
>  You guys seem to think this - true common sense consciousness - can all be
>  cracked in a year or two. I think there's probably a lot of good reasons -
>  and therefore major creative problems - why it took a billion years of
>  evolution to achieve.

I'm not trying to emulate the brain.

Evolution took billions of years to NOT achieve the airplane, helicopter
or wheel ...

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
> I'm not talking about inference control here -- I assume that inference
> control is done in a proper way, and there will still be a problem.  You
> seem to assume that all knowledge = what is explicitly stated in online
> texts.  So you deny that there is a large body of implicit knowledge other
> than inference control rules (which are few in comparison).
>
> I think that if your AGI doesn't have the implicit knowledge, it'd only be
> able to perform simple inferences about statistical events -- for example,
> calculating the probability of (lung cancer | smoking).

For instance, suppose you ask an AI if chocolate makes a person more
alert.

It might read one article saying that coffee makes people more alert,
and another article saying that chocolate contains theobromine, and another
article saying that theobromine is related to caffeine, and another article
saying that coffee contains caffeine ... and then put the pieces together to
answer YES

This kind of reasoning
may sound simple but getting it to work systematically on the large
scale based on text mining has not been done...

And it does seem w/in the grasp of current tech without any breakthroughs...
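
In toy form, the assembly looks something like this (invented predicates
and confidences, purely illustrative of the chaining, not of PLN's actual
rules):

# facts as (subject, relation, object, confidence), as if mined from articles
facts = [
    ("coffee", "increases", "alertness", 0.9),
    ("coffee", "contains", "caffeine", 0.95),
    ("theobromine", "similar_to", "caffeine", 0.8),
    ("chocolate", "contains", "theobromine", 0.9),
]

def lookup(subj=None, rel=None, obj=None):
    return [f for f in facts
            if (subj is None or f[0] == subj)
            and (rel is None or f[1] == rel)
            and (obj is None or f[2] == obj)]

# chocolate contains X; X is similar to Y; something containing Y increases
# alertness => (weak) evidence that chocolate increases alertness
conf = 0.0
for _, _, x, c1 in lookup(subj="chocolate", rel="contains"):
    for _, _, y, c2 in lookup(subj=x, rel="similar_to"):
        for carrier, _, _, c3 in lookup(rel="contains", obj=y):
            for _, _, effect, c4 in lookup(subj=carrier, rel="increases"):
                if effect == "alertness":
                    conf = max(conf, c1 * c2 * c3 * c4)

print("chocolate increases alertness?", conf > 0.5, conf)  # ~0.62

Multiplying confidences down the chain makes the evidence decay with each
hop, which is roughly the right qualitative behavior; the hard part is
doing this at web scale without drowning in spurious chains.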

> The kind of reasoning I'm interested in is more sophisticated.  For example,
> I may ask the AGI to "open a file and print the 100th line" (in Java or C++,
> say).  The AGI should be able to use a loop to read and discard the first 99
> lines.  We need a step like:  "read 99 lines -> use a loop" but such a step
> must be based on even simpler *concepts* of repetition and using loops.
> What I'm saying is that your AGI does NOT have such rules and would be
> incapable of thinking about such things.

Being "incapable of thinking about such things" is way too strong a statement --
that has to do with the AI's learning/reasoning algorithms rather than about the
knowledge it has.

I think there would be a viable path to AGI via

1)
Filling a KB up w/ commonsense knowledge via text mining and simple inference,
as I described above

2)
Building an NL conversation system utilizing the KB created in 1

3)
Teaching the AGI the "implicit knowledge" you suggest via conversing with it

As noted I prefer to introduce embodiment into the mix, though, for a variety
of reasons...

ben



Re: [agi] reasoning & knowledge

2008-02-27 Thread Ben Goertzel
>  d) you keep repeating the illusion that evolution did NOT achieve the
>  airplane and other machines - oh yes, it did - your central illusion here is
>  that machines are independent species. They're not. They are EXTENSIONS  of
>  human beings, and don't work without human beings attached. Manifestly
>  evolution has taken several stages to perfect tool/machine-using species -
>  of whom we are only the latest version - I refer you to my good colleague,
>  the tool-using-and-creating Caledonian crow.

That is purely rhetorical gamesmanship...

By that interpretation of "achieved by evolution", any AGI that we create
will also be achieved by evolution, due to being created by humans that
were achieved by evolution, right?

So, by this definition, the concept of "achieved by evolution" makes no
useful distinctions among AGI designs...

And: a wheel does work without a human attached, btw ..

ben



Re: [agi] reasoning & knowledge

2008-02-27 Thread Ben Goertzel
>  Well,  what I and embodied cognitive science are trying to formulate
>  properly, both philosophically and scientifically, is why:
>
>  a) common sense consciousness is the brain-AND-body thinking on several
>  levels simultaneously about any given subject...

I don't buy that my body plays a significant role in thinking about,
for instance,
mathematics.  I bet that my brain in a vat could think about math just
as well or
better than my embodied brain.

Of course my brain is what it is because of evolving to be embodied, but that's
a different statement.

>  b) with the *largest* part of that thinking being "body thinking" - i.e.
>  your body working out *in-the-body* how the actions under consideration can
>  be enacted  (although this is inseparable from, and dependent on, the
>  brain's levels of thinking)

What evidence do you have that this is the "largest part" ... it does
not feel at all
that way to me, as a subjectively-experiencing human; and I know of no evidence
in this regard.

The largest bulk of brain matter does not equate to the largest part
of thinking,
in any useful sense...

I suspect that, in myself at any rate, the vast majority of my brain
dynamics are driven by the small percentage of my brain that deals with
abstract cognition.  An attractor spanning the whole brain can nonetheless
be triggered/controlled by dynamics in a small region.

>  c) if an agent doesn't have a body that can think about how it can move (and
>  have emotions), then it almost certainly can't understand how other bodies
>  move (and have emotions) - and therefore can't acquire a
>  "more-than-it's-all-Greek/Chinese/probabilistic-logic-to-me" understanding
>  of physics, biology, psychology, sociology etc. etc. - of both the
>  formal/cultural and informal/personal kinds.

I agree about psychology and sociology, but not about physics and biology.

-- Ben G



Re: [agi] reasoning & knowledge

2008-02-27 Thread Ben Goertzel
I do not doubt that body-thinking exists and is important; my doubt is that it
is in any AGI-useful sense "the largest part" of thinking...

On Wed, Feb 27, 2008 at 1:07 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben:What evidence do you have that this [body thinking] is the "largest
>
> part" ... it does
>  not feel at all
>  that way to me, as a subjectively-experiencing human; and I know of no
>  evidence
>  in this regard
>
>  Like I said, I'm at the start here - and this is going against thousands of
>  years of literate culture. And there's a lot of work that needs to be done,
>  but I'm increasingly confident about it.
>
>  For a quick, impressionistic response to your question, think of what kind
>  of spectator events are almost guaranteed to produce the greatest
>  physical-and-emotional, "whole-body" responses in you. Spectator sports -
>  when you watch, say, someone miss a goal, and literally scream with your
>  whole body. Or farce - when some comic actor makes some crazy physical
>  errors - which you find literally gut-wrenchingly funny.  Why do you respond
>  so intensely? Because you are "body thinking", mirroring their actions with
>  your whole body - and that's a whole lot of stuff to think with, compared
>  say to the relatively few brain-and-body areas involved in symbolic thinking
>  like "22 + 22 = 44". (I notice in education they are now talking about how
>  infants and young children have to acquire all those symbols by "hands-on
>  thinking", i.e. "body thinking." You (and I) have just forgotten all that
>  stuff.).



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Artificial general intelligence

2008-02-27 Thread Ben Goertzel
>  I like the marketing technique on this mailing list. AGI "developers"
>  are claiming that they are building "AGI" but they are just building
>  narrow programs.

Personally I am working on both -- the former for R&D purposes and the
latter to make a living ;-p

>  The term "artificial general intelligence" is an oxymoron. That term is
>  metaphysical, since there is no such thing as "general".

This point is well-understood already.

Hutter's theoretical analyses of AIXI make pretty clear the meaning of
"fully general intelligence" -- which is unambiguous but pragmatically
useless, as it requires infinite computational power.

So in reality one is talking about "degrees of generality" ... absolute
generality being impossible w/in finite computational resources...

>I prefer
>  something like "human-like intelligence" or the more common term
>  "human-level intelligence." Intelligence requires human-like perception.

Actually, I think human-like intelligence is a clear notion -- but it places
very narrow constraints on AGI, which not all approaches adhere to
(mine doesn't)

OTOH, human-level intelligence is an even slipperier term than AGI

Most likely the first AGIs will be superhuman in some regards and subhuman
in others, so that whether they're "human level" will be basically a meaningless
question

>  What is intelligence? It is the interaction of human-like perception
>  with respect to episodic memory.

Well, no.

The interaction of perception and episodic memory is one aspect of
intelligence, not the whole thing.

And human-like perception is not the only feasible kind.

>  In the past, I was ignorant about cognitive science and rejected
>  cognitive science for AGI on the "airplane and birds" analogy. However,
>  when I learned more about it, I discovered how aspects of our perception, such
>  as synesthesia-like neurons, spatial cells, and episodic intelligence,
>  are almost absolutely required for AGI. I would never have been inspired
>  about this if I was ignorant about cognitive science.

I taught cog sci for a couple years ... I think the brain's functions
are fascinating,
but I have no idea why you think this particular evolved system is the ONLY
sort of intelligence that is possible for us to create...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
>  It could be done with a simple chain of word associations mined from a text
>  corpus: alert -> coffee -> caffeine -> theobromine -> chocolate.

That approach yields way, way, way too much noise.  Try it.

>  But that is not the problem.  The problem is that the reasoning would be
>  faulty, even with a more sophisticated analysis.  By a similar analysis you
>  could reason:
>
>  - coffee makes you alert.
>  - coffee contains water.
>  - water (H20) is related to hydrogen sulfide (H2S).
>  - rotten eggs produce hydrogen sulfide.
>  - therefore rotten eggs make you alert.

There is a "produce" predicate in here which throws off the chain of
reasoning wildly.

And, nearly every food contains water, so the application of Bayes
rule within this inference chain of yours will yield a conclusion with
essentially zero confidence.  Since fewer foods contain caffeine or
theobromine, the inference trail I suggested will not have this
problem.

In short, I claim your "similar analysis" is only similar at a very
crude level of analysis, and is not similar when you look at the
actual probabilistic inference steps involved.
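
To put made-up numbers on that: suppose 1% of foods are "alerting", ~95% of
foods contain water, but theobromine occurs almost only in alerting foods.
A single Bayes-rule step then shows why the water link carries essentially
no information (all the numbers below are invented, just to show the
asymmetry between the two chains):

p_alerting = 0.01                    # prior: few foods are alerting

# "contains water" is true of nearly everything, alerting or not:
p_water_given_alerting, p_water = 0.99, 0.95
posterior_water = p_water_given_alerting * p_alerting / p_water

# "contains theobromine" is rare overall but common among alerting foods:
p_theo_given_alerting, p_theo = 0.5, 0.01
posterior_theo = p_theo_given_alerting * p_alerting / p_theo

print(round(posterior_water, 4))  # ~0.0104 -- essentially no update
print(round(posterior_theo, 2))   # 0.5     -- a huge update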

>  Long chains of logical reasoning are not very useful outside of mathematics.

But the inference chain I gave as an example is NOT very long. The
problem is actually that outside of math, chains of inference (long or
short) require contextualization...

>  > I think there would be a viable path to AGI via
>  >
>  > 1)
>  > Filling a KB up w/ commonsense knowledge via text mining and simple
>  > inference,
>  > as I described above
>  >
>  > 2)
>  > Building an NL conversation system utilizing the KB created in 1
>  >
>  > 3)
>  > Teaching the AGI the "implicit knowledge" you suggest via conversing with it
>
>  I think adding common sense knowledge before language is the wrong approach.
>  It didn't work for Cyc.

I agree it's not the best approach.

I also think, though, that one unsuccessful attempt should not be taken to damn
the whole approach.

The failure of explicit knowledge encoding by humans does not straightforwardly
imply the failure of knowledge extraction via text mining (as approaches to AGI)

>  Natural language evolves to the easiest form for humans to learn, because if a
>  language feature is hard to learn, people will stop using it because they
>  aren't understood.  We would be wise to study language learning in humans and
>  model the process.  The fact is that children learn language in spite of a
>  lack of common sense.

Actually, they seem to acquire language and common sense together.

But, "wild children" and apes learn common sense, but never learn
language beyond
the proto-language level.

But I agree, study of human dev psych is one thing that has inclined
me toward the
embodied approach ...

yet I still feel you dismiss the text-mining approach too glibly...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-02-27 Thread Ben Goertzel
>  > yet I still feel you dismiss the text-mining approach too glibly...
>
>  No, but text mining requires a language model that learns while mining.  You
>  can't mine the text first.

Agreed ... and this gets into subtle points.  Which aspects of the
language model
need to be adapted while mining, and which can remain fixed?  Answering this
question the right way may make all the difference in terms of the viability of
the approach...

ben



Re: [agi] would anyone want to use a commonsense KB?

2008-02-28 Thread Ben Goertzel
Hi,

> I think Ben's text mining approach has one big flaw:  it can only reason
> about existing knowledge, but cannot generate new ideas using words /
> concepts.

Text mining is not an AGI approach, it's merely a possible way of getting
knowledge into an AGI.

Whether the AGI can generate new ideas is independent of whether it
gets knowledge via text mining or via some other means...

> I want to stress that AGI needs to be able to think at the
> WORD/CONCEPT level.  In order to do this, we need some rules that *rewrite*
> sentences made up of words, such that the AGI can reason from one sentence
> to another.  Such rewrite rules are very numerous and can be very complex --
> for example rules for auxillary words and prepositions, etc.  I'm not even
> sure that such rules can be expressed in FOL easily -- let alone learn them!

This seems "off" somehow -- I don't think reasoning should be implemented
on the level of linguistic surface forms.

> The embodiment approach provides an environment for learning qualitative
> physics, but it's still different from the common sense domain where
> knowledge is often verbally expressed.

I don't get your point...

Most of common sense is about the world in which we live, as embodied
social organisms...  Embodiment buys you a lot more than qualitative
physics.  It buys you richly shared social experience, among other things.

> In fact, it's not the environment
> that matters, it's the knowledge representation (whether it's expressive
> enough) and the learning algorithm (how sophisticated it is).

I think that all three of these things matter a lot, along with the
overall cognitive
architecture.

-- Ben G



Re: [agi] Solomonoff Induction Question

2008-02-29 Thread Ben Goertzel
I am not so sure that humans use uncomputable models in any useful sense,
when doing calculus.  Rather, it seems that in practice we use
computable subsets
of an in-principle-uncomputable theory...

Oddly enough, one can make statements *about* uncomputability and
uncomputable entities, using only computable operations within a
formal system...

For instance, one can prove that even if x is an uncomputable real number

x - x = 0

But that doesn't mean one has to be able to hold *any* uncomputable number x
in one's brain...

thus is the power of abstraction, and I don't see why AGIs can't have
it just like
humans do...
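
In fact this is easy to make fully formal; e.g. in Lean with Mathlib
(shown just to make the point -- the proof manipulates symbols and never
evaluates x):

import Mathlib

-- Holds for every real x, computable or not; nothing in the proof ever
-- needs the digits of x.
theorem x_sub_x_eq_zero (x : ℝ) : x - x = 0 := by ring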

Ben

On Fri, Feb 29, 2008 at 4:37 PM, Abram Demski <[EMAIL PROTECTED]> wrote:
> I'm an undergrad who's been lurking here for about a year. It seems to me
> that many people on this list take Solomonoff Induction to be the ideal
> learning technique (for unrestricted computational resources). I'm wondering
> what justification there is for the restriction to turing-machine models of
> the universe that Solomonoff Induction uses. Restricting an AI to computable
> models will obviously make it more realistically manageable. However,
> Solomonoff induction needs infinite computational resources, so this clearly
> isn't a justification.
>
> My concern is that humans make models of the world that are not computable;
> in particular, I'm thinking of the way physicists use differential
> equations. Even if physics itself is computable, the fact that humans use
> incomputable models of it remains. Solomonoff Induction itself is an
> incomputable model of intelligence, so an AI that used Solomonoff Induction
> (even if we could get the infinite computational resources needed) could
> never understand its own learning algorithm. This is an odd position for a
> supposedly universal model of intelligence IMHO.
>
> My thinking is that a more-universal theoretical prior would be a prior over
> logically definable models, some of which will be incomputable.
>
> Any thoughts?



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Solomonoff Induction Question

2008-02-29 Thread Ben Goertzel
>  This is a general theorem about *strings* in this formal system, but
>  no such string with uncomputable real number can ever be written, so
>  saying that it's a theorem about uncomputable real numbers is an empty
>  set theory (it's a true statement, but it's true in a trivial
>  "falsehood, therefore Mars is inhabited by little green men" kind of
>  formal sense).

Well, but NO uncomputable number can be written, so which theorems
about uncomputable numbers are NOT empty in the sense you mean?

ben



Re: [agi] AGI Metaphysics and spatial biases.

2008-03-02 Thread Ben Goertzel
> Using informal words, how would you describe the metaphysics or
> biases currently encoded into the Novamente system?
>
> /Robert Wensman

This is a good question, and unfortunately I don't have a
systematic answer.  Biases are encoded in many different
aspects of the design, e.g.

-- the knowledge representation

-- the heuristics within the inference rules (e.g. for temporal
and spatial inference)

-- the set of predicates and procedures provided as primitives for
automated program learning

-- various specializations in the architecture (e.g. the use of
specialized SpaceServer and TimeServer objects to allow efficient
indexing of entities by space and time)

and we haven't made an effort to go through and systematize the
conceptual biases implicit in the detailed design of all the different
parts of the system, although there are plenty of important biases
there

sorry for the unsatisfying answer but it would take me a couple days
of analysis to give you a real answer, and other priorities beckon...

-- Ben G



Re: [agi] would anyone want to use a commonsense KB?

2008-03-03 Thread Ben Goertzel
> Sure, AGI needs to handle NL in an open-ended way.  But the question is
> whether the internal knowledge representation of the AGI needs to allow
> ambiguities, or should we use an ambiguity-free representation.  It seems
> that the latter choice is better.  Otherwise, the knowledge stored in
> episodic memory would be open to interpretation and may lead to errors in
> recall, and similar problems.

Rather, I think the right goal is to create an AGI that, in each
context, can be as ambiguous as it wants/needs to be in its
representation of a given piece of information.

Ambiguity allows compactness, and can be very valuable in this regard.

Guidance on this issue is provided by the Lojban language.  Lojban
allows extremely precise expression, but also allows ambiguity as
desired.  What one finds when speaking Lojban is that sometimes one
chooses ambiguity because it lets one make one's utterances shorter.  I
think the same thing holds in terms of an AGI's memory.  An AGI with
finite memory resources must sometimes choose to represent relatively
unimportant information ambiguously rather than precisely so as to
conserve memory.

For instance, storing the information

"A is associated with B"

is highly ambiguous, but takes little memory.  Storing logical
information regarding the precise relationship between A and B may
take one or more orders of magnitude more information.
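
A crude illustration (hypothetical records and relations, with record
length as a stand-in for memory cost): the ambiguous form is one small
tuple, while pinning down the actual relationship takes many:

# one compact, ambiguous record...
ambiguous = ("assoc", "A", "B", 0.7)

# ...versus a hypothetical precise characterization of how A relates to B
precise = [
    ("co_occurs_in_context", "A", "B", "work_context", 0.9),
    ("implies", ("property", "A", "p1"), ("property", "B", "p2"), 0.8),
    ("causal", "A", "B", "mechanism_m1", 0.6),
    # ...potentially dozens more links to remove the ambiguity
]

# crude size proxy: length of the serialized records
print(len(str(ambiguous)), sum(len(str(r)) for r in precise))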

-- Ben



[agi] AGI-08 in the news...

2008-03-05 Thread Ben Goertzel
http://www.memphisdailynews.com/Editorial/StoryLead.aspx?id=101671



[agi] Brief report on AGI-08

2008-03-08 Thread Ben Goertzel
any AI academics to
come to a mildly out-of-the-mainstream conference on AGI.  Society,
including the society of scientists, is starting to wake up to the
notion that, given modern technology and science, human-level AGI is
no longer a pipe dream but a potential near-term reality.  w00t!  Of
course there is a long way to go in terms of getting this kind of work
taken as seriously as it should be, but at least things seem to be
going in the right direction.

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] What should we do to be prepared?

2008-03-09 Thread Ben Goertzel
Agree... I have not followed this discussion in detail, but if you have
a concrete proposal written up somewhere in a reasonably compact
format, I'll read it and comment

-- Ben G

On Sun, Mar 9, 2008 at 1:48 PM, Tim Freeman <[EMAIL PROTECTED]> wrote:
> From: "Mark Waser" <[EMAIL PROTECTED]>:
>
> >Hmm.  Bummer.  No new feedback.  I wonder if a) I'm still in "Well
>  >duh" land, b) I'm so totally off the mark that I'm not even worth
>  >replying to, or c) I'm being given enough rope to hang myself.
>  >:-)
>
>  I'll read the paper if you post a URL to the finished version, and I
>  somehow get the URL.  I don't want to sort out the pieces from the
>  stream of AGI emails, and I don't want to try to provide feedback on
>  part of a paper.
>
>  --
>  Tim Freeman   http://www.fungible.com   [EMAIL PROTECTED]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Some thoughts of an AGI designer

2008-03-10 Thread Ben Goertzel
>  The three most common of these assumptions are:
>
>1) That it will have the same motivations as humans, but with a
>  tendency toward the worst that we show.
>
>2) That it will have some kind of "Gotta Optimize My Utility
>  Function" motivation.
>
>3) That it will have an intrinsic urge to increase the power of its
>  own computational machinery.
>
>  There are other assumptions, but these seem to be the big three.

And IMO, the truth is likely to be more complex...

For instance,  a Novamente-based AGI will have an explicit utility
function, but only a percentage of the system's activity will be directly
oriented toward fulfilling this utility function

Some of the system's activity will be "spontaneous" ... i.e. only
implicitly goal-oriented ... and as such may involve some imitation
of human motivation, and plenty of radically non-human stuff...
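
Loosely (an illustrative caricature of that mix, not the actual NM
scheduler), think of something like:

import random

def choose_next_activity(utility_tasks, spontaneous_tasks, p_goal_directed=0.7):
    # most cycles serve the explicit utility function; the rest go to
    # "spontaneous", only implicitly goal-oriented activity
    if random.random() < p_goal_directed:
        return max(utility_tasks, key=lambda t: t[1])  # (name, expected_utility)
    return random.choice(spontaneous_tasks)

print(choose_next_activity([("plan", 0.8), ("verify", 0.5)],
                           [("daydream",), ("play",)]))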

ben g



Re: [agi] Recap/Summary/Thesis Statement

2008-03-11 Thread Ben Goertzel
>  > An attractor is a set of states that are repeated given enough time.  If
>  >  agents are killed and not replaced, you can't return to the current state.
>
>  False. There are certainly attractors that disappear, first
>  seen by Ruelle and Takens (1971); it's called a "blue sky catastrophe"
>
>  http://www.scholarpedia.org/article/Blue-sky_catastrophe

Relatedly, you should look at Mikhail Zak's work on "terminal attractors",
which occurred in the context of neural nets as I recall

These are attractors which a system zooms into for a while, then after a period
of staying in them, it zooms out of them.  They occur when the differential
equation generating the dynamical system displaying the attractor involves
functions with points of nondifferentiability.
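
For intuition, a standard textbook example of such a non-Lipschitz
right-hand side (not Zak's actual equations) is dx/dt = -x^(1/3), which
reaches x = 0 in *finite* time t* = (3/2) x0^(2/3), unlike dx/dt = -x,
which only decays asymptotically:

# Euler integration of dx/dt = -sign(x) * |x|^(1/3): the origin is reached
# in finite time because the right-hand side is non-Lipschitz at x = 0
x, t, dt = 1.0, 0.0, 1e-4
while abs(x) > 1e-6:
    x -= dt * abs(x) ** (1.0 / 3.0) * (1 if x > 0 else -1)
    t += dt
print(t)  # ~1.5, the analytic arrival time (3/2) * 1.0**(2/3)

(This shows only the finite-time-convergence aspect; the zooming back out
needs additional structure in the dynamics.)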

Of course, you may be specifically NOT looking for this kind of attractor,
in your Friendly AI theory ;-)

-- Ben



Re: [agi] Re: Your mail to [EMAIL PROTECTED]

2008-03-11 Thread Ben Goertzel
I tried to fix the problem, let me know if it worked...

ben



On Tue, Mar 11, 2008 at 12:02 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> Ben,
>
> Can we boot alien off the list?  I'm getting awfully tired of his
>  auto-reply emailing me directly *every* time I post.  It is my contention
>  that this is UnFriendly behavior (wasting my resources without furthering
>  any true goal of his) and should not be accepted.
>
> Mark
>
>  - Original Message -
>  From: <[EMAIL PROTECTED]>
>  To: <[EMAIL PROTECTED]>
>  Sent: Tuesday, March 11, 2008 11:56 AM
>  Subject: Re: Your mail to [EMAIL PROTECTED]
>
>
>  > Thank you for contacting Alienshift.
>  > We will respond to your Mail in due time.
>  >
>  > Please feel free to send positive thoughts in return back to the Universe.
>  > [EMAIL PROTECTED]
>  >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] NewScientist piece on AGI-08

2008-03-11 Thread Ben Goertzel
I wonder if this is the only article she'll write on the conference?
Perhaps she'll release other posts on other bits and pieces of
research she liked there...

I tend to agree with John Laird's comments about Selmer's Eddie avatar:

***
John Laird, a researcher in computer games and Artificial Intelligence
(AI) at the University of Michigan in Ann Arbor, is not overly
impressed. "It's not that challenging to get an AI system to do theory
of mind," he says.

He points out that last year, Cynthia Breazeal of the Massachusetts
Institute of Technology's Media Lab programmed that ability into a
physical robot called Leonardo. A video shows the robot passing the
test.

A more impressive demonstration, says Laird, would be a character,
initially unable to pass the test, that learned how to do so – just as
humans do.
***

What Selmer's team did was a pretty straightforward bit of logic
programming, IMO ... and it really didn't benefit from the virtual
embodiment at all.  Not that I'm calling it trivial or stupid or
anything ... it's a reasonably nice piece of work... but I really
doubt these simple logical rules encapsulate anything remotely
resembling a human child's "theory of mind."

-- Ben

On Tue, Mar 11, 2008 at 9:45 PM, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> Many of us there met Celeste Biever, the NS correspondent. Her piece is now
>  up:
>  
> http://technology.newscientist.com/channel/tech/dn13446-virtual-child-passes-mental-milestone-.html
>
>  Josh



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi]

2008-03-13 Thread Ben Goertzel
I know Selmer and his group pretty well...

It is well done stuff, but it is purely hard-coded-knowledge-based
logical inference --
there is no real learning there...

It's not so hard to get impressive-looking functionality in toy demo
tasks, by hard-coding rules and using a decent logic engine

Others have failed at this, so his achievement is worthwhile and means his logic
engine and formalism are better than most ... but still ... IMO, this
is not a very likely path to AGI ...
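
To make the binary-vs-probabilistic distinction in Ed's question below
concrete, here is a minimal sketch -- invented numbers, and in no way RPI's
actual engine -- of crisp versus graded chaining of A->B and B->C:

    public class DeductionSketch {
        // crisp chaining: A->B and B->C entail A->C, full stop
        static boolean crisp(boolean aImpliesB, boolean bImpliesC) {
            return aImpliesB && bImpliesC;
        }

        // probabilistic chaining, under a conditional-independence
        // assumption: P(C|A) = P(C|B)P(B|A) + P(C|~B)(1 - P(B|A))
        static double prob(double pBgivenA, double pCgivenB, double pCgivenNotB) {
            return pCgivenB * pBgivenA + pCgivenNotB * (1.0 - pBgivenA);
        }

        public static void main(String[] args) {
            System.out.println(crisp(true, true));   // true
            System.out.println(prob(0.9, 0.8, 0.3)); // 0.75 -- graded, not binary
        }
    }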

-- Ben

On Thu, Mar 13, 2008 at 10:30 AM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Here is an article about RPI's attempt to pass a slightly modified version
>  of the Turing test using supercomputers to power their "Rascals" AI
>  algorithm.
>
>  http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=206903246&pri
>  ntable=true
>
>  The one thing I didn't understand was that they said their "Rascals" AI
>  algorithm used a theorem-proving architecture.  I would assume that that
>  would mean it was based on binary logic, and thus would not be sufficiently
>  flexible to model many human thought processes, which are almost certainly
>  more neural net-like, and thus much more probabilistic.
>
>  Does anybody have any opinions on that?
>
>  Ed Porter
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi]

2008-03-13 Thread Ben Goertzel
Unless the details of that modified Turing Test are somehow profoundly
flawed, then, yes...

ben

On Thu, Mar 13, 2008 at 12:28 PM, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> So Ben, based on what you are saying, you fully expect them to fail their
> Turing test?
>
> Eric B. Ramsay
>
>
> Ben Goertzel <[EMAIL PROTECTED]> wrote:
>  I know Selmer and his group pretty well...
>
> It is well done stuff, but it is purely hard-coded-knowledge-based
> logical inference --
> there is no real learning there...
>
> It's not so hard to get impressive-looking functionality in toy demo
> tasks, by hard-
> coding rules and using a decent logic engine
>
> Others have failed at this, so his achievement is worthwhile and means his
> logic
> engine and formalism are better than most ... but still ... IMO, this
> is not a very likely
> path to AGI ...
>
> -- Ben
>
> On Thu, Mar 13, 2008 at 10:30 AM, Ed Porter wrote:
> > Here is an article about RPI's attempt to pass a slightly modified version
> > of the Turing test using supercomputers to power their "Rascals" AI
> > algorithm.
> >
> >
> http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=206903246&pri
> > ntable=true
> >
> > The one thing I didn't understand was that they said their "Rascals" AI
> > algorithm used a theorem-proving architecture. I would assume that that
> > would mean it was based on binary logic, and thus would not be sufficiently
> > flexible to model many human thought processes, which are almost certainly
> > more neural net-like, and thus much more probabilistic.
> >
> > Does anybody have any opinions on that?
> >
> > Ed Porter
> >
> > ---
> > agi
> > Archives: http://www.listbox.com/member/archive/303/=now
> > RSS Feed: http://www.listbox.com/member/archive/rss/303/
> > Modify Your Subscription: http://www.listbox.com/member/?&;
>
> > Powered by Listbox: http://www.listbox.com
> >
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "If men cease to believe that they will one day become gods then they
> will surely become worms."
> -- Henry Miller
>
> -------
> agi
> Archives: http://www.listbox.com/member/archive/303/=now
> RSS Feed: http://www.listbox.com/member/archive/rss/303/
> Modify Your Subscription: http://www.listbox.com/member/?&;
>
> Powered by Listbox: http://www.listbox.com
>
>  
>
>  agi | Archives | Modify Your Subscription



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


Re: [agi]

2008-03-13 Thread Ben Goertzel
If the test is defined to refer ONLY to conversations about
a sufficiently narrow domain of objects in
a toy virtual world ... and they encode enough knowledge ... then maybe they
could brute-force past the test... after all there is not that much to
say about
a desk, a table, a lamp and a box ... or whatever the set of objects in the toy
world may be...

This is the danger of toy test environments, be they in virtual worlds or
physical robotics...
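
To caricature the risk: with only a handful of objects, even a canned lookup
table -- the deliberately dumb, hypothetical sketch below, nothing like the
RPI system -- covers much of the question space, and its shallowness only
shows once you step outside the toy domain:

    import java.util.Map;

    public class ToyWorldSketch {
        static final Map<String, String> CANNED = Map.of(
            "where is the lamp", "On the desk.",
            "what is on the table", "A box.",
            "can the box fit inside the desk", "No, it is too large.");

        static String answer(String q) {
            // coverage gaps appear instantly outside the toy domain
            return CANNED.getOrDefault(q.toLowerCase().trim(), "I don't know.");
        }

        public static void main(String[] args) {
            System.out.println(answer("Where is the lamp"));   // On the desk.
            System.out.println(answer("Why do people dream")); // I don't know.
        }
    }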

ben g

On Thu, Mar 13, 2008 at 12:35 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Unless the details of that modified Turing Test are somehow profoundly
>  flawed, then, yes...
>
>  ben
>
>
>
>  On Thu, Mar 13, 2008 at 12:28 PM, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
>  > So Ben, based on what you are saying, you fully expect them to fail their
>  > Turing test?
>  >
>  > Eric B. Ramsay
>  >
>  >
>  > Ben Goertzel <[EMAIL PROTECTED]> wrote:
>  >  I know Selmer and his group pretty well...
>  >
>  > It is well done stuff, but it is purely hard-coded-knowledge-based
>  > logical inference --
>  > there is no real learning there...
>  >
>  > It's not so hard to get impressive-looking functionality in toy demo
>  > tasks, by hard-
>  > coding rules and using a decent logic engine
>  >
>  > Others have failed at this, so his achievement is worthwhile and means his
>  > logic
>  > engine and formalism are better than most ... but still ... IMO, this
>  > is not a very likely
>  > path to AGI ...
>  >
>  > -- Ben
>  >
>  > On Thu, Mar 13, 2008 at 10:30 AM, Ed Porter wrote:
>  > > Here is an article about RPI's attempt to pass a slightly modified version
>  > > of the Turing test using supercomputers to power their "Rascals" AI
>  > > algorithm.
>  > >
>  > >
>  > 
> http://www.eetimes.com/news/latest/showArticle.jhtml?articleID=206903246&pri
>  > > ntable=true
>  > >
>  > > The one thing I didn't understand was that they said their "Rascals" AI
>  > > algorithm used a theorem-proving architecture. I would assume that that
>  > > would mean it was based on binary logic, and thus would not be sufficiently
>  > > flexible to model many human thought processes, which are almost certainly
>  > > more neural net-like, and thus much more probabilistic.
>  > >
>  > > Does anybody have any opinions on that?
>  > >
>  > > Ed Porter
>  > >
>  > > ---
>  > > agi
>  > > Archives: http://www.listbox.com/member/archive/303/=now
>  > > RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  > > Modify Your Subscription: http://www.listbox.com/member/?&;
>  >
>  > > Powered by Listbox: http://www.listbox.com
>  > >
>  >
>  >
>  >
>  > --
>  > Ben Goertzel, PhD
>  > CEO, Novamente LLC and Biomind LLC
>  > Director of Research, SIAI
>  > [EMAIL PROTECTED]
>  >
>  > "If men cease to believe that they will one day become gods then they
>  > will surely become worms."
>  > -- Henry Miller
>  >
>  > ---
>  > agi
>  > Archives: http://www.listbox.com/member/archive/303/=now
>  > RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  > Modify Your Subscription: http://www.listbox.com/member/?&;
>  >
>  > Powered by Listbox: http://www.listbox.com
>  >
>  >  
>  >
>  >  agi | Archives | Modify Your Subscription
>
>
>
>  --
>  Ben Goertzel, PhD
>  CEO, Novamente LLC and Biomind LLC
>  Director of Research, SIAI
>  [EMAIL PROTECTED]
>
>  "If men cease to believe that they will one day become gods then they
>  will surely become worms."
>  -- Henry Miller
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=95818715-a78a9b
Powered by Listbox: http://www.listbox.com


[agi] Seeking student programmers for summer 2008: OpenCog meets Google Summer of Code

2008-03-22 Thread Ben Goertzel
Hi all,

Sorry for the short notice, but I was out of town last week with limited email
access...

The Singularity Institute for AI was accepted as a mentoring organization for
Google's 2008 Summer of Code project, with a focus on the OpenCog
open-source AGI project (www.opencog.org).  See

http://code.google.com/soc/2008/siai/about.html

What this means is that programmers who want to spend Summer 2008
working on open-source AI code within the OpenCog framework, and get paid
$5000 by Google for this, can submit proposals for OpenCog projects,
within the GSOC website.

Student programmers have the interval between March 24 and March 31 to
submit proposals, then accepted proposals will be announced on the GSOC
website on April 11.

If you have a particular proposal idea you'd like to discuss, best option
is to post it on the OpenCog Google Group mailing list (find info on
opencog.org).

Some proposal ideas are found here

http://opencog.org/wiki/Ideas

but we're quite open to other suggestions as well, in the freewheeling spirit
of GSOC...

Thanks
Ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] Microsoft Launches Singularity

2008-03-24 Thread Ben Goertzel
 http://www.codeplex.com/singularity

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
> Now, let me ask you a question:  Do you believe that all AI / AGI
> researchers are toiling over all this for the challenge, or purely out of
> interest?  I doubt that as well.  Surely there are those elements as drivers
> - BUT SO IS MONEY.

Aki, you don't seem to understand the psychology of the
AGI researcher very well.

Firstly, academic AGI researchers are not in it for the $$, and are unlikely
to profit from their creations no matter how successful.  Yes, spinoffs from
academia to industry exist, but the point is that academic work is motivated
by love of science and desire for STATUS more so than desire for money.

Next, Singularitarian AGI researchers, even if in the business domain (like
myself), value the creation of AGI far more than the obtaining of material
profits.

I am very interested in deriving $$ from incremental steps on the path to
powerful AGI, because I think this is one of the better methods available
for funding AGI R&D work.

But deriving $$ from human-level AGI really is not a big motivator of
mine.  To me, once human-level AGI is obtained, we have something of
dramatically greater interest than the accumulation of any amount of wealth.

Yes, I assume that if I succeed in creating a human-level AGI, then huge
amounts of $$ for research will come my way, along with enough personal $$ to
liberate me from needing to manage software development contracts
or mop my own floor.  That will be very nice.  But that's just not the point.

I'm envisioning a population of cockroaches constantly fighting over
crumbs of food on the floor.  Then a few of the cockroaches -- let's
call them the Cockroach Robot Club --  decide to
spend their lives focused on creating a superhuman robot which will
incidentally allow cockroaches to upload into superhuman form with
superhuman intelligence.  And the other cockroaches insist that
the Cockroach Robot Club's
motivation in doing this must be a desire
to get more crumbs of food.  After all,
just **IMAGINE** how many crumbs of food you'll be able to get with
that superhuman robot on your side!!!  Buckets full of crumbs!!!  ;-)

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
Hi Aki,

> Even as a pure scientist, you can
> accomplish more in research by producing wealth, than depending on gov't
> grants.  I say gov't grants because private investment is probably years
> away from now.  The topic of financing got a lot of attention at AGI 08.
>

Well, if you're an AGI researcher and believe that government funding isn't
going to push AGI forward ... and that unfunded or lightly-funded
open-source initiatives like
OpenCog won't work either ... then  there are two approaches, right?

1)
You can try to do like Jeff Hawkins, and make a pile of $$ doing something
AGI-unrelated, and then use the ensuing $$ for AGI

2)
You can try to make $$ from stuff that's along the incremental path to AGI


I'm trying approach 2  but it has its pitfalls.  Yet so of course does
approach 1 --
Hawkins succeeded and so have others whom I know, but it's a tiny minority
of those who have tried... being a great AGI researcher does not necessarily
make you great at business, nor even at narrow-AI biz applications...

There are no easy answers to the problem of being "ahead of your time" ...
yet it's those of us who are willing to push ahead in spite of being
out of synch
with society's priorities, that ultimately shift society's priorities
(and in this case,
may shift way more than that...)

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Novamente study

2008-03-25 Thread Ben Goertzel
Hi,

The PLN book should be out by that date ... I'm currently putting in
some final edits to the manuscript...

Also, in April and May I'll be working on a lot of documentation
regarding plans for OpenCog.  While this doesn't include all
Novamente's proprietary stuff, it will certainly tell you enough to
give you a way better understanding of what Novamente, as well as
OpenCog, is all about...

-- Ben

On Tue, Mar 25, 2008 at 1:28 PM, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
> Ben,
>
>  It seems to me that Novamente is widely considered the most promising and
> advanced AGI effort around (at least of the ones one can get any detailed
> technical information about), so I've been planning to put some significant
> effort into understanding it with a view toward deciding whether I think
> you're on the right track or not (with as little hand-waving, faith, or
> bigotry as possible in my conclusion).  To do that properly, I am waiting
> for your book on Probabilistic Logic Networks to be published.  Amazon says
> July 2008... is that date correct?
>
>  Thanks!
>
>  ________
>
>  agi | Archives | Modify Your Subscription



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Hi all,

A lot of students email me asking me what to read to get up to speed on AGI.

So I started a wiki page called "Instead of an AGI Textbook",

http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics

Unfortunately I did not yet find time to do much but outline a table
of contents there.

So I'm hoping some of you can chip in and fill in some relevant
hyperlinks on the pages
I've created ;-)

For those of you too lazy to click the above link, here is the
introductory note I put on the wiki page:




I've often lamented the fact that there is no advanced undergrad level
textbook for AGI, analogous to what Russell and Norvig is for Narrow
AI.

Unfortunately, I don't have time to write such a textbook, and no one
else with the requisite knowledge and ability seems to have the time
and inclination either.

So, instead of a textbook, I thought it would make sense to outline
here what the table of contents of such a textbook might look like,
and to fill in each section within each chapter in this TOC with a few
links to available online resources dealing with the topic of the
section.

However, all I found time to do today (March 25, 2008) is make the
TOC. Maybe later I will fill in the links on each section's page, or
maybe by the time I get around it some other folks will have done it.

While nowhere near as good as a textbook, I do think this can be a
valuable resource for those wanting to get up to speed on AGI concepts
and not knowing where to turn to get started. There are some available
AGI bibliographies, but a structured bibliography like this can
probably be more useful than an unstructured and heterogeneous one.

Naturally my initial TOC represents some of my own biases, but I trust
that by having others help edit it, these biases will ultimately "come
out in the wash."

Just to be clear: the idea here is not to present solely AGI material.
Rather the idea is to present material that I think students would do
well to know, if they want to work on AGI. This includes some AGI,
some narrow AI, some psychology, some neuroscience, some mathematics,
etc.

***


-- Ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Java spreading activation library released

2008-03-25 Thread Ben Goertzel
Hi Stephen,

I think this approach makes sense.

In Novamente/OpenCog, we don't use spreading activation, but we use an
"economic attention allocation" mechanism that is similar in spirit
(though subtly
different in dynamics).

The motivation is similar: You just can't use complex, abstract
reasoning methods
for everything, because they're too expensive.  So this sort of simple
heuristic approach
is useful in many cases, as an augmentation to more precise methods.
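
For the curious, the core mechanism is simple enough to sketch in a few
lines -- a single-threaded toy with invented names, nothing like the tuned
implementations in Stephen's library or in Novamente: activation spreads
outward from a source node for a fixed number of pulses, decaying by the
link weight at each hop.

    import java.util.*;

    public class SpreadSketch {
        // adjacency: node -> (neighbor -> link weight)
        static final Map<String, Map<String, Double>> links = new HashMap<>();

        static void link(String a, String b, double w) {
            links.computeIfAbsent(a, k -> new HashMap<>()).put(b, w);
        }

        static Map<String, Double> spread(String source, int pulses, double threshold) {
            Map<String, Double> act = new HashMap<>();
            act.put(source, 1.0);
            for (int i = 0; i < pulses; i++) {
                Map<String, Double> next = new HashMap<>(act);
                for (Map.Entry<String, Double> e : act.entrySet()) {
                    Map<String, Double> out = links.getOrDefault(e.getKey(), Map.of());
                    for (Map.Entry<String, Double> l : out.entrySet()) {
                        double incoming = e.getValue() * l.getValue();
                        if (incoming >= threshold)          // prune weak activation
                            next.merge(l.getKey(), incoming, Math::max);
                    }
                }
                act = next;
            }
            return act;
        }

        public static void main(String[] args) {
            link("room", "table", 0.8);
            link("table", "furniture-sense", 0.9);
            link("table", "negotiation-sense", 0.2);
            // "furniture-sense" ends up far more active than "negotiation-sense"
            System.out.println(spread("room", 2, 0.1));
        }
    }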

-- Ben

On Tue, Mar 25, 2008 at 7:53 PM, Stephen Reed <[EMAIL PROTECTED]> wrote:
>
> While programming my bootstrap English dialog system, I needed a spreading
> activation library for the purpose of enriching the discourse context with
> conceptually related terms.  For example given that there is a
> human-habitable room that both speakers know of, then it is reasonable to
> assume that "on the table" has meaning "on the piece of furniture" in the
> room rather than the meaning "subject to negotiation".  This assumption can
> be deductively concluded by an inference engine given the room as a fact,
> and rules concluding the typical objects that are found in rooms.  But
> performing theorem proving during utterance comprehension is not cognitively
> plausible, and would take too long for real-time performance.   Suppose that
> offline deductive inference provides justifications (e.g. proof traces) to
> support learned links between rooms and tables, then spreading activation is
> a well known algorithm for searching semantic graphs for relevant linked
> nodes.
>
> A literature search provided much useful information regarding spreading
> activation, also known as marker passing, especially about natural language
> disambiguation, which is my topic of interest.  Because there are no general
> purpose spreading activation Java libraries available, I wrote one and just
> released it on the Texai SourceForge project site.  The download includes
> Javadoc, an overview document, source code, all required jars (Java
> libraries), unit tests and examples, and GraphViz illustrations of sample
> graphs.  Performance is acceptable: 20,000 nodes can be activated in 24 ms
> with one thread on my 2.8 GHz CPU.  Furthermore the code is multi-threaded
> and it gets about a 30% speed increase by using two CPU cores.  Even if you
> are not interested in spreading activation, the Java code is a clear example
> of using a CyclicBarrier and CountdownLatch to control worker threads with a
> driver.
>
> A practice I recommend to you all is to improve Wikipedia articles on AI
> topics of interest.  Therefore I elaborated the existing article on
> spreading activation to include the algorithm and its variations.
>
> Cheers.
> -Steve
>  Stephen L. Reed
>
> Artificial Intelligence Researcher
> http://texai.org/blog
> http://texai.org
> 3008 Oak Crest Ave.
> Austin, Texas, USA 78704
> 512.791.7860
>
>
>
>  agi | Archives | Modify Your Subscription
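
For readers unfamiliar with the java.util.concurrent pattern Stephen
mentions above, here is a stripped-down sketch of that driver/worker
coordination -- schematic only, not his actual code: a CyclicBarrier
re-synchronizes the workers between pulses, and a CountDownLatch tells the
driver when every worker has finished.

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.CyclicBarrier;

    public class DriverWorkerSketch {
        public static void main(String[] args) throws InterruptedException {
            int workers = 2, pulses = 3;
            CyclicBarrier pulseBarrier = new CyclicBarrier(workers);
            CountDownLatch done = new CountDownLatch(workers);
            for (int i = 0; i < workers; i++) {
                final int id = i;
                new Thread(() -> {
                    try {
                        for (int p = 0; p < pulses; p++) {
                            System.out.println("worker " + id + ", pulse " + p);
                            pulseBarrier.await(); // wait for the other workers
                        }
                    } catch (Exception e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        done.countDown(); // signal completion to the driver
                    }
                }).start();
            }
            done.await(); // driver blocks until every worker is finished
            System.out.println("driver: all pulses complete");
        }
    }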



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Richard,

>  Unfortunately I cannot bring myself to believe this will help anyone new
>  to the area.
>
>  The main reason is that this is only a miscellaneous list of topics,
>  with nothing to indicate a comprehensive theory or a unifying structure.
>   I do not ask for a complete unified theory, of course, but something
>  more than just a collection of techniques is needed if this is to be a
>  "textbook".
>


I have my own comprehensive theory and unifying structure for AGI...

Pei has his...

You have yours...

Stan Franklin has his...

Etc.

These have been published with varying levels of detail in various
places ... I'll be publishing more of mine this year, in the PLN book, and
then in the OpenCog documentation and plans ... but many of the
conceptual aspects of my approach were already mentioned in
The Hidden Pattern

My goal in "Instead of an AGI Textbook" is **not** to present anyone's
unifying theory (not even my own) but rather to give pointers to
**what information a student should learn, in order to digest the various
unifying theories being proposed**.

To put it another way: Aside from a strong undergrad background in CS
and good programming skills, what would I like someone to know about
in order for them to work on Novamente or OpenCog or
some other vaguely similar AI project?

Not everything in my suggested TOC is actually used in Novamente or OpenCog...
but even the stuff that isn't, is interesting to know about if you're
going to work
on these things, just to have a general awareness of the various approaches
that have been taken to these problems...

>  A second reason for being skeptical is that there is virtually no
>  cognitive psychology in this list - just a smattering of odd topics.

Yes, that's a fair point -- that's a shortcoming of the draft TOC as I
posted it.

Please feel free to add some additional, relevant cog psych topics
to the page ;-)

-- Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
On Tue, Mar 25, 2008 at 9:39 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Richard,
>
>
>  >  Unfortunately I cannot bring myself to believe this will help anyone new
>  >  to the area.
>  >
>  >  The main reason is that this is only a miscellaneous list of topics,
>  >  with nothing to indicate a comprehensive theory or a unifying structure.

Actually it's not a haphazardly assembled miscellaneous list of topics
... it was
assembled with a purpose and structure in mind...

Specifically, I was thinking of OpenCog, and what it would be good for someone
to know in order to have a relatively full grasp of the OpenCog design.

As such, the topic list may contain stuff that is not relevant to your
AGI design,
and also may miss stuff that is critical to your AGI design...

But the "non textbook" is NOT intended as a presentation of OpenCog or any
other specific AGI theory or framework.  Rather, it is indeed,
largely, a grab bag
of relevant prerequisite information ... along with some information on specific
AGI approaches...

One problem I've found is that the traditional undergrad CS or AI education does
not actually give all the prerequisites for really grasping AGI
theories ... often
topics are touched in a particularly non-AGI-ish way ... for instance,
neural nets
are touched but complex dynamics in NN's are skipped ... Bayes nets are touched
but issues involving combining probability with more complex logic operations
are skipped ... neurons are discussed but theories of holistic brain function
are skipped ... etc.   The most AGI-relevant stuff always seems to get
skipped for
lack of time...!

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
>  I'll try to find the time to provide my list --- at this moment, it
>  will be more like a reading list than a textbook TOC.

That would be great -- however I may integrate your reading
list into my TOC ... as I really think there is value in a structured
and categorized reading list rather than just a list...

I know every researcher will have their own foci, but I'm going
to try to unify different researchers' suggestions into a single
TOC with a sensible organization, because I would like to cut
through the confusion faced by students starting out in this
field of research...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Yeah, the AGIRI wiki has been there for years ... the hard thing is
getting people
to contribute to it (and I myself rarely find the time...)

But if others don't chip in, I'll complete my little non-textbook
myself sometime w/in
the next month ...

-- Ben

On Tue, Mar 25, 2008 at 10:52 PM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
> Ok - that was silly of me.  After visiting the link (which was after I sent
> the email), I noticed that is WAS a Wiki.
>
> My apologies.
>
> ~Aki
>
>
>
>
> On Tue, Mar 25, 2008 at 9:47 PM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
>
> > Thanks Ben.  AGI is a daunting field to say the least.  Many scientific
> domains are involved in various degrees.  I am very happy to see  something
> like this, because knowing where to start is not so obvious for the
> beginner.  I actually recently purchased Artificial Intelligence: A Modern
> Approach - but only because I did not know where else to start.  I have the
> programming down - but, like most others, I don't know *what* to program.
> >
> > I really hope that others will contribute to your TOC.  In fact, I am
> willing to put up and host an "AGI Wiki" if this community would find it of
> use.  I'd need a few weeks - because I don't have the time right now - but
> it is a worthwhile endeavor, and I'm happy to do it.
> >
> > ~Aki
> >
> >
> >
> >
> >
> > On Tue, Mar 25, 2008 at 6:46 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >
> >
> >
> >
> > > Hi all,
> > >
> > > A lot of students email me asking me what to read to get up to speed on
> AGI.
> > >
> > > So I started a wiki page called "Instead of an AGI Textbook",
> > >
> > >
> http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
> > >
> > > Unfortunately I did not yet find time to do much but outline a table
> > > of contents there.
> > >
> > > So I'm hoping some of you can chip in and fill in some relevant
> > > hyperlinks on the pages
> > > I've created ;-)
> > >
> > > For those of you too lazy to click the above link, here is the
> > > introductory note I put on the wiki page:
> > >
> > >
> > > 
> > >
> > > I've often lamented the fact that there is no advanced undergrad level
> > > textbook for AGI, analogous to what Russell and Norvig is for Narrow
> > > AI.
> > >
> > > Unfortunately, I don't have time to write such a textbook, and no one
> > > else with the requisite knowledge and ability seems to have the time
> > > and inclination either.
> > >
> > > So, instead of a textbook, I thought it would make sense to outline
> > > here what the table of contents of such a textbook might look like,
> > > and to fill in each section within each chapter in this TOC with a few
> > > links to available online resources dealing with the topic of the
> > > section.
> > >
> > > However, all I found time to do today (March 25, 2008) is make the
> > > TOC. Maybe later I will fill in the links on each section's page, or
> > > maybe by the time I get around it some other folks will have done it.
> > >
> > > While nowhere near as good as a textbook, I do think this can be a
> > > valuable resource for those wanting to get up to speed on AGI concepts
> > > and not knowing where to turn to get started. There are some available
> > > AGI bibliographies, but a structured bibliography like this can
> > > probably be more useful than an unstructured and heterogeneous one.
> > >
> > > Naturally my initial TOC represents some of my own biases, but I trust
> > > that by having others help edit it, these biases will ultimately "come
> > > out in the wash."
> > >
> > > Just to be clear: the idea here is not to present solely AGI material.
> > > Rather the idea is to present material that I think students would do
> > > well to know, if they want to work on AGI. This includes some AGI,
> > > some narrow AI, some psychology, some neuroscience, some mathematics,
> > > etc.
> > >
> > > ***
> > >
> > >
> > > -- Ben
> > >
> > >
> > > --
> > > Ben Goertzel, PhD
> > > CEO, Novamente LLC and Biomind LLC
> > > Director of Research, SIAI
> > > [EMAIL PROTECTED]
> > >
> > > "If men cease to believe that they will on

Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
>  I actually recently purchased Artificial Intelligence: A Modern
> Approach - but only because I did not know where else to start.

It's a very good book ... if you view it as providing insight into various
component technologies of potential use for AGI ... rather than as saying
very much directly about AGI...

>I have the
> programming down - but, like most others, I don't know *what* to program.

Well I hope to solve that problem in May -- via releasing the initial version
of OpenCog, plus a load of wiki pages indicating stuff that, IMO, if
implemented, tuned and tested, would allow OpenCog to be turned into a
powerful AGI system ;-)

-- Ben




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
On Tue, Mar 25, 2008 at 11:07 PM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
> Thanks Ben.  That is really exciting stuff / news.  I'm loking forward to
> OpenCog.
>
> BTW - is OpenCog mainly in C++ (like Novamente) ?  Or is it translations (to
> Java, or other languages) of concepts so that others can code  and add to it
> more readily and quickly?

yes, the OpenCog core system is C++, though there are some peripheral
code libraries (e.g. the RelEx natural language preprocessor) which are in
Java...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
This kind of diagram would certainly be meaningful, but it would be a
lot of work to put together, even more so than a traditional TOC ...

On Tue, Mar 25, 2008 at 11:02 PM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
> Hi Pei -
>
> What about having a tree like diagram that branches out into either:
> - the different paths / approaches to AGI (for instance: NARS, Novamente,
> and Richard's, etc.), with suggested readings at those leaves
>  - area of study, with suggested readings at those leaves
>
> Or possibly, a Mind Map diagram that shows AGI in the middle, with the
> approaches stemming from it, and then either sub fields, or a reading list
> and / or collection of links (though the links may become outdated, dead).
>
> Point is, would a diagram help "map" the field - which caters to the
> differing approaches, and which helps those wanting to chart a course to
> their own learning/study ?
>
> Thanks,
> ~Aki
>
>
>
>
>  On Tue, Mar 25, 2008 at 9:22 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> > Ben,
> >
> > It is a good start!
> >
> > Of course everyone else will disagree --- like what Richard did and
> > I'm going to do. ;-)
> >
> > I'll try to find the time to provide my list --- at this moment, it
> > will be more like a reading list than a textbook TOC. In the future,
> > it will be integrated into the E-book I'm working on
> > (http://nars.wang.googlepages.com/gti-summary).
> >
> > Compared to yours, mine will contain less math and algorithms, but
> > more psychology and philosophy.
> >
> > I'd like to see what Richard and others want to propose. We shouldn't
> > try to merge them into one wiki page, but several.
> >
> > Pei
> >
> >
> >
> >
> >
> > On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > > Hi all,
> > >
> > >  A lot of students email me asking me what to read to get up to speed on
> AGI.
> > >
> > >  So I started a wiki page called "Instead of an AGI Textbook",
> > >
> > >
> http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
> > >
> > >  Unfortunately I did not yet find time to do much but outline a table
> > >  of contents there.
> > >
> > >  So I'm hoping some of you can chip in and fill in some relevant
> > >  hyperlinks on the pages
> > >  I've created ;-)
> > >
> > >  For those of you too lazy to click the above link, here is the
> > >  introductory note I put on the wiki page:
> > >
> > >
> > >  
> > >
> > >  I've often lamented the fact that there is no advanced undergrad level
> > >  textbook for AGI, analogous to what Russell and Norvig is for Narrow
> > >  AI.
> > >
> > >  Unfortunately, I don't have time to write such a textbook, and no one
> > >  else with the requisite knowledge and ability seems to have the time
> > >  and inclination either.
> > >
> > >  So, instead of a textbook, I thought it would make sense to outline
> > >  here what the table of contents of such a textbook might look like,
> > >  and to fill in each section within each chapter in this TOC with a few
> > >  links to available online resources dealing with the topic of the
> > >  section.
> > >
> > >  However, all I found time to do today (March 25, 2008) is make the
> > >  TOC. Maybe later I will fill in the links on each section's page, or
> > >  maybe by the time I get around it some other folks will have done it.
> > >
> > >  While nowhere near as good as a textbook, I do think this can be a
> > >  valuable resource for those wanting to get up to speed on AGI concepts
> > >  and not knowing where to turn to get started. There are some available
> > >  AGI bibliographies, but a structured bibliography like this can
> > >  probably be more useful than an unstructured and heterogeneous one.
> > >
> > >  Naturally my initial TOC represents some of my own biases, but I trust
> > >  that by having others help edit it, these biases will ultimately "come
> > >  out in the wash."
> > >
> > >  Just to be clear: the idea here is not to present solely AGI material.
> > >  Rather the idea is to present material that I think students would do
> > >  well to know, if they want to work on AGI. This includes some AGI,
> > >  some narrow AI, some psychology, some neuroscience, so

Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Is there some kind of online software that lets a group of people
update a Mind Map
diagram collaboratively, in the manner of a Wiki page?

This would seem critical if a Mind Map is to really be useful for the purpose
you suggest...

-- Ben

On Wed, Mar 26, 2008 at 8:32 AM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
> Well ... I can take a shot at putting a diagram together.  Making Mind Maps
> is one way I learn any kind of material I want.
>
> If the topics in the list(s) are descriptive enough, I can take a shot at
> putting such a diagram together.
>  It'd be less work to correct it than to make one, right?
>
> Hey - whatever helps.  For me, it's a win-win.  It would help me, and it
> would help accomplish what you guys are trying to do.
>
> Let me know,
>  ~Aki
>
>
>
> On Tue, Mar 25, 2008 at 10:40 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > This kind of diagram would certainly be meaningful, but, it would be a
> > lot of work to put together, even more so than a traditional TOC ...
> >
> >
> >
> >
> > On Tue, Mar 25, 2008 at 11:02 PM, Aki Iskandar <[EMAIL PROTECTED]> wrote:
> > > Hi Pei -
> > >
> > > What about having a tree like diagram that branches out into either:
> > > - the different paths / approaches to AGI (for instance: NARS,
> Novamente,
> > > and Richard's, etc.), with suggested readings at those leaves
> > >  - area of study, with suggested readings at those leaves
> > >
> > > Or possibly, a Mind Map diagram that shows AGI in the middle, with the
> > > approaches stemming from it, and then either sub fields, or a reading
> list
> > > and / or collection of links (though the links may become outdated,
> dead).
> > >
> > > Point is, would a diagram help "map" the field - which caters to the
> > > differing approaches, and which helps those wanting to chart a course to
> > > their own learning/study ?
> > >
> > > Thanks,
> > > ~Aki
> > >
> > >
> > >
> > >
> > >  On Tue, Mar 25, 2008 at 9:22 PM, Pei Wang <[EMAIL PROTECTED]>
> wrote:
> > > > Ben,
> > > >
> > > > It is a good start!
> > > >
> > > > Of course everyone else will disagree --- like what Richard did and
> > > > I'm going to do. ;-)
> > > >
> > > > I'll try to find the time to provide my list --- at this moment, it
> > > > will be more like a reading list than a textbook TOC. In the future,
> > > > it will be integrated into the E-book I'm working on
> > > > (http://nars.wang.googlepages.com/gti-summary).
> > > >
> > > > Compared to yours, mine will contain less math and algorithms, but
> > > > more psychology and philosophy.
> > > >
> > > > I'd like to see what Richard and others want to propose. We shouldn't
> > > > try to merge them into one wiki page, but several.
> > > >
> > > > Pei
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel <[EMAIL PROTECTED]>
> wrote:
> > > > > Hi all,
> > > > >
> > > > >  A lot of students email me asking me what to read to get up to
> speed on
> > > AGI.
> > > > >
> > > > >  So I started a wiki page called "Instead of an AGI Textbook",
> > > > >
> > > > >
> > >
> http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
> > > > >
> > > > >  Unfortunately I did not yet find time to do much but outline a
> table
> > > > >  of contents there.
> > > > >
> > > > >  So I'm hoping some of you can chip in and fill in some relevant
> > > > >  hyperlinks on the pages
> > > > >  I've created ;-)
> > > > >
> > > > >  For those of you too lazy to click the above link, here is the
> > > > >  introductory note I put on the wiki page:
> > > > >
> > > > >
> > > > >  
> > > > >
> > > > >  I've often lamented the fact that there is no advanced undergrad
> level
> > > > >  textbook for AGI, analogous to what Russell and Norvig is for
> Narrow
> > > > >  AI.
> > > > >
> > > > >  Unfortunately, I don't have time to write suc

Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Thanks Mark ... let's see how it evolves...

I think the problem is not finding a publisher, but rather, finding
the time to contribute and refine the content

Maybe in a year or two there will be enough good content there that
someone with appropriate time and inclination and skill can shape it
into a textbook

-- Ben

On Wed, Mar 26, 2008 at 9:49 AM, Mark Waser <[EMAIL PROTECTED]> wrote:
> Hi Ben,
>
> I have a publisher who would love to publish the result of the wiki as a
>  textbook if you are willing.
>
> Mark
>
>
>
>  ----- Original Message -
>  From: "Ben Goertzel" <[EMAIL PROTECTED]>
>  To: 
>  Sent: Tuesday, March 25, 2008 7:46 PM
>  Subject: [agi] Instead of an AGI Textbook
>
>
>  > Hi all,
>  >
>  > A lot of students email me asking me what to read to get up to speed on
>  > AGI.
>  >
>  > So I started a wiki page called "Instead of an AGI Textbook",
>  >
>  > 
> http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
>  >
>  > Unfortunately I did not yet find time to do much but outline a table
>  > of contents there.
>  >
>  > So I'm hoping some of you can chip in and fill in some relevant
>  > hyperlinks on the pages
>  > I've created ;-)
>  >
>  > For those of you too lazy to click the above link, here is the
>  > introductory note I put on the wiki page:
>  >
>  >
>  > 
>  >
>  > I've often lamented the fact that there is no advanced undergrad level
>  > textbook for AGI, analogous to what Russell and Norvig is for Narrow
>  > AI.
>  >
>  > Unfortunately, I don't have time to write such a textbook, and no one
>  > else with the requisite knowledge and ability seems to have the time
>  > and inclination either.
>  >
>  > So, instead of a textbook, I thought it would make sense to outline
>  > here what the table of contents of such a textbook might look like,
>  > and to fill in each section within each chapter in this TOC with a few
>  > links to available online resources dealing with the topic of the
>  > section.
>  >
>  > However, all I found time to do today (March 25, 2008) is make the
>  > TOC. Maybe later I will fill in the links on each section's page, or
>  > maybe by the time I get around it some other folks will have done it.
>  >
>  > While nowhere near as good as a textbook, I do think this can be a
>  > valuable resource for those wanting to get up to speed on AGI concepts
>  > and not knowing where to turn to get started. There are some available
>  > AGI bibliographies, but a structured bibliography like this can
>  > probably be more useful than an unstructured and heterogeneous one.
>  >
>  > Naturally my initial TOC represents some of my own biases, but I trust
>  > that by having others help edit it, these biases will ultimately "come
>  > out in the wash."
>  >
>  > Just to be clear: the idea here is not to present solely AGI material.
>  > Rather the idea is to present material that I think students would do
>  > well to know, if they want to work on AGI. This includes some AGI,
>  > some narrow AI, some psychology, some neuroscience, some mathematics,
>  > etc.
>  >
>  > ***
>  >
>  >
>  > -- Ben
>  >
>  >
>  > --
>  > Ben Goertzel, PhD
>  > CEO, Novamente LLC and Biomind LLC
>  > Director of Research, SIAI
>  > [EMAIL PROTECTED]
>  >
>  > "If men cease to believe that they will one day become gods then they
>  > will surely become worms."
>  > -- Henry Miller
>  >
>
> > -------
>  > agi
>  > Archives: http://www.listbox.com/member/archive/303/=now
>  > RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  > Modify Your Subscription:
>  > http://www.listbox.com/member/?&;
>  > Powered by Listbox: http://www.listbox.com
>  >
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>
> Modify Your Subscription: http://www.listbox.com/member/?&;
>
>
> Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Hi Stephen,

> Ben,
> Wikipedia has significant overlap with the topic list on the AGIRI Wiki.  I
> propose for discussion the notion that the AGIRI Wiki be content-compatible
> with Wikipedia along two dimensions:
>
> license - authors agree to the GNU Free Documentation License

I have no problem with that

> editorial standards - Wikipedia says that content should be sourced from one
> or more research papers or textbooks, not just from the personal knowledge
> of the author, or from some web page.

Well, I think it is appropriate that a wiki covering an in-development research
area should contain a mix of sourced and non-sourced content, actually.

In many cases it's the non-sourced content that will be the most
valuable, because
it represents practical knowledge and experience of AGI researchers and
developers, which is too new or raw to have been put into the formal literature
yet.

>I concede in
> advance that most AGIRI Wiki authors will find Wikipedia editorial standards
> burdensome,

To me this is a pretty major point.

The challenge with an AGI wiki right now is to get people to contribute quality
content at all ... so I'm not psyched about, right now at the starting
stage, making
them jump through hoops in order to do so.

>but the benefit would be that content from the AGIRI Wiki can
> be used to create new, or improve existing Wikipedia articles.

That would be the case so long as the license is in place, it doesn't require
everything to be sourced -- appropriate sourcing could always be
introduced at the time
of porting to Wikipedia.

As the author of a load of academic papers, I'm well aware of how
irritating and
time-consuming it is to properly reference sources.  If I have to do
that for text I place on
the AGIRI wiki, I'm not likely to contribute much to it, just like I
don't currently contribute
much to Wikipedia.  I just don't have the time

>And if we
> can agree that on the  easy-to-achieve license, content from Wikipedia, e.g.
> my article on Hierarchical control systems can easily be imported into the
> AGIRI Wiki.

I don't see a problem with the license.

> Wikipedia is important to AGI, not only as an online encyclopedia that
> facilitates almost universal access to AGI related topics, but as a target
> for AI researchers that want to structure the text into a vast knowledge
> base.  Somewhere down the road to self-improvement, an AGI will be reading
> Wikipedia.

Along with the rest of the Web ...  for sure ;-)

-- Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Fair enough, Richard...

Again I'll emphasize that the idea of the "Instead of an AGI Textbook"
is not to teach any particular theory or design for AGI, but rather to convey
background knowledge that is useful for folks who wish to come to grips
with contemporary AGI theories and designs

I have articulated my own "coherent body of thought" regarding AGI as well,
but I consider it to best be presented at the "research treatise" or "research
paper" rather than "textbook" level...

-- Ben G


On Wed, Mar 26, 2008 at 12:55 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
>
>  A propos of the several branches of discussion about AGI textbooks on
>  this thread...
>
>  Knowing what I do about the structure and content of the book I am
>  writing, I cannot imagine it being merged as just a set of branch points
>  from other works, like the one growing from Ben's TOC.
>
>  What I am doing is a coherent body of thought in its own right, with a
>  radically different underlying philosophy, so it really needs to be a
>  standalone project.
>
>
>
>  Richard Loosemore
>
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Fwd: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
 BTW I improved the hierarchical organization of the TOC a bit, to
 remove the impression that it's just a random grab-bag of topics...


 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook

 ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] Instead_of_an_AGI_Textbook Challenge !!

2008-03-26 Thread Ben Goertzel
OK... I just burned an hour inserting more links and content into

http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook

I'm burnt out on it for a while, there's too much other stuff on my plate

However, I have a challenge for y'all

There are something like 400 people subscribed to this list...

If 25 of you spend 30 minutes each, during the next week, adding
relevant content to the non-textbook wiki page ... then at the end
of the week we will have a pretty nice knowledge resource for
newbies to AGI.

And we will probably all learn something from following up each
others' references ...

And then I'll save a lot of time during the next year, because when
someone emails me and asks me what they should read to get
up to speed on the general thinking in the AGI field, I'll just point
them to the non-textbook ;-)

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


[agi] Re: Instead_of_an_AGI_Textbook Challenge !!

2008-03-26 Thread Ben Goertzel
Ah, one more note...

Due to its location on the AGIRI wiki, the Instead_of_an_AGI_Textbook
automatically links into the Mind Ontology

http://www.agiri.org/wiki/Mind_Ontology

that I created in a fit of mania one weekend a couple years ago.

So, just remember that if you decide to add content to the non-textbook,
rather than just links, you can link it into the Mind Ontology, expand
the Mind Ontology, etc.

The idea of the Mind Ontology was to create a unified common vocabulary
for AGI thinkers/researchers...

It didn't really work because almost no one paid attention, but it was a sort
of fun weekend ;-)

-- Ben


On Wed, Mar 26, 2008 at 10:43 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> OK... I just burned an hour inserting more links and content into
>
>  http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook
>
>  I'm burnt out on it for a while, there's too much other stuff on my plate
>
>  However, I have a challenge for y'all
>
>  There are something like 400 people subscribed to this list...
>
>  If 25 of you spend 30 minutes each, during the next week, adding
>  relevant content to the non-textbook wiki page ... then at the end
>  of the week we will have a pretty nice knowledge resource for
>  newbies to AGI.
>
>  And we will probably all learn something from following up each
>  others' references ...
>
>  And then I'll save a lot of time during the next year, because when
>  someone emails me and asks me what they should read to get
>  up to speed on the general thinking in the AGI field, I'll just point
>  them to the non-textbook ;-)
>
>  -- Ben
>
>
>
>
>  --
>  Ben Goertzel, PhD
>  CEO, Novamente LLC and Biomind LLC
>  Director of Research, SIAI
>  [EMAIL PROTECTED]
>
>  "If men cease to believe that they will one day become gods then they
>  will surely become worms."
>  -- Henry Miller
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Ben Goertzel
>  So if I tell you to "handle" an object, or a piece of business, like say
>  "removing a chair from the house" - that word "handle" is open-ended and
>  gives you vast freedom within certain parameters as to how to apply your
>  hand(s) to that object. Your hands can be applied to move a given box, for
>  example, in a vast if not infinite range of positions and trajectories. Such
>  a general, open concept is of the essence of general intelligence, because
>  it means that you are immediately ready to adapt to new kinds of situation -
>  if your normal ways of handling boxes are blocked, you are ready to seek out
>  or improvise some strange new contorted two-finger hand position to pick up
>  the box - which also counts as "handling". (And you will have actually done a
>  lot of this).
>
>  So what is the "meaning" of "handle"? Well, to be precise, it doesn't have
>  a/one meaning, and isn't meant to - it has a range of possible
>  meanings/references, and you can choose which is most convenient in the
>  circumstances.

Actually I'd make a stronger statement than that.

It's not just that we can CHOOSE the meanings of concepts from a fixed menu
of possibilities ... we CREATE the meanings of concepts as we use them ...
this is how and why concept-meanings continually change over time in
individual minds and in cultures...

This is parallel to how we create episodic memories as we "re-live" them,
rather than retrieving them as if from a database...

These creation processes do however seem to be realizable in digital
computer systems, based on my theoretical understanding ... though none
of us have done it yet, it's certainly loads of work given current software
tools...

Ben



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Ben Goertzel
>
> So if I tell you to "handle" an object, or a piece of business, like say
> "removing a chair from the house" - that word "handle" is open-ended and
> gives you vast freedom within certain parameters as to how to apply your
> hand(s) to that object. Your hands can be applied to move a given box, for
> example, in a vast if not infinite range of positions and trajectories. Such
> a general, open concept is of the essence of general intelligence, because
> it means that you are immediately ready to adapt to new kinds of situation -
> if your normal ways of handling boxes are blocked, you are ready to seek out
> or improvise some strange new contorted two-finger hand position to pick up
> the box - which also counts as "handling". (And you will have actually done a
> lot of this).
>
> So what is the "meaning" of "handle"? Well, to be precise, it doesn't have
> a/one meaning, and isn't meant to - it has a range of possible
> meanings/references, and you can choose which is most convenient in the
> circumstances.
>
>
> The same principles apply to just about every word in language and every
> unit of logic and mathematics.
>
> But - and correct me - I don't think anyone in AI/AGI is using language or
> any logico-mathematical systems in this general, open-ended way - the way
> they are actually meant to be used - and the very foundation of General
> Intelligence.
>
> Language and the other systems are always used by AGI in specific ways to
> have specific meanings. YKY, typically, wanted a language for his system
> which had precise meanings. Even Ben, I suspect, may only employ words in an
> "open" way, in that their meanings can be changed with experience - but at
> any given point their meanings will have to be specific.
>
> To be capable of generalising as the human brain does - and of true AGI -
> you have to have a brain that simultaneously processes on at least two if
> not three levels, with two/three different sign systems - including both
> general and particular ones.
>
>
>
> John:>> Charles: >> I don't think a General Intelligence could be built
> >> >> entirely out of narrow AI components, but it might well be a
> >> >> relatively trivial add-on.  Just consider how much of human
> >> >> intelligence is demonstrably "narrow AI" (well, not artificial, but
> >> >> you know what I mean).  Object recognition, e.g.  Then start trying
> >> >> to guess how much of the part that we can't prove a classification
> >> >> for is likely to be a narrow intelligence component.  In my
> >> >> estimation (without factual backing) less than 0.001 of our
> >> >> intelligence is General Intelligence, possibly much less.
> >> >
> >> John:  I agree that it may be <1%.
> >> >
> >>
> >> Oh boy, does this strike me as absurd. Don't have time for the theory
> >> right now, but just had to vent. Percentage estimates strike me as a bit
> >> silly, but if you want to aim for one, why not look at both your
> >> paragraphs, word by word. "Don't"  "think" "might" "relatively" etc. Now
> >> which of those words can only be applied to a single type of activity,
> >> rather than an open-ended set of activities? Which cannot be instantiated
> >> in an open-ended if not infinite set of ways? Which is not a very
> >> valuable if not key tool of a General Intelligence, that can adapt to
> >> solve problems across domains? Language IOW is the central (but not
> >> essential) instrument of human general intelligence - and I can't think
> >> offhand of a single word that is not a tool for generalising across
> >> domains, including "Charles H." and "John G.".
> >>
> >> In fact, every tool you guys use - logic, maths etc. - is similarly
> >> general and functions in similar ways. The above strikes me as a 99%
> >> failure to understand the nature of general intelligence.
> >>
> >
> > Mike you are 100% potentially right with a margin of error of 110%. LOL!
> >
> > Seriously Mike how do YOU indicate approximations? And how are you
> > differentiating general and specific? And declaring relative absolutes and
> > convenient infinitudes... I'm trying to understand your argument.

[agi] Novamente's next 15 minutes of fame...

2008-03-28 Thread Ben Goertzel
http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Instead of an AGI Textbook

2008-03-28 Thread Ben Goertzel
> 4. In fact, I would suggest that AGI researchers start to distinguish
> themselves from narrow AI by replacing the over-ambiguous concepts from AI,
> one by one. For example:
>
> knowledge representation = world model.
> learning = world model creation
> reasoning = world model simulation
> goal = life goal (to indicate that we have the ambition of building
> something really alive)
> If we say something like "world model creation", it seems pretty obvious
> that we do not mean anything like just tweaking a few bits in some function.

Yet, those terms are used for quite shallow things in many Good Old-Fashioned
robotics architectures ;-)

ben



Re: [agi] Novamente's next 15 minutes of fame...

2008-03-29 Thread Ben Goertzel
Nothing has been publicly released yet, it's still at the
research-prototype stage ... I'll announce when we have some kind of
product release...

ben

On Sat, Mar 29, 2008 at 5:39 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> It sounds interesting.  Can anyone go and try it, or does it cost money or
> something?  Is it set up already?
> Jim Bromer
>
>
>
> On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> >
> >
> >
> >
> http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
> >
> > --
> > Ben Goertzel, PhD
> > CEO, Novamente LLC and Biomind LLC
> > Director of Research, SIAI
> > [EMAIL PROTECTED]
> >
> > "If men cease to believe that they will one day become gods then they
> > will surely become worms."
> > -- Henry Miller
> >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
My judgment as list moderator:

1)  Discussions of particular, speculative algorithms for solving SAT
are not really germane for this list

2)  Announcements of really groundbreaking new SAT algorithms would
certainly be germane to the list

3) Discussions of issues specifically regarding the integration of SAT solvers
into AGI architectures are highly relevant to this list

4) If you think some supernatural being placed an insight in your mind, you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less seriously
by a vast majority of scientific-minded people...

-- Ben G, List Owner

On Sun, Mar 30, 2008 at 4:41 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>
> I agree with Richard and hereby formally request that Ben chime in.
>
> It is my contention that SAT is a relatively narrow form of Narrow AI and
> not general enough to be on an AGI list.
>
> This is not meant, in any way shape or form, to denigrate the work that you
> are doing.  It is very important work.
>
> It's just that you're performing the equivalent of presenting a biology
> paper at a physics convention.:-)
>
>
>
>
> - Original Message -
> From: Jim Bromer
> To: agi@v2.listbox.com
> Sent: Sunday, March 30, 2008 11:52 AM
> Subject: **SPAM** Re: [agi] Logical Satisfiability...Get used to it.
>
>
>
>
>
> > On the contrary, Vladimir is completely correct in requesting that the
> > discussion go elsewhere:  this has no relevance to the AGI list, and
> > there are other places where it would be pertinent.
> >
> >
> > Richard Loosemore
> >
> >
>
>  If Ben doesn't want me to continue, I will stop posting to this group.
> Otherwise please try to understand what I said about the relevance of SAT to
> AGI and try to address the specific issues that I mentioned.  On the other
> hand, if you don't want to waste your time in this kind of discussion then
> do just that: Stay out of it.
> Jim Bromer



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
On Sun, Mar 30, 2008 at 5:09 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> > 4) If you think some supernatural being placed an insight in your mind,
>  > you're
>  > probably better off NOT mentioning this when discussing the insight in a
>  > scientific forum, as it will just cause your idea to be taken way less
>  > seriously
>  > by a vast majority of scientific-minded people...
>
>  Awesome answer!
>
>  However, only *some* religions believe in supernatural beings and I,
>  personally, have never seen any evidence supporting such a thing.

I've got one in a jar in my basement ... but don't worry, I won't let him out
till the time is right ;-) ...

and so far, all his AI ideas have proved to be
absolute bullshit, unfortunately ... though he's done a good job of helping
me put hexes on my neighbors...


>  Have you been having such experiences and been avoiding mentioning them
>  because you're afraid for your reputation?
>
>  Ben, I'm worried about you now.;-)
>
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Ben Goertzel
> many inconsistencies anyway.  My contention here is that this is just the
> problem that we are faced with today in rational-based AGI.  They can get so
> far, but only so far.
> >
> > A theory, with thousands of subtle variations and connections with other
> theories, that only had one or a few correct solutions would be useful in
> critical reasoning because these special theories would be critically
> significant.  They would exhibit strong correlations with simple or
> constrained relations that would be more like experiments that isolated
> significant factors that can be tested.  And these related theories could be
> examined more effectively using abstraction as well.  (There could still be
> problems with the critical theory since it could contain inconsistencies,
> but you are going to have that problem with any inductive system.)  If you
> are going to be using a rational-based AGI method, then you are going to
> want some theories that exhibit critical reasoning.  These kinds of theories
> might turn out to be the keystone in developing more sophisticated models
> about the world and reevaluating less sophisticated models.
> >
> > Jim Bromer
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Novamente's next 15 minutes of fame...

2008-03-31 Thread Ben Goertzel
We haven't launched anything public yet (and I'm not sure when we will)
but the prototype experiment shown in that machinima was done in Second
Life, yeah ...

We have also experimented with other virtual worlds such as Multiverse...

Ben G

On Mon, Mar 31, 2008 at 2:38 PM, Rafael C.P. <[EMAIL PROTECTED]> wrote:
> Is it running inside Second Life already or is it another environment? (sorry
> I don't know SL very well)
>
>
>
> On Sat, Mar 29, 2008 at 11:40 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
> > Nothing has been publicly released yet, it's still at the
> > research-prototype stage ... I'll announce when we have some kind of
> > product release...
> >
> > ben
> >
> >
> >
> >
> > On Sat, Mar 29, 2008 at 5:39 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:
> > > It sounds interesting.  Can anyone go and try it, or does it cost money
> or
> > > something.  Is it set up already?
> > > Jim Bromer
> > >
> > >
> > >
> > > On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> > >
> > > >
> > > >
> > > >
> > > >
> > >
> http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
> > > >
> > > > --
> > > > Ben Goertzel, PhD
> > > > CEO, Novamente LLC and Biomind LLC
> > > > Director of Research, SIAI
> > > > [EMAIL PROTECTED]
> > > >
> > > > "If men cease to believe that they will one day become gods then they
> > > > will surely become worms."
> > > > -- Henry Miller
> > > >
> >
> >
> >
> > --
> >
> > Ben Goertzel, PhD
> > CEO, Novamente LLC and Biomind LLC
> > Director of Research, SIAI
> > [EMAIL PROTECTED]
> >
> > "If men cease to believe that they will one day become gods then they
> > will surely become worms."
> > -- Henry Miller
> >
> >
>
>
>
> --
> =
> Rafael C.P.
> =
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Ben Goertzel
>  Thank you for your politeness and your insightful comments.  I am
>  going to quit this group because I have found that it is a pretty bad
>  sign when the moderator mocks an individual for his religious beliefs.

FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my role as moderator, but rather in my role
as individual list participant ;-)

Sorry that my sense of humor got on your nerves.  I've had that effect
on people before!

Really though: if you're going to post messages in forums populated
by scientific rationalists, claiming divine inspiration for your ideas, you
really gotta expect **at minimum** some good-natured ribbing... !

-- Ben G



[agi] Fwd: [DIV10] opportunity for graduate studies in evolution of human creativity

2008-04-01 Thread Ben Goertzel
 is not evident from the
attribute level because it reflects understanding at the conceptual
level, such as analogical transfer (e.g. of the concept HANDLE from
KNIFE to CUP), or the knowledge that two artifacts are complementary
(e.g. MORTAR and PESTLE). The program then postulates 'lineages', i.e.
patterns of relatedness, amongst the artifacts that takes into account
both externally driven change (e.g. trade) and internally driven
change (e.g. blending of different traditions) using as an initial
data set decorated ceramics from Easter Island. The program has the
potential to be used for other elements of culture (e.g. gestures or
languages); indeed to reconstruct the cultural evolution of the
various interacting facets of human worldviews.
In sum, the proposed research advances a promising and innovative
approach to the study of cultural evolution, with implications that
extend across the sciences, social sciences, and humanities. It
tackles questions that lie at the foundation of who we are and what
makes us distinctive.







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



[agi] Unsupervised grammar mining from text [was GSoC: Learning Simple Grammars]

2008-04-05 Thread Ben Goertzel
I looked through the ADIOS papers...

It's interesting work, and it reminds me of a number of other things, including

-- Borzenko's work, http://proto-mind.com/SAHIN.pdf

-- Denis Yuret's work on mutual information based grammar learning,
from the late 90's

-- Robert Hecht-Nielsen's much-publicized work a couple years back, on
automated language learning and generation

-- Tony Smith's work on automated learning of function-word based
grammars from text, done in his MS thesis from University of Calgary
in the 90's

Looking at these various things together, it does seem clear that one
can extract a lot of syntactic structure from free text in an
unsupervised manner.
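
For concreteness, here is a minimal sketch of the mutual-information flavor
of this kind of structure mining (in the spirit of Yuret's work) -- this is
NOT the ADIOS algorithm, and the function and variable names are mine:

import math
from collections import Counter

def bigram_mi(tokens):
    """Score adjacent word pairs by pointwise mutual information;
    high-MI pairs are candidate collocations / constituent fragments."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens[:-1], tokens[1:]))
    n = float(len(tokens))
    scores = {}
    for (w1, w2), count in bigrams.items():
        p_pair = count / (n - 1)          # adjacent-pair probability
        p1, p2 = unigrams[w1] / n, unigrams[w2] / n
        scores[(w1, w2)] = math.log(p_pair / (p1 * p2), 2)
    return scores

tokens = "the dog chased the cat and the dog ate the bone".split()
for pair, mi in sorted(bigram_mi(tokens).items(), key=lambda kv: -kv[1]):
    print(pair, round(mi, 2))

The real systems iterate this kind of scoring, chunking high-MI pairs into
units and recursing -- which is roughly where the hard part begins.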

It is unclear whether one can get the full syntactic subtlety of
everyday English though.  Every researcher in this area seems to get
to a certain stage (mining the simpler aspects of English syntax), and
then never get any further.

However, I have another complaint to make.  Let's say you succeed with
this, and make an English-language-syntax recognizer that works, say,
as well as the link parser, by pure unsupervised learning.  That is
really cool but ... so what?

Syntax parsing is already not the bottleneck for AGI, we already have
decent parsers.  The bottleneck is semantic understanding.

Having a system that can generate random sentences is not very useful,
nor is having a bulky inelegant automatically learned formal-grammar
model of English.

If one wants to hand-craft mapping rules taking syntax parses into
logical relations, one is better off with a hand-crafted grammar than
a messier learned one.
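
To make "mapping rules" concrete, here is a toy sketch -- the parse-triple
format and relation names are invented for illustration, and this is not the
link parser's actual output format:

def map_parse_to_logic(parse):
    """Turn (relation, head, dependent) parse triples into
    predicate-logic-style semantic relations."""
    out = []
    for rel, head, dep in parse:
        if rel == "subj":      # "the dog chased ..." -> agent role
            out.append(("agent", head, dep))
        elif rel == "obj":     # "... chased the cat" -> patient role
            out.append(("patient", head, dep))
    return out

print(map_parse_to_logic([("subj", "chased", "dog"), ("obj", "chased", "cat")]))
# -> [('agent', 'chased', 'dog'), ('patient', 'chased', 'cat')]

The point is just that each such rule assumes a stable, predictable parse
format -- which a hand-crafted grammar gives you and a noisy learned one
doesn't.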

If one wants to have the mapping from syntax into semantics be
learned, then probably one is better off having syntax be learned in a
coherent overall experiential-learning process -- i.e. as part of a
system learning how to interact in a world -- rather than having
syntax learned in an artificial, semantics-free manner via
corpus-mining.

In other words: suppose you could make ADIOS work for real ... how
would that help along the path of AGI?

-- Ben G



On Sat, Apr 5, 2008 at 8:46 AM, Evgenii Philippov <[EMAIL PROTECTED]> wrote:
>
>  On Sat, Apr 5, 2008 at 7:37 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>  >  For instance, I'll be curious whether ADIOS's automatically inferred
>  >  grammars can deal with recursive phrase structure, with constructs
>  >  like "the person with whom I ate dinner", and so forth
>
>  ADIOS papers have a lot of remarks like "recursion is not implemented",
>  but I think it IS able to deal with THIS kind of recursion... But this
>  is TBD---I am not sure.
>
>
>
>  e
>
>  >
>  >
>  >
>  >  On Sat, Apr 5, 2008 at 7:57 AM, Evgenii Philippov <[EMAIL PROTECTED]> 
> wrote:
>  >  >
>  >  >  Hello folks,
>  >  >
>  >  >
>  >  >  On Thu, Mar 27, 2008 at 11:06 PM, Ben Goertzel <[EMAIL PROTECTED]> 
> wrote:
>  >  >  >  In general, I personally have lost interest in automated inference 
> of grammars
>  >  >  >  from text corpuses, though I did play with that in the 90's (and 
> got bad results
>  >  >  >  like everybody else).
>  >  >
>  >  >  Uh oh! My current top-priority is playing with ADIOS algorithm for
>  >  >  unsupervised grammar learning, which is based on extended Hidden
>  >  >  Markov Models. Its results are plainly fantastic---it is able to
>  >  >  create a working grammar not only for English, but also for many other
>  >  >  languages, plus languages with spaces removed, plus DNA structure,
>  >  >  protein structure, etc etc etc. Some results are described in Zach
>  >  >  Solan's papers and the algorithm itself is described in his
>  >  >  dissertation.
>  >  >
>  >  >  http://www.tau.ac.il/~zsolan/papers/ZachSolanThesis.pdf
>  >  >  http://adios.tau.ac.il/
>  >  >
>  >  >  And its grammars are completely comprehensible for a human. (See the
>  >  >  homepage, papers and the thesis for diagrams.)
>  >  >
>  >  >  Also, they can very easily be used for language generation, and Z
>  >  >  Solan did a lot of experiments with this.
>  >  >
>  >  >  It has no relation to Link Grammar though.
>  >  >
>  >  >
>  >  >  >  Automated inference of grammar from language used in embodied 
> situations
>  >  >  >  interests me a lot ... and "cheating" via using hand-created NLP 
> tools may
>  >  >  >  be helpful too...
>  >  >  >
>  >  >  >  But I sort of feel like automated inference of grammars from 
> corpuses may
>  >  >  >  be a HARDER problem than le

Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Ben Goertzel
FWIW, I'll note that a heavy focus on metrics and testing has been part of every
US government-funded AI project in history ... and this focus has not
gotten them
very far, generally speaking ...

-- Ben G

On Thu, Apr 10, 2008 at 5:25 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> MW: I believe that I was also quite clear with my follow-on comment of "a
> cart
>
>
> > before the horse problem.  Once we know how to acquire and store
> knowledge, then we can develop metrics for testing it -- but, for now, it's
> too early to go after the problem." as well.
> >
>
>  You're basically agreeing with what I said you said, which wasn't meant to
> be disparaging. You're putting testing, or metrics for testing, later, - and
> I imagine few AI-ers would disagree with you.  I'm suggesting that won't
> work - and that a new cog. sci. synthesis is beginning - just beginning - to
> emerge here.  I don't mind tantrums, but there might as well be some point
> to them.
>
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



[agi] Big Dog

2008-04-10 Thread Ben Goertzel
Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related

Of course, they are only showing the best stuff.  And I am sure there
is plenty of work left to do.  But from the variety of behaviors that
are displayed, I would say that the problem of quadruped walking is
surprisingly well "solved."  Apparently it's way easier than biped
locomotion...

ben



Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Ben Goertzel
These things of course require a balance.

In many academic or corporate fora, radical innovation is frowned upon
so profoundly (in spite of sometimes being praised and desired, on the
surface, but in a confused and not fully sincere way), that it's continually
necessary to remind people of the need to open their minds and consider
the possibility that some of their assumptions are wrong.

OTOH, in **this** forum, we have a lot of openness and open-mindedness,
which is great ... but the downside is, people who THINK they have radical
new insights but actually don't, tend to get a LOT of
attention, often to the detriment of more interesting yet less "radical on the
surface" discussions.

I do find that most posters on this list seem to have put a lot of thought
(as well as a lot of feeling) into their ideas and opinions.  However, it's
frustrating when people re-tread issues over and over in a way that demonstrates
they've never taken the trouble to carefully study what's been done before.

I think it can often be super-valuable to approach some issue afresh, without
studying the literature first -- so as to get a brand-new view.  But
then, before
venting one's ideas in a public forum, one should check one's ideas against
the literature (in the idea-validation phase ... after the
idea-generation phase) to
see whether they're original, whether they're contradicted by well-thought-out
arguments, etc.

-- Ben G


On Mon, Apr 14, 2008 at 9:54 AM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> Good advice.  There are of course sometimes people who are ahead of the
> field, but in conversation you'll usually find that the genuine inovators
> have a deep - bordering on obsessive - knowledge of the field that they're
> working in and are willing to demonstrate/test their claims to anyone even
> remotely interested.
>
>
>
>
>
>
> On 14/04/2008, Brad Paulsen <[EMAIL PROTECTED]> wrote:
> >
> >
> > Dear Fellow AGI List Members:
> >
> > Just thought I'd remind the good members of this list about some
> strategies for dealing with certain types of postings.
> >
> > Unfortunately, the field of AI/AGI is one of those areas where anybody
> with a pulse and a brain thinks they can design a "program that thinks."
> Must be easy, right?  I mean, I can do it so how hard can it be to put "me"
> in a "can?"  Well, that's what some very smart people in the 1940's, '50's
> and into the 1960's thought.  They were wrong.  Most of them now admit it.
> So, on AI-related lists, we have to be very careful about the kinds of
> "conversations" on which we spend our valuable time.  Here are some
> guidelines.  I realize most people here know this stuff already.  This is
> just a gentle reminder.
> >
> > If a posting makes grandiose claims, is dismissive of mainstream research,
> techniques, and institutions or the author claims to have "special
> knowledge" that has apparently been missed (or dismissed) by all of the
> brilliant scientific/technical minds who go to their jobs at major
> corporations and universities every day (and are paid for doing so), and
> also by every Nobel Laureate for the last 20 years, this posting should be
> ignored.  DO NOT RESPOND to these types of postings: positively or
> negatively.  The poster is, obviously, either irrational or one of the
> greatest minds of our time.  In the former case, you know they're full of
> it, I know they're full of it, but they will NEVER admit that.  You will
> never win an argument with an irrational individual.  In the latter case,
> stop and ask yourself: Why is somebody that fantastically smart posting to
> this mailing list?  He or she is, obviously, smarter than everyone here.
> Why does he/she need us to validate his or her accomplishments/knowledge by
> posting on this list?  He or she should have better things to do and,
> besides, we probably wouldn't be able to understand ("appreciate") his/her
> genius anyhow.
> >
> > The only way to deal with postings like this is to IGNORE THEM.  Don't
> rise to the bait.  Like a bad cold, they will be irritating for a while, but
> they will, eventually, go away.
> >
> > Cheers,
> >
> > Brad
> >



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Ben Goertzel
>  We may well see a variety of proto-AGI applications in different
>  domains, sorta midway between narrow-AI and human-level AGI, including
>  stuff like
>
>  -- maidbots
>
>  -- AI financial traders that don't just execute machine learning
>  algorithms, but grok context, adapt to regime changes, etc.
>
>  -- NL question answering systems that grok context and piece together
>  info from different sources
>
>  -- artificial scientists capable of formulating nonobvious hypotheses
>  and validating them via data analysis, including doing automated data
>  preprocessing, etc.

And not to forget, of course, smart virtual pets and avatars in games
and virtual worlds ;-))



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Ben Goertzel
aop to learn to be a maidbot. We run the
>  learning on our one big machine and sell the maidbots cheap with 0.1% the
>  cpu. But being a researcher is all learning -- so each one would need the
>  whole shebang for each copy. A decade of Moore's Law ... and at least that of
>  AGI research.
>
>  Josh
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] database access fast enough?

2008-04-17 Thread Ben Goertzel
Hi Mark,

>  This is, by the way, my primary complaint about Novamente -- far too much
> energy, mind-space, time, and effort has gone into optimizing and repeatedly
> upgrading the custom atom table that should have been built on top of
> existing tools instead of being built totally from scratch.

Really, work on the AtomTable has been a small percentage of work on
the Novamente Cognition Engine ... and, the code running the AtomTable is
now pretty much the same as it was in 2001 (though it was tweaked to make it
64-bit compatible, back in 2004 ... and there has been ongoing bug-removal
as well...).  We wrote some new wrappers for the AtomTable
last year (based on STL containers), but that didn't affect the
internals, just the API.

It's true that a highly-efficient, highly-customizable graph database could
potentially serve the role of the AtomTable, within the NCE or OpenCog.

But that observation is really not
such a big deal.  Potentially, one could just wrap someone else's graph DB
behind the 2007 AtomTable API, and this change would be completely transparent
to the AI processes using the AtomTable.

However, I'm not convinced this would be a good idea.  There are a lot of
useful specialized indices in the AtomTable, and replicating all this in some
other graph DB would wind up being a lot of work ... and we could use that
time/effort on other stuff instead
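
To illustrate the kind of encapsulation I mean, here's a hypothetical sketch
-- the interface below is invented for this email and is NOT the actual
AtomTable API:

from abc import ABC, abstractmethod

class AtomStore(ABC):
    """Hypothetical storage interface an AI process would code against."""
    @abstractmethod
    def add_node(self, atom_type, name): ...
    @abstractmethod
    def add_link(self, link_type, outgoing): ...
    @abstractmethod
    def incoming(self, handle): ...   # a specialized index lookup

class InMemoryStore(AtomStore):
    """Custom in-RAM table maintaining its own incoming-set index."""
    def __init__(self):
        self.atoms = {}
        self.incoming_index = {}
    def add_node(self, atom_type, name):
        handle = len(self.atoms)
        self.atoms[handle] = (atom_type, name)
        return handle
    def add_link(self, link_type, outgoing):
        handle = len(self.atoms)
        self.atoms[handle] = (link_type, tuple(outgoing))
        for target in outgoing:          # maintain the incoming index
            self.incoming_index.setdefault(target, []).append(handle)
        return handle
    def incoming(self, handle):
        return self.incoming_index.get(handle, [])

A graph-DB-backed class implementing the same three methods could be swapped
in without touching the AI processes -- but it would have to replicate the
specialized indices, which is exactly the work I'd rather avoid.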

Using a relational DB rather than a graph DB is not appropriate for the NCE
design, however.

But we've been over this before...

And, this is purely a software implementation issue rather than an AI issue,
of course.  The NCE and OpenCog designs require **some** graph or
hypergraph DB which supports the manual and automated creation of
complex customized indices ... and supports refined "cognitive control"
over what lives on disk and what lives in RAM, rather than leaving this
up to some non-intelligent automated process.  Given these requirements,
the choice of how to realize them in software is not THAT critical ... and
what we have there now works.


-- Ben G



Re: [agi] database access fast enough?

2008-04-17 Thread Ben Goertzel
On Thu, Apr 17, 2008 at 2:42 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> > Really, work on the AtomTable has been a small percentage of work on
> > the Novamente Cognition Engine ... and, the code running the AtomTable is
> > now pretty much the same as it was in 2001 (though it was tweaked to make
> it
> > 64-bit compatible, back in 2004 ... and there has been ongoing bug-removal
> > as well...).
> >
>
>  And . . . and . . . and . . . :-)  It's far more than you're
> admitting to yourself.:-)

That's simply not true, but I know of no way to convince you.

The AtomTable work was full-time work for two guys for a few months
in 2001, and since then it's been occasional part-time tweaking by two
people who have been full-time engaged on other projects.

> > We wrote some new wrappers for the AtomTable
> > last year (based on STL containers), but that didn't affect the
> > internals, just the API.
> >
>
>  Which is what everything should have been designed around anyways -- so
> effectively, last year was a major "breaking" change that affected *all* the
> software written to the old API.

Yes, but calls to the AT were already well-encapsulated within the code,
so changing from the old API to the new has not been a big deal.

>  Absolutely.  That's what I'm pushing for.  Could you please, please publish
> the 2007 AtomTable API?  That's actually far, far more important than the
> code behind it.  Please, please . . . . publish the spec today . . . .
> pretty please with a cherry on top?

It'll be done as part of the initial OpenCog release, which will be pretty
soon now ... I don't have a date yet though...

> > However, I'm not convinced this would be a good idea.  There are a lot of
> > useful specialized indices in the AtomTable, and replicating all this in
> some
> > other graph DB would wind up being a lot of work ... and we could use that
> > time/effort on other stuff instead
> >
>
>  Which (pardon me, but . . .  ) clearly shows that you're not a professional
> software engineer

I'm not, but many other members of the Novamente team are.

>  My contention is that you all should be
> *a lot* further along than you are.  You have more talent than anyone else
> but are moving at a truly glacial pace.

90% of Novamente LLC's efforts historically have gone into various AI
consulting projects
that pay the bills.

Now, about 60% is going into consulting projects, and 40% is going
into the virtual
pet brain project

We have very rarely had funding to pay folks to work on AGI, so we've
worked on it
in bits and pieces here and there...

Sad, but true...

> I understand that you believe that
> this is primarily due to other reasons but *I am telling you* that A LOT of
> it is also your own fault due to your own software development choices.

You're wrong, but arguing the point over and over isn't getting us
anywhere.

>  Worse, fundamentally, currently, you're locking *everyone* into *your*
> implementation of the atom table.

Well, that will not be the case in OpenCog.  The OpenCog architecture
will be such that other containers could be inserted if desired.

>Why not let someone else decide whether
> or not it is worth their time and effort to implement those specialized
> indices on another graph DB of their choice?  If you would just open up the
> API and maybe accept some good enhancements (or, maybe even, if necessary,
> some changes) to it?

Yes, that's going to happen within OpenCog.

> > Using a relational DB rather than a graph DB is not appropriate for the
> NCE
> > design, however.
> >
>
>  Incorrect.  If the API is identical and the speed is identical, whether it
> is a relational db or a graph db *behind the scenes* is irrelevant.  Design
> to your API -- *NOT* to the underlying technology.  You keep making this
> mistake.

The speed will not be identical for an important subset of queries, because
of intrinsic limitations of the B-tree data structures used inside RDBs.  We
discussed this before.
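
Roughly: for multi-hop traversals (chasing incoming/outgoing sets), each edge
followed in an RDB costs an O(log N) B-tree probe, while a native adjacency
index costs ~O(1) per edge.  A back-of-envelope sketch I'm adding here, with
purely illustrative numbers:

import math

def btree_traversal_cost(num_atoms, hops, fanout):
    # relational store: each edge followed = one O(log N) index probe
    return sum(fanout ** k * math.log2(num_atoms) for k in range(1, hops + 1))

def adjacency_traversal_cost(hops, fanout):
    # graph store: each edge followed = one direct adjacency-list read
    return sum(fanout ** k for k in range(1, hops + 1))

N = 10 ** 8  # atoms -- illustrative only
print(btree_traversal_cost(N, hops=3, fanout=10))   # ~29,500 probe-units
print(adjacency_traversal_cost(hops=3, fanout=10))  # 1,110 reads

Constant factors and caching muddy this in practice, but that's the shape of
the argument.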


>  Seriously -- I think that you're really going to be surprised at how fast
> OpenCog might take off if you'd just relax some control and concentrate on
> the specifications and the API rather than the implementation issues that
> you're currently wasting time on.

I am optimistic about the development speedup we'll see from OpenCog,
but not for the reason you cite.

Rather, I think that by opening it up in an intelligent way, we're simply
going to get a lot more people involved, contributing their code, their
time, and their ideas.  This will accelerate things considerably, if all
goes well.

I repeat that NO implementation time has been spent on the AtomTable
internals for quite some time now.  A few weeks was spent on the API
last year, by one person.  I'm not sure why you want to keep exaggerating
the time put into that component, when after all you weren't involved in
its development at all (and I didn't even know you when the bulk of
that development was being done!!)

I don't care

Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
On Fri, Apr 18, 2008 at 1:01 PM, Pei Wang <[EMAIL PROTECTED]> wrote:
> PREMISES:
>
>  (1) AGI is one of the most complicated problems in the history of
>  science, and therefore requires substantial funding for it to happen.


Potentially, though, massively distributed, collaborative open-source
software development could render your first premise false ...


>  (2) Since all previous attempts failed, investors and funding agencies
>  have enough reason to wait until a recognizable breakthrough to put
>  their money in.
>
>  (3) Since the people who have the money are usually not AGI
>  researchers (so won't read papers and books), a breakthrough becomes
>  recognizable to them only by impressive demos.
>
>  (4) If the system is really general-purpose, then if it can give an
>  impressive demo on one problem, it should be able to solve all kinds
>  of problems to roughly the same level.
>
>  (5) If a system already can solve all kinds of problems, then the
>  research has mostly finished, and won't need funding anymore.
>
>  CONCLUSION: AGI research will get funding when and only when the
>  funding is no longer needed anymore.
>
>  Q.E.D. :-(
>
>  Pei
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
> > Potentially, though, massively distributed, collaborative open-source
> > software development could render your first premise false ...
> >
>
>   Though it is unlikely to do so, because collaborative open-source
> projects are best suited to situations in which the fundamental ideas behind
> the design have been solved.

I believe I've solved the fundamental issues behind the Novamente/OpenCog
design...

Time and effort will tell if I'm right ;-)

ben



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
On Fri, Apr 18, 2008 at 5:35 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Pei:  I don't really want
>
>  a big gang now (that will only waste the time of mine and the
>  others), but a small-but-good gang, plus more time for myself ---
>  which means less group debates, I guess. ;-)
>
>  Alternatively, you could open your problems for group discussion &
> think-tanking...   I'm surprised that none of you systembuilders do this.
>

That is essentially what I'm doing with OpenCog ... but it's a big job,
just preparing stuff in terms of documentation and code and designs
so that others have a prayer of understanding it ...

ben



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
YKY,

>  > I believe I've solved the fundamental issues behind the Novamente/OpenCog
>  > design...
>
>  It's hard to tell whether you have really solved the AGI problem, at
>  this stage. ;)

Understood...

>  Also, your AGI framework has a lot of non-standard, home-brew stuff
>  (especially the knowledge representation and logic).  I bet there are
>  some merits in your system, but is it really so compelling that
>  everybody has to learn it and do it that way?

I don't claim that the Novamente/OpenCog design is the **only** way ... but I do
note that the different parts are carefully designed to interoperate
in subtle ways, so replacing any one component w/ some standard system
won't work.

For instance, replacing PLN with some more popular but more limited
probabilistic
logic framework would break a lot of other stuff...

>  Creating a standard / common framework is not easy.  Right now I think
>  we lack such a consensus.  So the theorists are not working together.

One thing that stuck out at the 2006 AGI Workshop and AGI-08
conference was the commonality between several different approaches,
for instance

-- my Novamente approach
-- Nick Cassimatis's Polyscheme system
-- Stan Franklin's LIDA approach
-- Sam Adams's (IBM) Joshua Blue
-- Alexei Samsonovich's BICA architecture

Not that these are all the same design ... there are very real differences
... but there are also a lot of deep parallels.   Novamente seems to
be more fully fleshed out than these overall, but each of these guys
has thought through specific aspects more deeply than I have.

Also, John Laird (SOAR creator) is moving SOAR in a direction that's a
lot closer to the Goertzel/Cassimatis/Franklin/Adams style system than
his prior approaches ...

All the above approaches are

-- integrative, involving multiple separate components tightly bound
together in a high-level cognitive architecture

-- reliant to some extent on formal inference (along with subsymbolic methods)

-- clearly testable/developable in a virtual worlds setting

I would bet that with appropriate incentives all of the above
researchers could be persuaded to collaborate on a common AI project
-- without it degenerating into some kind of useless
committee-think...

Let's call these approaches LIVE, for short -- Logic-incorporating,
Integrative, Virtually Embodied

On the other hand, when you look at

-- Pei Wang's approach, which is interesting but is fundamentally
committed to a particular form of uncertain logic that no other AGI
approach accepts

-- Selmer Bringsjord's approach, which is founded on the notion that
standard predicate  logic alone is The Answer

-- Hugo de Garis's approach which is based on brain emulation

you're looking at interesting approaches that are not really
compatible with the LIVE approach ... I'd say, you could not viably
bring these guys into a collaborative AI project based on the LIVE
approach...

So, I do think more collaboration and co-thinking could occur than
currently does ... but also that there are limits due to fundamentally
different understandings

OpenCog is general enough to support any approach falling within the
LIVE category, and a number of other sorts of approaches as well
(e.g. a variety of neural net based architectures).  But it is not
**completely** general and doesn't aim to be ... IMO, a completely
general AGI development
framework is just basically, say, "C++ and Linux" ;-)

-- Ben G



Re: Open source (was Re: [agi] The Strange Loop of AGI Funding: now logically proved!)

2008-04-19 Thread Ben Goertzel
> Translation: We all (me included) now accept as reasonable that in order to
> briefly earn a living wage, that we must develop radically new and useful
> technology and then just give it away.
...
> Steve Richfield

The above is obviously a "straw man" statement ... but I think it
**is** true these days that open-sourcing one's code is a viable way
to get one's software vision realized, and is not necessarily
contradictory with making a profit.

This doesn't mean that OSS is the only path, nor that it's necessarily
an easy thing to make work...

-- Ben



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-19 Thread Ben Goertzel
On Sat, Apr 19, 2008 at 12:51 PM, Charles D Hixson
<[EMAIL PROTECTED]> wrote:
> Ed Porter wrote:
>
> > WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?
> >
> >

There are no apparent missing conceptual pieces in the Novamente approach...

Hopefully this will become clear even from the OpenCog documentation
that I'll release this summer (which won't cover all of the stuff in
Novamente, but a significant subset)

However, there are certainly places where only a high-level conceptual
design has been sketched out (with an intuitive plausibility argument,
often referring to our own or others' prototype experiments) ... and
details remain to be filled in, on the mathematical as well as code
level.

Any one of these places could, after more implementation and
experimentation, get revealed to be **concealing** a conceptual
problem that isn't now apparent.  We'll discover that as we go.

But I'll defer enlarging on this in detail till I've released the
OpenCog conceptual documentation.

After a lot of thought, I've finally figured out the right way to
structure the documentation, and the explanation for why I believe it
can lead to human-level AGI within a relatively modest amount of
effort.   So I'm eager to start transmogrifying various internal NM
docs into an OpenCog wikibook.  But alas, I've got some irritating
sys-admin and administrivia tasks, plus some biz meetings, to deal
with over the next few days, so this won't proceed nearly as rapidly
as I'd like...

-- Ben G



[agi] For robotics folks: Seeking thoughts about integration of OpenSim and Player

2008-04-19 Thread Ben Goertzel
Hi guys,

This question is aimed at folks who work on robotics or know a lot about it...

In a F2F chat with Josh Hall today, it occurred to me that it might be
valuable to have an integration of Player/Stage/Gazebo

http://en.wikipedia.org/wiki/Player_Project

with OpenSim,

http://en.wikipedia.org/wiki/OpenSimulator

The goal would be to be able to have an AI system control virtual agents
using detailed motor-control commands, similar to what one uses to
control a physical robot ... yet also have the ability to interact
with these agents in a content-rich virtual world

I thought of poking the OpenSim folks to see if they think this would
be an interesting direction, but figured I'd post here first just to
see if anyone gives me a reason why it's a stupid idea  (I think
there may also be some OpenSim folks on these lists...)

If feasible, this would be of plenty value to OpenCog, Novamente and
other virtual-world AI systems.  I love the richness of online virtual
worlds, yet I'm sick of having to control virtual agents via crude
methods like sending signals from the AI server to the virtual world
that trigger specific, predefined animations.  There seems to be no
logical reason why one can't have precise, robot-simulator-type
control of agents in virtual worlds... though I understand that
integrating OpenSim and Player might involve numerous technical
difficulties...
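
For concreteness, here's the contrast I have in mind, in hypothetical code --
the message names, transport and port are all invented, and this is not
OpenSim's or Player's actual API:

import json
import socket

def send(sock, msg):
    sock.sendall((json.dumps(msg) + "\n").encode())

def canned_animation(sock):
    # status quo: fire a predefined animation by name
    send(sock, {"cmd": "play_animation", "name": "wave"})

def detailed_motor_control(sock):
    # desired: stream joint-angle targets, robot-simulator style
    for step in range(10):
        send(sock, {"cmd": "set_joint",
                    "joint": "right_shoulder_pitch",   # invented joint name
                    "angle_rad": 0.1 * step,
                    "time_ms": 50})

sock = socket.create_connection(("localhost", 9999))   # placeholder server
try:
    canned_animation(sock)
    detailed_motor_control(sock)
finally:
    sock.close()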

Thx
Ben G







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [agi] WHAT ARE THE MISSING CONCEPTUAL PIECES IN AGI?

2008-04-19 Thread Ben Goertzel
Richard,

I promise you I'll take you up on this argument **in detail** sometime
during Summer 2008 after I release the OpenCog conceptual
documentation... which is only about 50-70 hours work from being
ready, but time for such stuff is scant...

ben

>
>  Ed, can you please specify *precisely* what, in the talks at AGI 2008,
> leads you to the conclusion that "we know enough to start building
> impressive intelligences"?
>
>  Some might say that everything shown at AGI 2008 could be interpreted as
> lots of people working on the problem, but that in ten years time those same
> people will come back and report that they are still working on the problem,
> but with no substantial difference to what they showed this year.
>
>  The reason that some people would say this is that if you went back ten
> years, you could find people achieving forms of AI that exhibited no
> *substantial* difference to anything shown at AGI 2008.
>
>  So, I am looking for your concrete reason (not gut instinct, but concrete
> reason) to claim that "we know enough ...etc.".
>
>
>
>  Richard Loosemore
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: Open source (was Re: [agi] The Strange Loop of AGI Funding: now logically proved!)

2008-04-20 Thread Ben Goertzel
Bob...

... and of course, OSS does not contradict paying programmers to write software.

I have no plans to dissolve Novamente LLC, for example ;-p ... we're
actually doing better than ever ...

And, I note that SIAI is now paying 2 programmers (one full time, one
3/5 time) to work on OpenCog specifically ...

And we will have a bunch of students getting paid by Google to code
for OpenCog this summer, under the Google Summer of Code program...

It is certainly true that a paid team of full-time programmers can
address certain sorts of issues faster and more efficiently than a
distributed team of part-timers.  My idea is not to replace the former
with the latter, but rather to make use of both, working toward
closely overlapping goals...

-- Ben G


On Sun, Apr 20, 2008 at 7:49 AM, Bob Mottram <[EMAIL PROTECTED]> wrote:
> Until a true AGI is developed I think it will remain necessary to pay
>  programmers to write programs, at least some of the time.  You can't
>  always rely upon voluntary effort, especially when the problem you
>  want to solve is fairly obscure.
>
>
>
>
>
>
>  On 19/04/2008, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>  >  > Translation: We all (me included) now accept as reasonable that in order
>  >  > to briefly earn a living wage, that we must develop radically new and
>  >  > useful technology and then just give it away.
>  >
>  > ...
>  >  > Steve Richfield
>  >
>  >  The above is obviously a "straw man" statement ... but I think it
>  >  **is** true these days that open-sourcing one's code is a viable way
>  >  to get one's software vision realized, and is not necessarily
>  >  contradictory with making a profit.
>  >
>  >  This doesn't mean that OSS is the only path, nor that it's necessarily
>  >  an easy thing to make work...
>  >
>  >
>  >  -- Ben
>  >
>  >
>
> >  ---
>  >  agi
>  >  Archives: http://www.listbox.com/member/archive/303/=now
>  >  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  >  Modify Your Subscription: http://www.listbox.com/member/?&;
>
>
> >  Powered by Listbox: http://www.listbox.com
>  >
>
>  -------
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Other AGI-like communities

2008-04-23 Thread Ben Goertzel
On Wed, Apr 23, 2008 at 5:21 AM, Joshua Fox <[EMAIL PROTECTED]> wrote:
>
> To return to the old question of why AGI research seems so rare, Samsonovich
> et al. say
> (http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf)
>
> 'In fact, there are several scientific communities pursuing the same or
> similar goals, each unified under their own unique slogan: "machine /
> artificial consciousness", "human-level intelligence", "embodied cognition",
> "situation awareness", "artificial general intelligence", "commonsense
> reasoning", "qualitative reasoning", "strong AI", "biologically inspired
> cognitive architectures" (BICA), "computational consciousness",
> "bootstrapped learning", etc. Many of these communities do not recognize
> each other.'

I believe these various academic subcommunities ARE quite aware of each other.

And I would divide them into two categories:

1)
Those that are concerned with rather specialized approaches to
intelligence, e.g. qualitative reasoning, commonsense reasoning etc.

2)
Those that do not really constitute a coherent research community,
e.g. BICA, human-level AI ... but rather "merely" constitute a few
assorted workshops, journal special issues, etc.

-- Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Other AGI-like communities

2008-04-23 Thread Ben Goertzel
On Wed, Apr 23, 2008 at 11:29 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Ben/Joshua:
>
>  How do you think the AI and AGI fields relate to the embodied & grounded
> cognition movements in cog. sci? My impression is that the majority of
> people here (excluding you) still have only limited awareness of them  - &
> are still operating in total & totally doomed defiance of their findings:

My opinion is that the majority of people here are aware of these
ideas, and consider them unproven speculations not agreeing with their
own intuition ;-)

>  "Grounded cognition rejects traditional views that cognition is computation
>  on amodal symbols in a modular system, independent of
>  the brain's modal systems for perception, action, and introspection.
>  Instead, grounded cognition proposes that modal simulations,
>  bodily states, and situated action underlie cognition."  Barsalou
>
>  Grounded cognition here obviously means not just pointing at things, but
> that all traditional rational operations are, and have to be, supported by
> image-inative simulation in any form of general intelligence.

I wouldn't agree with such a strong statement.  I think the grounding
of ratiocination in image-ination is characteristic of human
intelligence, and must thus be characteristic of any highly human-like
intelligent system ... but, I don't see any reason to believe it's the
ONLY path.

The minds we know or can imagine, almost surely constitute a
teeny-tiny little backwater of the overall space of possible minds ;-)

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Ben Goertzel
On Sat, Apr 26, 2008 at 10:03 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
> In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
>  possible in this world
>  if you apply it not to a hypothetical machine or human being but to the
>  whole universe which can be assumed to be a closed system.

Please consult the works of Marcus Hutter (Universal AI) and Juergen Schmidhuber
(Godel Machine).   These thoughts are not new.

Yes, truly general AI is only possible in the case of infinite
processing power, which is likely not physically realizable.  How much
generality can be achieved with how much processing power is not yet
known -- math hasn't advanced that far yet.
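
To make the infinite-resources point concrete: Hutter's AIXI (stated
roughly here, omitting various conditions) picks each action by an
expectimax over all programs consistent with its history, weighted by
algorithmic probability.  In LaTeX notation:

  a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
        \left( r_k + \cdots + r_m \right)
        \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine and \ell(q) is the length of
program q.  The inner sum ranges over all programs, which is exactly why
this ideal is incomputable and can only be approximated with bounded
resources.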

Humans are not totally general, yet are much more general than any of
the AI systems yet built.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] Richard's four criteria and the Novamente Pet Brain

2008-04-26 Thread Ben Goertzel
Richard,

I've been too busy to participate in this thread, but, now I'll chip
in a single comment,
anyways... regarding the intersection btw your thoughts and Novamente's
current work...

You cited the following 4 criteria,

> > "- Memory.  Does the mechanism use stored information about what it was
> doing fifteen minutes ago, when it is making a decision about what to do
> now?  An hour ago?  A million years ago?  Whatever:  if it remembers, then
> it has memory.
> >
> > "- Development.  Does the mechanism change its character in some way over
> time?  Does it adapt?
> >
> > "- Identity.  Do individuals of a certain type have their own unique
> identities, so that the result of an interaction depends on more than the
> type of the object, but also the particular individuals involved?
> >
> > "- Nonlinearity.  Are the functions describing the behavior deeply
> nonlinear?
> >
> > These four characteristics are enough. Go take a look at a natural system
> in physics, or an engineering system, and find one in which the components
> of the system interact with memory, development, identity and nonlinearity.
> You will not find any that are understood.

Someone else replied:

> > I am quite sure there have been many AI system that have had all four of
> these features and that have worked pretty much as planned and whose
> behavior is reasonably well understood

Actually, the Novamente Pet Brain system that we're now experimenting with,
for controlling virtual dogs and other animals, in virtual worlds, does include
nontrivial

-- memory
-- adaptation/development
-- identity
-- nonlinearity

Each pet has its own memory (procedural, episodic and declarative) and
develops new behaviors, skills and biases over time; each pet has its
own personality and identity; and there is plenty of nonlinearity in
multiple aspects and levels.
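
Just to make "all four at once" concrete, here is a toy element in
Python, which I'm making up on the spot for illustration (it is not
Novamente's actual Atom code), in which every single interaction
involves memory, development, identity and nonlinearity together:

import math, random

class Element:
    def __init__(self, element_id):
        self.id = element_id                 # identity
        self.history = []                    # memory of past interactions
        self.sensitivity = random.random()   # a trait that develops over time

    def interact(self, other, signal):
        self.history.append((other.id, signal))    # memory
        self.sensitivity += 0.01 * signal           # development/adaptation
        bias = 1.0 if other.id > self.id else -1.0  # identity-dependent outcome
        return math.tanh(self.sensitivity * signal * bias)  # nonlinearity

A population of such elements, wired together, will wander off on a
history- and identity-dependent trajectory -- which is the point.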

Yet, this is really a pretty simplistic AI system (though built in an
architecture with grander ambitions and potential), and we certainly
DO understand the system's behavior to a reasonable level -- though we
can't predict exactly what any one pet will do in any given situation;
we just have to run the system and see.

I agree that the above four features, combined, do lead to a lot of
complexity in the "complex systems" sense.  However, I don't agree
that this complexity is so severe as to render implausible an
intuitive understanding, from first principles, of the system's
qualitative large-scale behavior based on the details of its
construction.  It's true we haven't done the math to predict the
system's qualitative large-scale behavior rigorously; but as system
designers and parameter tuners, we can tell how to tweak the system to
get it to generally act in certain ways.

And it really seems to me that the same sort of situation will hold
when we go beyond virtual pets to more generally intelligent virtual
agents based on the same architecture.

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES---Mark's defense of falsehood

2008-04-26 Thread Ben Goertzel
I believe the monsters in the video game Black & White also fulfilled Richard's
criteria ...

On Sat, Apr 26, 2008 at 1:53 PM, Russell Wallace
<[EMAIL PROTECTED]> wrote:
> On Sat, Apr 26, 2008 at 6:37 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>  >  OK.  Name these systems and their successes.  PROVE Richard's statement
>  > incorrect.  I'm not seeing anyone responsible doing that.
>
>  I don't know if I count as someone responsible :) but I named two
>  (TD-Gammon and spam filtering); I can name some more if you like.
>
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES---Mark's defense of falsehood

2008-04-26 Thread Ben Goertzel
They are monsters that learn new behaviors via imitation, and that are
controlled internally by adaptive neural nets using a form of Hebbian
learning.

Nothing that awesome but they do seem to fulfill Richard's criteria.
My friend Jason Hutchens, whose chat bots won the Loebner prize at
least once, wrote some of their AI code.

Novamente's Pet Brain is more sophisticated already...
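
(For anyone unfamiliar: the core of a Hebbian rule really is tiny.  A
generic textbook sketch in Python, certainly not Black & White's actual
code:

def hebbian_update(w, pre, post, eta=0.01, decay=0.001):
    # strengthen a connection in proportion to correlated pre/post
    # activity, with a small decay term so weights stay bounded
    return w + eta * pre * post - decay * w

The adaptivity in such creatures comes from running many such updates as
the creature observes and imitates.)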

ben g

On Sat, Apr 26, 2008 at 2:30 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> Ben,
>
>Could you elucidate on this further (or provide references).  Is it worth
> getting Black & White if you're not a big gaming person?
>
>  - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]>
>
>  To: 
>  Sent: Saturday, April 26, 2008 2:14 PM
>  Subject: **SPAM** Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S
> COMPLEXITY THEORIES---Mark's defense of falsehood
>
>
>
> >
> > I believe the monsters in the video game Black & White also fulfilled
> Richard's
> > criteria ...
> >
> > On Sat, Apr 26, 2008 at 1:53 PM, Russell Wallace
> > <[EMAIL PROTECTED]> wrote:
> >
> > >
> > > On Sat, Apr 26, 2008 at 6:37 PM, Mark Waser <[EMAIL PROTECTED]>
> wrote:
> > >  >  OK.  Name these systems and their successes.  PROVE Richard's
> statement
> > >  > incorrect.  I'm not seeing anyone responsible doing that.
> > >
> > >  I don't know if I count as someone responsible :) but I named two
> > >  (TD-Gammon and spam filtering); I can name some more if you like.
> > >
> > >
> > >
> > >  ---
> > >  agi
> > >  Archives: http://www.listbox.com/member/archive/303/=now
> > >  RSS Feed: http://www.listbox.com/member/archive/rss/303/
> > >  Modify Your Subscription: http://www.listbox.com/member/?&;
> > >
> > >  Powered by Listbox: http://www.listbox.com
> > >
> > >
> >
> >
> >
> >
> > --
> > Ben Goertzel, PhD
> > CEO, Novamente LLC and Biomind LLC
> > Director of Research, SIAI
> > [EMAIL PROTECTED]
> >
> > "If men cease to believe that they will one day become gods then they
> > will surely become worms."
> > -- Henry Miller
> >
> >
> > ---
> > agi
> > Archives: http://www.listbox.com/member/archive/303/=now
> > RSS Feed: http://www.listbox.com/member/archive/rss/303/
> > Modify Your Subscription: http://www.listbox.com/member/?&;
> >
> > Powered by Listbox: http://www.listbox.com
> >
> >
>
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription:
> http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


WARNING -- LET'S KEEP THE LIST CIVIL PLEASE ... was Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Ben Goertzel
Ummm... just a little note of warning from the list owner.

Tintner wrote:
> > So I await your geometric solution to this problem - (a mere statement of
> principle will do) - with great interest. Well, actually no. Your answer is
> broadly predictable - you 1) won't have any idea here  2) will have nothing
> to say to the point and  3) be, as usual, all bark and no bite - all insults
> and no ideas.

Waser wrote:
>  Nice ad hominem.  Asshole.

Uh, no.

Mark, you've been a really valuable contributor to this list for a long period
of time.

But, this sort of name-calling is just not apropos on this list.
Don't do it anymore.

Thanks
Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-26 Thread Ben Goertzel
Richard,

>  How does this relate to the original context in which I cited this list
>  of four characteristics?  It loks like your comments are completely outside
> the original context, so they don't add anything of relevance.

I read the thread and I think my comments are relevant

>  Let me bring you up to speed:

>  1) The mere presence of these four characteristics *somewhere* in a
>  system has nothing whatever to do with the argument I presented (this
>  was a distortion introduced by Ed Porter in one of his many fits of
>  misunderstanding).  Any fool could put together a non-complex system
>  with, for example, four distinct modules that each possessed one of
>  those four characteristics.  So what?  I was not talking about such
>  trivial systems, I was talking about systems in which the elements of
>  the system each interacted with the other elements in a way that
>  included these four characteristics.

This last sentence is just not very clearly posed.

The four aspects mentioned were

-- memory
-- adaptation/development
-- identity
-- nonlinearity

In the Pet Brain,

-- memory is a dynamic process associated with a few coupled nonlinear
dynamics acting on a certain data store

-- adaptation/development is a process that involves a number of dynamics
acting on memory

-- the identity of a pet is associated with certain specified parameters,
but also includes self-organizing patterns in the memory that are guided
by these parameters and other processes

-- nonlinearity pervades all major aspects of the system, and the
system as a whole

>  So when you point to the fact that "somewhere" in Novamente (in a single
>  'pet' brain) you can find all of these, it has no bearing on the
>  argument I presented.  I was principally referring to these
>  characteristics appearing at the symbol level (and symbol-manipulation
>  level), not the 'pet brain' level.  You can find as much memory,
>  identity, etc etc as you like, in other sundry parts of Novamente, but
>  it won't make any difference to the place where I was pointing to it.

I'm not sure how you're defining the term "symbol."

If you use the classical Peircean definition (symbol as contrasted to
icon and index), then indeed the four aspects you mentioned do occur in
the Pet Brain on the symbol level.

>  2)  Even if you do come back to me and say that the symbols inside
>  Novamente all contain all four characteristics, I can only say "so what"
>  a second time ;-).  The question I was asking when I laid down those
>  four characteristics was "How many physical systems do you know of in
>  which the system elements are governed by a mechanism that has all four
>  of these, AND where the system as a whole has a large-scale behavior
>  that has been mathematically proven to arise from the behaviors of the
>  elements of the system?"
>
>  The answer to that question (I'll save you the trouble) is 'zero'.

But why do you place so much emphasis on mathematical proof?

I don't think that mathematical proof is needed for creating an AGI system.

(And I say this as a math PhD, who enjoys math more than pretty much any
other pursuit...)

Formal software verification is still a crude science, so that very few of the
software programs we utilize have been (or could tractably be) proven to
fulfill their specifications.  We create software programs based on piecemeal
rigorous justifications of fragments of the software, combined with intuitive
understanding of the whole.

Furthermore, as a mathematician I'm acutely aware of physicists' often low level
of mathematical rigor.  As a single example, Feynman integrals in particle
physics were used by physicists for decades, to do real calculations predicting
the outcomes of real experiments with great accuracy, before finally some
mathematicians came along and provided them with a rigorous mathematical
grounding.

>  The inference to be made from that fact is that anyone who does put
>  together a system like  -  like, e.g., the fearless Mr. B. Goertzel  -
>  is taking quite a bizarre and extraordinary position, if he says that he
>  alone, of all people, is quite confident that his particular system,
>  unlike all the others, is quite understandable.

"Understandable" is a vague term.  In complex systems it's typical that
one can predict statistically properties of the whole system's behavior, yet
can't predict the details.  So a complete understanding is intractable but
a partial, useful qualitative understanding is more feasible to come by.
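
For instance (a standard toy example, nothing Novamente-specific): the
chaotic logistic map defies detailed prediction, yet its long-run
statistics are perfectly stable:

def logistic_trajectory(x0, steps=100000, r=4.0):
    xs, x = [], x0
    for _ in range(steps):
        x = r * x * (1.0 - x)   # the chaotic logistic map at r = 4
        xs.append(x)
    return xs

a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3000001)          # tiny perturbation
print(a[50], b[50])                         # trajectories long since diverged
print(sum(a)/len(a), sum(b)/len(b))         # long-run means agree (~0.5)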

Also, I note there's a difference btw an engineered and a natural system,
in terms of the degree of inspection one can achieve of the system's internal
details.

I strongly suspect that in 10-20 years neuroscientists will arrive at a decent
qualitative explanation of how the lower-level mechanisms of the brain generate
the higher-level patterns of the human mind.  The reason we haven't yet is not
that there is some insuperable "complexity barrier", but rather that we don't
yet have measuring instruments capable of gathering sufficiently detailed data
about the brain's internals.

Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:
>
>   Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54
>
>
>  > Yes, truly general AI is only possible in the case of infinite
>  > processing power, which is
>  > likely not physically realizable.
>  > How much generality can be achieved with how much
>  > Processing power, is not yet known -- math hasn't advanced that far yet.
>
>
>  My point is not only that  'general intelligence without any limits' would
>  need infinite resources of time and memory.
>  This is trivial of course. What I wanted to say is that any intelligence has
>  to be narrow in a sense if it wants to be powerful and useful. There must
>  always be strong assumptions of the world deep in any algorithm of useful
>  intelligence.

This is a consequence of the "No Free Lunch" theorem, essentially, isn't it?

http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

With infinite resources you use exhaustive search (like AIXI or the
Godel Machine) ...
with finite resources you can't afford it, so you need to use (explicitly or
implicitly) search that is guided by some inductive biases.

See Eric Baum's book "What Is Thought?" for much discussion on genetically
encoded inductive bias and its role in AI.
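
A toy illustration of the tradeoff (my own sketch, nothing to do with
AIXI's internals): on a structured fitness landscape, a searcher with a
locality bias beats unbiased random sampling on the same budget of
evaluations, while NFL guarantees the two tie when averaged over *all*
possible landscapes:

import random

N = 50  # bitstring length; fitness = number of 1s ("onemax")

def fitness(bits):
    return sum(bits)

def random_search(evals=200):
    return max(fitness([random.randint(0, 1) for _ in range(N)])
               for _ in range(evals))

def hill_climb(evals=200):
    bits = [random.randint(0, 1) for _ in range(N)]
    best = fitness(bits)
    for _ in range(evals - 1):
        i = random.randrange(N)
        bits[i] ^= 1                 # inductive bias: try a *local* move
        f = fitness(bits)
        if f >= best:
            best = f                 # keep the flip
        else:
            bits[i] ^= 1             # undo it
    return best

print(random_search(), hill_climb())  # hill_climb reliably scores higher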

-- Ben G

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Richard,

>  Question:  "How many systems do you know of in which the system elements
> are governed by a mechanism that has all four of these, AND where the system
> as a whole has a large-scale behavior that has been shown (by any method of
> "showing" except detailed simulation of the system) to arise from the
> behaviors of the elements of the system?  I would like an example of any
> case of a complex system in which there are large numbers of individual
> elements where each element has (a) memory for recent events, (b) adaptation
> and development of its character over long periods of time, where that
> adaptation is sensitive to influences from other elements, (c) an identity,
> so that what one element does to another will depend crucially on which
> element it is, and (d) nonlinearity in the mechanisms that determine how the
> elements relate and adapt."

I don't really understand your definition of "identity" in the above, could you
clarify, preferably with examples?

>  Show me any non-trivial system, whatsoever, in which there is general
> agreement that all four of these characteristics are present in the
> interacting elements, and where someone figured out ahead of time what the
> overall behavior of the system was going to be, given only knowledge of the
> element mechanisms, and without simulating the whole system and looking at
> the simulation.
>
>  There does not have to be a mathematical proof, just some derivation that
> allows me to see an example of someone predicting the behavior from the
> mechanisms.

I'm not sure what you mean by "predicting the behavior."

With the Pet Brain, which does seem to fulfill the criteria you mention
above (pending my new confusion about your meaning of "identity"), taking
the Atoms in the Novamente AtomTable as the "elements" in your description:
one cannot predict the precise course of development of the system ... we
can't predict what any one pet will do in response to its environment ...
but we do understand what sorts of behaviors the pets are capable of, based
on a general understanding of how the system and its dynamics work...

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
>  No:  I am specifically asking for some system other than an AGI system,
> because I am looking for an external example of someone overcoming the
> complex systems problem.

The specific criteria you've described would seem to apply mainly to living
systems ... and we just don't have that much knowledge of the internals of these
yet, due to data-collection issues...

Certainly, the failure of the Biosphere 2 experiment is evidence in your
favor.  There, the scientists failed to predict basic high-level properties
of a pretty simple closed ecosystem, based on their knowledge of the parts.

However, it was not an engineered ecosystem, and their knowledge of the parts
was quite limited compared to our knowledge of the parts of a software system.

In short, my contention is that engineering something, even if it's a
complex system, places one in a fundamentally different position than if
one is studying a natural system, simply because one does not understand
the makeup of the natural system that well, due to limitations in our
current measuring instruments.

Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 5:51 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
> > Engineering in the real world is nearly always a mixture of rigor and
> > intuition.  Just like analysis of complex biological systems is.
> >
>
>  AIEe! NO!  You are clearly not an engineer because a true engineer
> just wouldn't say this.
>
>  Engineering should *NEVER* involve intuition.  Engineering does not require
> exact answers as long as you have error bars but the second that you revert
> to intuition and guesses, it is *NOT* engineering anymore.

Well, we may be using the word "intuition" differently.

I'll give a very simple example of intuition, based on the only
engineering paper I ever published, which was a civil engineering paper.
What we did was use statistics to predict how likely it was (based on
several physical measurements) that the soil under a house was going to
settle over the next few decades (causing the house to sink irregularly).
This formula we derived is now used to determine where to build houses in
the outskirts of Las Vegas, and what kind of foundation to use for the
houses.

Not too interesting, but rigorous.

However, one wouldn't bother to use this formula if the soil was too
different in composition from the soil around Vegas.  So in reality the
civil engineer uses some intuition to decide whether the soil is close
enough to the right kind of soil to use our formula.

Now this *could* be made more rigorous, too ... in principle ... but in practice
it isn't.

And so, maybe some houses fall down ;-)

But not many do.  The combination of rigorous formulas applying to restrictive
cases, together with intuition telling you where to apply what formulas, works
OK.
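
If it helps, here is a toy of the kind of formula I mean.  The feature
names and coefficients below are invented for this email -- they are NOT
the actual published model:

import math

def settlement_risk(moisture, plasticity_index, dry_density):
    # hypothetical logistic regression on a few physical soil measurements
    z = -2.0 + 3.1 * moisture + 0.8 * plasticity_index - 1.5 * dry_density
    return 1.0 / (1.0 + math.exp(-z))  # probability of significant settlement

The fitted formula is the rigorous part; the intuition is in judging
whether a new site's soil is close enough to the data the formula was
fitted on for it to apply at all.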

Anyway this is a total digression, and I'm done w/ recreational
emailing for the day!

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
I don't agree with Mark Waser that we can "engineer the complexity out
of intelligence."

I agree with Richard Loosemore that intelligent systems are
intrinsically complex systems in the Santa Fe Institute type sense

However, I don't agree with Richard as to the *extent* of the
complexity problem.  I think he overestimates how hard it will be to
roughly estimate the behavior of AGI systems based on their designs
and measurement of their components.  I think it will be easier to do
this with AGI systems than with natural systems, not because we can
engineer the complexity out of the systems, but because we (as the
designers) understand the systems better, and can measure the systems
more thoroughly...

-- Ben G

On Sun, Apr 27, 2008 at 5:44 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
>
>
> > To the best of my knowledge, nobody has *ever* used "intuitive
> > understanding" to second-guess the stability of an artificial complex
> > system in which those four factors were all present in the elements in a
> > tightly coupled way.
> >
>
>  Um, aren't those exactly the rocks that BioMind foundered on?
>
>
>
> > So that is all we have as a reply to the complex systems problem:
> > engineers saying that they think they can just use "intuitive
> > understanding" to get around it.
> >
>
>  Again, not this engineer . . . . I say that we should engineer the
> complexity out of it.
>
>
>
> > Rots of ruck, as Rastro would say.
> >
>
>  We don't need no stinkin' luck . . . . we've got foresight, planning, and
> engineering
>
>
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription:
> http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: **SPAM** Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Rules of thumb are not intuition ... but applying them requires
intuition... unlike applying rigorous methods...

However even the most rigorous science requires rules of thumb (hence
intuition) to do the problem set-up before the calculations start...

ben

On Sun, Apr 27, 2008 at 6:56 PM, Mark Waser <[EMAIL PROTECTED]> wrote:
> >
> > >  Engineering should *NEVER* involve intuition.  Engineering does not
> require
> > > exact answers as long as you have error bars but the second that you
> revert
> > > to intuition and guesses, it is *NOT* engineering anymore.
> > >
> >
> > Well, we may be using the word "intuition" differently.
> >
>
>  Given your examples, we are.
>
>
> >
> > I'll give a very simple example of intuition, based on the only
> > Not too interesting, but rigorous.
> >
>
>  Yeah.  Generally if it's rigorous, it's not considered intuition.
>
>
> > However, one wouldn't bother to use this formula if the soil was too
> different
> > in composition from the soil around Vegas.  So in reality the civil
> > engineer uses
> > some intuition to decide whether the soil is close enough to the right
> > kind of soil,
> > to use our formula.
> >
> > Now this *could* be made more rigorous, too ... in principle ... but in
> practice
> > it isn't.
> >
>
>  I would have phrased this as "The civil engineer uses some simple rules of
> thumb . . . . " which tend to be pretty well established and where they do
> and do not apply also tend to be pretty well established too.  I've never
> really heard the word intuition used to describe this.
>
>
> > And so, maybe some houses fall down ;-)
> > But not many do.  The combination of rigorous formulas applying to
> restrictive
> > cases, together with intuition telling you where to apply what formulas,
> works
> > OK.
> >
>
>  Yeah, you seem to be using the word intuition where I use the words "rules
> of thumb".  An interesting distinction and one that we probably should both
> remember . . . .
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription:
> http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: **SPAM** Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
>  I said and repeat that we can "engineer the complexity out of intelligence"
> in the Richard Loosemore sense.
>  I did not say and do not believe that we can "engineer the complexity out
> of intelligence" in the Santa Fe Institute sense.

OK, gotcha...

Yeah... IMO, complexity in the sense you ascribe to Richard was never there
in intelligence in the first place ;-)

ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Actually, I have to clarify that my knowledge of this totally digressive
topic is about 12 years obsolete.  Maybe it's all done differently now...

>  However, one wouldn't bother to use this formula if the soil was too 
> different
>  in composition from the soil around Vegas.  So in reality the civil
>  engineer uses
>  some intuition to decide whether the soil is close enough to the right
>  kind of soil,
>  to use our formula.
>
>  Now this *could* be made more rigorous, too ... in principle ... but in 
> practice
>  it isn't.
>
>  And so, maybe some houses fall down ;-)
>
>  But not many do.  The combination of rigorous formulas applying to 
> restrictive
>  cases, together with intuition telling you where to apply what formulas, 
> works
>  OK.
>
>  Anyway this is a total digression, and I'm done w/ recreational
>  emailing for the day!
>
>  ben
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


[agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Ben Goertzel
Now this looks like a fairly AGI-friendly approach to controlling
animated characters ... unfortunately it's closed-source and
proprietary though...

http://en.wikipedia.org/wiki/Euphoria_%28software%29


ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Ben Goertzel
They are using equational models to simulate the muscles and bones
inside the body...

On Thu, May 1, 2008 at 12:05 PM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> So what are the principles that enable animated characters and materials
> here to react/move in individual continually different ways, where previous
> characters reacted typically and consistently?
>
>  Ben Now this looks like a fairly AGI-friendly approach to controlling
>
> >
> >
> >
> > animated characters ... unfortunately it's closed-source and
> > proprietary though...
> >
> > http://en.wikipedia.org/wiki/Euphoria_%28software%29
> >
> >
> > ben
> >
> > ---
> > agi
> > Archives: http://www.listbox.com/member/archive/303/=now
> > RSS Feed: http://www.listbox.com/member/archive/rss/303/
> > Modify Your Subscription: http://www.listbox.com/member/?&;
> > Powered by Listbox: http://www.listbox.com
> >
> >
> >
> >
> >
>
>
>  ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription:
> http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com


Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Ben Goertzel
Actually, it seems their technique is tailor-made for imitative learning

If you gathered data about how people move in a certain context, using
motion capture, then you could use their GA/NN stuff to induce a
program that would generate data similar to the motion-captured data.

This would then be more generalizable than using the raw motion-capture data
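
A minimal sketch of what I mean, with everything invented for
illustration (this is certainly not NaturalMotion's code, and the
"controller" below is a two-parameter stand-in for a neural net running
in a physics sim):

import random

mocap = [0.0, 0.2, 0.5, 0.7, 0.6, 0.3]   # toy target joint angles over time

def rollout(params):
    a, b = params                          # stand-in controller: linear ramp
    return [a + b * t for t in range(len(mocap))]

def fitness(params):
    traj = rollout(params)
    return -sum((x - y) ** 2 for x, y in zip(traj, mocap))  # match the mocap

def evolve(pop_size=50, generations=100):
    pop = [(random.uniform(-1, 1), random.uniform(-1, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]     # keep the best fifth
        pop = [(p[0] + random.gauss(0, 0.05), p[1] + random.gauss(0, 0.05))
               for p in random.choices(parents, k=pop_size)]  # mutate
    return max(pop, key=fitness)

print(evolve())

The induced controller can then generalize to nearby situations, where
raw playback of the mocap data could not.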

-- Ben

On Thu, May 1, 2008 at 2:11 PM, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> IMHO, Euphoria shows that pure GA approaches are lame.
>  More details here:
>  http://aigamedev.com/editorial/naturalmotion-euphoria
>
>
>
>  On Thu, May 1, 2008 at 5:39 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>  > Now this looks like a fairly AGI-friendly approach to controlling
>  >  animated characters ... unfortunately it's closed-source and
>  >  proprietary though...
>  >
>  >  http://en.wikipedia.org/wiki/Euphoria_%28software%29
>  >
>  >
>  >  ben
>  >
>
> >  ---
>  >  agi
>  >  Archives: http://www.listbox.com/member/archive/303/=now
>  >  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  >  Modify Your Subscription: http://www.listbox.com/member/?&;
>  >  Powered by Listbox: http://www.listbox.com
>  >
>
>
>
> ---
>  agi
>  Archives: http://www.listbox.com/member/archive/303/=now
>  RSS Feed: http://www.listbox.com/member/archive/rss/303/
>  Modify Your Subscription: http://www.listbox.com/member/?&;
>  Powered by Listbox: http://www.listbox.com
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244&id_secret=101455710-f059c4
Powered by Listbox: http://www.listbox.com

