Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Ben Goertzel
Your inference trajectory assumes that "cybersex" and "STD" are
probabilistically independent within "sex", but this is not the case.

PLN would make this error using the independence-assumption-based term logic
deduction rule; but in practice this rule is supposed to be overridden in
cases of known dependencies.
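
Roughly, the rule in question computes the following (a minimal Python sketch
of the independence-assumption-based deduction strength formula; variable
names are illustrative, and PLN's full truth-value machinery is more
elaborate):

    def deduction_strength(sAB, sBC, sB, sC):
        # Term logic deduction A->B, B->C |- A->C, assuming C is
        # conditionally independent of A within B and within ~B.
        if sB >= 1.0:
            return sBC  # degenerate case: everything is a B
        return sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)

    # In YKY's chain, A = "has many cybersex partners", B = "has many sex
    # partners", C = "gets an STD": high sAB and sBC force a high sAC,
    # which is exactly the erroneous conclusion.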

ben

On Mon, Jul 28, 2008 at 10:04 AM, YKY (Yan King Yin) 
[EMAIL PROTECTED] wrote:

 Here is an example of a problematic inference:

 1.  Mary has cybersex with many different partners
 2.  Cybersex is a kind of sex
 3.  Therefore, Mary has many sex partners
 4.  Having many sex partners -> high chance of getting STDs
 5.  Therefore, Mary has a high chance of STDs

 What's wrong with this argument?  It seems that a general rule is
 involved in step 4, and that rule can be refined with some
 qualifications (ie, it does not apply to all kinds of sex).  But the
 question is, how can an AGI detect that an exception to a general rule
 has occurred?

 Or, do we need to explicitly state the exceptions to every rule?

 Thanks for any comments!
 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Ben Goertzel
On Mon, Jul 28, 2008 at 11:10 AM, YKY (Yan King Yin) 
[EMAIL PROTECTED] wrote:

 On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  Your inference trajectory assumes that cybersex and STD are
 probabilistically independent within sex but this is not the case.

 We only know that:
   P(sex | cybersex) = high
   P(STD | sex) = high

 If we're also given that
   P(STD | cybersex) = 0
 then the question is moot -- it is already answered.

 It is a problem because we're not given the 3rd piece of information...



Yes, if there is no other background knowledge that is relevant, then this
error is unavoidable.

If, however, indirect background knowledge is available, such as the fact that
"cyber-" often refers to things occurring online, then a reasoning engine
may be able to incorporate this to guess that

P(STD | cybersex)

is small, which will then (if it is a confident conclusion) override the
erroneous independence-assumption-based inference you cite.
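
As a toy illustration of such an override (a sketch only; the blend below is
a generic confidence-weighted revision, not PLN's actual revision rule):

    def blend(strength_a, conf_a, strength_b, conf_b):
        # Confidence-weighted merge of two competing estimates of the
        # same probability; the more confident source dominates.
        w = conf_a / (conf_a + conf_b)
        return w * strength_a + (1 - w) * strength_b

    # Background-knowledge estimate of P(STD | cybersex): small strength,
    # higher confidence.  Independence-based chaining: high strength,
    # modest confidence.
    print(blend(0.05, 0.9, 0.8, 0.3))  # ~0.24, pulled toward the small value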



  PLN would make this error using the independence-assumption-based term
 logic deduction rule; but in practice this rule is supposed to be overridden
 in cases of known dependencies.


 Why doesn't PLN use Pei-Wang-style confidence?


PLN uses confidence values within its truth values, with a different
underlying semantics and math than NARS; but that doesn't help much with the
above problem...

There is a confidence-penalty used in PLN whenever an independence
assumption is invoked, but it's not that severe a penalty -- nor
should it be.  When additional evidence is not available, making an
independence assumption is appropriate, even though sometimes it will turn
out to be wrong.
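
To make the penalty concrete, a minimal sketch (the discount factor is a
made-up placeholder, not PLN's actual formula):

    INDEPENDENCE_DISCOUNT = 0.9  # deliberately mild

    def conclusion_confidence(premise_confidences, used_independence):
        # A conclusion is no more confident than its weakest premise;
        # invoking an independence assumption costs a further mild discount.
        c = min(premise_confidences)
        return c * INDEPENDENCE_DISCOUNT if used_independence else c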

Ben





Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Ben Goertzel
On Mon, Jul 28, 2008 at 12:14 PM, YKY (Yan King Yin) 
[EMAIL PROTECTED] wrote:

 On 7/28/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 
  PLN uses confidence values within its truth values, with a different
 underlying semantics and math than NARS; but that doesn't help much with the
 above problem...
 
  There is a confidence-penalty used in PLN whenever an independence
 assumption is invoked, but it's not that severe a penalty -- nor
 should it be.  When additional evidence is not available, making an
 independence assumption is appropriate, even though sometimes it will turn
 out to be wrong.

 Even if you assume independence, you'll have 2 distinct paths leading
 to contradicting conclusions.  So you need some way to pick a winner.


Yes, in PLN we use a mechanism similar to NARS's there, called the Rule of
Choice.
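
Schematically, such a choice rule does something like this (a sketch; the
actual PLN Rule of Choice and the NARS choice rule differ in detail):

    def choose(tv_a, tv_b):
        # Each truth value is (strength, confidence).  Given two competing
        # conclusions about the same statement, keep the more confident one.
        return max(tv_a, tv_b, key=lambda tv: tv[1])

    # Independence-based path: high strength, penalized confidence.
    # Background-knowledge path: low strength, higher confidence.
    print(choose((0.8, 0.27), (0.05, 0.6)))  # -> (0.05, 0.6)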




 I think Pei Wang's definition of confidence is good and can solve this
 example.  I'll check out your book when it's out =)

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome  - Dr Samuel Johnson





Re: [agi] need some help with loopy Bayes net

2008-07-04 Thread Ben Goertzel
YKY,

PLN, like NARS, uses inference trails.

We have tried omitting them, though, and found interesting results:
errors do propagate, but not boundlessly, and network truth values are
still meaningful.

Loopy Bayes nets basically just "live with" the circularity and rely
on math properties of the Bayes net propagation rules to remove the
possibility of error.  Nice stuff, but it only works under fairly
special assumptions.

Traditional Bayes nets just assume a hierarchical structure and ignore
the conditional probs not in accordance w/ the hierarchy, getting at
them only indirectly via the ones in the hierarchy.  This is why
structure learning is so important in Bayes nets.
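
For concreteness, a minimal sketch of what an inference trail buys you
(illustrative data structures and a placeholder revision rule, not the PLN or
NARS implementations):

    def merge(tv1, tv2):
        # Placeholder revision: average strengths, accumulate confidence.
        return ((tv1[0] + tv2[0]) / 2, min(1.0, tv1[1] + tv2[1]))

    class Conclusion:
        def __init__(self, name, tv, trail=frozenset()):
            self.name, self.tv = name, tv
            self.trail = trail  # premises already "spent" on this value

    def revise(c, premise_name, premise_tv):
        # The trail blocks circular or repeated evidence from being counted
        # twice -- the circularity that loopy propagation must live with.
        if premise_name in c.trail:
            return c
        return Conclusion(c.name, merge(c.tv, premise_tv),
                          c.trail | {premise_name})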

-- Ben


On Fri, Jul 4, 2008 at 4:10 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 I'm considering nonmonotonic reasoning using Bayes net, and got stuck.

 There is an example on p. 483 of J. Pearl's 1988 book PRIIS (Probabilistic Reasoning in Intelligent Systems):

 Given:
 birds can fly
 penguins are birds
 penguins cannot fly

 The desiderata is to conclude that penguins are birds, but penguins
 cannot fly.

 Pearl translates the KB to:
   P(f | b) = high
   P(f | p) = low
   P(b | p) = high
 where "high" and "low" mean arbitrarily close to 1 and 0, respectively.

 If you draw this on paper you'll see a triangular loop.

 Then Pearl continues to deduce:

 Conditioning P(f | p) on both b and ~b,
    P(f | p) = P(f | p,b) P(b | p) + P(f | p,~b) [1 - P(b | p)]
             >= P(f | p,b) P(b | p)

 Thus
    P(f | p,b) <= P(f | p) / P(b | p), which is close to 0.

 Thus Pearl concludes that, given penguin and bird, "fly" is not true.

 But I found something wrong here.  It seems that the Bayes net is
 loopy, and we can conclude that "fly", given penguin and bird, can be
 either 0 or 1.  (The loop is somewhat symmetric.)
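
 Plugging numbers into Pearl's inequality makes the bound easy to inspect
 (the values below are arbitrary stand-ins for "high" and "low"):

     P_f_given_p = 0.01   # penguins fly: low
     P_b_given_p = 0.99   # penguins are birds: high

     # From P(f | p) >= P(f | p,b) P(b | p):
     upper_bound = P_f_given_p / P_b_given_p
     print(upper_bound)   # ~0.0101 -- P(f | p,b) is forced close to 0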

 Ben, do you have a similar problem dealing with nonmonotonicity using
 probabilistic networks?

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be
first overcome  - Dr Samuel Johnson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
But, we don't need to be able to predict the thoughts of an AGI system
in detail, to be able to architect an AGI system that has thoughts...

I agree that predicting the thoughts of an AGI system in detail is
going to be pragmatically impossible ... but I don't agree that
predicting **which** AGI designs can lead to the emergent properties
corresponding to general intelligence, is pragmatically impossible to
do in an analytical and rational way ...

Similarly, I could engineer an artificial weather system displaying
hurricanes, whirlpools, or whatever phenomena you ask me for -- based
on my general understanding of the Navier-Stokes equation.   Even
though I could not, then, predict the specific dynamics of those
hurricanes, whirlpools, etc.

We lack the equivalent of the Navier-Stokes equation for thoughts.
But we can still arrive at reasonable analytic understandings of
appropriately constrained and formalised AGI designs, with the power
to achieve general intelligence...

ben g

On Mon, Jun 30, 2008 at 1:55 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Hi Ben,

 I don't think the flaw you have identified matters to the main thrust of 
 Richard's argument - and if you haven't summarized Richard's position 
 precisely, you have summarized mine. :-]

  You're saying the flaw in that position is that prediction of complex
  networks might merely be a matter of computational difficulty, rather than
  fundamental intractability. But any formally defined complex system is
  going to be computable in principle. We can always predict such a system with
  infinite computing power. That doesn't make it tractable, or open to
  understanding, because obviously real understanding can't be dependent on
  infinite computing power.

  The question of fundamental intractability comes down to the degree to
  which we can make predictions about the global level from the local. And
  let's hope there's progress to be made there, because each discovery will make
  life easier for those of us who would try to understand something like
  the brain or the body or even just the cell. Or even just folding proteins!

 But it seems pretty obvious to me anyway that we will never be able to 
 predict the weather with any precision without doing an awful lot of 
 computation.

 And what is our mind but the weather in our brains?

 Terren

 --- On Sun, 6/29/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 From: Ben Goertzel [EMAIL PROTECTED]
 Subject: Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE 
 IN AGI
 To: agi@v2.listbox.com
 Date: Sunday, June 29, 2008, 10:44 PM
 Richard,

 I think that it would be possible to formalize your
 complex systems argument
 mathematically, but I don't have time to do so right
 now.

   Or, then again ... perhaps I am wrong:  maybe you
 really *cannot*
  understand anything except math?

 It's not the case that I can only understand math --
 however, I have a
 lot of respect
 for the power of math to clarify disagreements.  Without
 math, arguments often
 proceed in a confused way because different people are
 defining terms
  differently,
 and don't realize it.

 But, I agree math is not the only kind of rigor.  I would
 be happy
 with a very careful,
 systematic exposition of your argument along the lines of
 Spinoza or the early
 Wittgenstein.  Their arguments were not mathematical, but
 were very rigorous
 and precisely drawn -- not slippery.

  Perhaps you have no idea what the actual
  argument is, and that has been the problem all along?
 I notice that you
  avoided answering my request that you summarize your
 argument against the
  complex systems problem ... perhaps you are just
 confused about what the
  argument actually is, and have been confused right
 from the beginning?

 In a nutshell, it seems you are arguing that general
 intelligence is
 fundamentally founded
 on emergent properties of complex systems, and that
 it's not possible for us to
 figure out analytically how these emergent properties
 emerge from the
 lower-level structures
 and dynamics of the complex systems involved.   Evolution,
 you
 suggest, figured out
 some complex systems that give rise to the appropriate
 emergent
 properties to produce
 general intelligence.  But evolution did not do this
 figuring-out in
 an analytical way, rather
 via its own special sort of directed trial and
 error.   You suggest
 that to create a generally
 intelligent system, we should create a software framework
 that makes
 it very easy to
 experiment with  different sorts of complex systems, so
 that we can
 then figure out
 (via some combination of experiment, analysis, intuition,
 theory,
 etc.) how to create a
 complex system that gives rise to the emergent properties
 associated
 with general
 intelligence.

 I'm sure the above is not exactly how you'd phrase
 your argument --
 and it doesn't
 capture all the nuances -- but I was trying to give a
 compact and approximate
 formulation.   If you'd like to give

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
I agree that all designed systems have limitations, but I also suggest
that all evolved systems have limitations.

This is just the no free lunch theorem -- in order to perform better
than random search at certain optimization tasks, a system needs to
have some biases built in, and these biases will cause it to work
WORSE than random search on some other optimization tasks.

No AGI based on finite resources will ever be **truly** general, be it
an engineered or an evolved system.

Evolved systems are far from immune to running into dead ends ...
their adaptability is far from infinite ... the evolutionary process
itself may be endlessly creative, but in that sense so may be the
self-modifying process of an engineered AGI ...

-- Ben G

On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 --- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 but I don't agree that predicting **which** AGI designs can lead
 to the emergent properties corresponding to general intelligence,
 is pragmatically impossible to do in an analytical and rational way ...

 OK, I grant you that you may be able to do that. I believe that we can be 
 extremely clever in this regard. An example of that is an implementation of a 
 Turing Machine within the Game of Life:

 http://rendell-attic.org/gol/tm.htm

 What a beautiful construction. But it's completely contrived. What you're 
 suggesting is equivalent, because your design is contrived by your own 
 intelligence. [I understand that within the Novamente idea is room for 
 non-deterministic (for practical purposes) behavior, so it doesn't suffer 
 from the usual complexity-inspired criticisms of purely logical systems.]

 But whatever achievement you make, it's just one particular design that may 
 prove effective in some set of domains. And there's the rub - the fact that 
 your design is at least partially static will limit its applicability in some 
 set of domains. I make this argument more completely here:

 http://www.machineslikeus.com/cms/news/design-bad-or-why-artificial-intelligence-needs-artificial-life
 or http://tinyurl.com/3coavb

 If you design a robot, you limit its degrees of freedom. And there will be 
 environments it cannot get around in. By contrast, if you have a design that 
 is capable of changing itself (even if that means from generation to 
 generation), then creative configurations can be discovered. The same basic 
 idea works in the mental arena as well. If you specify the mental machinery, 
 there will be environments it cannot get around in, so to speak. There will 
 be important ways in which it is unable to adapt. You are limiting your 
 design by your own intelligence, which though considerable, is no match for 
 the creativity manifest in a single biological cell.

 Terren









-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be
first overcome  - Dr Samuel Johnson




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
I wrote a book about the emergence of spontaneous creativity from
underlying complex dynamics.  It was published in 1997 with the title
From Complexity to Creativity.  Some of the material is dated but I
still believe the basic ideas make sense.  Some of the main ideas were
reviewed in The Hidden Pattern (2006).  I don't have time to review
the ideas right now (I'm in an airport during a flight change doing a
quick email check) but suffice to say that I did put a lot of thought
and analysis into how spontaneous creativity emerges from complex
cognitive systems.  So have others.  It is not a total mystery, as
mysterious as the experience can seem subjectively.

-- Ben G

On Mon, Jun 30, 2008 at 1:32 PM, Terren Suydam [EMAIL PROTECTED] wrote:

 Ben,

 I agree, an evolved design has limits too, but the key difference between a 
 contrived design and one that is allowed to evolve is that the evolved 
 critter's intelligence is grounded in the context of its own 'experience', 
 whereas the contrived one's intelligence is grounded in the experience of its 
 creator, and subject to the limitations built into that conception of 
 intelligence. For example, we really have no idea how we arrive at 
 spontaneous insights (in the shower, for example). A chess master suddenly 
 sees the game-winning move. We can be fairly certain that often, these 
 insights are not the product of logical analysis. So if our conception of 
 intelligence fails to explain these important aspects, our designs based on 
 those conceptions will fail to exhibit them. An evolved intelligence, on the 
 other hand, is not limited in this way, and has the potential to exhibit 
 intelligence in ways we're not capable of comprehending.

 [btw, I'm using the scare quotes around the word experience as it applies to 
 AGI because it's a controversial word and I hope to convey the basic idea 
 about experience without getting into technical details about it. I can get 
 into that, if anyone thinks it necessary, just didn't want to get bogged 
 down.]

 Furthermore, there are deeper epistemological issues with the difference 
 between design and self-organization that get into the notion of autonomy as 
 well (i.e., designs lack autonomy to the degree they are specified), but I'll 
 save that for when I feel like putting everyone to sleep :-]

 Terren

 PS. As an aside, I believe spontaneous insight is likely to be an example of 
 self-organized criticality, which is a description of the behavior of 
 earthquakes, avalanches, and the punctuated equilibrium model of evolution. 
 Which is to say, a sudden insight is like an avalanche of mental 
 transformations, triggered by some minor event but the result of a build-up 
 of dynamic tension. Self-organized criticality is
 explained by the late Per Bak in _How Nature Works_, a short, excellent read 
  and a brilliant example of scientific and mathematical progress in the realm
 of complexity.

 --- On Mon, 6/30/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 I agree that all designed systems have limitations, but I
 also suggest
 that all evolved systems have limitations.

 This is just the no free lunch theorem -- in
 order to perform better
 than random search at certain optimization tasks, a system
 needs to
 have some biases built in, and these biases will cause it
 to work
 WORSE than random search on some other optimization tasks.

  No AGI based on finite resources will ever be **truly**
  general, be it
  an engineered or an evolved system.

  Evolved systems are far from immune to running into dead
  ends ...
 their adaptability is far from infinite ... the
 evolutionary process
 itself may be endlessly creative, but in that sense so may
 be the
 self-modifying process of an engineered AGI ...

 -- Ben G

 On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam
 [EMAIL PROTECTED] wrote:
 
  --- On Mon, 6/30/08, Ben Goertzel
 [EMAIL PROTECTED] wrote:
  but I don't agree that predicting **which**
 AGI designs can lead
  to the emergent properties corresponding to
 general intelligence,
  is pragmatically impossible to do in an analytical
 and rational way ...
 
  OK, I grant you that you may be able to do that. I
 believe that we can be extremely clever in this regard. An
 example of that is an implementation of a Turing Machine
 within the Game of Life:
 
  http://rendell-attic.org/gol/tm.htm
 
  What a beautiful construction. But it's completely
 contrived. What you're suggesting is equivalent, because
 your design is contrived by your own intelligence. [I
 understand that within the Novamente idea is room for
 non-deterministic (for practical purposes) behavior, so it
 doesn't suffer from the usual complexity-inspired
 criticisms of purely logical systems.]
 
  But whatever achievement you make, it's just one
 particular design that may prove effective in some set of
 domains. And there's the rub - the fact that your
 design is at least partially static will limit its
  applicability in some set of domains.

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
 The argument itself is extremely rigorous:  on all the occasions on which
 someone has disputed the rigorousness of the argument, they have either
 addressed some other issue entirely or they have just waved their hands
  without showing any sign of understanding the argument, and then said "...
  it's not rigorous!".  It is almost comical to go back over the various
 responses to the argument:  not only do people go flying off in all sorts of
 bizarre directions, but they also get quite strenuous about it at the same
 time.

Richard, if your argument is so rigorous, why don't you do this: present
a brief, mathematical formalization of your argument, defining all terms
precisely and carrying out all inference steps exactly, at the level
of a textbook
mathematical proof.

I'll be on vacation for the next 2 weeks w/limited and infrequent email access,
so I'll look out for this when I return.

If you present your argument this way, then you can rest assured I will
understand it, as I'm capable to understand math; then, our arguments can
be more neatly directed ... toward the appropriateness of your formal
definitions and assumptions...

-- Ben G




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
Richard,

I think that it would be possible to formalize your complex systems argument
mathematically, but I don't have time to do so right now.

 Or, then again ... perhaps I am wrong:  maybe you really *cannot*
 understand anything except math?

It's not the case that I can only understand math -- however, I have a
lot of respect
for the power of math to clarify disagreements.  Without math, arguments often
proceed in a confused way because different people are defining terms
differently,
and don't realize it.

But, I agree math is not the only kind of rigor.  I would be happy
with a very careful,
systematic exposition of your argument along the lines of Spinoza or the early
Wittgenstein.  Their arguments were not mathematical, but were very rigorous
and precisely drawn -- not slippery.

 Perhaps you have no idea what the actual
 argument is, and that has been the problem all along?  I notice that you
 avoided answering my request that you summarize your argument against the
 complex systems problem ... perhaps you are just confused about what the
 argument actually is, and have been confused right from the beginning?

In a nutshell, it seems you are arguing that general intelligence is
fundamentally founded
on emergent properties of complex systems, and that it's not possible for us to
figure out analytically how these emergent properties emerge from the
lower-level structures
and dynamics of the complex systems involved.   Evolution, you
suggest, figured out
some complex systems that give rise to the appropriate emergent
properties to produce
general intelligence.  But evolution did not do this figuring-out in
an analytical way, rather
via its own special sort of directed trial and error.   You suggest
that to create a generally
intelligent system, we should create a software framework that makes
it very easy to
experiment with  different sorts of complex systems, so that we can
then figure out
(via some combination of experiment, analysis, intuition, theory,
etc.) how to create a
complex system that gives rise to the emergent properties associated
with general
intelligence.

I'm sure the above is not exactly how you'd phrase your argument --
and it doesn't
capture all the nuances -- but I was trying to give a compact and approximate
formulation.   If you'd like to give an alternative, equally compact
formulation, that
would be great.

I think the flaw of your argument lies in your definition of
complexity, and that this
would be revealed if you formalized your argument more fully.  I think
you define
complexity as a kind of fundamental irreducibility that the human
brain does not possess,
and that engineered AGI systems need not possess.  I think that real
systems display
complexity which makes it **computationally difficult** to explain
their emergent properties
in terms of their lower-level structures and dynamics, but not as
fundamentally intractable
as you presume.

But because you don't formalize your notion of complexity adequately,
it's not possible
to engage you in rational argumentation regarding the deep flaw at the
center of your
argument.

However, I cannot prove rigorously that the brain is NOT complex in
the overly strong
sense to which you allude ... nor can I prove rigorously that a
design like Novamente Cognition
Engine or OpenCog Prime will give rise to the emergent properties
associated with
general intelligence.  So, in this sense, I don't have a rigorous
refutation of your argument,
nor would I if you rigorously formalized your argument.

However, I think a rigorous formulation of your argument would make it
apparent to
nearly everyone reading it that your definition of complexity is
unreasonably strong.

-- Ben G




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ben Goertzel
On Sat, Jun 28, 2008 at 4:13 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ed Porter wrote:

 I do not claim the software architecture for AGI has been totally solved.
 But I believe that enough good AGI approaches exist (and I think Novamente
 is one) that when powerful hardware available to more people we will be
 able
 to relatively quickly get systems up and running that demonstrate the
 parts
 of the problems we have solved.  And that will provide valuable insights
 and
 test beds for solving the parts of the problem that we have not yet
 solved.

 You are not getting my point.  What you just said was EXACTLY what was said
  in 1970, 1971, 1972, 1973 ... 2003, 2004, 2005, 2006, 2007 ...

  And every time it was said, the same justification for the claim was given:
   "I just have this belief that it will work."


It is not the case that the reason I believe Novamente/OpenCog can work for AGI
is "just a belief."

Nor, however, is the reason an argument that can be summarized in an email.

I'm setting out on a 2-week vacation on Monday (June 30 - July 13), on
which I'll
be pretty much without email (in the wilds of Alaska ;-) ... so it's a bad time
for me to get involved in deep discussions

But I hope to release some docs on OpenCog Prime later this summer, which
will disclose a bit more of my reasons for thinking the approach can succeed.

Ed has seen much of this material before, but most others on this list
have not...

There is a broad range of qualities-of-justification, between a mere belief
on the one hand, and a rigorous proof on the other.

-- Ben G




Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ben Goertzel
Richard,

  So long as the general response to the complex systems problem is not "This
  could be a serious issue, let's put our heads together to investigate it,"
  but "My gut feeling is that this is just not going to be a problem," or
  "Quit rocking the boat!", you can bet that nobody really wants to ask any
  questions about whether the approaches are correct, they just want to be
  left alone to get on with their approaches.

Both Ed Porter and I have given serious thought to the "complex systems
problem", as you call it, and have discussed it with you at length.  I
also read the
only formal paper you sent me dealing with it (albeit somewhat
indirectly) and also
your various online discourses on the topic.

Ed and I don't agree with you on the topic, but not because of lack of thinking
or attention.

Your argument FOR the existence of a complex systems problem with Novamente
or OpenCog, is not any more rigorous than our argument AGAINST it.

Similarly, I have no rigorous argument that Novamente and OpenCog won't fail
because of the lack of a soul.   I can't prove this formally -- and
even if I did, those who
believe a soul is necessary for AI could always dispute the
mathematical assumptions
of my proof.  And those who do claim a soul is necessary, have no
rigorous arguments
in their favor, except ones based transparently on assumptions I reject...

And so it goes...

Ben




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
Steve,

Those of us w/ experience in the field have heard the objections you
and Tintner are making hundreds or thousands of times before.  We have
already processed the arguments you're making and found them wanting.
And we have already gotten tired of arguing those same points, back in
our undergrad or grad school days (or analogous time periods for those
who didn't get PhD's...).

The points you guys are making are not as original as you seem to
think.  And the reason we don't take time to argue against them in
detail is that it's boring and we're busy.  These points have already
been extensively argued by others in the published literature over the
past few decades; but I also don't want to take the time to dig up
citations for you

I'm not saying that I have an argument in favor of my approach, that
would convince a skeptic.  I know I don't.  The only argument that
will convince a skeptic is to complete a functional human-level AGI.
And even that won't be enough for some skeptics.  (Maybe a fully
rigorous formal theory of how to create an AGI with a certain
intelligence level given specific resource constraints would convince
some skeptics, but not many I suppose -- discussions would devolve
into quibbles over the definition of intelligence, and other
particular mathematical assumptions of the sort that any formal
analysis must make.)

OK.  Back to work on the OpenCog Prime documentation, which IMO is a
better use of my time than endlessly repeating the arguments from
philosophy-of-mind and cog-sci class on an email list ;-)

Sorry if my tone seems obnoxious, but I didn't find your description
of those of us working on actual AI systems as having a "herd
mentality" very appealing.  The truth is, one of the big problems in
the field is that nearly everyone working on a concrete AI system has
**their own** particular idea of how to do it, and wants to proceed
independently rather than compromising with others on various design
points.  It's hardly a herd mentality -- the different systems out
there vary wildly in many respects.

-- Ben G

On Sun, Jun 8, 2008 at 3:28 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
 Mike Tintner, et al,

 After failing to get ANY response to what I thought was an important point
 (Paradigm Shifting regarding Consciousness) I went back through my AGI inbox
 to see what other postings by others weren't getting any responses. Mike
 Tintner was way ahead of me in no-response postings.

 A quick scan showed that these also tended to address high-level issues that
 challenge the contemporary herd mentality. In short, most people on this
 list appear to be interested only in HOW to straight-line program an AGI
 (with the implicit assumption that we operate anything at all like we appear
 to operate), but not in WHAT to program, and most especially not in any
 apparent insurmountable barriers to successful open-ended capabilities,
 where attention would seem to be crucial to ultimate success.

 Anyone who has been in high-tech for a few years KNOWS that success can come
 only after you fully understand what you must overcome to succeed. Hence,
 based on my own past personal experiences and present observations here,
 present efforts here would seem to be doomed to fail - for personal if not
 for technological reasons.

 Normally I would simply dismiss this as a rookie error, but I know that at
 least some of the people on this list have been around as long as I have
 been, and hence they certainly should know better since they have doubtless
 seen many other exuberant rookies fall into similar swamps of programming
 complex systems without adequate analysis.

 Hey you guys with some gray hair and/or bald spots, WHAT THE HECK ARE YOU
 THINKING?

 Steve Richfield

 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
 The truth is, one of the big problems in
 the field is that nearly everyone working on a concrete AI system has
 **their own** particular idea of how to do it, and wants to proceed
 independently rather than compromising with others on various design
 points.  It's hardly a herd mentality -- the different systems out
 there vary wildly in many respects.

 -- Ben G

To analogize to another field, in his book Three Roads to Quantum Gravity,
Lee Smolin identifies three current approaches to quantum gravity:

1-- string theory

2-- loop quantum gravity

3-- miscellaneous mathematical approaches based on various odd formalisms
and ideas

I think that AGI, right now, could also be analyzed as having four
main approaches

1-- logic-based ... including a host of different logic formalisms

2-- neural net/ brain simulation based ... including some biologically
quasi-realistic systems and some systems that are more formal and
abstract

3-- integrative ... which itself is a very broad category with a lot
of heterogeneity ... including e.g. systems composed of wholly
distinct black boxes versus systems that have intricate real-time
feedbacks between different components' innards

4-- miscellaneous ... evolutionary learning, etc. etc.

It's hardly a herd, it's more of a chaos ;-p

-- Ben G




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
 While the details vary widely, Mike and I were addressing the very concept
 of writing code to perform functions (e.g. thinking) that apparently
 develop on their own as emergent properties, and in the process foreclosing
 on many opportunities, e.g. developing in variant ways to address problems
 in new paradigms. Direct programming would seem to lead to lesser rather
 than greater intelligence. Am I correct that this is indeed a central
 thread in all of the different systems that you had in mind?

Different AGI systems rely on emergence to varying extents ...

No one knows which brain functions rely on emergence to which extents ...
we're still puzzling this out even in relatively well-understood brain regions
like visual cortex.  (Feedforward connections in visual cortex are sorta
well understood, but feedback connections, which are where emergence might
come into play, are very poorly understood as yet.)

For instance, the presence of a hierarchy of progressively more abstract
feature detectors in visual cortex clearly does NOT emerge in a strong sense...
it may emerge during fetal and early-childhood neural self-organization, but in
a way that is carefully genetically preprogrammed.

But, the neural structures that carry out object-recognition may well emerge
as a result of complex nonlinear dynamics involving learning in both the
feedback and feedforward connections...

so my point is, the brain is a mix of wired-in and emergent stuff, and we
don't know where the boundary lies...

as with vision, similarly e.g. for language understanding.  Read Jackendoff's
book

Jackendoff, Ray (2002). Foundations of Language: Brain, Meaning,
Grammar, Evolution.

and the multi-author book

mitpress.mit.edu/book-home.tcl?isbn=0262050528

for thoughtful treatments of the subtle relations btw programmed-in
and learned aspects of human intelligence ... much of the discussion
pertains implicitly to emergence too, though they don't use that word
much ... because emergence is key to learning...

In the Novamente design we've made some particular choices about what
to build in versus what to allow to emerge.  But, for sure, the notion
of emergence
from complex self-organizing dynamics has been a key part of our thinking in
making the design...

Neural net AGI approaches tend to leave more to emerge, whereas logic based
approaches tend to leave less... but that's just a broad generalization

In short there is a huge spectrum of choices in the AGi field regarding what
to build in versus what to allow to emerge ... not a herd mentality at all...

-- Ben




Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
Nothing will ever be attempted if all possible objections must be
first overcome   - Dr Samuel Johnson


-- Ben G

On Mon, Jun 9, 2008 at 7:41 AM, Jim Bromer [EMAIL PROTECTED] wrote:
 - Original Message 
 From: Mike Tintner [EMAIL PROTECTED]

 My approach is: first you look at the problem of crossing domains in its own
 terms - work out an ideal way to solve it - which will probably be close to
 the way the mind does solve it -  then think about how to implement your
 solution technically...
 --
 Instead of talking about what you would do,  do it.

 I mean, work out your ideal way to solve the questions of the mind and share
 it with us after you've have found some interesting results.

 Jim Bromer
 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Ben Goertzel
Regarding how much of the complexity of real neurons we would need to
put into a computational neural net model in order to make a model
displaying a realistic  emulation of neural behavior -- the truth is
we JUST DON'T KNOW

Izhikevich for instance

http://vesicle.nsi.edu/users/izhikevich/human_brain_simulation/Blue_Brain.htm

gets more detailed than standard formal neural net models, but is it
detailed enough?  We really don't know.  I like his work for its use
of nonlinear dynamics and emergence though.

Until we understand the brain better, we can only speculate about what
level of detail is needed...

This is part of the reason why I'm not working on a closely
brain-based AGI approach...

I find neuroscience important and fascinating, and I try to keep up
with the most relevant parts of the field, but I don't think it's
mature enough to really usefully guide AGI development yet.

-- Ben G



On Mon, Jun 2, 2008 at 6:15 AM, Vladimir Nesov [EMAIL PROTECTED] wrote:
 On Mon, Jun 2, 2008 at 2:03 AM, Mark Waser [EMAIL PROTECTED] wrote:
  No, this is not a variant of the "analog is fundamentally different from
  digital" category.

 Each of the things that I mentioned could be implemented digitally --
  however, they are entirely new classes of things to consider and require a
 lot more data and processing.

 I find it very interesting that you can't even answer a straight yes-or-no
 question without resorting to obscuring BS and inventing strawmen.

 Are you actually claiming that neurotransmitter levels are irrelevant or are
 you implementing them?

 Are you claiming that leakage along the axons and dendrites is irrelevant or
 are you modeling it?


 Mark, I think the point is that there should be a simple model that
 produces the same capabilities as a neuron (or brain). Most of these
 biological particulars are important for biological brain, but it
 should be possible to engineer them away on computational substrate
 when we have a high-level model of what they are actually for.

 --
 Vladimir Nesov
 [EMAIL PROTECTED]






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be
first overcome   - Dr Samuel Johnson




Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Ben Goertzel

 But enough of that, let's get to the meat of it:  Are you arguing that the
 function that is a neuron is not an elementary operator for whatever
 computational model describes the brain?


We don't know which function describing a neuron we need to use --
are Izhikevich's nonlinear dynamics models of ion channels good
enough, or do we need to go deeper?

Also we don't know about the importance of extracellular charge
diffusion... computation/memory happening in the glial network ...
etc. ... phenomena which suggest that the neuron-functions are not the
only elementary operators at play in brain dynamics...

Lots of fun stuff still to be learned ;-)

ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work.  Can't you see that many others have tried to use
FOL and ILP already, and they've run into intractable combinatorial
explosion problems?

Some may argue that my approach isn't radical **enough** (and in spite
of my innate inclination toward radicalism, I'm trying hard in my AGI work
to be no more radical than is really needed, out of a desire to save time/
effort by reusing others' insights wherever  possible) ... but at least I'm
introducing a host of clearly novel technical ideas.

What you seem to be suggesting is just to implement material from
textbooks on a large knowledge base.

Why do you think you're gonna make it work?  Because you're gonna
build a bigger KB than Cyc has built w/ their 20 years of effort and
tens to hundreds of millions of dollars of US gov't funding???

-- Ben G

On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
Also, YKY, I can't help but note that your current approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).

I wonder why you don't join Stephen Reed on the Texai project.  Is it
because you don't like the open-source nature of his project?

ben

On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

 What you seem to be suggesting is just to implement material from
 textbooks on a large knowledge base.

 Why do you think you're gonna make it work?  Because you're gonna
 build a bigger KB than Cyc has built w/ their 20 years of effort and
  tens to hundreds of millions of dollars of US gov't funding???

 -- Ben G

 On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY






 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 If men cease to believe that they will one day become gods then they
 will surely become worms.
 -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

 As we have discussed a while back on the OpenCog mail list, I would like to
  see an RDF interface to some level of the OpenCog Atom Table.  I think that
 would suit both YKY and myself.  Our discussion went so far as to consider
 ways to assign URI's to appropriate atoms.

Yes, I still think that's a good idea and I'm fairly sure it will
happen this year... probably not too long after the code is considered
really ready for release...

ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
 First of all, the *tractability* of your algorithm depends on
 heuristics that you design, which are separable from the underlying
 probabilistic logic calculus.  In your mind, these 2 things may be
 mixed up.

 Indefinite probabilities DO NOT imply faster inference.
 Domain-specific heuristics do that.

Not all heuristics for inference control are narrowly domain-specific.

Some may be generally applicable across very broad sets of domains,
say across all domains satisfying certain broad mathematical
properties, such as "similar theorems tend to have similar proofs."

So, I agree that indefinite probabilities themselves don't imply
faster inference.

However, we have some heuristics for (relatively) fast inference
control that we believe will apply across any domains satisfying
certain broad mathematical properties ... and that won't work with
traditional representations of uncertainty


 Secondly, I have no problem at all, with using your indefinite
 probability approach.

 It's a laudable achievement what you've accomplished.

 Thirdly, probabilistic logics -- of *any* flavor -- should
 [approximately] subsume binary logic if they are sound.  So there is
 no reason why your logic is so different that it cannot be expressed
 in FOL.

Yes of course it can be expressed in FOL ... it can be expressed in
Morse Code too, but I don't see a point to it ;-)  ... it could also be realized
via a mechanical contraption made of TinkerToys ... like Danny
Hillis's

http://www.ohgizmo.com/wp-content/uploads/2006/12/tinkertoycomputer_1.jpg

;-)


 But are you saying that the same cannot be achieved using FOL?


If you attach indefinite probabilities to FOL propositions, and create
indefinite probability formulas corresponding to standard FOL rules,
you will have a subset of PLN

But you'll have a hard time applying Bayes rule to FOL propositions
without being willing to assign probabilities to terms ... and you'll
have a hard time applying it to FOL variable expressions without doing
something that equates to assigning probabilities to propositions w.
unbound variables ... and like I said, I haven't seen any other
adequate way of propagating pdf's through quantifiers than the one we
use in PLN, though Halpern's book describes a lot of inadequate ways
;-)
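
For those without the book: an indefinite probability is, roughly, a
probability interval plus a credibility level, attachable to a term just as
to a proposition.  A minimal sketch (field names are illustrative):

    from dataclasses import dataclass

    @dataclass
    class IndefiniteTV:
        L: float   # lower bound of the probability interval
        U: float   # upper bound
        b: float   # credibility that the true probability lies in [L, U]
        k: int     # lookahead parameter used when propagating intervals

    # Attached to a term, not just to a proposition -- which is what
    # applying Bayes rule in this setting requires:
    cybersex = IndefiniteTV(L=0.0, U=0.01, b=0.9, k=10)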

 4) most critically perhaps, using uncertain truth values within inference
 control to help pare down the combinatorial explosion

 Uncertain truth values DO NOT imply faster inference.  In fact, they
 slow down inference wrt binary logic.

 If your inference algorithm is faster than resolution, and it's sound
 (so it subsumes binary logic), then you have found a faster FOL
 inference algorithm.  But that's not true;  what you're doing is
 domain-specific heuristics.

As noted above, the truth is somewhere in between.

You can find inference control heuristics that exploit general
mathematical properties of domains -- so they don't apply to ALL
domains, but nor are they specialized to any particular domain.

Evolution is like this in fact -- it's no good at optimizing random
fitness functions, but it's good at optimizing fitness functions
satisfying certain mathematical properties, regardless of the specific
domain they refer to

 I think one can do
indefinite probability + FOL + domain-specific heuristics
 just as you can do
indefinite probability + term logic + domain-specific heuristics
 but it may cost an amount of effort that you're unwilling to pay.

well we do both in PLN ... PLN is not a pure term logic...

 This is a very sad situation...

Oh ... I thought it was funny ... I suppose I'm glad I have a perverse
sense of humour ;-D

ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

 You have done something new, but not so new as to be in a totally
 different dimension.

 YKY

I have some ideas more like that too but I've postponed trying to sell them
to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff
like PLN ...

Look for some stuff on the applications of hypersets and division algebras
to endowing AGIs with free will and reflective awareness, maybe in
early 09 ...  ;)

-- Ben




Re: [agi] modus ponens

2008-06-03 Thread Ben Goertzel
I mean this form

http://en.wikipedia.org/wiki/Modus_ponens

i.e.

A implies B
A
|-
B

Probabilistically, this means you have

P(B|A)
P(A)

and want to infer from these

P(B)

under the most direct interpretation...
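
Concretely: given only these two numbers, P(B) is pinned down to an
interval rather than a point, since P(B) = P(B|A)P(A) + P(B|~A)(1 - P(A))
and P(B|~A) is unconstrained.  A one-function Python sketch:

def modus_ponens_interval(p_b_given_a, p_a):
    lo = p_b_given_a * p_a                # extreme case P(B|~A) = 0
    hi = p_b_given_a * p_a + (1 - p_a)    # extreme case P(B|~A) = 1
    return lo, hi

print(modus_ponens_interval(0.9, 0.8))    # roughly (0.72, 0.92)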

ben


On Wed, Jun 4, 2008 at 12:08 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Modus ponens can be defined in a few ways.

 If you take the binary logic definition:
    A -> B  means  ~A v B
 you can translate this into probabilities but the result is a mess.  I
 have analysed this in detail but it's complicated.  In short, this
 definition is incompatible with probability calculus.

 Instead I simply use
   A -> B  meaning  P(B|A) = p
 where p is the probability.  You can change p into an indefinite
 probability or interval.

 Is your modus ponens different from this?

 YKY
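
Worked numbers for the mess mentioned above -- reading A -> B as ~A v B
lets the conditional come out highly probable merely because A is rare,
even when P(B|A) is low:

p_a         = 0.1    # A is rare
p_b_given_a = 0.1    # and B rarely follows A
p_a_and_b   = p_a * p_b_given_a        # 0.01
p_material  = 1 - p_a + p_a_and_b      # P(~A v B) = 0.91
print(p_material, "vs", p_b_given_a)   # 0.91 vs 0.1

(Here P(~A v B) = P(~A) + P(A & B), since those two events are disjoint.)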






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
Propositions are not the only things that can have truth values...

I don't have time to carry out a detailed mathematical discussion of
this right now...

We're about to (this week) finalize the PLN book draft ... I'll send
you a pre-publication PDF early next week and then you can read it and
we can argue this stuff after that ;-)

ben

On Wed, Jun 4, 2008 at 1:01 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Ben,

 If we don't work out the correspondence (even approximately) between
 FOL and term logic, this conversation would not be very fruitful.  I
 don't even know what you're doing with PLN.  I suggest we try to work
 it out here step by step.  If your approach really makes sense to me,
 you will gain another helper =)   Also, this will be good for your
 project's documentation.

 I have some examples:

 Eng:  Some philosophers are wise
 TL:  +Philosopher+Wise
 FOL:  exists X: philosopher(X) & wise(X)

 Eng:  Romeo loves Juliet
 TL:  +-Romeo* + (Loves +-Juliet*)
 FOL:  loves(romeo, juliet)

 Eng:  Women often have long hair
 TL:  ?
 FOL:  woman(X) -> long_hair(X)

 I know your term logic is slightly different from Fred Sommers'.  Can
 you fill in the TL parts and also attach indefinite probabilities?

 On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

 Re assigning probabilties to terms...

 Term in term logic is completely different from term in FOL.  I
 guess terms in term logic roughly correspond to predicates or
 propositions in FOL.  Terms in FOL seem to have no counterpart in term
 logic..

 Anyway there should be no confusion here.  Propositions are the ONLY
 things that can have truth values.  This applies to term logic as well
 (I just refreshed my memory of TL).  When truth values go from { 0, 1
 } to [ 0, 1 ], we get single-value probabilistic logic.  All this has
 a very solid and rigorous foundation, based on so-called model theory.

 YKY






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Ben Goertzel
 I think it's fine that you use the term atom in your own way.  The
 important thing is, whatever the objects that you attach probabilities
 to, that class of objects should correspond to *propositions* in FOL.
 From there it would be easier for me to understand your ideas.

Well, no, we attach probabilities to terms as well as to relationships
... and to expressions with free as well as bound variables...

You can map terms and free-variable expressions into propositions if
you want to, though...

for instance the term

cat

has probability

P(cat)

which you could interpret as

P(x is a cat | x is in my experience base)

and the free-variable expression

eats(x, mouse)

has probability

P( eats(x,mouse) )

which can be interpreted as

P( eats(x,mouse) is true | x is in my experience base)

However these propositional representations are a bit awkward and are
not the way to represent things for the PLN rules to be simply
applied... it is nicer by far to leave the experiential semantics
implicit...
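
A small Python sketch of that experiential-semantics reading -- term and
free-variable probabilities as plain frequencies over an experience base
(invented toy data, obviously not Novamente's internal representation):

experience = [                        # five invented observations
    {"is_cat": True,  "eats_mouse": True},
    {"is_cat": True,  "eats_mouse": False},
    {"is_cat": False, "eats_mouse": False},
    {"is_cat": False, "eats_mouse": True},
    {"is_cat": True,  "eats_mouse": True},
]

def p(pred):
    # P( pred(x) | x is in my experience base )
    return sum(pred(e) for e in experience) / len(experience)

print(p(lambda e: e["is_cat"]))       # P(cat) = 0.6
print(p(lambda e: e["eats_mouse"]))   # P(eats(x, mouse)) = 0.6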

-- Ben G




Re: [agi] Uncertainty

2008-06-02 Thread Ben Goertzel
I would imagine so, but I havent thought about the details

I am traveling now but will think about this when I get home and can
refresh my memory by rereading the appropriate sections of
Probabilistic Robotics ...

ben

On 6/2/08, Bob Mottram [EMAIL PROTECTED] wrote:
 2008/6/2 Ben Goertzel [EMAIL PROTECTED]:
  I think the PLN / indefinite probabilities approach is a complete and
  coherent solution to the problem.  It is complex, true, but these are
  not simple issues...


 I was wondering whether indefinite probabilities could be used to
 represent a particle filter.





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Ben Goertzel
 More likely though, is that your algorithm is incomplete wrt FOL, ie,
 there may be some things that FOL can infer but PLN can't.  Either
 that, or your algorithm may be actually slower than FOL.

FOL is not an algorithm, it's a representational formalism...

As compared to standard logical theorem-proving algorithms, the design
intention is that Novamente/OpenCog's inference algorithms will be
vastly more efficient in the average case for those inference problems
typically confronting an embodied social organism.

Ben




Re: [agi] OpenCog's logic compared to FOL?

2008-06-01 Thread Ben Goertzel
 Here are some examples in FOL:

 Mary is female
female(mary)

Could be

Inheritance Mary female

or

Evaluation female mary

(the latter being equivalent to female(mary) )

but none of these has an uncertain truth value attached...


 This is a [production] rule:  (not to be confused with an inference rule)
 A female child is called a daughter
 daughter(X) <- child(X) & female(X)
 where universal quantification is assumed.

You could say

ForAll $X
   ExtensionalImplication
   And
   Evaluation child ($X)
   Evaluation female ($X)
   Evaluation daughter($X)

which is equivalent to the pred logic formulation
you've given.

But it will often be more useful to say

Implication
   And
   Evaluation child ($X)
   Evaluation female ($X)
    Evaluation daughter($X)

which leaves the variable unbound, and which replaces the purely
extensional implication with an Implication that is mixed extensional
and intensional.

And one will normally want to attach an uncertain TV like an
indefinite probability to an expression like this, rather than leaving
it with a crisp TV.

The definition of

IntensionalImplication A B

is

ExtensionalImplication Prop(A) Prop(B)

where Prop(X) is the fuzzy set of properties of X

The definition of Implication is a weighted average of extensional and
intensional implication

I guess that gives a flavor of the difference
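
A toy Python rendering of those definitions (illustrative only; the actual
PLN formulas differ): extensional strength as conditional frequency over
instances, intensional strength as fuzzy containment of property sets, and
Implication as their weighted average:

def ext_strength(instances, A, B):
    # Extensional part: conditional frequency of B among the A's.
    a = [x for x in instances if A(x)]
    return sum(1 for x in a if B(x)) / len(a) if a else 0.0

def int_strength(prop_a, prop_b):
    # Intensional part: how much of the fuzzy property set Prop(A)
    # is contained in Prop(B); dicts map property -> degree.
    num = sum(min(d, prop_b.get(p, 0.0)) for p, d in prop_a.items())
    den = sum(prop_a.values())
    return num / den if den else 0.0

def implication(ext, intens, w=0.5):
    # "Implication A B" as a weighted average of the two components.
    return w * ext + (1 - w) * intens

prop_raven = {"black": 0.9, "flies": 0.8, "bird": 1.0}            # invented
prop_crow  = {"black": 0.9, "flies": 0.7, "bird": 1.0, "noisy": 0.6}
print(implication(0.4, int_strength(prop_raven, prop_crow)))
# extensional strength 0.4 invented for the demo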

 *** bonus question ***
 Can you give an example of something expressed in PLN that is very
 hard or impossible to express in FOL?

FOL can express anything, as can combinatory logic and a load of other
Turing-complete formalisms.

However, expressing uncertainty is awkward and inefficient in FOL, as
opposed to if one uses a specific mechanism like indefinite truth
values.

Similarly, expressing intensional relationships is awkward and
inefficient in FOL as there is no built in notion of fuzzy sets of
properties

And there is no notion of assigning a truth value to a formula with
unbound variables in FOL, but one can work around this by using
variables that are universally bound to a context that is then itself
variable (again, more complex and awkward)

-- ben




Re: [agi] Live Forever Machines...

2008-06-01 Thread Ben Goertzel
I'll respond to other points tomorrow or the day after (am currently
on a biz trip through Asia), but just one thing now... You say

 With NO money, none of either of our efforts stands a chance. With some
 realistic investment money, scanning would at minimum be cheap insurance
 that you will be able to overcome ALL of your future problems.

but I'm not sure this is true.  Linux got a long way with no money,
and eventually its freeware success brought investment from various
sources.  This is part of the inspiration underlying OpenCog ...
Given a viable AGI design (which I have) and an initial, partial
codebase (which we have, via opencog), and a population of
enthusiastic and qualified OSS contributors, there's no reason that $$
is needed, though it can certainly accelerate things.  The big
challenge becomes keeping things going on the right course, in the
relevant senses of "right".


ben g




Re: [agi] OpenCog's logic compared to FOL?

2008-06-01 Thread Ben Goertzel
 Do OpenCog atoms roughly correspond to logical atoms?

Not really

 And what is the counterpart of (logic) propositions in OpenCog?

ExtensionalImplication relations I guess...

 I suggest don't use non-standard terminology 'cause it's very confusing...

So long as it's well-defined, I guess it's OK...

The standard terminology leads in wrong conceptual directions alas...

ben




Re: [agi] Uncertainty

2008-06-01 Thread Ben Goertzel
 I have briefly surveyed the research on uncertain reasoning, and found
 out that no one has a solution to the entire problem.  Ben and Pei
 Wang may be working towards their solutions but a satisfactory one may
 be difficult to find.

I think the PLN / indefinite probabilities approach is a complete and
coherent solution to the problem.  It is complex, true, but these are
not simple issues...

ben




Re: [agi] news bit: Is this a unified theory of the brain? Do Bayesian statistics rule the brain?

2008-06-01 Thread Ben Goertzel
This stuff is important, but has been around in the literature for years now...

On Mon, Jun 2, 2008 at 6:59 AM, David Hart [EMAIL PROTECTED] wrote:
 From http://www.mindhacks.com/blog/2008/05/do_bayesian_statisti.html

 This week's New Scientist has a fascinating article on a possible 'grand
 theory' of the brain that suggests that virtually all brain functions can be
 modelled with Bayesian statistics.

 The link (above) is a blog copy of the article in New Scientist.

 -dave
 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] Live Forever Machines...

2008-05-31 Thread Ben Goertzel
 if there is to be any substantial investment into
 AGI efforts, it would seem reasonable to expect to see the first money going
 into a scanning UV fluorescence microscope.

 Hence, if YOU are looking for money for AGI development, then you should
 also be looking for money to develop a scanning UV fluorescence microscope,
 as it will ensure that you can figure out EVERYTHING needed to make an AGI.
 Otherwise, all you need is just one puzzle that you can't see how to solve,
 and your entire effort ends up in the bit bucket. Your prospective investors
 are probably focused on just such problems as you read this. This would not
 only be cheap insurance, but should help your investor(s) see that you
 WILL succeed, despite any unforeseen problems.

 Not only is AGI stymied by the lack of this device, but so is neuroscience,
 cancer research, and a number of other biological fields. Of course, it
 hasn't occurred to biologists that this device is practical to make because
 they can't see their way past the computer problems - that many of the
 people here on this forum could handle, even with a hangover.

 Steve Richfield







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] More Info Please

2008-05-26 Thread Ben Goertzel
mark,

 What I'd rather do instead is see if we can get a .NET parallel track
 started over the next few months, see if we can get everything ported, and
 see the relative productivity between the two paths.  That would provide a
 provably true answer to the debate.

Well, it's an open-source project, so to each his own...

However, at this point, folks working on OpenCog stuff under the

-- GSoC
-- SIAI
-- Novamente

organizations are going to be working on the current, actually existing
C++ implementation

IMO, the vast majority of work in this sort of project has to do with fiddling
with the AI algorithms to make them work right, rather than nuts-and-bolts
engineering, so I'm not sure the choice of language is going to make that
much difference ... except that with C++ it's obviously more possible to make
the code efficient where it needs to be.  (Features like template
metaprogramming are great but of course one can live without them.)
And in the current phase,
killer efficiency is not going to make much difference either, while we're still
working out algorithm details.  The potential for killer efficiency will
come into play a little later once the main issue is scaling rather than
algorithm refinement.

I would much rather see work go into working out the various algorithmic
details that are left pending by the OpenCog Prime documentation (not yet
released by me, alas... and spending time on these emails doesn't help...)
than on reimplementing the same code in multiple programming languages.

But as an open-source project there is the opportunity for multiple forks
and directions.

In the event that a C# fork does get off the ground, it would be nice if things
were worked out so that C++ MindAgents could act on the C# core.  Ultimately
the deeper code goes in the MindAgents not in the core system anyway, in
the OpenCog design.  If the same MindAgents could be used in both cores
then having two cores would not impede development much, and might even
accelerate it if it allows developers to do more work in their
preferred languages
and development environments.

-- Ben G




[agi] Re: Mark Waser arguing that OpenCog should be recoded in .Net ;-p

2008-05-26 Thread Ben Goertzel
Mark,

If it were possible to make both C# and C+ versions of the core
(AtomTable and scheduler), and have both C# and C++  MindAgents run on
both, then we would have a favorable situation in terms of allowing
everyone to use their own fave languages and development environments.

-- Ben G

On Mon, May 26, 2008 at 7:18 AM, Mark Waser [EMAIL PROTECTED] wrote:
 While all the language wars continue, I'd like to re-emphasize my original
 point (directly copied from the original e-mail) -- One of the things that
 I've been tempted to argue for a while is an entirely alternate underlying
 software architecture for OpenCog -- people can then develop in the
 architecture that is most convenient and then we could have people
 cross-port between the two.

 Seriously people, I'm not asking anyone to move away from what *you* are
 familiar with if you don't want to.  I'm saying that maybe we should
 deliberately attempt to open this up so that we get *more* contributions and
 viewpoints -- at the admitted cost of needing better documentation and
 control -- which is really necessary anyways.  My belief is that seeing
 what happens will cause a migration -- but I'm not invested in that belief
 and would be happy and see huge benefits either way.

 Mark

 P.S.  Thank you for the forward Ben.

 - Original Message -
 From: Ben Goertzel
 To: [EMAIL PROTECTED]
 Sent: Sunday, May 25, 2008 8:29 PM
 Subject: Mark Waser arguing that OpenCog should be recoded in .Net ;-p

 This email thread on the AGI list really seems more appropriate for the
 OpenCog list... so I'm forwarding it here...

 -- Ben G


 --
 From: Mark Waser [EMAIL PROTECTED]
 Date: Sun, May 25, 2008 at 4:23 PM
 To: agi@v2.listbox.com


 Yeah, I'll certainly grant you that.  The unfortunate problem is that people
 coming in late don't see the prior arguments and then engage in behavior
 that they believe is similar but entirely without the scientific rigor that
 you normally follow but don't always visibly display.

 Also, on the other hand, for certain classes of issues where you are less of
 an expert -- like in large-scale systems architecture (both software and
 conceptual), a number of your previously posted arguments are *I believe* at
 best questionable if not outright wrong.  The fact that these assumptions
 aren't open for inspection at a convenient location is problematical if many
 other things are built on top of them and then they turn out to be wrong.

 We need to start to gather the best of these assumptions and debates in one
 place (probably a wiki) because long-term e-mail looping is not efficient.
 I've had this as a low priority thought for the AGI-Network but I think that
 I'm going to escalate its priority substantially and see if I can't come up
 with a conceptual design for such a wiki (with scaled and isolated
 privileges) over the next couple of weeks.

 One of the things that I've been tempted to argue for a while is an entirely
 alternate underlying software architecture for OpenCog -- people can then
 develop in the architecture that is most convenient and then we could have
 people cross-port between the two.  I strongly contend that the current
 architecture does not take advantage of a large part of the newest advances
 and infrastructures of the past half-decade.  I think that if people saw
 what could be done with far less time and code utilizing already existing
 functionality and better tools that C++ would be a dead issue.
 --
 From: Ben Goertzel [EMAIL PROTECTED]
 Date: Sun, May 25, 2008 at 4:26 PM
 To: agi@v2.listbox.com


 Somehow I doubt that this list will be the place where the endless
 OS/language
 wars plaguing the IT community are finally solved ;-p

 Certainly there are plenty of folks with equal software engineering
 experience
 to you, advocating the Linux/C++ route (taken in the current OpenCog
 version)
 rather than the .Net/C# route that I believe you advocate...

 -- Ben G
 --
 From: Lukasz Stafiniak [EMAIL PROTECTED]
 Date: Sun, May 25, 2008 at 5:24 PM
 To: agi@v2.listbox.com


 No, I believe he advocates OCaml vs. F#   ;-)
 (sorry for leaving-out Haskell and others)
 --
 From: Mark Waser [EMAIL PROTECTED]
 Date: Sun, May 25, 2008 at 5:59 PM
 To: agi@v2.listbox.com


 Cool.  An *argument from authority* without even having an authority.  Show
 me those plenty of folks and their reasons for advocating Linux/C++.
 Times have changed.  Other alternatives have advanced tremendously.  You are
 out of date and using and touting obsolete software and development
 methods.  I *don't* believe that you can find an expert who has remained
 current on technology who will back your point.

 [NOTE:  It's also always interesting to see someone say that the argument is
 OS/language vs. framework/language (don't you know enough to compare apples
 to apples?)]

 More importantly, I don't believe that I've ever explicitly endorsed C#.
 What I've always pushed is the .NET framework

Re: [agi] More Info Please

2008-05-26 Thread Ben Goertzel
On Mon, May 26, 2008 at 8:33 PM, J. Andrew Rogers
[EMAIL PROTECTED] wrote:
 Replying to myself,

 I'll let Mark have the last word since, after all, it is *his* project and
 not mine. :-)

I assume that last sentence was sarcastic ;-)

Of course, while Mark is a valued participant in OpenCog, it's not
*his* personal project ...
and the initial OpenCog system is C++, mainly tested in a Unix environment ...

FWIW, my impression about the ubiquity of Unix servers in Silicon
Valley agrees w/yours.

This is obviously because Silicon Valley is currently obsessed with
Web apps, and Unix is
generally recognized as a better platform for the large-scale
deployment of Web apps.

And no, I don't feel like spending my whole evening looking up copious
statistics to support this assertion.

However, I'll quote just one simple stat:


About 90% of the Internet relies on Unix operating systems running
Apache, the world's most widely used Web server.

from

http://linux.about.com/cs/linux101/a/unix_win.htm

I spent 10 minutes looking for data regarding developer productivity
on Linux vs. Windows,
but mostly just found bullshit about M$ vs. IBM, concerning fatally
flawed, mock-scientific studies
funded by Microsoft ;-p

http://websphere.sys-con.com/read/46828.htm

(note that this study, while conducted by M$ in an extremely dishonest
way, is also really about
IBM WebSphere rather than about Linux C++ programming, so it's not
directly pertinent to
this discussion anyway.  Except to highlight the difficulties of doing
this sort of comparison in
a meaningful way.)

OK ... enough of that ... back to doing useful work ;-)

-- Ben




Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
My own view is that our state of knowledge about AGI is far too weak
for us to make detailed
plans about how to **ensure** AGI safety, at this point

What we can do is conduct experiments designed to gather data about
AGI goal systems and
AGI dynamics, which can lead us to more robust AGI theories, which can
lead us to detailed
plans about how to ensure AGI safety (or, pessimistically, detailed
knowledge as to why
this is not possible)

However, it must of course be our intuition that guides these
experiments.  My intuition tells
me that a system with probabilistic-logic-based goal-achievement at
its core is much more
likely to ultimately lead to a safe AI than a system with neural net
dynamics at its core.  But
vague statements like the one in the previous sentence are of limited
use; their main use is in
leading to more precise formulations and experiments...

-- Ben G

On Sun, May 25, 2008 at 6:26 AM, Panu Horsmalahti [EMAIL PROTECTED] wrote:
 What is your approach on ensuring AGI safety/Friendliness on this project?
 



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
On Sun, May 25, 2008 at 10:42 AM, Mark Waser [EMAIL PROTECTED] wrote:
 My own view is that our state of knowledge about AGI is far too weak
 for us to make detailed
 plans about how to **ensure** AGI safety, at this point

 I disagree strenuously.  If our arguments will apply to *all* intelligences
 (/intelligent architectures) -- like Omohundro attempts to do --  instead of
 just certain AGI subsets, then I believe that our lack of knowledge about
 particular subsets is irrelevant.

yes, but I don't think these general arguments are going to tell us
all that much
about particular AGI systems ... they can go only so far, and not far enough...

 I believe that there is a location in the state space of intelligence that
 is a viable attractor that equates to Friendliness and morality.  I think
 that a far more effective solution to the Friendliness problem would be to
 ensure that we place an entity in that attractor rather than attempt to
 control its behavior via its architecture.

Ah, so you're OK with beliefs but not intuitions ???   ;-)

I hope such an attractor exists but I'm not as certain as you seem to be

 What we can do is conduct experiments designed to gather data about
 AGI goal systems and
 AGI dynamics, which can lead us to more robust AGI theories, which can
 lead us to detailed
 plans about how to ensure AGI safety (or, pessimistically, detailed
 knowledge as to why
 this is not possible)

 I think that this is all spurious pseudo-scientific BS.  I think that the
 state space is way too large to be thoroughly explored from first
 principles.  Start with human friendliness and move out and you stand a
 chance.  Trying to compete with billions of years of evolution and it's
 parallel search over an unimaginably large number of entities by
 re-inventing the wheel is just plain silly.

I disagree.  Obviously you could make the same argument about airplanes.

Experiments with differently shaped wings helped us to refine the relevant
specializations of fluid dynamics theory, which now let us calculate a bunch
more relevant stuff from first principles than we could before these
experiments and
this theory were done.  But we still can't solve the Navier-Stokes Equation
in general in any useful way.

 However, it must of course be our intuition that guides these
 experiments.  My intuition tells

 Intuition is not science.  Intuition is just hardened opinion.

 Intuition has been scientifically proven to *frequently* be a bad guide
 where morality and ethics are concerned (don't you read the papers I post to
 the list?).

 Why don't we use real science?

Something has got to guide the choice of which experiments to do.

In a field without any solid theory yet, how do you choose which experiments
to run, except via intuition [or some related word, if you don't like
that one]?

 I would scientifically/logically argue that your intuition is correct
 because it is more possible to analyze, evaluate, and *redirect* a
 goal-achievement architecture than a system with inscrutable neural net
 dynamics at its core.

 Your intuition wasn't particularly helpful because it gave no reasoning or
 basis for your belief.  My statement was more worthwhile because it gave
 reasons that can be further analyzed, refined, and/or disproved.

I have reasoning and basis for that intuition but I omitted it due to not having
time to write a longer email.  Also I thought the reasoning and basis were
obvious.

Note however that NN dynamics are not totally inscrutable, e.g. folks have
analyzed the innards of backprop neural nets using PCA and other statistical
methods.  And of course a huge logic system with billions of propositions
continually combining via inference may be pretty damn inscrutable.

So the point is not irrefutable by any means, which is why I called
it an intuitive argument rather than a rigorous one.
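
For what it's worth, a toy version of that PCA-style analysis, with random
stand-in data -- projecting a hidden layer's activations onto its top
principal components to see how much of the variance a few dimensions
capture:

import numpy as np

rng = np.random.default_rng(0)
inputs = rng.normal(size=(500, 10))      # stand-in input data
W = rng.normal(size=(10, 32))            # stand-in first-layer weights
hidden = np.tanh(inputs @ W)             # hidden-layer activations

H = hidden - hidden.mean(axis=0)         # center the activations
U, S, Vt = np.linalg.svd(H, full_matrices=False)
var = S ** 2 / (len(H) - 1)              # variance along each component
print("variance in top 3 PCs:", var[:3].sum() / var.sum())
projections = H @ Vt[:3].T               # each input's 3-D summary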

-- Ben




Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
 Please, if you're going to argue something --
 please take the time to argue it and don't pretend that you can magically
 solve it all with your guesses (I mean, intuition).

time for mailing list posts is scarce for me these days, so sometimes I post
a conclusion w/out the supporting arguments ... but the arguments are usually
already there in prior publications ;-)

ben




Re: [agi] More Info Please

2008-05-25 Thread Ben Goertzel
Mark,

 For OpenCog we had to make a definite choice and we made one.  Sorry
 you don't agree w/ it.

 I agree that you had to make a choice and made the one that seemed right for
 various reasons.  The above comment is rude and snarky however --
  particularly since it seems to come *because* you can't justify your
 choice. I would expect better of you.

 = = = = = = =

 Let's try this again.  Get your experts together and create a short list of
 why C++ on Linux (and any infrastructure there that isn't immediately
 available under .Net) is better than the combination of all the .Net
 languages and all the infrastructure available there that isn't immediately
 available under Linux.  No resorting to pseudo-democracies of experts, how
 about real reasons that YOU will stand behind and be willing to defend.

This would be a reasonable exercise, but I simply don't have time to
deal with it
right now.

I'm about to leave on a 2.5-week business / research-collaboration trip to
Asia, and en route I hope to make some progress on mutating Novamente docs
into OpenCog docs.  No time to burn on these arguments at the moment.

However, it might be worthwhile to create a page on the OpenCog wiki
focused on this issue, if others are also interested in it.

There could be a section on the page arguing the potential advantages
of .Net for
OpenCog; a section on the page arguing the intended advantages of the current
approach; and other sections written by folks advocating other approaches
(e.g. LISP-centric, whatever...).

Perhaps if you create this page and get it started w/ your own arguments, others
will contribute theirs and we can advance the debate that way.

-- Ben




Re: [agi] More Info Please

2008-05-23 Thread Ben Goertzel
Peter has some technical info on his overall (adaptive neural net)
based approach to AI, on his company website, which is based on a
paper he wrote in the AGI volume Cassio and I edited for Springer
(written 2002, published 2006).

However, he has kept his specific commercial product direction tightly
under wraps.

I believe Peter's ideas are interesting but I have my doubts that his
approach is really AGI-capable.  However, I don't feel comfortable
going into great detail on my reasons, because Peter seems to value
secrecy regarding his approach... I've had a mild amount of insider
info regarding the approach (e.g. due to visiting his site a few years
ago, etc.) and don't want to blab stuff on this list that he'd want me
to keep secret...

Ben


On Fri, May 23, 2008 at 5:40 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 ... on this:

 http://www.adaptiveai.com/news/index.htm

   Towards Commercialization

 It's been a while. We've been busy. A good kind of busy.

 At the end of March we completed an important milestone: a demo system
 consolidating our prior 10 months' work. This was followed by my annual
 pilgrimage to our investors in Australia. The upshot of all this is that we
 now have some additional seed funding to launch our commercialization phase
 late this year.

 On the technical side we still have a lot of hard work ahead of us.
 Fortunately we have a very strong and highly motivated team, so that over
 the next 6 months we expect to make as much additional progress as we have
 over the past 12. Our next technical milestone is around early October by
 which time we'll want our 'proto AGI' to be pretty much ready to start
 earning a living.

 By the end of 2008 we should be ready to actively pursue commercialization
 in addition to our ongoing RD efforts. At that time we'll be looking for a
 high-powered CEO to head up our business division which we expect to grow to
 many hundreds of employees over a few years.

 Early in 2009 we plan to raise capital for this commercial venture, and if
 things go according to plan we'll have a team of around 50 by the middle of
 the year.

 Well, exciting future plans, but now back to work.

 Peter 







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller




Re: Symbol Grounding [WAS Re: [agi] AGI-08 videos]

2008-05-06 Thread Ben Goertzel
Richard wrote:

  Then, when we came back from the break, Ben Goertzel announced that the
 roundtable on symbol grounding was cancelled, to make room for some other
 discussion on a topic like the future of AGI, or some such.  I was
 outraged by this.  The subsequent discussion was a pathetic waste of time,
 during which we just listened to a bunch of people making vacuous
 speculations and jokes about artificial intelligence.

  In the end, I decided that the reason this happened was that when the
 workshop was being planned, the title was chosen in ignorance.  That, in
 fact, Ben never even intended to talk about the real issue of grounding
 symbols, but just needed a plausible-sounding theme-buzzword, and so he just
 intended the workshop to be about a meaningless concept like connecting AGI
 systems to the real world.

No, that is not the case.

What happened, as I recall, was that the conference schedule was
running late, and one of the speakers from the session on symbol
grounding had cancelled anyway, so it seemed apropos to skip from that
session to the next one -- since **something** had to be cancelled to
make the schedule fit.

That conference was a small workshop and was pretty loosely organized,
so I decided to let the discussion and content flow according to the
general interests of the participants.  As it happened, the
participants as a whole were not gripped by the symbol grounding theme
and gravitated to other topics, which was OK with me.
Unfortunately for Richard, that theme seems to have been his main interest.

Feedback on AGI-06 overall was overwhelmingly positive; in fact
Richard's is the only significantly negative report I've seen.

AGI-08 was more formally structured, as will be AGI-09; but these are
larger conferences, which require more structure to run at all
smoothly.

-- Ben G



Re: [agi] jamming with OpenCog / Novamente

2008-05-06 Thread Ben Goertzel
Predicate logic vs term logic won't be an issue for OpenCog, as the
AtomTable knowledge representation supports both (and many other)
formalisms.

I don't **think** the sentential KB will be a problem because I
believe each of your sentences will be representable as an Implication
or Equivalence relationship in the AtomTable.  If you give me a
specific example of a sentence in your representation, I will tell you
how it could most straightforwardly be represented in the AtomTable
using the PLN-friendly node and link types.
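
For instance, a hypothetical mini-AtomTable in Python (invented names,
nothing like the actual OpenCog C++ API) can hold a term-logic Inheritance
link and a predicate-logic-style Evaluation link side by side:

from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    atype: str               # e.g. "ConceptNode", "PredicateNode", a link type
    name: str = ""
    out: tuple = ()          # outgoing set, for links
    tv: tuple = (1.0, 0.0)   # (strength, confidence)

table = set()

def add(atype, name="", out=(), tv=(1.0, 0.0)):
    atom = Atom(atype, name, tuple(out), tv)
    table.add(atom)
    return atom

mary     = add("ConceptNode", "Mary")
female_c = add("ConceptNode", "female")
female_p = add("PredicateNode", "female")

add("InheritanceLink", out=(mary, female_c), tv=(0.99, 0.9))  # term logic
add("EvaluationLink", out=(female_p, mary), tv=(0.99, 0.9))   # pred. logic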

thanks
Ben


On Tue, May 6, 2008 at 11:40 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 I'm wondering if it's possible to plug in my learning algorithm to
  OpenCog / Novamente?

  The main incompatibilities stem from:

  1.  predicate logic vs term logic
  2.  graphical KB vs sentential KB

  If there is a way to somehow bridge these gaps, it may be possible

  YKY





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] AGI-08 videos

2008-05-04 Thread Ben Goertzel
Hi,

Somebody could write an excellent paper about the
 potential pitfalls of such an approach (detail, fidelity, deep causality
 issues behind appearance, function, and inter-object + inter-feature
 relationships, and so on).  If nobody else is working in detail on
 publishing such an analysis perhaps I will study those issues for some
 months and try to write something for AGI-09 about it.

If you'd like to collaborate on such a paper, I'd be interested.

I have thought for a while of writing a paper with a title or theme of

What Must a World Be for an AGI to Develop In it?

... the goal being to guesstimate a requirements spec for virtual worlds
for AGI ... or, more plausibly, different requirements specs according to
different AI paradigms

To me there are 3 relevant paradigms here

1)
Perception and action centric, which leads to the conclusion that
virtual worlds will only be good enough for AGI when they're very
close to the richness of the real physical world

2)
Socialization, logic and language centric, which leads to the conclusion
that current virtual worlds are likely good enough

3)
Integrative, which leads to the conclusion that current virtual
worlds probably are NOT good enough, but that some relatively
moderate improvements to them could make them so

The sorts of improvements I'm thinking of are stuff like

-- replace animation based skeleton control with Player/Gazebo
style servomechanism based control

-- make more objects decomposed of parts in a meaningful way ...
including enabling stuff like slicing a cake in various ways with a
knife, ripping a page out of a book, etc.

-- give the AI more detailed feedback regarding interaction of
parts of its body w/ the external world, and also some internal
body feelings related to things in the world

-- simplified, not necessarily realistic fluid mechanics
[this one is a nicety, not a necessity, but it would certainly
help with understanding a lot of NLP metaphors] ... having
a world consisting of things only in one state of matter is
somewhat conceptually limiting...


-- Ben



Re: [agi] AGI-08 videos

2008-05-04 Thread Ben Goertzel
Loosemore wrote:
  I hear people enthusing about systems that are filled with holes that were
 discovered decades ago, but still no fix.  I read vague speculations and the
 use of buzzwords ('Theory of Mind'!?).  I see papers discussing narrow AI
 projects.

I suppose there was all that at AGI-08 ... but there was also a lot
more than that ...

There was more genuine dialogue among folks with different (deeply thought)
perspectives on AGI theory and design than I've seen in any gathering
(online or F2F) before.  This is worth a lot, and I expect this sort
of interaction
to intensify over the next few years...

I'm sorry that sharing insights with others whose perspectives are
different from
your own is so uninteresting to you, Richard.  You are among the most
ardent and extreme dogmatists I have encountered in the AGI field.  I think
your perspective is interesting but your degree of confidence in your
correctness and everyone else's wrongness often strikes me as irrational,
given the level of ignorance we all have about this area of study...

ben



Re: [agi] AGI-08 videos

2008-05-04 Thread Ben Goertzel
Richard wrote:
  My god, Mark:  I had to listen to people having a general discussion of
 grounding (the supposed theme of that workshop) without a single person
 showing the slightest sign that they had more than an amateur's perspective
 on what that concept actually means.

I guess you are talking about AGi-06

While that was a successful workshop -- whose perceived success by the vast
majority of participants led to the convening of the even more
succesful AGI-08 --
it didn't really do a good job of sticking to the initially intended theme of
symbol grounding.  I agree that not much of interest about symbol grounding
came out of AGI-06 ... what came out of it was more a theme of the convergence
of various independently originating AGI architectures.  But to me it's a good
rather than  bad thing when a dialogue among peers leads to unexpected
conclusions and directions...

I have a right to be disgusted with
 the current state of this field.

The state of **achievement** in the AGI field is pretty poor to date,
but it's not
as though YOU have achieved anything particularly notable, either... ;-p

I think there are a lot of folks in the field with a pretty deep
understanding of
the nature of the AGI problem.  Different folks understand different aspects
better, of course.  There are of course also well-known researchers whose
approaches I find dramatically wrong-headed...

While I ... similarly to you and many others in this field that seems
to be populated
mainly by narcissistic egomaniacs ;-) ... tend to think I have a
better understanding
of AGI than other researchers, I still believe other researchers have deep
insights into various aspects of AGI that I can learn something from.

Building a mind is a big problem, and I think we can grapple w/ it better
as a community than as individuals

I think the NM/OpenCog design can work, but there are many details to be
worked out, and I hope events like the AGI conferences can help build
a community focused on working them out together ... as well as working
out the details of other AGI designs at the same time ... While I do bet
that NM or OpenCog will get there first, I don't mainly view this as an AGI
race of one researcher against another, rather as a race by the whole AGI
community to get to the goal of a beneficial AGI before someone creates
a rotten AGI or something else nasty happens on the planet...

-- Ben G



[agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Ben Goertzel
Now this looks like a fairly AGI-friendly approach to controlling
animated characters ... unfortunately it's closed-source and
proprietary though...

http://en.wikipedia.org/wiki/Euphoria_%28software%29


ben



Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Ben Goertzel
They are using equational models to simulate the muscles and bones
inside the body...
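
The generic flavor of such equational models, as a Python sketch -- a
textbook spring-damper muscle pulling a point mass toward an
activation-dependent rest length, integrated with Euler steps (certainly
not NaturalMotion's actual equations):

def simulate(activation, steps=100, dt=0.01, k=40.0, c=4.0, m=1.0):
    x, v = 1.0, 0.0                   # limb position and velocity
    rest = 1.0 - 0.5 * activation     # activation shortens the muscle
    for _ in range(steps):
        f = -k * (x - rest) - c * v   # spring-damper muscle force
        v += (f / m) * dt             # Euler integration step
        x += v * dt
    return x

print(simulate(0.0), simulate(1.0))   # relaxed vs fully activated limb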

On Thu, May 1, 2008 at 12:05 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 So what are the principles that enable animated characters and materials
 here to react/move in individual continually different ways, where previous
 characters reacted typically and consistently?

  Ben Now this looks like a fairly AGI-friendly approach to controlling

 
 
 
  animated characters ... unfortunately it's closed-source and
  proprietary though...
 
  http://en.wikipedia.org/wiki/Euphoria_%28software%29
 
 
  ben
 
 
 
 




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Interesting approach to controlling animated characters...

2008-05-01 Thread Ben Goertzel
Actually, it seems their technique is tailor-made for imitative learning

If you gathered data about how people move in a certain context, using
motion capture, then you could use their GA/NN stuff to induce a
program that would generate data similar to the motion-captured data.

This would then be more generalizable than using the raw motion-capture data
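
A toy version of that idea in Python: evolve the two weights of a tiny
controller so its generated trajectory moves toward a fake
"motion-captured" channel, with random-mutation hill climbing standing in
for the real GA/NN machinery:

import math, random

random.seed(0)
target = [math.sin(0.1 * t) for t in range(100)]   # invented mocap channel

def rollout(w):
    # Two-parameter oscillator "controller" generating a trajectory.
    x, v, out = 0.0, 0.1, []
    for _ in range(100):
        v += w[0] * x
        x += w[1] * v
        out.append(x)
    return out

def fitness(w):
    # Negative squared error against the captured trajectory.
    return -sum((a - b) ** 2 for a, b in zip(rollout(w), target))

best = [random.uniform(-0.1, 0.1) for _ in range(2)]
for _ in range(3000):
    cand = [g + random.gauss(0, 0.01) for g in best]
    if fitness(cand) > fitness(best):
        best = cand
print(best, fitness(best))

The induced controller, unlike the raw captured data, can then be run from
new initial conditions -- the generalization point above.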

-- Ben

On Thu, May 1, 2008 at 2:11 PM, Lukasz Stafiniak [EMAIL PROTECTED] wrote:
 IMHO, Euphoria shows that pure GA approaches are lame.
  More details here:
  http://aigamedev.com/editorial/naturalmotion-euphoria



  On Thu, May 1, 2008 at 5:39 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
   Now this looks like a fairly AGI-friendly approach to controlling
animated characters ... unfortunately it's closed-source and
proprietary though...
  
http://en.wikipedia.org/wiki/Euphoria_%28software%29
  
  
ben
  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] How general can be and should be AGI?

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 3:54 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:

   Ben Goertzel [mailto:[EMAIL PROTECTED] wrote 26. April 2008 19:54


   Yes, truly general AI is only possible in the case of infinite
   processing power, which is
   likely not physically realizable.
   How much generality can be achieved with how much
   processing power is not yet known -- math hasn't advanced that far yet.


  My point is not only that  'general intelligence without any limits' would
  need infinite resources of time and memory.
  This is trivial of course. What I wanted to say is that any intelligence has
  to be narrow in a sense if it wants to be powerful and useful. There must
  always be strong assumptions about the world deep in any algorithm of useful
  intelligence.

This is a consequence of the No Free Lunch theorem, essentially, isn't it?

http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

With infinite resources you use exhaustive search (like AIXI or the
Godel Machine) ...
with finite resources you can't afford it, so you need to use (explicitly or
implicitly) search that is guided by some inductive biases.

See Eric Baum's book What Is Thought? for much discussion on genetically
encoded inductive bias and its role in AI.
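
A toy demonstration of the tradeoff (invented demo): a searcher whose
inductive bias assumes a smooth, unimodal landscape beats blind sampling
when the landscape fits that bias, and loses its edge on a random one:

import random

N = 1_000_000
smooth = lambda x: -abs(x - 701_234)               # structured landscape
rugged = lambda x: (x * 2654435761) % 4294967296   # pseudo-random landscape

def biased_climb(f, budget=100):
    # Bias: assume neighbors at some scale have similar fitness; take
    # big steps, halving the step when neither neighbor improves.
    x, step, evals = N // 2, N // 4, 1
    while evals < budget and step >= 1:
        improved = False
        for nx in (x - step, x + step):
            nx = min(max(nx, 0), N - 1)
            evals += 1
            if f(nx) > f(x):
                x, improved = nx, True
                break
        if not improved:
            step //= 2
    return f(x)

def blind_search(f, budget=100):
    return max(f(random.randrange(N)) for _ in range(budget))

random.seed(0)
for name, f in (("smooth", smooth), ("rugged", rugged)):
    print(name, biased_climb(f), blind_search(f))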

-- Ben G



Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Richard,

  Question:  How many systems do you know of in which the system elements
 are governed by a mechanism that has all four of these, AND where the system
 as a whole has a large-scale behavior that has been shown (by any method of
 showing except detailed simulation of the system) to arise from the
 behaviors of the elements of the system?  I would like an example of any
 case of a complex system in which there are large numbers of individual
 elements where each element has (a) memory for recent events, (b) adaptation
 and development of its character over long periods of time, where that
 adaptation is sensitive to influences from other elements, (c) an identity,
 so that what one element does to another will depend crucially on which
 element it is, and (d) nonlinearity in the mechanisms that determine how the
 elements relate and adapt.

I don't really understand your definition of identity in the above; could you
clarify, preferably with examples?

  Show me any non-trivial system, whatsoever, in which there is general
 agreement that all four of these characteristics are present in the
 interacting elements, and where someone figured out ahead of time what the
 overall behavior of the system was going to be, given only knowledge of the
 element mechanisms, and without simulating the whole system and looking at
 the simulation.

  There does not have to be a mathematical proof, just some derivation that
 allows me to see an example of someone predicting the behavior from the
 mechanisms.

I'm not sure what you mean by predicting the behavior.

With the Pet Brain, which does seem to fulfill the criteria you mention above
(pending my new confusion about your meaning of identity) (with the Atoms
in the Novamente AtomTable as the elements in your description), one cannot
predict the precise course of development of the system ... we can't predict
what any one pet will do in response to its environment ... but we do
understand
what sorts of behaviors the pets are capable of... based on general
understanding
of how the system and its dynamics work...

ben



Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
  No:  I am specifically asking for some system other than an AGI system,
 because I am looking for an external example of someone overcoming the
 complex systems problem.

The specific criteria you've described would seem to apply mainly to living
systems ... and we just don't have that much knowledge of the internals of these
yet, due to data-collection issues...

Certainly, the failure of the Biosphere experiment is evidence in your favor.
There, the scientists failed to predict basic high-level properties of
a pretty simple
closed ecosystem, based on their knowledge of the parts.

However, it was not an engineered ecosystem, and their knowledge of the parts
was quite limited compared to our knowledge of the parts of a software system.

In short, my contention is that engineering something, even if it's a
complex system,
places one in a fundamentally different position than if one is
studying a natural system,
simply because one does not understand the makeup of the natural
system that well,
due to limitations in our current measuring instruments.

Ben



Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
On Sun, Apr 27, 2008 at 5:51 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Engineering in the real world is nearly always a mixture of rigor and
  intuition.  Just like analysis of complex biological systems is.
 

  AIEe! NO!  You are clearly not an engineer because a true engineer
 just wouldn't say this.

  Engineering should *NEVER* involve intuition.  Engineering does not require
 exact answers as long as you have error bars but the second that you revert
 to intuition and guesses, it is *NOT* engineering anymore.

Well, we may be using the word intuition differently.

I'll give a very simple example of intuition, based on the only engineering
paper I ever published, which was a civil engineering paper.  What we did was
use statistics to predict how likely it was (based on several physical
measurements) that the soil under a house was going to settle over the next
few decades (causing the house to sink irregularly).  The formula we derived
is now used to determine where to build houses in the outskirts of Las Vegas,
and what kind of foundation to use for the houses.

Not too interesting, but rigorous.
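For flavor, here's a minimal sketch of the *kind* of model involved -- a
logistic regression on soil measurements. The feature names and coefficients
below are hypothetical placeholders, not the actual published formula:

import math

# Hypothetical coefficients, standing in for ones fit to historical soil data
COEF = {"intercept": -2.1, "moisture_pct": 0.08,
        "clay_fraction": 3.5, "plasticity_index": 0.05}

def settlement_probability(moisture_pct, clay_fraction, plasticity_index):
    # Logistic model: probability the soil settles appreciably over decades
    z = (COEF["intercept"]
         + COEF["moisture_pct"] * moisture_pct
         + COEF["clay_fraction"] * clay_fraction
         + COEF["plasticity_index"] * plasticity_index)
    return 1.0 / (1.0 + math.exp(-z))

def formula_applies(clay_fraction):
    # The "intuition" step made explicit: only trust the model on soils
    # similar to the calibration data -- a crude stand-in for judgment
    return 0.1 <= clay_fraction <= 0.6

print(settlement_probability(20.0, 0.3, 12.0))

The formula_applies() guard is where the intuition sneaks in.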

However, one wouldn't bother to use this formula if the soil was too different
in composition from the soil around Vegas.  So in reality the civil engineer
uses some intuition to decide whether the soil is close enough to the right
kind of soil to use our formula.

Now this *could* be made more rigorous, too ... in principle ... but in practice
it isn't.

And so, maybe some houses fall down ;-)

But not many do.  The combination of rigorous formulas applying to restrictive
cases, together with intuition telling you where to apply what formulas, works
OK.

Anyway this is a total digression, and I'm done w/ recreational
emailing for the day!

ben



Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
I don't agree with Mark Waser that we can engineer the complexity out
of intelligence.

I agree with Richard Loosemore that intelligent systems are
intrinsically complex systems in the Santa Fe Institute type sense

However, I don't agree with Richard as to the *extent* of the
complexity problem.  I think he overestimates how hard it will be to
roughly estimate the behavior of AGI systems based on their designs
and measurement of their components.  I think it will be easier to do
this with AGI systems than with natural systems, not because we can
engineer the complexity out of the systems, but because we (as the
designers) understand the systems better, and can measure the systems
more thoroughly...

-- Ben G

On Sun, Apr 27, 2008 at 5:44 PM, Mark Waser [EMAIL PROTECTED] wrote:


  To the best of my knowledge, nobody has *ever* used intuitive
  understanding to second-guess the stability of an artificial complex
  system in which those four factors were all present in the elements in a
  tightly coupled way.
 

  Um, aren't those exactly the rocks that BioMind foundered on?



  So that is all we have as a reply to the complex systems problem:
  engineers saying that they think they can just use intuitive
  understanding to get around it.
 

  Again, not this engineer . . . . I say that we should engineer the
 complexity out of it.



  Rots of ruck, as Rastro would say.
 

  We don't need no stinkin' luck . . . . we've got foresight, planning, and
 engineering








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: **SPAM** Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Rules of thumb are not intuition ... but applying them requires
intuition... unlike applying rigorous methods...

However even the most rigorous science requires rules of thumb (hence
intuition) to do the problem set-up before the calculations start...

ben

On Sun, Apr 27, 2008 at 6:56 PM, Mark Waser [EMAIL PROTECTED] wrote:
 
Engineering should *NEVER* involve intuition.  Engineering does not
 require
   exact answers as long as you have error bars but the second that you
 revert
   to intuition and guesses, it is *NOT* engineering anymore.
  
 
  Well, we may be using the word intuition differently.
 

  Given your examples, we are.


 
  I'll give a very simple example of intuition, based on the only
  engineering paper I ever published ... Not too interesting, but rigorous.
 

  Yeah.  Generally if it's rigorous, it's not considered intuition.


  However, one wouldn't bother to use this formula if the soil was too
 different
  in composition from the soil around Vegas.  So in reality the civil
  engineer uses
  some intuition to decide whether the soil is close enough to the right
  kind of soil,
  to use our formula.
 
  Now this *could* be made more rigorous, too ... in principle ... but in
 practice
  it isn't.
 

  I would have phrased this as The civil engineer uses some simple rules of
 thumb . . . .  which tend to be pretty well established and where they do
 and do not apply also tend to be pretty well established too.  I've never
 really heard the word intuition used to describe this.


  And so, maybe some houses fall down ;-)
  But not many do.  The combination of rigorous formulas applying to
 restrictive
  cases, together with intuition telling you where to apply what formulas,
 works
  OK.
 

  Yeah, you seem to be using the word intuition where I use the words rules
 of thumb.  An interesting distinction and one that we probably should both
 remember . . . .






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: **SPAM** Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
  I said and repeat that we can engineer the complexity out of intelligence
 in the Richard Loosemore sense.
  I did not say and do not believe that we can engineer the complexity out
 of intelligence in the Santa Fe Institute sense.

OK, gotcha...

Yeah... IMO, complexity in the sense you ascribe to Richard was never there
in intelligence in the first place ;-)

ben



Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-27 Thread Ben Goertzel
Actually, I have to clarify that my knowledge of this totally
digressive topic is about
12 years obsolete.  Maybe it's all done differently now...

  However, one wouldn't bother to use this formula if the soil was too 
 different
  in composition from the soil around Vegas.  So in reality the civil
  engineer uses
  some intuition to decide whether the soil is close enough to the right
  kind of soil,
  to use our formula.

  Now this *could* be made more rigorous, too ... in principle ... but in 
 practice
  it isn't.

  And so, maybe some houses fall down ;-)

  But not many do.  The combination of rigorous formulas applying to 
 restrictive
  cases, together with intuition telling you where to apply what formulas, 
 works
  OK.

  Anyway this is a total digression, and I'm done w/ recreational
  emailing for the day!

  ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Ben Goertzel
On Sat, Apr 26, 2008 at 10:03 AM, Dr. Matthias Heger [EMAIL PROTECTED] wrote:
 In my opinion you can apply Gödel's theorem to prove that 100% AGI is not
  possible in this world
  if you apply it not to a hypothetical machine or human being but to the
  whole universe which can be assumed to be a closed system.

Please consult the works of Marcus Hutter (Universal AI) and Juergen Schmidhuber
(Godel Machine).   These thoughts are not new.

Yes, truly general AI is only possible in the case of infinite processing
power, which is likely not physically realizable.  How much generality can
be achieved with how much processing power is not yet known -- math hasn't
advanced that far yet.
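For reference, the exhaustive-search extreme is captured by Hutter's AIXI
expectimax equation, which (writing it from memory in LaTeX; see Hutter's
book for the exact form) selects actions via

a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left[ r_k + \cdots + r_m \right]
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

i.e., an expectimax over all future action/observation/reward sequences,
weighted by a Solomonoff-style mixture over every program q (on a universal
Turing machine U) consistent with the history so far. The inner sum ranges
over all programs, which is exactly where the infinite-resource requirement
comes from.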

Humans are not totally general, yet are much more general than any of the
AI systems yet built.

-- Ben G



[agi] Richard's four criteria and the Novamente Pet Brain

2008-04-26 Thread Ben Goertzel
Richard,

I've been too busy to participate in this thread, but, now I'll chip
in a single comment,
anyways... regarding the intersection btw your thoughts and Novamente's
current work...

You cited the following 4 criteria,

  - Memory.  Does the mechanism use stored information about what it was
 doing fifteen minutes ago, when it is making a decision about what to do
 now?  An hour ago?  A million years ago?  Whatever:  if it remembers, then
 it has memory.
 
  - Development.  Does the mechanism change its character in some way over
 time?  Does it adapt?
 
  - Identity.  Do individuals of a certain type have their own unique
 identities, so that the result of an interaction depends on more than the
 type of the object, but also the particular individuals involved?
 
  - Nonlinearity.  Are the functions describing the behavior deeply
 nonlinear?
 
  These four characteristics are enough. Go take a look at a natural system
 in physics, or an engineering system, and find one in which the components
 of the system interact with memory, development, identity and nonlinearity.
 You will not find any that are understood.

Someone else replied:

  I am quite sure there have been many AI system that have had all four of
 these features and that have worked pretty much as planned and whose
 behavior is reasonably well understood

Actually, the Novamente Pet Brain system that we're now experimenting with,
for controlling virtual dogs and other animals, in virtual worlds, does include
nontrivial

-- memory
-- adaptation/development
-- identity
-- nonlinearity

Each pet has its own memory (procedural, episodic and declarative) and
develops new behaviors, skills and biases over time; each pet has its
own personality and identity; and there is plenty of nonlinearity in
multiple aspects and levels.

Yet, this is really a pretty simplistic AI system (though built in an
architecture with grander ambitions and potential), and we certainly
DO understand the system's behavior to a reasonable level -- though we
can't predict exactly what any one pet will do in any given situation;
we just have to run the system and see.

I agree that the above four features, combined, do lead to a lot of
complexity in the complex systems sense.  However, I don't agree
that this complexity is so severe as to render implausible an
intuitive understanding, from first principles, of the system's
qualitative large-scale behavior based on the details of its
construction.  It's true we haven't done the math to predict the
system's qualitative large-scale behavior rigorously; but as system
designers and parameter tuners, we can tell how to tweak the system to
get it to generally act in certain ways.

And it really seems to me that the same sort of situation will hold
when we go beyond virtual pets to more generally intelligent virtual
agents based on the same architecture.

-- Ben G



Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES---Mark's defense of falsehood

2008-04-26 Thread Ben Goertzel
I believe the monsters in the video game Black & White also fulfilled Richard's
criteria ...

On Sat, Apr 26, 2008 at 1:53 PM, Russell Wallace
[EMAIL PROTECTED] wrote:
 On Sat, Apr 26, 2008 at 6:37 PM, Mark Waser [EMAIL PROTECTED] wrote:
OK.  Name these systems and their successes.  PROVE Richard's statement
   incorrect.  I'm not seeing anyone responsible doing that.

  I don't know if I count as someone responsible :) but I named two
  (TD-Gammon and spam filtering); I can name some more if you like.







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S COMPLEXITY THEORIES---Mark's defense of falsehood

2008-04-26 Thread Ben Goertzel
They are monsters that learn new behaviors via imitation, and that are
controlled internally by adaptive neural nets using a form of Hebbian
learning.
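For those who haven't seen it, the core Hebbian update is tiny. Here's a
generic Python sketch (my illustration only -- not the actual Black & White
code, whose internals I don't know): weights between co-active units grow,
with a decay term keeping them bounded.

def hebbian_step(weights, pre, post, lr=0.1, decay=0.01):
    # Hebb's rule: dw[i][j] = lr * pre[i] * post[j], plus weight decay
    return [[w + lr * pre[i] * post[j] - decay * w
             for j, w in enumerate(row)]
            for i, row in enumerate(weights)]

weights = [[0.0] * 2 for _ in range(3)]
for _ in range(50):
    pre = [1.0, 0.0, 1.0]   # e.g. features observed while watching the player
    post = [1.0, 0.0]       # e.g. the behavior currently being imitated
    weights = hebbian_step(weights, pre, post)
print([[round(w, 2) for w in row] for row in weights])

After repeated co-activation, the weights linking the active feature/behavior
pairs dominate -- the sense in which imitation can be picked up by
correlation alone.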

Nothing that awesome but they do seem to fulfill Richard's criteria.
My friend Jason Hutchens, whose chat bots won the Loebner prize at
least once, wrote some of their AI code.

Novamente's Pet Brain is more sophisticated already...

ben g

On Sat, Apr 26, 2008 at 2:30 PM, Mark Waser [EMAIL PROTECTED] wrote:
 Ben,

 Could you elucidate on this further (or provide references)?  Is it worth
 getting Black & White if you're not a big gaming person?

  - Original Message - From: Ben Goertzel [EMAIL PROTECTED]

  To: agi@v2.listbox.com
  Sent: Saturday, April 26, 2008 2:14 PM
  Subject: **SPAM** Re: [agi] THE NEWEST REVELATIONS ABOUT RICHARD'S
 COMPLEXITY THEORIES---Mark's defense of falsehood



 
  I believe the monsters in the video game Black & White also fulfilled
  Richard's criteria ...
 
  On Sat, Apr 26, 2008 at 1:53 PM, Russell Wallace
  [EMAIL PROTECTED] wrote:
 
  
   On Sat, Apr 26, 2008 at 6:37 PM, Mark Waser [EMAIL PROTECTED]
 wrote:
  OK.  Name these systems and their successes.  PROVE Richard's
 statement
 incorrect.  I'm not seeing anyone responsible doing that.
  
I don't know if I count as someone responsible :) but I named two
(TD-Gammon and spam filtering); I can name some more if you like.
  
  
  
  
  
 
 
 
 
  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]
 
  If men cease to believe that they will one day become gods then they
  will surely become worms.
  -- Henry Miller
 
 
 
 







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



WARNING -- LET'S KEEP THE LIST CIVIL PLEASE ... was Re: [agi] How general can be and should be AGI?

2008-04-26 Thread Ben Goertzel
Ummm... just a little note of warning from the list owner.

Tintner wrote:
  So I await your geometric solution to this problem - (a mere statement of
 principle will do) - with great interest. Well, actually no. Your answer is
 broadly predictable - you 1) won't have any idea here  2) will have nothing
 to say to the point and  3) be, as usual, all bark and no bite - all insults
 and no ideas.

Waser wrote:
  Nice ad hominem.  Asshole.

Uh, no.

Mark, you've been a really valuable contributor to this list for a long period
of time.

But, this sort of name-calling is just not apropos on this list.
Don't do it anymore.

Thanks
Ben



Re: [agi] Richard's four criteria and the Novamente Pet Brain

2008-04-26 Thread Ben Goertzel
Richard,

  How does this relate to the original context in which I cited this list
  of four characteristics?  It looks like your comments are completely outside
 the original context, so they don't add anything of relevance.

I read the thread and I think my comments are relevant

  Let me bring you up to speed:

  1) The mere presence of these four characteristics *somewhere* in a
  system has nothing whatever to do with the argument I presented (this
  was a distortion introduced by Ed Porter in one of his many fits of
  misunderstanding).  Any fool could put together a non-complex system
  with, for example, four distinct modules that each possessed one of
  those four characteristics.  So what?  I was not talking about such
  trivial systems, I was talking about systems in which the elements of
  the system each interacted with the other elements in a way that
  included these four characteristics.

This last sentence is just not very clearly posed.

The four aspects mentioned were

-- memory
-- adaptation/development
-- identity
-- nonlinearity

In the Pet Brain,

-- memory is a dynamic process associated with a few coupled nonlinear
dynamics acting on a certain data store

-- adaptation/development is a process that involves a number of dynamics
acting on memory

-- the identity of a pet is associated with certain specified
parameters, but also
includes self-organizing patterns in the memory that are guided by
these parameters
and other processes

-- nonlinearity pervades all major aspects of the system, and the
system as a whole

  So when you point to the fact that somewhere in Novamente (in a single
  'pet' brain) you can find all of these, it has no bearing on the
  argument I presented.  I was principally referring to these
  characteristics appearing at the symbol level (and symbol-manipulation
  level), not the 'pet brain' level.  You can find as much memory,
  identity, etc etc as you like, in other sundry parts of Novamente, but
  it won't make any difference to the place where I was pointing to it.

I'm not sure how you're defining the term symbol.

If you define it in the classical Peircean sense (symbol as contrasted with
icon and index) then indeed the four aspects you mentioned do occur in
the Pet Brain on the symbol level.

  2)  Even if you do come back to me and say that the symbols inside
  Novamente all contain all four characteristics, I can only say so what
  a second time ;-).  The question I was asking when I laid down those
  four characteristics was How many physical systems do you know of in
  which the system elements are governed by a mechanism that has all four
  of these, AND where the system as a whole has a large-scale behavior
  that has been mathematically proven to arise from the behaviors of the
  elements of the system?

  The answer to that question (I'll save you the trouble) is 'zero'.

But why do you place so much emphasis on mathematical proof?

I don't think that mathematical proof is needed for creating an AGI system.

(And I say this as a math PhD, who enjoys math more than pretty much any
other pursuit...)

Formal software verification is still a crude science, so that very few of the
software programs we utilize have been (or could tractably be) proven to
fulfill their specifications.  We create software programs based on piecemeal
rigorous justifications of fragments of the software, combined with intuitive
understanding of the whole.

Furthermore, as a mathematician I'm acutely aware of physicists' often low level
of mathematical rigor.  As a single example, Feynman integrals in particle
physics were used by physicists for decades, to do real calculations predicting
the outcomes of real experiments with great accuracy, before finally some
mathematicians came along and provided them with a rigorous mathematical
grounding.

  The inference to be made from that fact is that anyone who does put
  together a system like this -- like, e.g., the fearless Mr. B. Goertzel --
  is taking quite a bizarre and extraordinary position, if he says that he
  alone, of all people, is quite confident that his particular system,
  unlike all the others, is quite understandable.

Understandable is a vague term.  In complex systems it's typical that
one can predict statistically properties of the whole system's behavior, yet
can't predict the details.  So a complete understanding is intractable but
a partial, useful qualitative understanding is more feasible to come by.
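The chaotic logistic map is a handy minimal example of that distinction (a
generic illustration, nothing Novamente-specific): detail prediction fails
within a few dozen steps, while the long-run statistics are rock-solid.

def logistic_trajectory(x0, n, r=4.0):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Detail prediction: a 1e-10 perturbation wrecks it within ~60 steps
a = logistic_trajectory(0.4, 60)
b = logistic_trajectory(0.4 + 1e-10, 60)
print("divergence at step 60:", abs(a[-1] - b[-1]))

# Statistical prediction: the fraction of time spent below 0.5 converges to
# the same value for both trajectories (set by the invariant measure)
long_a = logistic_trajectory(0.4, 100000)
long_b = logistic_trajectory(0.4 + 1e-10, 100000)
print(sum(x < 0.5 for x in long_a) / len(long_a),
      sum(x < 0.5 for x in long_b) / len(long_b))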

Also, I note there's a difference btw an engineered and a natural system,
in terms of the degree of inspection one can achieve of the system's internal
details.

I strongly suspect that in 10-20 years neuroscientists will arrive at a decent
qualitative explanation of how the lower-level mechanisms of the brain generate
the higher-level patterns of the human mind.  The reason we haven't yet is
not that there is some insuperable complexity barrier, but rather that we
lack the appropriate data.

For an AGI 

Re: [agi] Other AGI-like communities

2008-04-23 Thread Ben Goertzel
On Wed, Apr 23, 2008 at 5:21 AM, Joshua Fox [EMAIL PROTECTED] wrote:

 To return to the old question of why AGI research seems so rare, Samsonovich
 et al. say
 (http://members.cox.net/alexei.v.samsonovich/samsonovich_workshop.pdf)

 'In fact, there are several scientific communities pursuing the same or
 similar goals, each unified under their own unique slogan: machine /
 artificial consciousness, human-level intelligence, embodied cognition,
 situation awareness, artificial general intelligence, commonsense
 reasoning, qualitative reasoning, strong AI, biologically inspired
 cognitive architectures (BICA), computational consciousness,
 bootstrapped learning, etc. Many of these communities do not recognize
 each other.'

I believe these various academic subcommunities ARE quite aware of each other.

And I would divide them into two categories:

1)
Those that are concerned with rather specialized approaches to
intelligence, e.g. qualitative reasoning, commonsense reasoning etc.

2)
Those that do not really constitute a coherent research community,
e.g. BICA, human-level AI ... but rather merely constitute a few
assorted workshops, journal special issues, etc.

-- Ben



Re: [agi] Other AGI-like communities

2008-04-23 Thread Ben Goertzel
On Wed, Apr 23, 2008 at 11:29 AM, Mike Tintner [EMAIL PROTECTED] wrote:
 Ben/Joshua:

  How do you think the AI and AGI fields relate to the embodied & grounded
 cognition movements in cog. sci?  My impression is that the majority of
 people here (excluding you) still have only limited awareness of them --
 are still operating in total & totally doomed defiance of their findings:

My opinion is that the majority of people here are aware of these
ideas, and consider them unproven speculations not agreeing with their
own intuition ;-)

  Grounded cognition rejects traditional views that cognition is computation
  on amodal symbols in a modular system, independent of
  the brain's modal systems for perception, action, and introspection.
  Instead, grounded cognition proposes that modal simulations,
  bodily states, and situated action underlie cognition.  Barsalou

  Grounded cognition here obviously means not just pointing at things, but
 that all traditional rational operations are, and have to be, supported by
 image-inative simulation in any form of general intelligence.

I wouldn't agree with such a strong statement.  I think the grounding
of ratiocination in image-ination is characteristic of human
intelligence, and must thus be characteristic of any highly human-like
intelligent system ... but, I don't see any reason to believe it's the
ONLY path.

The minds we know or can imagine, almost surely constitute a
teeny-tiny little backwater of the overall space of possible minds ;-)

-- Ben G



Re: Open source (was Re: [agi] The Strange Loop of AGI Funding: now logically proved!)

2008-04-20 Thread Ben Goertzel
Bob...

... and of course, OSS does not contradict paying programmers to write software.

I have no plans to dissolve Novamente LLC, for example ;-p ... we're
actually doing better than ever ...

And, I note that SIAI is now paying 2 programmers (one full time, one
3/5 time) to work on OpenCog specifically ...

And we will have a bunch of students getting paid by Google to code
for OpenCog this summer, under the Google Summer of Code program...

It is certainly true that a paid team of full-time programmers can
address certain sorts of issues faster and more efficiently than a
distributed team of part-timers.  My idea is not to replace the former
with the latter, but rather to make use of both, working toward
closely overlapping goals...

-- Ben G


On Sun, Apr 20, 2008 at 7:49 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 Until a true AGI is developed I think it will remain necessary to pay
  programmers to write programs, at least some of the time.  You can't
  always rely upon voluntary effort, especially when the problem you
  want to solve is fairly obscure.






  On 19/04/2008, Ben Goertzel [EMAIL PROTECTED] wrote:
Translation: We all (me included) now accept as reasonable that in order 
 to
 briefly earn a living wage, that we must develop radically new and 
 useful
 technology and then just give it away.
  
   ...
 Steve Richfield
  
The above is obviously a straw man statement ... but I think it
**is** true these days that open-sourcing one's code is a viable way
to get one's software vision realized, and is not necessarily
contradictory with making a profit.
  
This doesn't mean that OSS is the only path, nor that it's necessarily
an easy thing to make work...
  
  
-- Ben
  
  





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
On Fri, Apr 18, 2008 at 1:01 PM, Pei Wang [EMAIL PROTECTED] wrote:
 PREMISES:

  (1) AGI is one of the most complicated problems in the history of
  science, and therefore requires substantial funding for it to happen.


Potentially, though, massively distributed, collaborative open-source
software development could render your first premise false ...


  (2) Since all previous attempts failed, investors and funding agencies
  have enough reason to wait until a recognizable breakthrough to put
  their money in.

  (3) Since the people who have the money are usually not AGI
  researchers (so won't read papers and books), a breakthrough becomes
  recognizable to them only by impressive demos.

  (4) If the system is really general-purpose, then if it can give an
  impressive demo on one problem, it should be able to solve all kinds
  of problems to roughly the same level.

  (5) If a system already can solve all kinds of problems, then the
  research has mostly finished, and won't need funding anymore.

  CONCLUSION: AGI research will get funding when and only when the
  funding is no longer needed anymore.

  Q.E.D. :-(

  Pei





-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
  Potentially, though, massively distributed, collaborative open-source
  software development could render your first premise false ...
 

    Though it is unlikely to do so, because collaborative open-source
 projects are best suited to situations in which the fundamental ideas behind
 the design have been solved.

I believe I've solved the fundamental issues behind the Novamente/OpenCog
design...

Time and effort will tell if I'm right ;-)

ben



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
On Fri, Apr 18, 2008 at 5:35 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Pei:  I don't really want

  a big gang at now (that will only waste the time of mine and the
  others), but a small-but-good gang, plus more time for myself ---
  which means less group debates, I guess. ;-)

  Alternatively, you could open your problems for group discussion &
 think-tanking...   I'm surprised that none of you system-builders do this.


That is essentially what I'm doing with OpenCog ... but it's a big job,
just preparing stuff in terms of documentation and code and designs
so that others have a prayer of understanding it ...

ben



Re: [agi] The Strange Loop of AGI Funding: now logically proved!

2008-04-18 Thread Ben Goertzel
YKY,

   I believe I've solved the fundamental issues behind the Novamente/OpenCog
   design...

  It's hard to tell whether you have really solved the AGI problem, at
  this stage. ;)

Understood...

  Also, your AGI framework has a lot of non-standard, home-brew stuff
  (especially the knowledge representation and logic).  I bet there are
  some merits in your system, but is it really so compelling that
  everybody has to learn it and do it that way?

I don't claim that the Novamente/OpenCog design is the **only** way ... but I do
note that the different parts are carefully designed to interoperate in subtle
ways, so replacing any one component w/ some standard system won't work.

For instance, replacing PLN with some more popular but more limited
probabilistic logic framework would break a lot of other stuff...

  Creating a standard / common framework is not easy.  Right now I think
  we lack such a consensus.  So the theorists are not working together.

One thing that stuck out at the 2006 AGI Workshop and AGI-08
conference, was the commonality between several different approaches,
for instance

-- my Novamente approach
-- Nick Cassimatis's Polyscheme system
-- Stan Franklin's LIDA approach
-- Sam Adams's (IBM) Joshua Blue
-- Alexei Samsonovich's BICA architecture

Not that these are all the same design ... there are very real differences
... but there are also a lot of deep parallels.   Novamente seems to
be more fully fleshed out than these overall, but each of these guys
has thought through specific aspects more deeply than I have.

Also, John Laird (SOAR creator) is moving SOAR in a direction that's a
lot closer to the Goertzel/Cassimatis/Franklin/Adams style system than
his prior approaches ...

All the above approaches are

-- integrative, involving multiple separate components tightly bound
together in a high-level cognitive architecture

-- reliant to some extent on formal inference (along with subsymbolic methods)

-- clearly testable/developable in a virtual worlds setting

I would bet that with appropriate incentives all of the above
researchers could be persuaded to collaborate on a common AI project
-- without it degenerating into some kind of useless
committee-think...

Let's call these approaches LIVE, for short -- Logic-incorporating,
Integrative, Virtually Embodied

On the other hand, when you look at

-- Pei Wang's approach, which is interesting but is fundamentally
committed to a particular form of uncertain logic that no other AGI
approach accepts

-- Selmer Bringsjord's approach, which is founded on the notion that
standard predicate  logic alone is The Answer

-- Hugo de Garis's approach which is based on brain emulation

you're looking at interesting approaches that are not really
compatible with the LIVE approach ... I'd say, you could not viably
bring these guys into a collaborative AI project based on the LIVE
approach...

So, I do think more collaboration and co-thinking could occur than
currently does ... but also that there are limits due to fundamentally
different understandings

OpenCog is general enough to support any approach falling within the
LIVE category, and a number of other sorts of approaches as well
(e.g. a variety of neural net based architectures).  But it is not
**completely** general and doesn't aim to me ... IMO, a completely
general AGI development
framework is just basically, say, C++ and Linux ;-)

-- Ben G



Re: [agi] An Open Letter to AGI Investors

2008-04-17 Thread Ben Goertzel
  We may well see a variety of proto-AGI applications in different
  domains, sorta midway between narrow-AI and human-level AGI, including
  stuff like

  -- maidbots

  -- AI financial traders that don't just execute machine learning
  algorithms, but grok context, adapt to regime changes, etc.

  -- NL question answering systems that grok context and piece together
  info from different sources

  -- artificial scientists capable of formulating nonobvious hypotheses
  and validating them via data analysis, including doing automated data
  preprocessing, etc.

And not to forget, of course, smart virtual pets and avatars in games
and virtual worlds ;-))




Re: [agi] database access fast enough?

2008-04-17 Thread Ben Goertzel
Hi Mark,

  This is, by the way, my primary complaint about Novamente -- far too much
 energy, mind-space, time, and effort has gone into optimizing and repeatedly
 upgrading the custom atom table that should have been built on top of
 existing tools instead of being built totally from scratch.

Really, work on the AtomTable has been a small percentage of work on
the Novamente Cognition Engine ... and, the code running the AtomTable is
now pretty much the same as it was in 2001 (though it was tweaked to make it
64-bit compatible, back in 2004 ... and there has been ongoing bug-removal
as well...).  We wrote some new wrappers for the AtomTable
last year (based on STL containers), but that didn't affect the
internals, just the API.

It's true that a highly-efficient, highly-customizable graph database could
potentially serve the role of the AtomTable, within the NCE or OpenCog.

But that observation is really not
such a big deal.  Potentially, one could just wrap someone else's graph DB
behind the 2007 AtomTable API, and this change would be completely transparent
to the AI processes using the AtomTable.

However, I'm not convinced this would be a good idea.  There are a lot of
useful specialized indices in the AtomTable, and replicating all this in some
other graph DB would wind up being a lot of work ... and we could use that
time/effort on other stuff instead

Using a relational DB rather than a graph DB is not appropriate for the NCE
design, however.

But we've been over this before...

And, this is purely a software implementation issue rather than an AI issue,
of course.  The NCE and OpenCog designs require **some** graph or
hypergraph DB which supports the manual and automated creation of
complex customized indices ... and supports refined cognitive control
over what lives on disk and what lives in RAM, rather than leaving this
up to some non-intelligent automated process.  Given these requirements,
the choice of how to realize them in software is not THAT critical ... and
what we have there now works
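To make the requirement concrete, here's a minimal hypergraph-store sketch in
Python (purely illustrative -- this is NOT the actual AtomTable API, and all
names here are hypothetical). The point is that the indices are maintained
incrementally for the specific query patterns the cognitive processes hammer
on, rather than being answered by generic B-tree scans:

from collections import defaultdict

class ToyAtomTable:
    def __init__(self):
        self.atoms = {}                   # handle -> (type, name, outgoing)
        self.by_type = defaultdict(set)   # specialized index: type -> handles
        self.incoming = defaultdict(set)  # specialized index: target -> links
        self.next_handle = 0

    def add(self, atom_type, name=None, outgoing=()):
        h = self.next_handle
        self.next_handle += 1
        self.atoms[h] = (atom_type, name, tuple(outgoing))
        self.by_type[atom_type].add(h)
        for target in outgoing:
            self.incoming[target].add(h)  # updated on write, O(1) to read
        return h

    def links_of_type_into(self, atom_type, target):
        # A query pattern inference runs constantly, answered by
        # intersecting two always-up-to-date index sets
        return self.by_type[atom_type] & self.incoming[target]

table = ToyAtomTable()
cat = table.add("ConceptNode", "cat")
animal = table.add("ConceptNode", "animal")
table.add("InheritanceLink", outgoing=(cat, animal))
print(table.links_of_type_into("InheritanceLink", animal))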


-- Ben G



Re: [agi] database access fast enough?

2008-04-17 Thread Ben Goertzel
On Thu, Apr 17, 2008 at 2:42 PM, Mark Waser [EMAIL PROTECTED] wrote:

  Really, work on the AtomTable has been a small percentage of work on
  the Novamente Cognition Engine ... and, the code running the AtomTable is
  now pretty much the same as it was in 2001 (though it was tweaked to make
 it
  64-bit compatible, back in 2004 ... and there has been ongoing bug-removal
  as well...).
 

  And . . . and . . . and . . . :-)  It's far more than you're
 admitting to yourself.:-)

That's simply not true, but I know of no way to convince you.

The AtomTable work was full-time work for two guys for a few months
in 2001, and since then it's been occasional part-time tweaking by two
people who have been full-time engaged on other projects.

  We wrote some new wrappers for the AtomTable
  last year (based on STL containers), but that didn't affect the
  internals, just the API.
 

  Which is what everything should have been designed around anyways -- so
 effectively, last year was a major breaking change that affected *all* the
 software written to the old API.

Yes, but calls to the AT were already well-encapsulated within the code,
so changing from the old API to the new has not been a big deal.

  Absolutely.  That's what I'm pushing for.  Could you please, please publish
 the 2007 AtomTable API?  That's actually far, far more important than the
 code behind it.  Please, please . . . . publish the spec today . . . .
 pretty please with a cherry on top?

It'll be done as part of the initial OpenCog release, which will be pretty
soon now ... I don't have a date yet though...

  However, I'm not convinced this would be a good idea.  There are a lot of
  useful specialized indices in the AtomTable, and replicating all this in
 some
  other graph DB would wind up being a lot of work ... and we could use that
  time/effort on other stuff instead
 

  Which (pardon me, but . . .  ) clearly shows that you're not a professional
 software engineer

I'm not, but many other members of the Novamente team are.

  My contention is that you all should be
 *a lot* further along than you are.  You have more talent than anyone else
 but are moving at a truly glacial pace.

90% of Novamente LLC's efforts historically have gone into various AI
consulting projects
that pay the bills.

Now, about 60% is going into consulting projects, and 40% is going
into the virtual
pet brain project

We have very rarely had funding to pay folks to work on AGI, so we've
worked on it
in bits and pieces here and there...

Sad, but true...

 I understand that you believe that
 this is primarily due to other reasons but *I am telling you* that A LOT of
 it is also your own fault due to your own software development choices.

You're wrong, but arguing the point over and over isn't getting us
anywhere.

  Worse, fundamentally, currently, you're locking *everyone* into *your*
 implementation of the atom table.

Well, that will not be the case in OpenCog.  The OpenCog architecture
will be such that other containers could be inserted if desired.

Why not let someone else decide whether
 or not it is worth their time and effort to implement those specialized
 indices on another graph DB of their choice?  If you would just open up the
 API and maybe accept some good enhancements (or, maybe even, if necessary,
 some changes) to it?

Yes, that's going to happen within OpenCog.

  Using a relational DB rather than a graph DB is not appropriate for the
 NCE
  design, however.
 

  Incorrect.  If the API is identical and the speed is identical, whether it
 is a relational db or a graph db *behind the scenes* is irrelevant.  Design
 to your API -- *NOT* to the underlying technology.  You keep making this
 mistake.

The speed will not be identical for an important subset of queries, because
of intrinsic limitations of the B-tree data structures used inside RDBs.  We
discussed this before.


  Seriously -- I think that you're really going to be surprised at how fast
 OpenCog might take off if you'd just relax some control and concentrate on
 the specifications and the API rather than the implementation issues that
 you're currently wasting time on.

I am optimistic about the development speedup we'll see from OpenCog,
but not for the reason you cite.

Rather, I think that by opening it up in an intelligent way, we're simply
going to get a lot more people involved, contributing their code, their
time, and their ideas.  This will accelerate things considerably, if all
goes well.

I repeat that NO implementation time has been spent on the AtomTable
internals for quite some time now.  A few weeks was spent on the API
last year, by one person.  I'm not sure why you want to keep exaggerating
the time put into that component, when after all you weren't involved in
its development at all (and I didn't even know you when the bulk of
that development was being done!!)

I don't care if, in OpenCog, someone replaces the AtomTable internals
with something 

Re: [agi] Posting Strategies - A Gentle Reminder

2008-04-14 Thread Ben Goertzel
These things of course require a balance.

In many academic or corporate fora, radical innovation is frowned upon
so profoundly (in spite of sometimes being praised and desired, on the
surface, but in a confused and not fully sincere way), that it's continually
necessary to remind people of the need to open their minds and consider
the possibility that some of their assumptions are wrong.

OTOH, in **this** forum, we have a lot of openness and open-mindedness,
which is great ... but the downside is, people who THINK they have radical
new insights but actually don't tend to get a LOT of attention, often to the
detriment of discussions that are more interesting yet less radical on the
surface.

I do find that most posters on this list seem to have put a lot of thought
(as well as a lot of feeling) into their ideas and opinions.  However, it's
frustrating when people re-tread issues over and over in a way that demonstrates
they've never taken the trouble to carefully study what's been done before.

I think it can often be super-valuable to approach some issue afresh, without
studying the literature first -- so as to get a brand-new view.  But
then, before
venting one's ideas in a public forum, one should check one's ideas against
the literature (in the idea-validation phase .. after the
idea-generation phase) to
see whether they're original, whether they're contradicted by well-thought-out
arguments, etc.

-- Ben G


On Mon, Apr 14, 2008 at 9:54 AM, Bob Mottram [EMAIL PROTECTED] wrote:
 Good advice.  There are of course sometimes people who are ahead of the
 field, but in conversation you'll usually find that the genuine innovators
 have a deep - bordering on obsessive - knowledge of the field that they're
 working in and are willing to demonstrate/test their claims to anyone even
 remotely interested.






 On 14/04/2008, Brad Paulsen [EMAIL PROTECTED] wrote:
 
 
  Dear Fellow AGI List Members:
 
  Just thought I'd remind the good members of this list about some
 strategies for dealing with certain types of postings.
 
  Unfortunately, the field of AI/AGI is one of those areas where anybody
 with a pulse and a brain thinks they can design a program that thinks.
 Must be easy, right?  I mean, I can do it so how hard can it be to put me
 in a can?  Well, that's what some very smart people in the 1940's, '50's
 and into the 1960's thought.  They were wrong.  Most of them now admit it.
 So, on AI-related lists, we have to be very careful about the kinds of
 conversations on which we spend our valuable time.  Here are some
 guidelines.  I realize most people here know this stuff already.  This is
 just a gentle reminder.
 
  If a posting makes grandiose claims, is dismissive of mainstream research,
 techniques, and institutions or the author claims to have special
 knowledge that has apparently been missed (or dismissed) by all of the
 brilliant scientific/technical minds who go to their jobs at major
 corporations and universities every day (and are paid for doing so), and
 also by every Nobel Laureate for the last 20 years, this posting should be
 ignored.  DO NOT RESPOND to these types of postings: positively or
 negatively.  The poster is, obviously, either irrational or one of the
 greatest minds of our time.  In the former case, you know they're full of
 it, I know they're full of it, but they will NEVER admit that.  You will
 never win an argument with an irrational individual.  In the latter case,
 stop and ask yourself: Why is somebody that fantastically smart posting to
 this mailing list?  He or she is, obviously, smarter than everyone here.
 Why does he/she need us to validate his or her accomplishments/knowledge by
 posting on this list?  He or she should have better things to do and,
 besides, we probably wouldn't be able to understand (appreciate) his/her
 genius anyhow.
 
  The only way to deal with postings like this is to IGNORE THEM.  Don't
 rise to the bait.  Like a bad cold, they will be irritating for a while, but
 they will, eventually, go away.
 
  Cheers,
 
  Brad
 
  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] How Bodies of Knowledge Grow

2008-04-10 Thread Ben Goertzel
FWIW, I'll note that a heavy focus on metrics and testing has been part of every
US government funded AI project in history ... and this focus has not
gotten them
very far, generally speaking ...

-- Ben G

On Thu, Apr 10, 2008 at 5:25 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 MW: I believe that I was also quite clear with my follow-on comment of a
 cart-before-the-horse problem.  Once we know how to acquire and store
 knowledge, then we can develop metrics for testing it -- but, for now, it's
 too early to go after the problem.
 

  You're basically agreeing with what I said you said, which wasn't meant to
 be disparaging.  You're putting testing, or metrics for testing, later -- and
 I imagine few AI-ers would disagree with you.  I'm suggesting that won't
 work - and that a new cog. sci. synthesis is beginning - just beginning - to
 emerge here.  I don't mind tantrums, but there might as well be some point
 to them.








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Big Dog

2008-04-10 Thread Ben Goertzel
Peruse the video:
http://www.youtube.com/watch?v=W1czBcnX1Ww&feature=related

Of course, they are only showing the best stuff.  And I am sure there
is plenty of work left to do.  But from the variety of behaviors that
are displayed, I would say that the problem of quadruped walking is
surprisingly well solved ... apparently it's way easier than biped
locomotion...

ben



[agi] Unsupervised grammar mining from text [was GSoC: Learning Simple Grammars]

2008-04-05 Thread Ben Goertzel
I looked through the ADIOS papers...

It's interesting work, and it reminds me of a number of other things, including

-- Borzenko's work, http://proto-mind.com/SAHIN.pdf

-- Denis Yuret's work on mutual information based grammar learning,
from the late 90's

-- Robert Hecht-Nielsen's much-publicized work a couple years back, on
automated language learning and generation

-- Tony Smith's work on automated learning of function-word based
grammars from text, done in his MS thesis from University of Calgary
in the 90's

Looking at these various things together, it does seem clear that one
can extract a lot of syntactic structure from free text in an
unsupervised manner.
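The statistical move underlying most of these systems fits in a few lines (a
toy sketch of my own, not the ADIOS algorithm itself): score adjacent word
pairs by pointwise mutual information, and treat high-PMI pairs as candidate
constituents.

import math
from collections import Counter

corpus = ("the dog chased the cat . the cat saw the dog . "
          "a dog and a cat ate . the dog ate .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    # Pointwise mutual information: log2( P(w1,w2) / (P(w1) * P(w2)) )
    p_xy = bigrams[(w1, w2)] / (n - 1)
    return math.log2(p_xy / ((unigrams[w1] / n) * (unigrams[w2] / n)))

for w1, w2 in sorted(bigrams, key=lambda b: -pmi(*b))[:5]:
    print(w1, w2, round(pmi(w1, w2), 2))

Systems like ADIOS go much further -- significance testing, generalizing
discovered slots into equivalence classes, recursing on the rewritten
corpus -- but distributional statistics of this sort are the raw material.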

It is unclear whether one can get the full syntactic subtlety of
everyday English though.  Every researcher in this area seems to get
to a certain stage (mining the simpler aspects of English syntax), and
then never get any further.

However, I have another complaint to make.  Let's say you succeed with
this, and make an English-language-syntax recognizer that works, say,
as well as the link parser, by pure unsupervised learning.  That is
really cool but ... so what?

Syntax parsing is already not the bottleneck for AGI; we already have
decent parsers.  The bottleneck is semantic understanding.

Having a system that can generate random sentences is not very useful,
nor is having a bulky, inelegant, automatically learned formal-grammar
model of English.

If one wants to hand-craft mapping rules taking syntax parses into
logical relations, one is better off with a hand-crafted grammar than
a messier learned one.

If one wants to have the mapping from syntax into semantics be
learned, then probably one is better off having syntax be learned in a
coherent overall experiential-learning process -- i.e. as part of a
system learning how to interact in a world -- rather than having
syntax learned in an artificial, semantics-free manner via
corpus-mining.
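To make the hand-crafted-mapping-rules option concrete, here is a toy sketch
of a single rule taking a dependency-style parse into a logical relation.
The relation names (subj, obj) and the helper are invented for illustration;
real rule sets are vastly larger and messier:

import java.util.Map;

// Toy sketch: map parse relations such as subj(ate, Mary) and
// obj(ate, dinner) into the logical relation ate(Mary, dinner).
public class ParseToLogic {
    static String toLogic(Map<String, String[]> deps) {
        String verb = deps.get("subj")[0];      // subj(verb, subject)
        String subject = deps.get("subj")[1];
        String[] obj = deps.get("obj");         // obj(verb, object), optional
        return obj == null
            ? verb + "(" + subject + ")"
            : verb + "(" + subject + ", " + obj[1] + ")";
    }

    public static void main(String[] args) {
        System.out.println(toLogic(Map.of(
            "subj", new String[] {"ate", "Mary"},
            "obj",  new String[] {"ate", "dinner"})));  // ate(Mary, dinner)
    }
}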

In other words: suppose you could make ADIOS work for real ... how
would that help along the path of AGI?

-- Ben G



On Sat, Apr 5, 2008 at 8:46 AM, Evgenii Philippov [EMAIL PROTECTED] wrote:

  On Sat, Apr 5, 2008 at 7:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
For instance, I'll be curious whether ADIOS's automatically inferred
grammars can deal with recursive phrase structure, with constructs
like the person with whom I ate dinner, and so forth

  ADIOS papers have a lot of remarks like recursion is not implemented,
  but I think it IS able to deal with THIS kind of recursion... But this
  is TBD---I am not sure.



  e

  
  
  
On Sat, Apr 5, 2008 at 7:57 AM, Evgenii Philippov [EMAIL PROTECTED] 
 wrote:

  Hello folks,


  On Thu, Mar 27, 2008 at 11:06 PM, Ben Goertzel [EMAIL PROTECTED] 
 wrote:
In general, I personally have lost interest in automated inference 
 of grammars
from text corpuses, though I did play with that in the 90's (and 
 got bad results
like everybody else).

  Uh oh! My current top-priority is playing with ADIOS algorithm for
  unsupervised grammar learning, which is based on extended Hidden
  Markov Models. Its results are plainly fantastic---it is able to
  create a working grammar not only for English, but also for many other
  languages, plus languages with spaces removed, plus DNA structure,
  protein structure, etc etc etc. Some results are described in Zach
  Solan's papers and the algorithm itself is described in his
  dissertation.

  http://www.tau.ac.il/~zsolan/papers/ZachSolanThesis.pdf
  http://adios.tau.ac.il/

  And its grammars are completely comprehensible for a human. (See the
  homepage, papers and the thesis for diagrams.)

  Also, they can very easily be used for language generation, and Z
  Solan did a lot of experiments with this.

  It has no relation to Link Grammar though.


Automated inference of grammar from language used in embodied 
 situations
interests me a lot ... and cheating via using hand-created NLP 
 tools may
be helpful too...
  
But I sort of feel like automated inference of grammars from 
 corpuses may
be a HARDER problem than learning grammar based on embodied 
 experience...
which is hard enough...

  ADIOS solves this hard problem easily. Some or all modifications of
  ADIOS are memory-intensive though; I have not implemented it completely
  yet.

  I am doing it in Java.

  Also, Google Scholar http://scholar.google.com/ shows no evidence of
  substantial subsequent work by other people in the direction of ADIOS.


OTOH we're talking about research here and nobody's intuition is 
 perfect ...
so what you're describing could potentially be a great GSOC project 
 mentored
by YOU not me .. I don't want to impose my own personal intuition 
 and taste

[agi] Fwd: [DIV10] opportunity for graduate studies in evolution of human creativity

2008-04-01 Thread Ben Goertzel
 the
attribute level because it reflects understanding at the conceptual
level, such as analogical transfer (e.g. of the concept HANDLE from
KNIFE to CUP), or the knowledge that two artifacts are complementary
(e.g. MORTAR and PESTLE). The program then postulates 'lineages', i.e.
patterns of relatedness, amongst the artifacts, that take into account
both externally driven change (e.g. trade) and internally driven
change (e.g. blending of different traditions) using as an initial
data set decorated ceramics from Easter Island. The program has the
potential to be used for other elements of culture (e.g. gestures or
languages); indeed to reconstruct the cultural evolution of the
various interacting facets of human worldviews.
In sum, the proposed research advances a promising and innovative
approach to the study of cultural evolution, with implications that
extend across the sciences, social sciences, and humanities. It
tackles questions that lie at the foundation of who we are and what
makes us distinctive.







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Ben Goertzel
 it could contain inconsistencies,
 but you are going to have that problem with any inductive system.)  If you
 are going to be using a rational-based AGI method, then you are going to
 want some theories that exhibit critical reasoning.  These kinds of theories
 might turn out to be the keystone in developing more sophisticated models
 about the world and reevaluating less sophisticated models.
 
  Jim Bromer



  



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Novamente's next 15 minutes of fame...

2008-03-31 Thread Ben Goertzel
We haven't launched anything public yet (and I'm not sure when we will)
but the prototype experiment shown in that machinima was done in Second
Life, yeah ...

We have also experimented with other virtual worlds such as Multiverse...

Ben G

On Mon, Mar 31, 2008 at 2:38 PM, Rafael C.P. [EMAIL PROTECTED] wrote:
 Is it running inside Second Life already, or is it another environment? (sorry,
 I don't know SL very well)



 On Sat, Mar 29, 2008 at 11:40 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

  Nothing has been publicly released yet, it's still at the
  research-prototype stage ... I'll announce when we have some kind of
  product release...
 
  ben
 
 
 
 
  On Sat, Mar 29, 2008 at 5:39 PM, Jim Bromer [EMAIL PROTECTED] wrote:
   It sounds interesting.  Can anyone go and try it, or does it cost money or
   something?  Is it set up already?
   Jim Bromer
  
  
  
   On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
  
   
   
   
   
  
 http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
   
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
   
If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller
   
  
  

  
 
 
 
  --
 
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]
 
  If men cease to believe that they will one day become gods then they
  will surely become worms.
  -- Henry Miller
 
 



 --
 =
 Rafael C.P.
 =

  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-31 Thread Ben Goertzel
  Thank you for your politeness and your insightful comments.  I am
  going to quit this group because I have found that it is a pretty bad
  sign when the moderator mocks an individual for his religious beliefs.

FWIW, I wasn't joking about your algorithm's putative
divine inspiration in my role as moderator, but rather in my role
as individual list participant ;-)

Sorry that my sense of humor got on your nerves.  I've had that effect
on people before!

Really though: if you're going to post messages in forums populated
by scientific rationalists, claiming divine inspiration for your ideas, you
really gotta expect **at minimum** some good-natured ribbing... !

-- Ben G



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
My judgment as list moderator:

1)  Discussions of particular, speculative algorithms for solving SAT
are not really germane to this list

2)  Announcements of really groundbreaking new SAT algorithms would
certainly be germane to the list

3) Discussions of issues specifically regarding the integration of SAT solvers
into AGI architectures are highly relevant to this list

4) If you think some supernatural being placed an insight in your mind, you're
probably better off NOT mentioning this when discussing the insight in a
scientific forum, as it will just cause your idea to be taken way less seriously
by a vast majority of scientific-minded people...

-- Ben G, List Owner

On Sun, Mar 30, 2008 at 4:41 PM, Mark Waser [EMAIL PROTECTED] wrote:


 I agree with Richard and hereby formally request that Ben chime in.

 It is my contention that SAT is a relatively narrow form of Narrow AI and
 not general enough to be on an AGI list.

 This is not meant, in any way shape or form, to denigrate the work that you
 are doing.  It is very important work.

 It's just that you're performing the equivalent of presenting a biology
 paper at a physics convention. :-)




 - Original Message -
 From: Jim Bromer
 To: agi@v2.listbox.com
 Sent: Sunday, March 30, 2008 11:52 AM
 Subject: **SPAM** Re: [agi] Logical Satisfiability...Get used to it.





  On the contrary, Vladimir is completely correct in requesting that the
  discussion go elsewhere:  this has no relevance to the AGI list, and
  there are other places where it would be pertinent.
 
 
  Richard Loosemore
 
 

  If Ben doesn't want me to continue, I will stop posting to this group.
 Otherwise please try to understand what I said about the relevance of SAT to
 AGI and try to address the specific issues that I mentioned.  On the other
 hand, if you don't want to waste your time in this kind of discussion then
 do just that: Stay out of it.
 Jim Bromer


  

  agi | Archives | Modify Your Subscription
  

  agi | Archives | Modify Your Subscription



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Logical Satisfiability...Get used to it.

2008-03-30 Thread Ben Goertzel
On Sun, Mar 30, 2008 at 5:09 PM, Mark Waser [EMAIL PROTECTED] wrote:
  4) If you think some supernatural being placed an insight in your mind,
   you're
   probably better off NOT mentioning this when discussing the insight in a
   scientific forum, as it will just cause your idea to be taken way less
   seriously
   by a vast majority of scientific-minded people...

  Awesome answer!

  However, only *some* religions believe in supernatural beings and I,
  personally, have never seen any evidence supporting such a thing.

I've got one in a jar in my basement ... but don't worry, I won't let him out
till the time is right ;-) ...

and so far, all his AI ideas have proved to be
absolute bullshit, unfortunately ... though he's done a good job of helping
me put hexes on my neighbors...


  Have you been having such experiences and been avoiding mentioning them
  because you're afraid for your reputation?

  Ben, I'm worried about you now. ;-)








-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Novamente's next 15 minutes of fame...

2008-03-29 Thread Ben Goertzel
Nothing has been publicly released yet, it's still at the
research-prototype stage ... I'll announce when we have some kind of
product release...

ben

On Sat, Mar 29, 2008 at 5:39 PM, Jim Bromer [EMAIL PROTECTED] wrote:
 It sounds interesting.  Can anyone go and try it, or does it cost money or
 something?  Is it set up already?
 Jim Bromer



 On Fri, Mar 28, 2008 at 6:54 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 
 
 
 
 http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html
 
  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]
 
  If men cease to believe that they will one day become gods then they
  will surely become worms.
  -- Henry Miller
 
 


  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Novamente's next 15 minutes of fame...

2008-03-28 Thread Ben Goertzel
http://technology.newscientist.com/article/mg19726495.700-virtual-pets-can-learn-just-like-babies.html

-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Ben Goertzel
  So if I tell you to handle an object, or a piece of business, like say
  removing a chair from the house - that word "handle" is open-ended and
  gives you vast freedom within certain parameters as to how to apply your
  hand(s) to that object. Your hands can be applied to move a given box, for
  example, in a vast if not infinite range of positions and trajectories. Such
  a general, open concept is of the essence of general intelligence, because
  it means that you are immediately ready to adapt to new kinds of situation -
  if your normal ways of handling boxes are blocked, you are ready to seek out
  or improvise some strange new contorted two-finger hand position to pick up
  the box - which also counts as "handling". (And you will have actually done a
  lot of this).

  So what is the meaning of "handle"? Well, to be precise, it doesn't have
  a/one meaning, and isn't meant to - it has a range of possible
  meanings/references, and you can choose which is most convenient in the
  circumstances.

Actually I'd make a stronger statement than that.

It's not just that we can CHOOSE the meanings of concepts from a fixed menu
of possibilities ... we CREATE the meanings of concepts as we use them ...
this is how and why concept-meanings continually change over time in
individual minds and in cultures...

This is parallel to how we create episodic memories as we re-live them,
rather than retrieving them as if from a database...

These creation processes do however seem to be realizable in digital
computer systems, based on my theoretical understanding ... though none
of us have done it yet, it's certainly loads of work given current software
tools...

Ben

---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=98558129-0bdb63
Powered by Listbox: http://www.listbox.com


Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Ben Goertzel
 normal ways of handling boxes are blocked, you are ready to seek out
 or improvise some strange new contorted two-finger hand position to pick up
 the box - which also counts as "handling". (And you will have actually done a
 lot of this).

 So what is the meaning of "handle"? Well, to be precise, it doesn't have
 a/one meaning, and isn't meant to - it has a range of possible
 meanings/references, and you can choose which is most convenient in the
 circumstances.


 The same principles apply to just about every word in language and every
 unit of logic and mathematics.

 But - and correct me - I don't think anyone in AI/AGI is using language or
 any logico-mathematical systems in this general, open-ended way - the way
 they are actually meant to be used - and the very foundation of General
 Intelligence.

 Language and the other systems are always used by AGI in specific ways to
 have specific meanings. YKY, typically, wanted a language for his system
 which had precise meanings. Even Ben, I suspect, may only employ words in an
 open way, in that their meanings can be changed with experience - but at
 any given point their meanings will have to be specific.

 To be capable of generalising as the human brain does - and of true AGI -
 you have to have a brain that simultaneously processes on at least two if
 not three levels, with two/three different sign systems - including both
 general and particular ones.



 John: Charles: I don't think a General Intelligence could be built entirely
 out of narrow AI components, but it might well be a relatively trivial
 add-on.  Just consider how much of human intelligence is demonstrably narrow
 AI (well, not artificial, but you know what I mean).  Object recognition,
 e.g.  Then start trying to guess how much of the part that we can't prove a
 classification for is likely to be a narrow intelligence component.  In my
 estimation (without factual backing) less than 0.001 of our intelligence is
 General Intelligence, possibly much less.

 John: I agree that it may be 1%.

 Mike: Oh boy, does this strike me as absurd. Don't have time for the theory
 right now, but just had to vent. Percentage estimates strike me as a bit
 silly, but if you want to aim for one, why not look at both your paragraphs,
 word by word: "don't", "think", "might", "relatively" etc. Now which of those
 words can only be applied to a single type of activity, rather than an
 open-ended set of activities? Which cannot be instantiated in an open-ended
 if not infinite set of ways? Which is not a very valuable if not key tool of
 a General Intelligence, that can adapt to solve problems across domains?
 Language IOW is the central (but not essential) instrument of human general
 intelligence - and I can't think offhand of a single word that is not a
 tool for generalising across domains, including "Charles H." and "John G.".

 In fact, every tool you guys use - logic, maths etc. - is similarly general
 and functions in similar ways. The above strikes me as a 99% failure to
 understand the nature of general intelligence.

 John: Mike, you are 100% potentially right with a margin of error of 110%.
 LOL!

 Seriously Mike, how do YOU indicate approximations? And how are you
 differentiating general and specific? And declaring relative absolutes and
 convenient infinitudes... I'm trying to understand your argument.

 John
 
 
 
 
 
 




  



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Is there some kind of online software that lets a group of people
update a Mind Map
diagram collaboratively, in the manner of a Wiki page?

This would seem critical if a Mind Map is to really be useful for the purpose
you suggest...

-- Ben

On Wed, Mar 26, 2008 at 8:32 AM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Well ... I can take a shot at putting a diagram together.  Making Mind Maps
 is one way I learn any kind of material I want.

 If the topics in the list(s) are descriptive enough, I can take a shot at
 putting such a diagram together.
  It'd be less work to correct it than to make one, right?

 Hey - whatever helps.  For me, it's a win-win.  It would help me, and it
 would help accomplish what you guys are trying to do.

 Let me know,
  ~Aki



 On Tue, Mar 25, 2008 at 10:40 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
  This kind of diagram would certainly be meaningful, but it would be a
  lot of work to put together, even more so than a traditional TOC ...
 
 
 
 
  On Tue, Mar 25, 2008 at 11:02 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
   Hi Pei -
  
    What about having a tree-like diagram that branches out into either:
    - the different paths / approaches to AGI (for instance: NARS, Novamente,
      and Richard's, etc.), with suggested readings at those leaves
    - area of study, with suggested readings at those leaves

    Or possibly, a Mind Map diagram that shows AGI in the middle, with the
    approaches stemming from it, and then either subfields, or a reading list
    and/or collection of links (though the links may become outdated or dead).
  
   Point is, would a diagram help map the field - which caters to the
   differing approaches, and which helps those wanting to chart a course to
   their own learning/study ?
  
   Thanks,
   ~Aki
  
  
  
  
On Tue, Mar 25, 2008 at 9:22 PM, Pei Wang [EMAIL PROTECTED]
 wrote:
Ben,
   
It is a good start!
   
Of course everyone else will disagree --- like what Richard did and
I'm going to do. ;-)
   
I'll try to find the time to provide my list --- at this moment, it
will be more like a reading list than a textbook TOC. In the future,
it will be integrated into the E-book I'm working on
(http://nars.wang.googlepages.com/gti-summary).
   
Compared to yours, mine will contain less math and algorithms, but
more psychology and philosophy.
   
I'd like to see what Richard and others want to propose. We shouldn't
try to merge them into one wiki page, but several.
   
Pei
   
   
   
   
   
On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel [EMAIL PROTECTED]
 wrote:
 Hi all,

  A lot of students email me asking me what to read to get up to
 speed on
   AGI.

  So I started a wiki page called Instead of an AGI Textbook,


  
 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics

  Unfortunately I did not yet find time to do much but outline a
 table
  of contents there.

  So I'm hoping some of you can chip in and fill in some relevant
  hyperlinks on the pages
  I've created ;-)

  For those of you too lazy to click the above link, here is the
  introductory note I put on the wiki page:


  

  I've often lamented the fact that there is no advanced undergrad
 level
  textbook for AGI, analogous to what Russell and Norvig is for
 Narrow
  AI.

  Unfortunately, I don't have time to write such a textbook, and no
 one
  else with the requisite knowledge and ability seems to have the
 time
  and inclination either.

  So, instead of a textbook, I thought it would make sense to outline
  here what the table of contents of such a textbook might look like,
  and to fill in each section within each chapter in this TOC with a
 few
  links to available online resources dealing with the topic of the
  section.

  However, all I found time to do today (March 25, 2008) is make the
  TOC. Maybe later I will fill in the links on each section's page,
 or
  maybe by the time I get around it some other folks will have done
 it.

  While nowhere near as good as a textbook, I do think this can be a
  valuable resource for those wanting to get up to speed on AGI
 concepts
  and not knowing where to turn to get started. There are some
 available
  AGI bibliographies, but a structured bibliography like this can
  probably be more useful than an unstructured and heterogeneous one.

  Naturally my initial TOC represents some of my own biases, but I
 trust
  that by having others help edit it, these biases will ultimately
 come
  out in the wash.

  Just to be clear: the idea here is not to present solely AGI
 material.
  Rather the idea is to present material that I think students would
 do
  well to know, if they want to work on AGI. This includes some AGI,
  some narrow AI, some

Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Thanks Mark ... let's see how it evolves...

I think the problem is not finding a publisher, but rather, finding
the time to contribute and refine the content

Maybe in a year or two there will be enough good content there that
someone with appropriate time and inclination and skill can shape it
into a textbook

-- Ben

On Wed, Mar 26, 2008 at 9:49 AM, Mark Waser [EMAIL PROTECTED] wrote:
 Hi Ben,

 I have a publisher who would love to publish the result of the wiki as a
  textbook if you are willing.

 Mark



  - Original Message -
  From: Ben Goertzel [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Tuesday, March 25, 2008 7:46 PM
  Subject: [agi] Instead of an AGI Textbook


   Hi all,
  
   A lot of students email me asking me what to read to get up to speed on
   AGI.
  
   So I started a wiki page called Instead of an AGI Textbook,
  
   
 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
  
   Unfortunately I did not yet find time to do much but outline a table
   of contents there.
  
   So I'm hoping some of you can chip in and fill in some relevant
   hyperlinks on the pages
   I've created ;-)
  
   For those of you too lazy to click the above link, here is the
   introductory note I put on the wiki page:
  
  
   
  
   I've often lamented the fact that there is no advanced undergrad level
   textbook for AGI, analogous to what Russell and Norvig is for Narrow
   AI.
  
   Unfortunately, I don't have time to write such a textbook, and no one
   else with the requisite knowledge and ability seems to have the time
   and inclination either.
  
   So, instead of a textbook, I thought it would make sense to outline
   here what the table of contents of such a textbook might look like,
   and to fill in each section within each chapter in this TOC with a few
   links to available online resources dealing with the topic of the
   section.
  
   However, all I found time to do today (March 25, 2008) is make the
   TOC. Maybe later I will fill in the links on each section's page, or
   maybe by the time I get around it some other folks will have done it.
  
   While nowhere near as good as a textbook, I do think this can be a
   valuable resource for those wanting to get up to speed on AGI concepts
   and not knowing where to turn to get started. There are some available
   AGI bibliographies, but a structured bibliography like this can
   probably be more useful than an unstructured and heterogeneous one.
  
   Naturally my initial TOC represents some of my own biases, but I trust
   that by having others help edit it, these biases will ultimately come
   out in the wash.
  
   Just to be clear: the idea here is not to present solely AGI material.
   Rather the idea is to present material that I think students would do
   well to know, if they want to work on AGI. This includes some AGI,
   some narrow AI, some psychology, some neuroscience, some mathematics,
   etc.
  
   ***
  
  
   -- Ben
  
  
   --
   Ben Goertzel, PhD
   CEO, Novamente LLC and Biomind LLC
   Director of Research, SIAI
   [EMAIL PROTECTED]
  
   If men cease to believe that they will one day become gods then they
   will surely become worms.
   -- Henry Miller
  

  






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Hi Stephen,

 Ben,
 Wikipedia has significant overlap with the topic list on the AGIRI Wiki.  I
 propose for discussion the notion that the AGIRI Wiki be content-compatible
 with Wikipedia along two dimensions:

 license - authors agree to the GNU Free Documentation License

I have no problem with that

 editorial standards - Wikipedia says that content should be sourced from one
 or more research papers or textbooks, not just from the personal knowledge
 of the author, or from some web page.

Well, I think it is appropriate that a wiki covering an in-development research
area should contain a mix of sourced and non-sourced content, actually.

In many cases it's the non-sourced content that will be the most
valuable, because
it represents practical knowledge and experience of AGI researchers and
developers, which is too new or raw to have been put into the formal literature
yet.

I concede in
 advance that most AGIRI Wiki authors will find Wikipedia editorial standards
 burdensome,

To me this is a pretty major point.

The challenge with an AGI wiki right now is to get people to contribute quality
content at all ... so I'm not psyched about, right now at the starting
stage, making
them jump through hoops in order to do so.

but the benefit would be that content from the AGIRI Wiki can
  be used to create new Wikipedia articles, or improve existing ones.

That would be the case so long as the license is in place, it doesn't require
everything to be sourced -- appropriate sourcing could always be
introduced at the time
of porting to Wikipedia.

As the author of a load of academic papers, I'm well aware of how
irritating and
time-consuming it is to properly reference sources.  If I have to do
that for text I place on
the AGIRI wiki, I'm not likely to contribute much to it, just like I
don't currently contribute
much to Wikipedia.  I just don't have the time

And if we
  can agree on the easy-to-achieve license, content from Wikipedia, e.g.
  my article on Hierarchical control systems, can easily be imported into the
  AGIRI Wiki.

I don't see a problem with the license.

 Wikipedia is important to AGI, not only as an online encyclopedia that
 facilitates almost universal access to AGI related topics, but as a target
 for AI researchers that want to structure the text into a vast knowledge
 base.  Somewhere down the road to self-improvement, an AGI will be reading
 Wikipedia.

Along with the rest of the Web ...  for sure ;-)

-- Ben



Re: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
Fair enough, Richard...

Again I'll emphasize that the idea of the Instead of an AGI Textbook
is not to teach any particular theory or design for AGI, but rather to convey
background knowledge that is useful for folks who wish to come to grips
with contemporary AGI theories and designs

I have articulated my own coherent body of thought regarding AGI as well,
but I consider it best presented at the research-treatise or research-paper
level rather than the textbook level...

-- Ben G


On Wed, Mar 26, 2008 at 12:55 PM, Richard Loosemore [EMAIL PROTECTED] wrote:


  A propos of the several branches of discussion about AGI textbooks on
  this thread...

  Knowing what I do about the structure and content of the book I am
  writing, I cannot imagine it being merged as just a set of branch points
  from other works, like the one growing from Ben's TOC.

  What I am doing is a coherent body of thought in its own right, with a
  radically different underlying philosophy, so it really needs to be a
  standalone project.



  Richard Loosemore







-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Fwd: [agi] Instead of an AGI Textbook

2008-03-26 Thread Ben Goertzel
 BTW I improved the hierarchical organization of the TOC a bit, to
 remove the impression that it's just a random grab-bag of topics...


 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook

 ben



[agi] Instead_of_an_AGI_Textbook Challenge !!

2008-03-26 Thread Ben Goertzel
OK... I just burned an hour inserting more links and content into

http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook

I'm burnt out on it for a while, there's too much other stuff on my plate

However, I have a challenge for y'all

There are something like 400 people subscribed to this list...

If 25 of you spend 30 minutes each, during the next week, adding
relevant content to the non-textbook wiki page ... then at the end
of the week we will have a pretty nice knowledge resource for
newbies to AGI.

And we will probably all learn something from following up each
others' references ...

And then I'll save a lot of time during the next year, because when
someone emails me and asks me what they should read to get
up to speed on the general thinking in the AGI field, I'll just point
them to the non-textbook ;-)

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Re: Instead_of_an_AGI_Textbook Challenge !!

2008-03-26 Thread Ben Goertzel
Ah, one more note...

Due to its location on the AGIRI wiki, the Instead_of_an_AGI_Textbook
automatically links into the Mind Ontology

http://www.agiri.org/wiki/Mind_Ontology

that I created in a fit of mania one weekend a couple years ago.

So, just remember that if you decide to add content to the non-textbook,
rather than just links, you can link it into the Mind Ontology, expand
the Mind Ontology, etc.

The idea of the Mind Ontology was to create a unified common vocabulary
for AGI thinkers/researchers...

It didn't really work because almost no one paid attention, but it was a sort
of fun weekend ;-)

-- Ben


On Wed, Mar 26, 2008 at 10:43 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 OK... I just burned an hour inserting more links and content into

  http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook

  I'm burnt out on it for a while, there's too much other stuff on my plate

  However, I have a challenge for y'all

  There are something like 400 people subscribed to this list...

  If 25 of you spend 30 minutes each, during the next week, adding
  relevant content to the non-textbook wiki page ... then at the end
  of the week we will have a pretty nice knowledge resource for
  newbies to AGI.

  And we will probably all learn something from following up each
  others' references ...

  And then I'll save a lot of time during the next year, because when
  someone emails me and asks me what they should read to get
  up to speed on the general thinking in the AGI field, I'll just point
  them to the non-textbook ;-)

  -- Ben




  --
  Ben Goertzel, PhD
  CEO, Novamente LLC and Biomind LLC
  Director of Research, SIAI
  [EMAIL PROTECTED]

  If men cease to believe that they will one day become gods then they
  will surely become worms.
  -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
 Now, let me ask you a question:  Do you believe that all AI / AGI
 researchers are toiling over all this for the challenge, or purely out of
 interest?  I doubt that as well.  Surely there are those elements as drivers
 - BUT SO IS MONEY.

Aki, you don't seem to understand the psychology of the
AGI researcher very well.

Firstly, academic AGI researchers are not in it for the $$, and are unlikely
to profit from their creations no matter how successful.  Yes, spinoffs from
academia to industry exist, but the point is that academic work is motivated
by love of science and desire for STATUS more so than desire for money.

Next, Singularitarian AGI researchers, even if in the business domain (like
myself), value the creation of AGI far more than the obtaining of material
profits.

I am very interested in deriving $$ from incremental steps on the path to
powerful AGI, because I think this is one of the better methods available
for funding AGI R&D work.

But deriving $$ from human-level AGI really is not a big motivator of
mine.  To me, once human-level AGI is obtained, we have something of
dramatically greater interest than the accumulation of any amount of wealth.

Yes, I assume that if I succeed in creating a human-level AGI, then huge
amounts of $$ for research will come my way, along with enough personal $$ to
liberate me from needing to manage software development contracts
or mop my own floor.  That will be very nice.  But that's just not the point.

I'm envisioning a population of cockroaches constantly fighting over
crumbs of food on the floor.  Then a few of the cockroaches -- let's
call them the Cockroach Robot Club --  decide to
spend their lives focused on creating a superhuman robot which will
incidentally allow cockroaches to upload into superhuman form with
superhuman intelligence.  And the other cockroaches insist that the
Cockroach Robot Club's motivation in doing this must be a desire
to get more crumbs of food.  After all,
just **IMAGINE** how many crumbs of food you'll be able to get with
that superhuman robot on your side!!!  Buckets full of crumbs!!!  ;-)

-- Ben G



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
Hi Aki,

 Even as a pure scientist, you can
 accomplish more in research by producing wealth than by depending on gov't
 grants.  I say gov't grants because private investment is probably years
 away from now.  The topic of financing got a lot of attention at AGI 08.


Well, if you're an AGI researcher and believe that government funding isn't
going to push AGI forward ... and that unfunded or lightly-funded
open-source initiatives like
OpenCog won't work either ... then  there are two approaches, right?

1)
You can try to do like Jeff Hawkins, and make a pile of $$ doing something
AGI-unrelated, and then use the ensuing $$ for AGI

2)
You can try to make $$ from stuff that's along the incremental path to AGI


I'm trying approach 2, but it has its pitfalls.  Yet so, of course, does
approach 1 --
Hawkins succeeded and so have others whom I know, but it's a tiny minority
of those who have tried... being a great AGI researcher does not necessarily
make you great at business, nor even at narrow-AI biz applications...

There are no easy answers to the problem of being ahead of your time ...
yet it's those of us who are willing to push ahead in spite of being
out of synch
with society's priorities that ultimately shift society's priorities
(and in this case,
may shift way more than that...)

-- Ben G



Re: [agi] Novamente study

2008-03-25 Thread Ben Goertzel
Hi,

The PLN book should be out by that date ... I'm currently putting in
some final edits to the manuscript...

Also, in April and May I'll be working on a lot of documentation
regarding plans for OpenCog.  While this doesn't include all
Novamente's proprietary stuff, it will certainly tell you enough to
give you a way better understanding of what Novamente, as well as
OpenCog, is all about...

-- Ben

On Tue, Mar 25, 2008 at 1:28 PM, Derek Zahn [EMAIL PROTECTED] wrote:

 Ben,

  It seems to me that Novamente is widely considered the most promising and
 advanced AGI effort around (at least of the ones one can get any detailed
 technical information about), so I've been planning to put some significant
 effort into understanding it with a view toward deciding whether I think
 you're on the right track or not (with as little hand-waving, faith, or
 bigotry as possible in my conclusion).  To do that properly, I am waiting
 for your book on Probabilistic Logic Networks to be published.  Amazon says
 July 2008... is that date correct?

  Thanks!

  




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Hi all,

A lot of students email me asking me what to read to get up to speed on AGI.

So I started a wiki page called Instead of an AGI Textbook,

http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics

Unfortunately I did not yet find time to do much but outline a table
of contents there.

So I'm hoping some of you can chip in and fill in some relevant
hyperlinks on the pages
I've created ;-)

For those of you too lazy to click the above link, here is the
introductory note I put on the wiki page:




I've often lamented the fact that there is no advanced undergrad level
textbook for AGI, analogous to what Russell and Norvig is for Narrow
AI.

Unfortunately, I don't have time to write such a textbook, and no one
else with the requisite knowledge and ability seems to have the time
and inclination either.

So, instead of a textbook, I thought it would make sense to outline
here what the table of contents of such a textbook might look like,
and to fill in each section within each chapter in this TOC with a few
links to available online resources dealing with the topic of the
section.

However, all I found time to do today (March 25, 2008) is make the
TOC. Maybe later I will fill in the links on each section's page, or
maybe by the time I get around it some other folks will have done it.

While nowhere near as good as a textbook, I do think this can be a
valuable resource for those wanting to get up to speed on AGI concepts
and not knowing where to turn to get started. There are some available
AGI bibliographies, but a structured bibliography like this can
probably be more useful than an unstructured and heterogeneous one.

Naturally my initial TOC represents some of my own biases, but I trust
that by having others help edit it, these biases will ultimately come
out in the wash.

Just to be clear: the idea here is not to present solely AGI material.
Rather the idea is to present material that I think students would do
well to know, if they want to work on AGI. This includes some AGI,
some narrow AI, some psychology, some neuroscience, some mathematics,
etc.

***


-- Ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Java spreading activation library released

2008-03-25 Thread Ben Goertzel
Hi Stephen,

I think this approach makes sense.

In Novamente/OpenCog, we don't use spreading activation, but we use an
economic attention allocation mechanism that is similar in spirit
(though subtly
different in dynamics).

The motivation is similar: You just can't use complex, abstract
reasoning methods
for everything, because they're too expensive.  So this sort of simple
heuristic approach
is useful in many cases, as an augmentation to more precise methods.
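For flavor only, a minimal toy of what an economic attention mechanism can
look like: atoms hold a short-term-importance currency, pay rent every
cognitive cycle, and earn wages when a process uses them.  This is
emphatically NOT Novamente/OpenCog's actual mechanism or API -- every name
and number below is made up for illustration:

// Toy economic attention allocation: rent decays everything,
// wages concentrate importance on atoms that are actually used.
public class AttentionToy {
    static double[] sti = {10, 10, 10};        // short-term importance per atom
    static final double RENT = 1.0, WAGE = 5.0;

    static void cycle(int stimulated) {
        for (int i = 0; i < sti.length; i++)
            sti[i] -= RENT;                    // every atom pays rent
        sti[stimulated] += WAGE;               // used atoms earn wages
    }

    public static void main(String[] args) {
        for (int t = 0; t < 3; t++) cycle(0);  // keep stimulating atom 0
        for (double s : sti)
            System.out.println(s);             // atom 0 gets rich, others decay
    }
}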

-- Ben

On Tue, Mar 25, 2008 at 7:53 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 While programming my bootstrap English dialog system, I needed a spreading
 activation library for the purpose of enriching the discourse context with
 conceptually related terms.  For example, given that there is a
 human-habitable room that both speakers know of, it is reasonable to
 assume that "on the table" has the meaning "on the piece of furniture in the
 room" rather than the meaning "subject to negotiation".  This assumption can
 be deductively concluded by an inference engine, given the room as a fact
 and rules concluding the typical objects that are found in rooms.  But
 performing theorem proving during utterance comprehension is not cognitively
 plausible, and would take too long for real-time performance.  Suppose that
 offline deductive inference provides justifications (e.g. proof traces) to
 support learned links between rooms and tables; then spreading activation is
 a well-known algorithm for searching semantic graphs for relevant linked
 nodes.
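A minimal single-threaded sketch of the basic idea -- activation decays as
it fans out from a source node, and only nodes above a firing threshold
propagate further.  This is illustrative only, not the released library's
API; the tiny graph echoes the room/table example and is made up:

import java.util.*;

// Toy spreading activation over a small acyclic semantic graph.
// (A real implementation must also guard against cycles.)
public class SpreadToy {
    public static void main(String[] args) {
        Map<String, List<String>> graph = Map.of(
            "room",  List.of("table", "chair"),
            "table", List.of("on-the-table"),
            "chair", List.of());
        Map<String, Double> act = new HashMap<>();
        Deque<String> frontier = new ArrayDeque<>();
        act.put("room", 1.0);                       // activate the source node
        frontier.add("room");
        double decay = 0.5, threshold = 0.2;
        while (!frontier.isEmpty()) {
            String node = frontier.poll();
            double out = act.get(node) * decay;
            if (out < threshold) continue;          // too weak to fire further
            for (String nbr : graph.getOrDefault(node, List.of())) {
                act.merge(nbr, out, Double::sum);   // accumulate activation
                frontier.add(nbr);
            }
        }
        act.forEach((n, a) -> System.out.printf("%-12s %.2f%n", n, a));
    }
}

Here the phrase-sense node on-the-table ends up activated because the room
is, which is exactly the enrichment of the discourse context described above.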

 A literature search provided much useful information regarding spreading
 activation, also known as marker passing, especially about natural language
 disambiguation, which is my topic of interest.  Because there are no general
 purpose spreading activation Java libraries available, I wrote one and just
 released it on the Texai SourceForge project site.  The download includes
 Javadoc, an overview document, source code, all required jars (Java
 libraries), unit tests and examples, and GraphViz illustrations of sample
 graphs.  Performance is acceptable: 20,000 nodes can be activated in 24 ms
 with one thread on my 2.8 GHz CPU.  Furthermore the code is multi-threaded
 and it gets about a 30% speed increase by using two CPU cores.  Even if you
 are not interested in spreading activation, the Java code is a clear example
 of using a CyclicBarrier and CountDownLatch to control worker threads with a
 driver.

 A practice I recommend to you all is to improve Wikipedia articles on AI
 topics of interest.  Therefore I elaborated the existing article on
 spreading activation to include the algorithm and its variations.

 Cheers.
 -Steve
  Stephen L. Reed

 Artificial Intelligence Researcher
 http://texai.org/blog
 http://texai.org
 3008 Oak Crest Ave.
 Austin, Texas, USA 78704
 512.791.7860


  



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


