Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
One thing I don't get, YKY, is why you think you are going to take
textbook methods that have already been shown to fail, and somehow
make them work.  Can't you see that many others have tried to use
FOL and ILP already, and they've run into intractable combinatorial
explosion problems?

Some may argue that my approach isn't radical **enough** (and in spite
of my innate inclination toward radicalism, I'm trying hard in my AGI work
to be no more radical than is really needed, out of a desire to save time/
effort by reusing others' insights wherever  possible) ... but at least I'm
introducing a host of clearly novel technical ideas.

What you seem to be suggesting is just to implement material from
textbooks on a large knowledge base.

Why do you think you're gonna make it work?  Because you're gonna
build a bigger KB than Cyc has built w/ their 20 years of effort and
tens to hundreds of million of dollars of US gov't funding???

-- Ben G

On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] More brain scanning and language

2008-06-03 Thread Vladimir Nesov
On Tue, Jun 3, 2008 at 11:08 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 We can tell what parts of the brain tend to be involved in what sorts
 of activities, from fMRI.  Not much else.

 Puzzling out complex neural functions often involves combining fMRI
 data from humans with data from single-neuron recordings in other
 animals.  But we can generally only measure from a few dozen neurons
 at a time even in invasive animal studies...

 As an example, no one yet knows how the brain represents 3D shapes ...
 is it a literal 3D map of an object? some kind of symbolic
 representation? some combination? something inbetween?  fMRI or other
 brain imaging tools don't tell us, yet... I think we'll need better
 tools ...


Grid cells ( http://www.scholarpedia.org/wiki/index.php?title=Grid_cells
) is a very impressive feature. You can infer a lot from findings like
this, about the way (low-level) knowledge forms in the brain.
Presumably representations of 3D scenes include their grid
projections, and some kind of structural (causal) skeleton of the
scene.

-- 
Vladimir Nesov
[EMAIL PROTECTED]


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
Also, YKY, I can't help but note that your currently approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).

I wonder why you don't join Stephen Reed on the texai project?  Is it
because you don't like the open-source nature of his project?

ben

On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

 What you seem to be suggesting is just to implement material from
 textbooks on a large knowledge base.

 Why do you think you're gonna make it work?  Because you're gonna
 build a bigger KB than Cyc has built w/ their 20 years of effort and
 tens to hundreds of million of dollars of US gov't funding???

 -- Ben G

 On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 If men cease to believe that they will one day become gods then they
 will surely become worms.
 -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 Also, YKY, I can't help but note that your currently approach seems
 extremely similar to Texai (which seems quite similar to Cyc to me),
 more so than to OpenCog Prime (my proposal for a Novamente-like system
 built on OpenCog, not yet fully documented but I'm actively working on
 the docs now).

 I wonder why you don't join Stephen Reed on the texai project?  Is it
 because you don't like the open-source nature of his project?

You have built an AGI enterprise (at least, on the way to it).  Often
the *people* matter more than the technology.  I *need* to collaborate
with the community in order to win.  And vice versa.  Texai is closer
to my theory but you have a bigger community.  I don't have the
resources to rebuild the infrastructure that you have, eg the virtual
reality embodiment etc.

Opensource is such a thorny issue.  I don't have a clear idea yet...

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
Hi Ben,

Note that I did not pick FOL as my starting point because I wanted to
go against you, or be a troublemaker.  I chose it because that's what
the textbooks I read were using.  There is nothing personal here.
It's just like Chinese being my first language because I was born in
China.  I don't speak bad English just to sound different.

I think the differences in our approaches are equally superficial.  I
don't think there is a compelling reason why your formalism is
superior (or inferior, for that matter).

You have domain-specific heuristics;  I'm planning to have
domain-specific heuristics too.

The question really boils down to whether we should collaborate or
not.  And if we want meaningful collaboration, everyone must exert a
little effort to make it happen.  It cannot be one-way.

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 1) representing uncertainties in a way that leads to tractable, meaningful
 logical manipulations.  Indefinite probabilities achieve this.  I'm not saying
 they're the only way to achieve this, but I'll argue that single-number,
 Walley-interval, fuzzy, or full-pdf approaches are not adequate for various
 reasons.

First of all, the *tractability* of your algorithm depends on
heuristics that you design, which are separable from the underlying
probabilistic logic calculus.  In your mind, these 2 things may be
mixed up.

Indefinite probabilities DO NOT imply faster inference.
Domain-specific heuristics do that.

Secondly, I have no problem at all, with using your indefinite
probability approach.

It's a laudable achievement what you've accomplished.

Thirdly, probabilistic logics -- of *any* flavor -- should
[approximately] subsume binary logic if they are sound.  So there is
no reason why your logic is so different that it cannot be expressed
in FOL.

Fourthly, the approach that I'm more familiar with is interval
probability.  I acknowledge that you have gone further in this
direction, and that's a good thing.

 2) using inference rules that lead to relatively high-confidence uncertainty
 propagation.  For instance term logic deduction is better for uncertain
 inference than modus ponens deduction, as detailed analysis reveals

I believe term logic is translatable to FOL -- Fred Sommers mentioned
that in his book.

 3) propagating uncertainties meaningfully through abstract logical
 formulae involving nested quantifiers (we do this in a special way in PLN
 using third-order probabilities; I have not seen any other conceptually
 satisfactory solution)

Again, that's well done.

But are you saying that the same cannot be achieved using FOL?

 4) most critically perhaps, using uncertain truth values within inference
 control to help pare down the combinatorial explosion

Uncertain truth values DO NOT imply faster inference.  In fact, they
slow down inference wrt binary logic.

If your inference algorithm is faster than resolution, and it's sound
(so it subsumes binary logic), then you have found a faster FOL
inference algorithm.  But that's not true;  what you're doing is
domain-specific heuristics.

 How these questions are answered matters a LOT, and my colleagues
 and I spent years working on this stuff.  It's not a matter of converting
 between equivalent formalisms.

I think one can do
indefinite probability + FOL + domain-specific heuristics
just as you can do
indefinite probability + term logic + domain-specific heuristics
but it may cost an amount of effort that you're unwilling to pay.

This is a very sad situation...
YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

Calm down =)

I'll use domain-specific heuristics just as you do.  There's nothing
wrong with textbooks.

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

Yes, I acknowledge that you have novel ideas.  But do you really think
I'm so dumb that I ONLY use textbook ideas?  I try to integrate
existing methods.  My style of innovation is kind of subtle.

You have done something new, but not so new as to be in a totally
different dimension.

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

 As we have discussed a while back on the OpenCog mail list, I would like to
 see a RDF interface to some level of the OpenCog Atom Table.  I think that
 would suit both YKY and myself.  Our discussion went so far as to consider
 ways to assign URI's to appropriate atoms.

Yes, I still think that's a good idea and I'm fairly sure it will
happen this year... probably not too long after the code is considered
really ready for release...

ben


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
 First of all, the *tractability* of your algorithm depends on
 heuristics that you design, which are separable from the underlying
 probabilistic logic calculus.  In your mind, these 2 things may be
 mixed up.

 Indefinite probabilities DO NOT imply faster inference.
 Domain-specific heuristics do that.

Not all heuristics for inference control are narrowly domain-specific

Some may be generally applicable across very broad sets of  domains,
say across all domains satisfying certain broad mathematical
properties such as similar theorems tend to have similar proofs.

So, I agree that indefinite probabilities themselves don't imply
faster inference.

However, we have some heuristics for (relatively) fast inference
control that we believe will apply across any domains satisfying
certain broad mathematical properties ... and that won't work with
traditional representations of uncertainty


 Secondly, I have no problem at all, with using your indefinite
 probability approach.

 It's a laudable achievement what you've accomplished.

 Thirdly, probabilistic logics -- of *any* flavor -- should
 [approximately] subsume binary logic if they are sound.  So there is
 no reason why your logic is so different that it cannot be expressed
 in FOL.

Yes of course it can be expressed in FOL ... it can be expressed in
Morse Code too, but I don't see a point to it ;-)  ... it could also be realized
via a mechanical contraption made of TinkerToys ... like Danny
Hillis's

http://www.ohgizmo.com/wp-content/uploads/2006/12/tinkertoycomputer_1.jpg

;-)


 But are you saying that the same cannot be achieved using FOL?


If you attach indefinite probabilities to FOL propositions, and create
indefinite probability formulas corresponding to standard FOL rules,
you will have a subset of PLN

But you'll have a hard time applying Bayes rule to FOL propositions
without being willing to assign probabilities to terms ... and you'll
have a hard time applying it to FOL variable expressions without doing
something that equates to assigning probabilities to propositions w.
unbound variables ... and like I said, I haven't seen any other
adequate way of propagating pdf's through quantifiers than the one we
use in PLN, though Halpern's book describes a lot of inadequate ways
;-)

 4) most critically perhaps, using uncertain truth values within inference
 control to help pare down the combinatorial explosion

 Uncertain truth values DO NOT imply faster inference.  In fact, they
 slow down inference wrt binary logic.

 If your inference algorithm is faster than resolution, and it's sound
 (so it subsumes binary logic), then you have found a faster FOL
 inference algorithm.  But that's not true;  what you're doing is
 domain-specific heuristics.

As noted above, the truth is somewhere inbetween.

You can find inference control heuristics that exploit general
mathematical properties of domains -- so they don't apply to ALL
domains, but nor are they specialized to any particular domain.

Evolution is like this in fact -- it's no good at optimizing random
fitness functions, but it's good at optimizing fitness functions
satisfying certain mathematical properties, regardless of the specific
domain they refer to

 I think one can do
indefinite probability + FOL + domain-specific heuristics
 just as you can do
indefinite probability + term logic + domain-specific heuristics
 but it may cost an amount of effort that you're unwilling to pay.

well we do both in PLN ... PLN is not a pure term logic...

 This is a very sad situation...

Oh ... I thought it was funny ... I suppose I'm glad I have a perverse
sense of humour ;-D

ben


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Stephen Reed
Hi Ben.
Thanks for suggesting that YKY collaborate with Texai because of our similar 
approaches to knowledge representation.  I believe that Cyc's lack of AGI 
progress is not due to their choice of FOL but rather that Cycorp emphasizes 
the hand-crafting of commonsense knowledge about things while disfavoring skill 
acquisition.

Texai will test the hypothesis that Cyc-style FOL (i.e. a RDF compatible 
subset) can represent procedures sufficient to support a mechanism that learns 
knowledge and skills, by being taught by mentors using natural language.  My 
initial bootstrap subject domain choices are:

* lexicon acquisition (e.g. mapping WordNet synsets to OpenCyc-style 
terms)
* grammar rule acquisition
* Java program synthesis - to support skill acquisition and executionI 
believe that the crisp (i.e. certain or very near certain) KR for these domains 
will facilitate the use of FOL inference (e.g. subsumption) when I need it to 
supplement the current Texai spreading activation techniques for word sense 
disambiguation and relevance reasoning.

I expect that OpenCog will focus on domains that require probabilistic 
reasoning, e.g. pattern recognition, which I am postponing until Texai is far 
enough along that expert mentors can teach it the skills for probabilistic 
reasoning.

---

As we have discussed a while back on the OpenCog mail list, I would like to see 
a RDF interface to some level of the OpenCog Atom Table.  I think that would 
suit both YKY and myself.  Our discussion went so far as to consider ways to 
assign URI's to appropriate atoms.

 
Cheers,
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 1:59:54 AM
Subject: Re: [agi] OpenCog's logic compared to FOL?

Also, YKY, I can't help but note that your currently approach seems
extremely similar to Texai (which seems quite similar to Cyc to me),
more so than to OpenCog Prime (my proposal for a Novamente-like system
built on OpenCog, not yet fully documented but I'm actively working on
the docs now).

I wonder why you don't join Stephen Reed on the texai project?  Is it
because you don't like the open-source nature of his project?

ben

On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 One thing I don't get, YKY, is why you think you are going to take
 textbook methods that have already been shown to fail, and somehow
 make them work.  Can't you see that many others have tried to use
 FOL and ILP already, and they've run into intractable combinatorial
 explosion problems?

 Some may argue that my approach isn't radical **enough** (and in spite
 of my innate inclination toward radicalism, I'm trying hard in my AGI work
 to be no more radical than is really needed, out of a desire to save time/
 effort by reusing others' insights wherever  possible) ... but at least I'm
 introducing a host of clearly novel technical ideas.

 What you seem to be suggesting is just to implement material from
 textbooks on a large knowledge base.

 Why do you think you're gonna make it work?  Because you're gonna
 build a bigger KB than Cyc has built w/ their 20 years of effort and
 tens to hundreds of million of dollars of US gov't funding???

 -- Ben G

 On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin)
 [EMAIL PROTECTED] wrote:
 Hi Ben,

 Note that I did not pick FOL as my starting point because I wanted to
 go against you, or be a troublemaker.  I chose it because that's what
 the textbooks I read were using.  There is nothing personal here.
 It's just like Chinese being my first language because I was born in
 China.  I don't speak bad English just to sound different.

 I think the differences in our approaches are equally superficial.  I
 don't think there is a compelling reason why your formalism is
 superior (or inferior, for that matter).

 You have domain-specific heuristics;  I'm planning to have
 domain-specific heuristics too.

 The question really boils down to whether we should collaborate or
 not.  And if we want meaningful collaboration, everyone must exert a
 little effort to make it happen.  It cannot be one-way.

 YKY


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 If men cease to believe that they will one day become gods then they
 will surely become worms.
 -- Henry Miller




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one 

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

 You have done something new, but not so new as to be in a totally
 different dimension.

 YKY

I have some ideas more like that too but I've postponed trying to sell them
to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff
like PLN ...

Look for some stuff on the applications of hypersets and division algebras
to endowing AGIs with free will and reflective awareness, maybe in
early 09 ...  ;)

-- Ben


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


[agi] modus ponens

2008-06-03 Thread YKY (Yan King Yin)
Modus ponens can be defined in a few ways.

If you take the binary logic definition:
A - B  means  ~A v B
you can translate this into probabilities but the result is a mess.  I
have analysed this in detail but it's complicated.  In short, this
definition is incompatible with probability calculus.

Instead I simply use
   A - B  meaning  P(B|A) = p
where p is the probability.  You can change p into an indefinite
probability or interval.

Is your modus ponens different from this?

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] More brain scanning and language

2008-06-03 Thread Mike Tintner
Thanks. I must confess to my usual confusion/ignorance here - but perhaps I 
should really have talked of solid rather than 3-D mapping.


When you sit in a familiar chair, you have, I presume, a solid mapping (or 
perhaps the word should be moulding)  - distributed over your body, of how 
it can and will fit into that chair. And I'm presuming that the maps in the 
brain may have a similar solid structure. And when you're in a familiar 
room, you may also have brain maps [or moulds] that tell you automatically 
what is likely to be in front of you, at back, and on each side.


Does your sense of 3-D mapping equate to this?


Bob/JAR.. MT: What are the implications for computing - how would it have 
to

change - if
the brain uses literal 3D maps - and they turn out to be a necessity?
[Computers, I take it, can't currently produce them?]



2D mapping has been achievable for a while, but 3D mapping is a fairly
recent phenomena because it's not until recent years that enough
processing power has been available to handle this kind of task in
anything like real time.  To a large extent the DARPA urban challenge
was all about 3D mapping and the accompanying sensor technologies
needed to support it.



DARPA challenges are mostly 2.5D, which is a much simpler problem. On  the 
other hand, 3D mapping is pretty cheap if you have decent  algorithms. The 
sensors are dirt cheap, so it is mostly knowing what  to do with the data 
once you have it.


J. Andrew Rogers


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?;

Powered by Listbox: http://www.listbox.com






---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] modus ponens

2008-06-03 Thread Ben Goertzel
I mean this form

http://en.wikipedia.org/wiki/Modus_ponens

i.e.

A implies B
A
|-
B

Probabilistically, this means you have

P(B|A)
P(A)

and want to infer from these

P(B)

under the most direct interpretation...

ben


On Wed, Jun 4, 2008 at 12:08 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Modus ponens can be defined in a few ways.

 If you take the binary logic definition:
A - B  means  ~A v B
 you can translate this into probabilities but the result is a mess.  I
 have analysed this in detail but it's complicated.  In short, this
 definition is incompatible with probability calculus.

 Instead I simply use
   A - B  meaning  P(B|A) = p
 where p is the probability.  You can change p into an indefinite
 probability or interval.

 Is your modus ponens different from this?

 YKY


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
Ben,

If we don't work out the correspondence (even approximately) between
FOL and term logic, this conversation would not be very fruitful.  I
don't even know what you're doing with PLN.  I suggest we try to work
it out here step by step.  If your approach really makes sense to me,
you will gain another helper =)   Also, this will be good for your
project's documentation.

I have some examples:

Eng:  Some philosophers are wise
TL:  +Philosopher+Wise
FOL:  philosopher(X) - wise(X)

Eng:  Romeo loves Juliet
TL:  +-Romeo* + (Loves +-Juliet*)
FOL:  loves(romeo, juliet)

Eng:  Women often have long hair
TL:  ?
FOL:  woman(X) - long_hair(X)

I know your term logic is slightly different from Fred Sommers'.  Can
you fill in the TL parts and also attach indefinite probabilities?

On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

Re assigning probabilties to terms...

Term in term logic is completely different from term in FOL.  I
guess terms in term logic roughly correspond to predicates or
propositions in FOL.  Terms in FOL seem to have no counterpart in term
logic..

Anyway there should be no confusion here.  Propositions are the ONLY
things that can have truth values.  This applies to term logic as well
(I just refreshed my memory of TL).  When truth values go from { 0, 1
} to [ 0, 1 ], we get single-value probabilistic logic.  All this has
a very solid and rigorous foundation, based on so-called model theory.

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Neurons

2008-06-03 Thread Steve Richfield
Vladimir,

On 6/3/08, Vladimir Nesov [EMAIL PROTECTED] wrote:

 On Tue, Jun 3, 2008 at 6:59 AM, Steve Richfield
 [EMAIL PROTECTED] wrote:
 
  Note that modern processors are ~3 orders of magnitude faster than a
 KA10,
  and my 10K architecture would provide another 4 orders of magnitude, for
 a
  net improvement over the KA10 of ~7 orders of magnitude. Perhaps another
  order of magnitude would flow from optimizing the architecture to the
  application rather than emulating Pentiums or KA10s. That leaves us just
 one
  order of magnitude short, and we can easily make that up by using just 10
 of
  the 10K architecture processors. In short, we could emulate human-scale
  systems in a year or two with adequate funding. By that time, process
  improvements would probably allow us to make such systems on single
 wafers,
  at a manufacturing cost of just a few thousand dollars.
 

 Except that you still wouldn't know what to do with all that. ;-)


... which gets to my REAL source of frustration.

Intel isn't making 10K processors because no one is ordering them, because
of the lack of understanding of how our brain works. A scanning UV
fluorescence microscope could answer many of the outstanding questions, but
it would be VERY limited without a 10K processor to reconstruct the
diagrams. So, for the lack of a few million dollars, both computer science
and neuroscience are stymied in the same respective holes that they have
been in for most of the last 40 years.

From my viewpoint, AI is an oxymoron, because of this proof by exhibition
that there is no intelligence to make artificially! It appears that the
world is just too stupid to help, when such small bumps can
stop entire generations of research in multiple disciplines.

Meanwhile, drug companies are redirecting ~100% of medical research funding
into molecular biology, nearly all of which leads nowhere.

The present situation appears to be entirely too stable. There seems to be
no visible hope past this, short of some rich person throwing a lot of money
at it - and they are all too busy to keep up on forums like this one.

Are we on the same page here?

Steve Richfield



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
Propositions are not the only things that can have truth values...

I don't have time to carry out a detailed mathematical discussion of
this right now...

We're about to (this week) finalize the PLN book draft ... I'll send
you a pre-publication PDF early next week and then you can read it and
we can argue this stuff after that ;-)

ben

On Wed, Jun 4, 2008 at 1:01 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Ben,

 If we don't work out the correspondence (even approximately) between
 FOL and term logic, this conversation would not be very fruitful.  I
 don't even know what you're doing with PLN.  I suggest we try to work
 it out here step by step.  If your approach really makes sense to me,
 you will gain another helper =)   Also, this will be good for your
 project's documentation.

 I have some examples:

 Eng:  Some philosophers are wise
 TL:  +Philosopher+Wise
 FOL:  philosopher(X) - wise(X)

 Eng:  Romeo loves Juliet
 TL:  +-Romeo* + (Loves +-Juliet*)
 FOL:  loves(romeo, juliet)

 Eng:  Women often have long hair
 TL:  ?
 FOL:  woman(X) - long_hair(X)

 I know your term logic is slightly different from Fred Sommers'.  Can
 you fill in the TL parts and also attach indefinite probabilities?

 On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

 Re assigning probabilties to terms...

 Term in term logic is completely different from term in FOL.  I
 guess terms in term logic roughly correspond to predicates or
 propositions in FOL.  Terms in FOL seem to have no counterpart in term
 logic..

 Anyway there should be no confusion here.  Propositions are the ONLY
 things that can have truth values.  This applies to term logic as well
 (I just refreshed my memory of TL).  When truth values go from { 0, 1
 } to [ 0, 1 ], we get single-value probabilistic logic.  All this has
 a very solid and rigorous foundation, based on so-called model theory.

 YKY


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/4/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 Propositions are not the only things that can have truth values...

Terms in term logic can have truth values.  But such terms
correspond to propositions in FOL.  There is absolutely no confusion
here.

 I don't have time to carry out a detailed mathematical discussion of
 this right now...

 We're about to (this week) finalize the PLN book draft ... I'll send
 you a pre-publication PDF early next week and then you can read it and
 we can argue this stuff after that ;-)

Thanks alot =)

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Neurons

2008-06-03 Thread J Storrs Hall, PhD
Strongly disagree. Computational neuroscience is moving as fast as any field 
of science has ever moved. Computer hardware is improving as fast as any 
field of technology has ever improved. 

I would be EXTREMELY surprised if neuron-level simulation were necessary to 
get human-level intelligence. With reasonable algorithmic optimization, and a 
few tricks our hardware can do the brain can't (e.g. store sensory experience 
verbatim and review it as often as necessary into learning algorithms) we 
should be able to knock 3 orders of magnitude or so off the pure-neuro HEPP 
estimate -- which puts us at ten high-end graphics cards, e.g. less than the 
price of a car.  (or just wait till 2015 and get one high-end PC).

Figuring out the algorithms is the ONLY thing standing between us and AI.

Josh

On Tuesday 03 June 2008 12:16:54 pm, Steve Richfield wrote:
 ... for the lack of a few million dollars, both computer science
 and neuroscience are stymied in the same respective holes that they have
 been in for most of the last 40 years.
 ...
 Meanwhile, drug companies are redirecting ~100% of medical research funding
 into molecular biology, nearly all of which leads nowhere.
 
 The present situation appears to be entirely too stable. There seems to be
 no visible hope past this, short of some rich person throwing a lot of money
 at it - and they are all too busy to keep up on forums like this one.
 
 Are we on the same page here?


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re : [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Bruno Frandemiche
hello ben
if i can have a pdf draf,i think you very much
bruno


- Message d'origine 
De : Ben Goertzel [EMAIL PROTECTED]
À : agi@v2.listbox.com
Envoyé le : Mardi, 3 Juin 2008, 18h33mn 02s
Objet : Re: [agi] OpenCog's logic compared to FOL?

Propositions are not the only things that can have truth values...

I don't have time to carry out a detailed mathematical discussion of
this right now...

We're about to (this week) finalize the PLN book draft ... I'll send
you a pre-publication PDF early next week and then you can read it and
we can argue this stuff after that ;-)

ben

On Wed, Jun 4, 2008 at 1:01 AM, YKY (Yan King Yin)
[EMAIL PROTECTED] wrote:
 Ben,

 If we don't work out the correspondence (even approximately) between
 FOL and term logic, this conversation would not be very fruitful.  I
 don't even know what you're doing with PLN.  I suggest we try to work
 it out here step by step.  If your approach really makes sense to me,
 you will gain another helper =)   Also, this will be good for your
 project's documentation.

 I have some examples:

 Eng:  Some philosophers are wise
 TL:  +Philosopher+Wise
 FOL:  philosopher(X) - wise(X)

 Eng:  Romeo loves Juliet
 TL:  +-Romeo* + (Loves +-Juliet*)
 FOL:  loves(romeo, juliet)

 Eng:  Women often have long hair
 TL:  ?
 FOL:  woman(X) - long_hair(X)

 I know your term logic is slightly different from Fred Sommers'.  Can
 you fill in the TL parts and also attach indefinite probabilities?

 On 6/3/08, Ben Goertzel [EMAIL PROTECTED] wrote:

 If you attach indefinite probabilities to FOL propositions, and create
 indefinite probability formulas corresponding to standard FOL rules,
 you will have a subset of PLN

 But you'll have a hard time applying Bayes rule to FOL propositions
 without being willing to assign probabilities to terms ... and you'll
 have a hard time applying it to FOL variable expressions without doing
 something that equates to assigning probabilities to propositions w.
 unbound variables ... and like I said, I haven't seen any other
 adequate way of propagating pdf's through quantifiers than the one we
 use in PLN, though Halpern's book describes a lot of inadequate ways
 ;-)

 Re assigning probabilties to terms...

 Term in term logic is completely different from term in FOL.  I
 guess terms in term logic roughly correspond to predicates or
 propositions in FOL.  Terms in FOL seem to have no counterpart in term
 logic..

 Anyway there should be no confusion here.  Propositions are the ONLY
 things that can have truth values.  This applies to term logic as well
 (I just refreshed my memory of TL).  When truth values go from { 0, 1
 } to [ 0, 1 ], we get single-value probabilistic logic.  All this has
 a very solid and rigorous foundation, based on so-called model theory.

 YKY


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription: http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?;
Powered by Listbox: http://www.listbox.com


__
Do You Yahoo!?
En finir avec le spam? Yahoo! Mail vous offre la meilleure protection possible 
contre les messages non sollicités 
http://mail.yahoo.fr Yahoo! Mail 


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:


  I believe that the crisp (i.e. certain or very near certain) KR for these
 domains will facilitate the use of FOL inference (e.g. subsumption) when I
 need it to supplement the current Texai spreading activation techniques for
 word sense disambiguation and relevance reasoning.

 I expect that OpenCog will focus on domains that require probabilistic
 reasoning, e.g. pattern recognition, which I am postponing until Texai is
 far enough along that expert mentors can teach it the skills for
 probabilistic reasoning.



Your approach is sensible, indeed similar to mine -- I'm also experimenting
with crisp logic only.  But there are 2 problems:

1.  Probabilistic inference cannot be grafted onto crisp logic easily.
The changes may be so great that much of the original work will be rendered
useless.

2.  You think we can do program synthesis with crisp logic only?  This has
profound implications if true...

YKY



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/3/08, Matt Mahoney [EMAIL PROTECTED] wrote:

 Do you have any insights on how this learning will be done?

That research area is known as ILP (inductive logic programming).
It's very powerful in the sense that almost anything (eg, any Prolog
program) can be learned that way.  But the problem is that the
combinatorial explosion is so great that you must use heuristics and
biases.  So far no one has applied it to large-scale commonsense
learning.  Some Cyc people have experimented with it recently.

  Cyc put a lot of effort into a natural language interface and failed.  What 
 approach will you use that they have not tried?  FOL requires a set of 
 transforms, e.g.

 All men are mortal - forall X, man(X) - mortal(X) (hard)
 Socrates is a man - (man(Socrates) (hard)
 - mortal(Socrates) (easy)
 - Socrates is mortal (hard).

 We have known for a long time how to solve the easy parts.  The hard parts 
 are AI-complete.  You have to solve AI before you can learn the knowledge 
 base.  Then after you build it, you won't need it.  What is the point?


We don't need 100% perfect NLP ability to learn the KB.  An NL
interface that can accept a simple subset of English will do.

YKY


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] CONSCIOUSNESS AS AN ARCHITECTURE OF COMPUTATION

2008-06-03 Thread Ed Porter
JOHN ROSE  
I suppose the optimal approach to AGI has to involve some degree of
connectionism.  But to find isomorphic structures to connectionist graphs
that are more efficient.  Many things in nature cannot be evolved, for
example few if any animals have wheels. Evolved structures go so far until
probabilistic limits are hit. Are there structural, algorithmic and
mathematic systems that are more optimal than massively dense interconnected
graphs waiting to be discovered or, are they actually known but not applied
to engineered cognition and consciousness? I say yes. Has activation
dynamics been studied enough, - anyone have a literature reference? 

Then main reason I take this approach is because of resource constraints.
I'm not saying that it's not worth building connectionist prototypes. In
fact I'm starting to think that way where I haven't before.

ED PORTER 
I am not an expert at computational efficiency, but I think graph structures
like semantic nets, are probably close to as efficient as possible given the
type of connectionism they are representing and the type of computing that
is to be done on them, which include, importantly, selective spreading
activation.

JOHN ROSE  
I agree on your description of consciousness. But it would be nice to have a
compact system, a minimized essence, that optimal consciousness engine if
one exists.

Computation is all that there is. But I often try to imagine something that
is not computation. It depends on different things, and goes into subatomic
physics, string theory, etc.. I think that there are aspects of computation
that we don't understand, and definitely things that I don't understand but
are known among well versed individuals.

ED PORTER 
Although I think my theory of consciousness is as good as any other I have
read, it is far from certain, and far from complete, and not necessarily
correct in every details. 

I don't think the richness of human consciousness comes from a minimized
essence, but rather from the complicated full-blown richness of the
computation inside our brains. 

Since our senses can  only sense --- and our minds can only think --- in
terms of computation (in which I am included representation) --- at least at
the moment, I cannot think of what it would mean for something to not be
computation, except perhaps nothingness, which we can think of as the
absence of computation.


-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, June 03, 2008 1:13 AM
To: agi@v2.listbox.com
Subject: RE: [agi] CONSCIOUSNESS AS AN ARCHITECTURE OF COMPUTATION

I suppose the optimal approach to AGI has to involve some degree of
connectionism.  But to find isomorphic structures to connectionist graphs
that are more efficient.  Many things in nature cannot be evolved, for
example few if any animals have wheels. Evolved structures go so far until
probabilistic limits are hit. Are there structural, algorithmic and
mathematic systems that are more optimal than massively dense interconnected
graphs waiting to be discovered or, are they actually known but not applied
to engineered cognition and consciousness? I say yes. Has activation
dynamics been studied enough, - anyone have a literature reference? 

Then main reason I take this approach is because of resource constraints.
I'm not saying that it's not worth building connectionist prototypes. In
fact I'm starting to think that way where I haven't before.

I agree on your description of consciousness. But it would be nice to have a
compact system, a minimized essence, that optimal consciousness engine if
one exists.

Computation is all that there is. But I often try to imagine something that
is not computation. It depends on different things, and goes into subatomic
physics, string theory, etc.. I think that there are aspects of computation
that we don't understand, and definitely things that I don't understand but
are known among well versed individuals.

John


_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Monday, June 02, 2008 2:46 PM
To: agi@v2.listbox.com
Subject: RE: [agi] CONSCIOUSNESS AS AN ARCHITECTURE OF
COMPUTATION



JOHN ROSE  
So you are saying that consciousness is activations in
response to patterns, including activation history. 

ED PORTER  
Yes. But I am also saying the following:

-EVERYTHING -- INCLUDING CONSCIOUSNESS --- IS NOTHING BUT
COMPUTATION

To those who say it is a cop out to say consciousness is
computation, I challenge you to describe any aspect of reality, either that
in the mind, or that described by current scientific understanding, that is
anything other than information and its 

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Stephen Reed
YKY said:


1. Probabilistic inference cannot be grafted onto crisp logic easily.  The 
changes may be so great that much of the original work will be rendered useless.

Agreed.   However, I hope that by the time probabilistic inference is taught to 
Texai by mentors, it will be easy to supersede useless skills with correct ones.


2.  You think we can do program synthesis with crisp logic only?  This has 
profound implications if true...

All of the work to date on program generation, macro processing, application 
configuration via parameters, compilation, assembly, and program optimization 
has used crisp knowledge representation (i.e. non-probabilistic data 
structures).  Dynamic, feedback based optimizing compilers, such as the Java 
HotSpot VM, do keep track of program path statistics in order to decide when to 
inline methods for example.  But on the whole, the traditional program 
development life cycle is free of probabilistic inference.

I have a hypothesis that program design (to satisfy requirements), and in 
general engineering design, can be performed using crisp knowledge 
representation - with the provision that I will use cognitively-plausible 
spreading activation instead of, or to cache, time-consuming deductive 
backchaining.  My current work will explore this hypothesis with regard to 
composing simple programs that compose skills from more primitive skills.   I 
am adapting Gerhard Wickler's Capability Description Language to match 
capabilities (e.g. program composition capabilities) with tasks (e.g. clear a 
StringBuilder object).  CDL conveniently uses a crisp FOL knowledge 
representation.   Here is a Texai behavior language file that contains 
capability descriptions for primitive Java compositions.  Each of these 
primitive capabilities is implemented by a Java object that can be persisted in 
the Texai KB as RDF statements.

Like yourself, I find the profound implications of automatic programming 
fascinating.  I can only hope that this fascination has guided me down the 
right path to AGI, rather than down a dead end.  I've written a brief blog post 
on this and related AI-hard problems.

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 12:20:19 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?


On 6/3/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
I believe that the crisp (i.e. certain or very near certain) KR for these 
domains will facilitate the use of FOL inference (e.g. subsumption) when I need 
it to supplement the current Texai spreading activation techniques for word 
sense disambiguation and relevance reasoning.

I expect that OpenCog will focus on domains that require probabilistic 
reasoning, e.g. pattern recognition, which I am postponing until Texai is far 
enough along that expert mentors can teach it the skills for probabilistic 
reasoning.
 
 
Your approach is sensible, indeed similar to mine -- I'm also experimenting 
with crisp logic only.  But there are 2 problems:
 
1.  Probabilistic inference cannot be grafted onto crisp logic easily.  The 
changes may be so great that much of the original work will be rendered useless.
 
2.  You think we can do program synthesis with crisp logic only?  This has 
profound implications if true...
YKY 


 
agi | Archives  | Modify Your Subscription  


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: [agi] Did this message get completely lost?

2008-06-03 Thread John G. Rose

 From: Brad Paulsen [mailto:[EMAIL PROTECTED]
 
 John wrote:
   A rock is either conscious or not conscious.
 
 Excluding the middle, are we?
 

Conscious, not conscious or null?

 I don't want to put words into Ben  company's mouths, but I think what
 they are trying to do with PLN is to implement a system that expressly
 *includes the middle*.  In theory (but not necessarily in practice) the
 clue to creating the first intelligent machine may be to *exclude the
 ends*!  Scottish philosopher and economist David Hume argued way back in
 the 18th century that all knowledge is based on past observation.
 Because of this, we can never be 100% certain of *anything*.  While Hume
 didn't put it in such terms, as I understand his thinking, it comes down
 to *everything* is a probability or all knowledge is fuzzy
 knowledge.  There is no such thing as 0.  There is no such thing as 1.
 
 For example, let's say you are sitting at a table holding a pencil in
 your hand.  In the past, every time you let go of the pencil in this
 situation (or a similar situation), it dropped to the table.  The cause
 and effect for this behavior is so well documented that we call the
 underlying principal the *law* of gravity.  But, even so, can you say
 with probability 1.0 that the *next* time you let go of that pencil in a
 similar situation that it will, in fact, drop to the table?  Hume said
 you can't.  As those ads for stock brokerage firms on TV always say in
 their disclaimers, Past performance is no guarantee of future
 performance.
 
 Of course, we are constantly predicting the future based on our
 knowledge of past events (or others' knowledge of past events which we
 have learned and believe to be correct).  I will, for instance, give you
 very favorable odds if you are willing to bet against the pencil hitting
 the table when dropped.  Unless you enjoy living life on the edge, your
 predictions won't stray very far from past experiences (or learned
 knowledge about past experiences).  But, in the end, it's all
 probability and fuzziness.  It is all belief, baby.

Yes Hume and Kant actually were making contributions to AGI but didn't' know 
it. Although I suppose at the time there imaginations where rich and varied 
enough to where those possibilities were not totally unthinkable.

 
 Regarding the issue of consciousness and the rock, there are several
 possible scenarios to consider here.  First, the rock may be conscious
 but only in a way that can be understood by other rocks.  The rock may
 be conscious but it is unable to communicate with humans (and vice
 versa) so we assume it's not conscious.  The rock is truly conscious and
 it thinks we're not conscious so it pretends to be just like it thinks
 we are and, as a result, we're tricked into thinking it's not
 conscious.  Finally, if a rock falls in the forest, does it make a
 sound?  Consciousness may require at least two actors.  Think about it.
 What good would consciousness do you if there was no one else around to
 appreciate it?  Would you, in that case, in fact be conscious?
 
 Most humans will treat a rock as if it were not conscious because, in
 the past, that assumption has proven to be efficacious for predictions
 involving rocks.  I know of no instance where someone was able to talk a
 rock that was in the process of falling on him or her to change
 direction by appealing to the rock, one conscious entity to another.
 And maybe they should have.  There is, after all, based on past
 experience, only a 0.9995 probability that a rock is not conscious.
 

Actually on further thought about this conscious rock, I want to take that 
particular rock and put it through some further tests to absolutely verify with 
a high degree of confidence that there may not be some trace amount of 
consciousness lurking inside. So the tests that I would conduct are - 

Verify the rock is in a solid state at close to absolute zero but not at 
absolute zero.
The rock is not in the presence of a high frequency electromagnetic field.
The rock is not in the presence of high frequency physical vibrational 
interactions.
The rock is not in the presence of sonic vibrations.
The rock is not in the presence of subatomic particle bombardment, radiation, 
or being hit by a microscopic black hole.
The rock is not made of nano-robotic material.
The rock is not an advanced, non-human derived, computer.
The rock contains minimal metal content.
The rock does not contain holograms.
The rock does not contain electrostatic echoes.
The rock is a solid, spherical structure, with no worm holes :)
The rock...

You see what I'm getting at. In order to be 100% sure. Any failed tests of the 
above would require further scientific analysis and investigation to achieve 
proper non-conscious certification.

John




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 

RE: [agi] CONSCIOUSNESS AS AN ARCHITECTURE OF COMPUTATION

2008-06-03 Thread John G. Rose
 From: Ed Porter [mailto:[EMAIL PROTECTED]
 ED PORTER 
 I am not an expert at computational efficiency, but I think graph
 structures
 like semantic nets, are probably close to as efficient as possible given
 the
 type of connectionism they are representing and the type of computing
 that
 is to be done on them, which include, importantly, selective spreading
 activation.


Uhm have you checked this out? Is there any evidence this? It would make it
easier if this was in fact the case.


 ED PORTER 
 Although I think my theory of consciousness is as good as any other I
 have
 read, it is far from certain, and far from complete, and not necessarily
 correct in every details.
 
 I don't think the richness of human consciousness comes from a
 minimized
 essence, but rather from the complicated full-blown richness of the
 computation inside our brains.


So there is a scaling to consciousness magnitude? And there are
consciousness properties that are stronger or weaker depending on
computational richness?

 
 Since our senses can  only sense --- and our minds can only think --- in
 terms of computation (in which I am included representation) --- at
 least at
 the moment, I cannot think of what it would mean for something to not be
 computation, except perhaps nothingness, which we can think of as the
 absence of computation.


Nothingness or maybe big bang singularity or event horizon conditions? Or
some type of subatomic particle that has peculiar properties. Those are all
not very helpful for what we are doing, but I still think there maybe
something else... there has to be something else applicable or maybe totally
inapplicable to AGI. So it may be a waste of time thinking on that one...

John



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-03 Thread Matt Mahoney
--- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:
 Actually on further thought about this conscious rock, I
 want to take that particular rock and put it through some
 further tests to absolutely verify with a high degree of
 confidence that there may not be some trace amount of
 consciousness lurking inside. So the tests that I would
 conduct are - 
 
 Verify the rock is in a solid state at close to absolute
 zero but not at absolute zero.
 The rock is not in the presence of a high frequency
 electromagnetic field.
 The rock is not in the presence of high frequency physical
 vibrational interactions.
 The rock is not in the presence of sonic vibrations.
 The rock is not in the presence of subatomic particle
 bombardment, radiation, or being hit by a microscopic black
 hole.
 The rock is not made of nano-robotic material.
 The rock is not an advanced, non-human derived, computer.
 The rock contains minimal metal content.
 The rock does not contain holograms.
 The rock does not contain electrostatic echoes.
 The rock is a solid, spherical structure, with no worm
 holes :)
 The rock...
 
 You see what I'm getting at. In order to be 100% sure.
 Any failed tests of the above would require further
 scientific analysis and investigation to achieve proper
 non-conscious certification.

You forgot a test. The postions of the atoms in the rock encode 10^25 bits of 
information representing the mental states of 10^10 human brains at 10^15 bits 
each. The data is encrypted with a 1000 bit key, so it appears statistically 
random. How would you prove otherwise?

-- Matt Mahoney, [EMAIL PROTECTED]



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


RE: Are rocks conscious? (was RE: [agi] Did this message get completely lost?)

2008-06-03 Thread John G. Rose
 From: Matt Mahoney [mailto:[EMAIL PROTECTED]
 Subject: Are rocks conscious? (was RE: [agi] Did this message get
 completely lost?)
 
 --- On Tue, 6/3/08, John G. Rose [EMAIL PROTECTED] wrote:
  Actually on further thought about this conscious rock, I
  want to take that particular rock and put it through some
  further tests to absolutely verify with a high degree of
  confidence that there may not be some trace amount of
  consciousness lurking inside. So the tests that I would
  conduct are -
 
  Verify the rock is in a solid state at close to absolute
  zero but not at absolute zero.
  The rock is not in the presence of a high frequency
  electromagnetic field.
  The rock is not in the presence of high frequency physical
  vibrational interactions.
  The rock is not in the presence of sonic vibrations.
  The rock is not in the presence of subatomic particle
  bombardment, radiation, or being hit by a microscopic black
  hole.
  The rock is not made of nano-robotic material.
  The rock is not an advanced, non-human derived, computer.
  The rock contains minimal metal content.
  The rock does not contain holograms.
  The rock does not contain electrostatic echoes.
  The rock is a solid, spherical structure, with no worm
  holes :)
  The rock...
 
  You see what I'm getting at. In order to be 100% sure.
  Any failed tests of the above would require further
  scientific analysis and investigation to achieve proper
  non-conscious certification.
 
 You forgot a test. The postions of the atoms in the rock encode 10^25
 bits of information representing the mental states of 10^10 human brains
 at 10^15 bits each. The data is encrypted with a 1000 bit key, so it
 appears statistically random. How would you prove otherwise?
 


Actually you are on to something. Since there are patterns in the rock,
molecular, granular, electronic, subatomic the rock has string of bits that
represent time frame samples of consciousness recordings. So I mean if they
were played with the right equipment in a certain way you might be able to
extract short consciousness clip recordings. 

Hey wait a sec - is a string of bits that represents consciousness,
conscious? Or does there have to be a victrola. Does time have to be a
variable? Hm...

John




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread YKY (Yan King Yin)
On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:


  All of the work to date on program generation, macro processing,
 application configuration via parameters, compilation, assembly, and program
 optimization has used crisp knowledge representation (i.e. non-probabilistic
 data structures).  Dynamic, feedback based optimizing compilers, such as the
 Java HotSpot VM, do keep track of program path statistics in order to decide
 when to inline methods for example.  But on the whole, the traditional
 program development life cycle is free of probabilistic inference.


How about these scenarios:

1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few'
times, write it out directly.  -- this requires fuzziness

2.  The gain of using algorihtm X on this problem is likely to be small.
-- requires probability


 I have a hypothesis that program design (to satisfy requirements), and in
 general engineering design, can be performed using crisp knowledge
 representation - with the provision that I will use cognitively-plausible
 spreading activation instead of, or to cache, time-consuming deductive
 backchaining.  My current work will explore this hypothesis with regard to
 composing simple programs that compose skills from more primitive skills.
 I am adapting Gerhard Wickler's Capability Description 
 Languagehttp://www.aiai.ed.ac.uk/oplan/cdl/index.htmlto match capabilities 
 (e.g. program composition capabilities) with tasks
 (e.g. clear a StringBuilder object).  CDL conveniently uses a crisp FOL
 knowledge representation.   
 Herehttp://texai.svn.sourceforge.net/viewvc/texai/BehaviorLanguage/data/method-definitions.bl?view=markupis
  a Texai behavior language file that contains capability descriptions for
 primitive Java compositions.  Each of these primitive capabilities is
 implemented by a Java object that can be persisted in the Texai KB as RDF
 statements.



Maybe you mean spreading activation is used to locate candidate facts /
rules, over which actual deductions are attempted?  That sounds very
promising.  One question is how to learn the association between nodes.

YKY



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Did this message get completely lost?

2008-06-03 Thread Brad Paulsen



John G. Rose wrote:

You see what I'm getting at. In order to be 100% sure. Any failed tests of the 
above would require further scientific analysis and investigation to achieve 
proper non-conscious certification.



Not exactly (to start with, you can *never* be 100% sure, try though you 
might  :-) ).  Take all of the investigations into rockness since the 
dawn of homo sapiens and we still only have a 0.9995 probability that 
rocks are not conscious.  Everything is belief.  Even hard science.  
That was the nub of Hume's intellectual contribution.  It doesn't mean 
we can't be sure enough.  It just means that we can never be 100% sure 
of *anything*.


Of course, there's belief and then there's BELIEF.  To me (and to Hume), 
it's not a difference in kind.  It's just that the leap from 
observational evidence to empirical (natural) belief is a helluvalot 
shorter than is the leap from observational evidence to supernatural belief.


Cheers,

Brad

Today's words-to-live-by: Everything in moderation.  Including 
moderation. ;-)


P.S.  Hmmm.  The Thunderbird e-mail client spell checker recognizes the 
word homo but not the word sapiens.  It gets better.  Here's 
WordWeb's definition of sapiens: Of or relating to or characteristic of 
Homo sapiens.  Oh.  Now I get it!  NOT.  Sigh...  Isn't there some sort 
of dictionary-writing rule that says you're not allowed to use the word 
you're defining in the definition of that word?  I smell a project!  
Let's build a dictionary that contains nothing but circular 
definitions.  For example: definition - Of or relating to or 
characteristic of defining something.

From: Brad Paulsen [mailto:[EMAIL PROTECTED]

John wrote:
A rock is either conscious or not conscious.

Excluding the middle, are we?




Conscious, not conscious or null?

  

I don't want to put words into Ben  company's mouths, but I think what
they are trying to do with PLN is to implement a system that expressly
*includes the middle*.  In theory (but not necessarily in practice) the
clue to creating the first intelligent machine may be to *exclude the
ends*!  Scottish philosopher and economist David Hume argued way back in
the 18th century that all knowledge is based on past observation.
Because of this, we can never be 100% certain of *anything*.  While Hume
didn't put it in such terms, as I understand his thinking, it comes down
to *everything* is a probability or all knowledge is fuzzy
knowledge.  There is no such thing as 0.  There is no such thing as 1.

For example, let's say you are sitting at a table holding a pencil in
your hand.  In the past, every time you let go of the pencil in this
situation (or a similar situation), it dropped to the table.  The cause
and effect for this behavior is so well documented that we call the
underlying principal the *law* of gravity.  But, even so, can you say
with probability 1.0 that the *next* time you let go of that pencil in a
similar situation that it will, in fact, drop to the table?  Hume said
you can't.  As those ads for stock brokerage firms on TV always say in
their disclaimers, Past performance is no guarantee of future
performance.

Of course, we are constantly predicting the future based on our
knowledge of past events (or others' knowledge of past events which we
have learned and believe to be correct).  I will, for instance, give you
very favorable odds if you are willing to bet against the pencil hitting
the table when dropped.  Unless you enjoy living life on the edge, your
predictions won't stray very far from past experiences (or learned
knowledge about past experiences).  But, in the end, it's all
probability and fuzziness.  It is all belief, baby.



Yes Hume and Kant actually were making contributions to AGI but didn't' know 
it. Although I suppose at the time there imaginations where rich and varied 
enough to where those possibilities were not totally unthinkable.

  

Regarding the issue of consciousness and the rock, there are several
possible scenarios to consider here.  First, the rock may be conscious
but only in a way that can be understood by other rocks.  The rock may
be conscious but it is unable to communicate with humans (and vice
versa) so we assume it's not conscious.  The rock is truly conscious and
it thinks we're not conscious so it pretends to be just like it thinks
we are and, as a result, we're tricked into thinking it's not
conscious.  Finally, if a rock falls in the forest, does it make a
sound?  Consciousness may require at least two actors.  Think about it.
What good would consciousness do you if there was no one else around to
appreciate it?  Would you, in that case, in fact be conscious?

Most humans will treat a rock as if it were not conscious because, in
the past, that assumption has proven to be efficacious for predictions
involving rocks.  I know of no instance where someone was able to talk a
rock that was in the 

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Stephen Reed
YKY said:

How about these scenarios:
 
1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few' 
times, write it out directly.  -- this requires fuzziness
 
2.  The gain of using algorithm X on this problem is likely to be small.  -- 
requires probability


Agreed.  When Texai gets to this point I would incorporate an open source fuzzy 
logic library such as JFuzzyLogic. I believe I can interface the Texai KB to a 
fuzzy logic library without too much difficulty.


Maybe you mean spreading activation is used to locate candidate facts / rules, 
over which actual deductions are attempted?  That sounds very promising.  One 
question is how to learn the association between nodes.


To be clear, I would do the opposite.  Offline backchaining, deductive 
inference could be performed to cache conclusions for common inference 
problems.  The cache is implemented via spreading activation links between the 
antecedent terms of the rules and the consequent terms of the conclusions.  
Humans do not perform modus ponens deduction from first principles for 
commonsense problem solving.  I believe that spreading activation can be 
employed to perform machine problem solving (e.g. executing a learned 
procedure) in a cognitively plausible fashion without real-time theorem proving.

Cheers.
-Steve

Stephen L. Reed


Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860



- Original Message 
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, June 3, 2008 5:29:07 PM
Subject: Re: [agi] OpenCog's logic compared to FOL?


On 6/4/08, Stephen Reed [EMAIL PROTECTED] wrote:
 
All of the work to date on program generation, macro processing, application 
configuration via parameters, compilation, assembly, and program optimization 
has used crisp knowledge representation (i.e. non-probabilistic data 
structures).  Dynamic, feedback based optimizing compilers, such as the Java 
HotSpot VM, do keep track of program path statistics in order to decide when to 
inline methods for example.  But on the whole, the traditional program 
development life cycle is free of probabilistic inference.
 
How about these scenarios:
 
1.  If a task is to be repeated 'many' times, use a loop.  If only 'a few' 
times, write it out directly.  -- this requires fuzziness
 
2.  The gain of using algorihtm X on this problem is likely to be small.  -- 
requires probability
 
I have a hypothesis that program design (to satisfy requirements), and in 
general engineering design, can be performed using crisp knowledge 
representation - with the provision that I will use cognitively-plausible 
spreading activation instead of, or to cache, time-consuming deductive 
backchaining.  My current work will explore this hypothesis with regard to 
composing simple programs that compose skills from more primitive skills.   I 
am adapting Gerhard Wickler's Capability Description Language to match 
capabilities (e.g. program composition capabilities) with tasks (e.g. clear a 
StringBuilder object).  CDL conveniently uses a crisp FOL knowledge 
representation.   Here is a Texai behavior language file that contains 
capability descriptions for primitive Java compositions.  Each of these 
primitive capabilities is implemented by a Java object that can be persisted in 
the Texai KB as RDF statements.
 
 
Maybe you mean spreading activation is used to locate candidate facts / rules, 
over which actual deductions are attempted?  That sounds very promising.  One 
question is how to learn the association between nodes.
YKY


 
agi | Archives  | Modify Your Subscription  


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


Re: [agi] Neurons

2008-06-03 Thread Steve Richfield
Josh,

On 6/3/08, J Storrs Hall, PhD [EMAIL PROTECTED] wrote:

 Strongly disagree. Computational neuroscience is moving as fast as any
 field
 of science has ever moved.


Perhaps you are seeing something that I am not. There are ~200 different
types of neurons, but no one seems to understand what the ~200 different
things are that they have to do. Sure some simple nets are working, but I
just don't see the expected leap from this.

Computer hardware is improving as fast as any
 field of technology has ever improved.


We have already discussed here how architecture (of commercially available
processors) has been in a state of arrested development for ~35 years, with
~1:1 in performance just waiting to be collected.

I would be EXTREMELY surprised if neuron-level simulation were necessary to
 get human-level intelligence.


So would I. My point was that some additional understanding, a wiring
diagram, etc., would go a LONG way to getting over some of the humps that
doubtless lie ahead. The history of AI is littered with those who have
underestimated the problems.

With reasonable algorithmic optimization, and a
 few tricks our hardware can do the brain can't (e.g. store sensory
 experience
 verbatim and review it as often as necessary into learning algorithms) we
 should be able to knock 3 orders of magnitude or so off the pure-neuro HEPP
 estimate -- which puts us at ten high-end graphics cards, e.g. less than
 the
 price of a car.  (or just wait till 2015 and get one high-end PC).


The point of agreement with BOTH of our various estimates is that computer
horsepower is NOT a barrier.

Figuring out the algorithms is the ONLY thing standing between us and AI.


Back to those ~200 different types of neurons. There are probably some cute
tricks buried down in their operation, and you probably need to figure out
substantially all ~200 of those tricks to achieve human intelligence. If I
were an investor, this would sure sound pretty scary to me without SOME sort
of insurance like scanning capability, and maybe some simulations.

Steve Richfield



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com


[agi] teme-machines

2008-06-03 Thread David Hart
Hi All,

An excellent 20-minute TED talk from Susan Blackmore (she's a brilliant
speaker!)

http://www.ted.com/talks/view/id/269

I considered posting to the singularity list instead, but Blackmore's
theoretical talk is much more germane to AGI than any other
singularity-related technology.

-dave



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=103754539-40ed26
Powered by Listbox: http://www.listbox.com