Re: [agi] Dangerous Knowledge

2008-09-29 Thread Matt Mahoney
--- On Sun, 9/28/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:

> Recently, someone on this list (I apologize for not making a
> note of this person's name) raised the question whether we might
> find a "shortcut" to AGI. 

That was me.

> The author went on to opine that, because the problems associated
> with achieving AGI had been considered by some of the world's most
> brilliant minds at various times over thousands of years of human history
> and because the problem, nonetheless, remains unsolved, it was extremely
> unlikely such a shortcut would ever be found.

I mean that a more productive approach would be to try to understand why the 
problem is so hard. Two other hard problems with high payoffs come to mind. One 
is energy. The other is certain classes of problems in computer science.

Early engineers hacked away at the energy problem by designing complex machines 
that could power themselves, known as perpetual motion machines. Hundreds of 
clever designs were tried, but all failed when built. In trying to understand 
these failures, physicists developed the laws of thermodynamics and the 
principle of conservation of energy. As a result, engineers became more 
productive, because they designed power generation systems consistent with 
these laws rather than wasting their time on ever more intricate mechanisms for 
extracting free energy. We did this even though there is no proof that 
conservation of energy is correct. Rather, it is a theory consistent with 
thousands of experiments. (In fact the law was wrong as stated; Einstein later 
modified it to include mass.)

The second example concerns certain hard problems such as Boolean 
satisfiability, subset-sum, and the traveling salesman problem. Early 
programmers worked on these one at a time, trying to find the most efficient 
solutions. The great insight was the introduction of complexity classes and the 
notion of NP-completeness: an efficient solution to any one of these problems 
(and there are thousands) could be used to solve all the others. This is not a 
proof that the problems are hard; we have not proven that P != NP. But it does 
mean that if we can prove a problem is NP-complete, we are probably better off 
looking for a different approach. The result is that programmers are more 
productive.
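
As a toy illustration (mine, not part of the original argument): the obvious 
exact algorithm for subset-sum enumerates all 2^n subsets, so once a problem is 
known to be NP-complete, the productive move is usually to accept heuristics or 
approximations rather than hunt for a fast general exact algorithm.

from itertools import combinations

# Brute-force subset-sum: correct, but the work doubles with every added number.
def subset_sum_exact(numbers, target):
    """Return a subset of 'numbers' summing to 'target', or None.  O(2^n) time."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum_exact([3, 34, 4, 12, 5, 2], 9))   # (4, 5)
print(subset_sum_exact([3, 34, 4, 12, 5, 2], 30))  # None, after checking all 2^6 subsets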

I make a distinction between unlikely events, like finding a violation of the 
laws of thermodynamics or proving P = NP, and provably impossible ones (with 
high perceived payoffs), such as solving the halting problem or recursive data 
compression. A lot of people waste time on impossible problems too, because 
they don't understand the proofs.
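
The impossibility of recursive (guaranteed) compression, for example, is just a 
counting argument; the sketch below (illustrative only) checks the pigeonhole 
numbers directly.

# There are 2^n bit strings of length n, but only 2^n - 1 strings of length
# strictly less than n, so no lossless compressor can shrink every n-bit input,
# and hence none can be applied recursively to shrink arbitrary data without limit.
def strings_of_length(n):
    return 2 ** n

def strings_shorter_than(n):
    return sum(2 ** k for k in range(n))   # 2^0 + ... + 2^(n-1) = 2^n - 1

for n in range(1, 9):
    assert strings_shorter_than(n) < strings_of_length(n)
    print(n, strings_of_length(n), strings_shorter_than(n))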

I put AI in the former category. My interest is to understand why AI is so 
hard, so that we can put our effort where it will be most productive. There is 
a lot more work to be done in this area. Based on what I have done so far, I 
estimate that automating the economy using AI will cost US $1 quadrillion over 
30 years, largely for the cost of software and customized job training 
(including the indirect costs of mistakes made by partially trained AI). I 
estimate that the hardware cost of natural language modeling is similar to that 
of vision, in the range of tens or hundreds of gigabytes and 1 teraflop for 
real-time performance (meaning 10 years to compress 1 GB of text). We would 
need 10^10 of these. The total complexity would be 10^17 to 10^18 bits of 
knowledge that need to be extracted from human brains.

I have looked at shortcuts. The most promising of these at the moment seems to 
be recursive self improvement (RSI), the idea that a program could rewrite its 
own code to achieve some goal more efficiently. The argument is that if humans 
can produce superhuman intelligence, then that intelligence can do the same, 
only faster. I question that. Intelligence is not a point on a line. Computers 
have been smarter than humans in some areas for 50 years. Exactly what 
threshold has to be crossed?

If RSI is practical, then we should have working mathematical, software, or 
physical models of it. I have created a trivial self improving program in C 
(see http://www.mattmahoney.net/rsi.pdf ) and proved that all such systems grow 
extremely slowly (O(log n)) in algorithmic complexity. However, this proof is 
for a narrow, formal definition of RSI. A study of more general forms of RSI is 
needed. (Any suggestions?)
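
For concreteness, here is a minimal Python sketch of the kind of self-rewriting 
program meant here. It is not the C program from rsi.pdf, just a toy in which 
each generation writes a successor whose only "improvement" is a larger 
generation counter (run it as a saved script so it can read its own source).

GENERATION = 0

def spawn_successor(path="rsi_gen_next.py"):
    """Write a copy of this program whose GENERATION constant is one higher."""
    with open(__file__) as f:
        source = f.read()
    successor = source.replace("GENERATION = %d" % GENERATION,
                               "GENERATION = %d" % (GENERATION + 1), 1)
    with open(path, "w") as f:
        f.write(successor)
    return path

if __name__ == "__main__":
    # Each rewrite adds only one increment of information to its successor --
    # a crude hint at why, under a strict formal definition, such chains gain
    # algorithmic complexity very slowly.
    print("I am generation", GENERATION)
    print("Wrote successor to", spawn_successor())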

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Dangerous Knowledge

2008-09-29 Thread Ben Goertzel
> I mean that a more productive approach would be to try to understand why
> the problem is so hard.

IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do
with Santa Fe Institute style
complexity ...

Intelligence is not fundamentally grounded in any particular mechanism but
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their
environments ...

Characterizing what these emergent structures/dynamics are is hard, and then
figuring out how to make these
structures/dynamics emerge from computationally feasible knowledge
representation and creation structures/
dynamics is hard ...

It's hard for much the same reason that systems biology is hard: it rubs 
against the grain of the reductionist approach to science that has become 
prevalent ... and there's insufficient data to do it fully rigorously, so you 
gotta cleverly and intuitively fill in some big gaps ... (at least until a few 
decades from now, when better bio data may provide a lot more info for cog sci, 
AGI and systems biology).

-- Ben





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Terren Suydam

Hi Ben,

If Richard Loosemore is half-right, how is he half-wrong? 

Terren

--- On Mon, 9/29/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] Dangerous Knowledge
To: agi@v2.listbox.com
Date: Monday, September 29, 2008, 6:50 PM








Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
I don't want to recapitulate that whole long tedious thread again!!

However, a brief summary of my response to Loosemore's arguments is here:

http://opencog.org/wiki/OpenCogPrime:FAQ#What_about_the_.22Complex_Systems_Problem.3F.22

(that FAQ is very incomplete which is why it hasn't been publicized yet ...
but it does already
address this particular issue...)

ben

On Tue, Sep 30, 2008 at 12:23 PM, Terren Suydam <[EMAIL PROTECTED]> wrote:

>
> Hi Ben,
>
> If Richard Loosemore is half-right, how is he half-wrong?
>
> Terren
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben: the reason AGI is so hard has to do with Santa Fe Institute style
complexity ...

Intelligence is not fundamentally grounded in any particular mechanism but 
rather in emergent structures
and dynamics that arise in certain complex systems coupled with their 
environments 

Characterizing what these emergent structures/dynamics are is hard, 

Ben,

Maybe you could indicate how complexity might help solve any aspect of 
*general* intelligence - how it will help in any form of crossing domains, such 
as analogy, metaphor, creativity, any form of resourcefulness, etc. - giving 
some example. Personally, I don't think it has any connection - and it doesn't 
sound, from your last sentence, as if you actually see a connection :).




Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 12:45 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Ben: the reason AGI is so hard has to do with Santa Fe Institute style
> complexity ...
>
> Intelligence is not fundamentally grounded in any particular mechanism but
> rather in emergent structures
> and dynamics that arise in certain complex systems coupled with their
> environments
>
> Characterizing what these emergent structures/dynamics are is hard,
>
> Ben,
>
> Maybe you could indicate how complexity might help solve any aspect of
> *general* intelligence - how it will help in any form of crossing domains,
> such as analogy, metaphor, creativity, any form of resourcefulness  etc.-
> giving some example.
>
> Personally, I don't think it has any connection - and it doesn't sound
> from your last sentence, as if you actually see a connection :).


You certainly draw some odd conclusions from the wording of peoples'
sentences.  I not only see a connection, I wrote a book on this subject,
published by Plenum Press in 1997: "From Complexity to Creativity."

Characterizing these things at the conceptual and even mathematical level is
not as hard as realizing them at the software level... my 1997 book was
concerned with the former.

I don't have time today to cut and paste extensively from there to satisfy
your curiosity, but you're free to read the thing ;-) ... I still agree with
most of it ...

To give a brief answer to one of your questions: analogy is mathematically a
matter of finding mappings that match certain constraints.   The traditional
AI approach to this would be to search the constrained space of mappings
using some search heuristic.  A complex systems approach is to embed the
constraints into a dynamical system and let the dynamical system evolve into
a configuration that embodies a mapping matching the constraints.  Based on
this, it is provable that complex systems methods can solve **any** analogy
problem, given appropriate data, and using for example asymmetric Hopfield
nets (as described in Amit's book on Attractor Neural Networks back in the
80's).  Whether they are the most resource-efficient way to solve such
problems is another issue.  OpenCog and the NCE seek to hybridize
complex-systems methods with probabilistic-logic methods, thus alienating
almost everybody ;=>
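
As a minimal sketch of that idea (illustrative only -- a simple symmetric 
Hopfield-style energy descent, not the asymmetric nets Amit describes): binary 
units encode a candidate mapping between two toy concept sets, the energy 
function rewards similarity and penalizes mappings that are not one-to-one, and 
asynchronous updates let the network settle into a configuration embodying a 
constraint-satisfying mapping.

import itertools
import random

# s[i][j] = 1 means "source concept i maps to target concept j".  Which
# permutation the net settles into can depend on the random start, but it
# will be a low-energy, constraint-respecting configuration.
sources = ["sun", "planet", "gravity"]          # solar-system concepts
targets = ["nucleus", "electron", "em-force"]   # atom concepts

# Hypothetical similarity scores between source and target concepts.
sim = [[0.9, 0.1, 0.2],
       [0.1, 0.8, 0.1],
       [0.2, 0.1, 0.9]]

PENALTY = 2.0  # weight on the one-to-one constraints

def energy(s):
    e = -sum(sim[i][j] * s[i][j] for i in range(3) for j in range(3))
    e += PENALTY * sum((sum(s[i][j] for j in range(3)) - 1) ** 2 for i in range(3))
    e += PENALTY * sum((sum(s[i][j] for i in range(3)) - 1) ** 2 for j in range(3))
    return e

def settle(seed=0):
    random.seed(seed)
    s = [[random.randint(0, 1) for _ in range(3)] for _ in range(3)]
    improved = True
    while improved:                      # greedy asynchronous energy descent
        improved = False
        for i, j in itertools.product(range(3), range(3)):
            before = energy(s)
            s[i][j] = 1 - s[i][j]        # tentative flip
            if energy(s) < before:
                improved = True          # keep the flip
            else:
                s[i][j] = 1 - s[i][j]    # revert
    return s

state = settle()
for i, j in itertools.product(range(3), range(3)):
    if state[i][j]:
        print(sources[i], "->", targets[j])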

-- Ben G





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Terren Suydam

Right, was just looking for exactly that kind of summary, not to rehash 
anything! Thanks.

Terren

--- On Tue, 9/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
From: Ben Goertzel <[EMAIL PROTECTED]>
Subject: Re: [agi] Dangerous Knowledge
To: agi@v2.listbox.com
Date: Tuesday, September 30, 2008, 12:42 PM




Re: [agi] Dangerous Knowledge

2008-09-30 Thread Jim Bromer
From: "Ben Goertzel" <[EMAIL PROTECTED]>
To give a brief answer to one of your questions: analogy is
mathematically a matter of finding mappings that match certain
constraints.   The traditional AI approach to this would be to search
the constrained space of mappings using some search heuristic.  A
complex systems approach is to embed the constraints into a dynamical
system and let the dynamical system evolve into a configuration that
embodies a mapping matching the constraints.  Based on this, it is
provable that complex systems methods can solve **any** analogy
problem, given appropriate data, and using for example asymmetric
Hopfield nets (as described in Amit's book on Attractor Neural
Networks back in the 80's).  Whether they are the most
resource-efficient way to solve such problems is another issue.
OpenCog and the NCE seek to hybridize complex-systems methods with
probabilistic-logic methods, thus alienating almost everybody ;=>
-- Ben G
--

The problem is that you are still missing what should be the main
focus of your efforts.  It's not whether or not your program does good
statistical models, or uses probability nets, or hybrid technology of
some sort, or that you have solved some mystery to analogy that was
not yet understood.

An effective program has to be able to learn how to structure its
interrelated and interactive knowledge effectively, according both to
the meaning of relatively sophisticated linguistic (or linguistic-like)
communication and to its own experience with other, less sophisticated
data (like sensory input of various kinds).

The most important thing that is missing is the answer to the
question: how does the program learn about ideological structure?  If
it weren't for ambiguity (in all of its various forms) then this
knowledge would be easy for a programmer to acquire through gradual
experience.  But sophisticated input like language and making sense of
less sophisticated input, like simple sensory input, is highly
ambiguous and confusing to the AI programmer.

It is as if you are revving up the engine and trying to show off by
the roar of your engine, the flames and smoke shooting out the
exhaust, and the squeals and smoke of your tires burning, but then
that is all there is to it.  You will just be spinning your wheels
until you deal with the problem of ideological structure in the
complexity of highly ambiguous content.

So far, it seems like very few people have any idea what I am talking
about, because they almost never mention the problem as I see it.
Very few people have actually responded intelligibly to this kind of
criticism, and for those who do, their answer is usually limited to
explaining that this is what we are all trying to get at, or that this
was done in the old days, and then dropping it.  So I will understand
if you don't reply to this.

Jim Bromer




Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:08 PM, Jim Bromer <[EMAIL PROTECTED]> wrote:

> From: "Ben Goertzel" <[EMAIL PROTECTED]>
> To give a brief answer to one of your questions: analogy is
> mathematically a matter of finding mappings that match certain
> constraints.   The traditional AI approach to this would be to search
> the constrained space of mappings using some search heuristic.  A
> complex systems approach is to embed the constraints into a dynamical
> system and let the dynamical system evolve into a configuration that
> embodies a mapping matching the constraints.  Based on this, it is
> provable that complex systems methods can solve **any** analogy
> problem, given appropriate data, and using for example asymmetric
> Hopfield nets (as described in Amit's book on Attractor Neural
> Networks back in the 80's).  Whether they are the most
> resource-efficient way to solve such problems is another issue.
> OpenCog and the NCE seek to hybridize complex-systems methods with
> probabilistic-logic methods, thus alienating almost everybody ;=>
> -- Ben G
> --
>
> The problem is that you are still missing what should be the main
> focus of your efforts.  It's not whether or not your program does good
> statistical models, or uses probability nets, or hybrid technology of
> some sort, or that you have solved some mystery to analogy that was
> not yet understood.



I am getting really, really tired of a certain conversational pattern that
often occurs on this list!!

It goes like this...

Person A asks some question about topic T, which is a small
part of the overall AGI problem

Then, I respond to them about topic T

Then, Person B says "You are focusing on the wrong thing,
which shows you don't understand the AGI problem."

But of course, all that I did to bring on that criticism is to
answer someone's question about a specific topic, T ...

Urrggghh...

My response to Tintner's question had nothing to do with the main
focus of my efforts.  It was an attempt to compactly answer
his question ... it may have failed, but that's what it was...



>
>
> An effective program has to be able to learn how to structure its
> interrelated and interactive knowledge effectively according to both
> the meaning of realtively sophisticated linguistic (or linguistic like
> communication) and to its own experience with other less sophisticated
> data experiences (like sensory input of various kinds.)
>

Yes.  Almost everyone working in the field agrees with this.


>
> The most important thing that is missing is the answer to the
> question: how does the program learn about ideological structure?  If
> it weren't for ambiguity (in all of its various forms) then this
> knowledge would be easy for a programmer to acquire through gradual
> experience.  But sophisticated input like language and making sense of
> less sophisticated input, like simple sensory input, is highly
> ambiguous and confusing to the AI programmer.
>
> It is as if you are revving up the engine and trying to show off by
> the roar of your engine, the flames and smoke shooting out the
> exhaust, and the squeals and smoke of your tires burning, but then
> that is all there is to it.  You will just be spinning your wheels
> until you deal with the problem of ideological structure in the
> complexity of highly ambiguous content.
>
> So far, it seems like very few people have any idea what I am talking
> about, because they almost never mention the problem as I see it.
> Very few people have actually responded intelligibly to this kind of
> criticism, and for those who do, their answer is usually limited to
> explaining that this is what we are all trying to get at, or that this
> was done in the old days, and then dropping it.  So I will understand
> if you don't reply to this.



On the contrary, I strongly suspect
nearly everyone working in the AGI field thoroughly
understands the problem you are talking about, although they may
not use your chosen terminology ("ideological structure" is a weird
phrase in this context).

But I don't quite understand your use of verbiage in the phrase

"
ideological structure in the
complexity of highly ambiguous content.
"

What is it that you really mean here?  Just that an AGI has to
pragmatically understand
the relationships between concepts, as implied by ambiguous, complex
uses of language and as related to the relevance of concepts to the
nonlinguistic world?

I believe that OpenCogPrime will be able to do this, but I don't have
a one-paragraph explanation of how.  A complex task requires a complex
solution.  My proposed solution is documented online.

-- Ben G





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
> And if you look at your "brief answer" para, you will find that while you
> talk of mappings and constraints, (which are not necessarily AGI at all),
> you make no mention in any form of how complexity applies to the crossing of
> hitherto unconnected "domains" [or matrices, frames etc], which, of course,
> are.
>


It is true that I did not mention that in my brief email ... but I have
mentioned this in prior publications, and I just have no more time for
quasi-recreational emailing today!! sorry...

ben





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben: analogy is mathematically a matter of finding mappings that match certain 
constraints.   The traditional AI approach to this would be to search the 
constrained space of mappings using some search heuristic.  A complex systems 
approach is to embed the constraints into a dynamical system and let the 
dynamical system evolve into a configuration that embodies a mapping matching 
the constraints.

Ben,

If you are to arrive at a surprising analogy or solution to a creative problem, 
the first task is to find a new domain that "maps" on to or is relevant to the 
given domain, and by definition you have no rules for where to search. If, for 
example, you had to solve Kauffman's practical problem - how do I hide/protect 
a loose computer cord so that no one trips over it? - which domains do you 
start with (that connect to computer cords), and where do you end? Books? 
Bricks? Tubes? Sellotape? Warning signs? There is actually an infinity (or a 
practically endless set) of possibilities. And there are no pre-applicable 
rules about which domains to search, or what constitutes "hiding/protecting" - 
and therefore the "constraints" of the problem - or indeed how much evidence to 
consider and what constitutes evidence. And "hiding computer cords and other 
household objects" is not a part of any formal subject or branch of reasoning.

Ditto if you, say, are an adman and have to find a new analogy for your beer 
being "as cool as a --- " (must be new/surprising aka cabbages and kings, and 
preferably in form as well as content, e.g. as cool as a tool in a pool as a 
rule [1st attempt] ).

Doesn't complexity only apply when you have some formulae or rules to start 
with? But you don't with analogy. That's the very nature of the problem.

That's why I asked you to give me a problem example. (Can you remember a 
problem example of analogy or otherwise crossing domains from your book - just 
one?)

Nor can I see how maths applies to problems such as these, or any crossing of 
domains, other than to prove that there are infinite possibilities. Which 
branch of maths actually deals with analogies? 

And the statement:

"it is provable that complex systems methods can solve **any** analogy problem, 
given appropriate data" 

seems outrageous. You can prove mathematically that you can solve the creative 
problem of the "engram" (how info. is laid down in the brain)? That you can 
solve any of  the problems of discovery and invention currently being faced by 
science and technology? A mind-reading machine, say? Or did you mean problems 
where you are given "appropriate data", i.e. "the answers/clues/rules"? Those 
aren't problems of analogy or creativity. 

I don't know about you, but a lot of computer guys don't actually understand 
what analogy is. Hofstadter's  oft-cited "xyy is to xyz as abb is to a--?" for 
example  is NOT an analogy. It is logic.

And if you look at your "brief answer" para, you will find that while you talk 
of mappings and constraints, (which are not necessarily AGI at all), you make 
no mention in any form of how complexity applies to the crossing of hitherto 
unconnected "domains" [or matrices, frames etc], which, of course, are.



Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
It doesn't have any application...

My proof has two steps

1)
Hutter's paper
The Fastest and Shortest Algorithm for All Well-Defined Problems
http://www.hutter1.net/ai/pfastprg.htm

2)
I can simulate Hutter's algorithm (or *any* algorithm)
using an attractor neural net, e.g. via Mikhail Zak's
neural nets with Lipschitz-discontinuous threshold
functions ...


This is all totally useless as it requires infeasibly much computing power
... but at least, it's funny, for those of us who get the joke ;-)

ben



On Tue, Sep 30, 2008 at 3:38 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Can't resist, Ben..
>
>  "it is provable that complex systems methods can solve **any** analogy
> problem, given appropriate data"
>
> Please indicate how your proof applies to the problem of developing an AGI
> machine. (I'll allow you to specify as much "appropriate data" as you like
> - any data,  of course, *currently* available).



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"Nothing will ever be attempted if all possible objections must be first
overcome "  - Dr Samuel Johnson





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Can't resist, Ben..

"it is provable that complex systems methods can solve **any** analogy problem, 
given appropriate data" 

Please indicate how your proof applies to the problem of developing an AGI 
machine. (I'll allow you to specify as much "appropriate data" as you like - 
any data,  of course, *currently* available).






Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben,

Well, funny perhaps to some. But nothing to do with AGI - which has nothing to 
do with "well-defined problems."

The one algorithm or rule that can be counted on here is that AGI-ers won't 
deal with the problem of AGI -  how to cross domains (in ill-defined, 
ill-structured problems). Applies to Richard too. But the reasons for this 
general avoidance aren't complex :)



Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 4:18 PM, Mike Tintner <[EMAIL PROTECTED]>wrote:

>  Ben,
>
> Well, funny perhaps to some. But nothing to do with AGI - which has
> nothing to do with "well-defined problems."
>
>

I wonder if you are misunderstanding his use of terminology.

How about the problem of gathering as much money as possible while upsetting
people as little as possible?

That could be well defined in various ways, and would require AGI to solve
as far as I can see...
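
For instance (just one hypothetical way of pinning it down, with made-up 
constants -- not a definition Ben gave): score the agent's history by 
discounted money earned minus a weighted penalty for people upset.

LAMBDA = 10.0   # dollars of earnings we would trade to avoid upsetting one person
GAMMA = 0.99    # per-step discount factor

def score(history):
    """history: list of (money_earned, people_upset) tuples, one per time step."""
    return sum((GAMMA ** t) * (money - LAMBDA * upset)
               for t, (money, upset) in enumerate(history))

print(score([(100, 0), (500, 3), (0, 1)]))   # 100 + 0.99*470 + 0.9801*(-10)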



> The one algorithm or rule that can be counted on here is that AGI-ers
> won't deal with the problem of AGI -  how to cross domains (in ill-defined,
> ill-structured problems).
>


I suggest the OpenCogPrime design can handle this, and it's outlined in
detail at

http://www.opencog.org/wiki/OpenCogPrime:WikiBook

You are not offering any counterarguments to my suggestion, perhaps (I'm not
sure)
because you lack the technical expertise or the time to read about the
design
in detail.

At least, Richard Loosemore did provide a counterargument, which I disagreed
with ... but you provide
no counterargument, you just repeat that you don't believe the design
addresses the problem ...
and I don't know why you feel that way except that it intuitively doesn't
seem to "feel right"
to you...

-- Ben G





Re: [agi] Dangerous Knowledge

2008-09-30 Thread Mike Tintner
Ben,

I must assume you are being genuine here - and don't perceive that you have not 
at any point illustrated how complexity might lead to the solution of any given 
general (domain-crossing) problem of AGI.

Your OpenCog design also does not illustrate how it is to solve problems - how 
it is, for example, to solve the problems of concept formation, especially 
speculative concept formation. There are no examples in the relevant passages. 
General statements of principle but no practical examples. [Otherwise, offhand, 
I can't see any sections that relate to crossing domains.]

You rarely give examples - i.e. you do not ground your theories, your novel 
ideas (as we have discussed before). [You give standard textbook examples of 
problems, of course, in other, unrelated discussions.]

You have already provided one very suitable example of a general AGI problem - 
how is your pet, having learnt one domain (playing "fetch"), to use that 
knowledge to cross into another domain and learn/discover the game of 
"hide-and-seek"?  But I have repeatedly asked you to give me your ideas on how 
your system will deal with this problem, and you have always avoided it. I 
don't think, frankly, you have an idea how it will make the connection in an 
AGI way. I am extremely confident you couldn't begin to explain how a complex 
systems approach will make the cross-domain connection between fetching and 
hiding/seeking. (What *is* the connection, BTW?)

If it is any consolation - this reluctance to deal with AGI problems is 
universal among AGI-ers. Richard. Pei. Minsky...

Check how often in the past few years cross-domain problems have been dealt 
with on this group. Masses of programming, logical and mathematical problems, 
of course, in great, laudable detail. But virtually none that relate to 
crossing domains.

One thing is for sure - if you don't discuss and deal with the problems of AGI 
- and lots and lots of examples - you will never get any better at them. The 
answers won't magically pop up. No one ever got better at a skill by *not* 
practising it.

P.S. As for:

"gather as much money as possible while upsetting as few people as possible [or 
as little]" - it is a massively open-ended [and indeed GI] problem that can be 
instantiated in a virtual infinity of moneymaking domains [from stockmarkets to 
careers, small jobs, prostitution and virtually any area of the economy] with a 
virtual infinity of constructions of "upsetting". Please explain how a complex 
AGI program, which by definition would not be pre-prepared for such a problem, 
would tightly define it or even *want* to.

And note your first instinct - rather than asking how we can deal with this 
open-ended problem in an open-ended AGI way, you immediately talk about trying 
to define it in a closed-ended, tightly defined, basically *narrow* AI way. 
That again is a typical, pretty universal instinct among AGI-ers.

[Remember Levitt's "What people need is not a quarter-inch drill, but 
quarter-inch holes" - AGI should be first & foremost not about how you 
construct certain logical programs, but how you solve certain problems - and 
then work out what programs you need.]








Re: [agi] Dangerous Knowledge

2008-09-30 Thread Trent Waddington
On Wed, Oct 1, 2008 at 8:03 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Your OpenCog design also does not illustrate how it is to solve problems -
> how it is, for example, to solve the problems of concept, especially
> speculative concept,, formation.

http://www.opencog.org/wiki/OpenCogPrime:WikiBook#Speculative_Concept_Formation

But you're right, it's not currently at the "Speculative Concept
Formation For Dummies" stage... you have to use your imagination.

Trent




Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
> You have already provided one very suitable example of a general AGI
> problem -  how is your pet having learnt one domain - to play "fetch", - to
> use that knowledge to cross into another domain -  to learn/discover the
> game of "hide-and-seek."?  But I have repeatedly asked you to give me your
> ideas how your system will deal with this problem. And you have always
> avoided it. I don't think, frankly, you have an idea how it will make the
> connection in an AGI way. I am extremely confident you couldn't begin to
> explain how a complex approach will make the cross-domain connection between
> fetching and hiding/seeking.
>


You are wrong, but persistently arguing with you does not seem
worthwhile...

What you're talking about is called "transfer learning", and was one of the
technical topics Joel Pitt and I talked about during his visit to my house a
few weeks ago.  We were discussing a particular technical approach to this
problem using PLN abduction -- which is implicit in the OpenCogPrime design
and the PLN book, but not spelled out in detail.

However, I don't have time to write down our ideas in detail for this list
right now.

The examples we were talking about were stuff like ... if an agent has
learned to play tag, how can it then generalize this knowledge to make it
easier for it to learn to play hide-and-seek ... simple stuff like that ...
and then, if it has learned to play hide-and-seek, how can it then
generalize this knowledge to learn how to hide valued items so its friends
can't find them ... etc.  Simple examples of transfer learning,
admittedly... but we did sketch out specifics of how to do this sorta stuff
using PLN...
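
To sketch the flavor of it (a toy illustration only -- this is not PLN 
abduction and not OpenCog code): describe games by abstract features, index 
learned skills by the features they were learned under, and let a new game 
inherit, as priors, the skills whose features it shares.

# Hypothetical feature descriptions of three games.
games = {
    "tag":           {"chase", "avoid-being-caught", "track-agent"},
    "hide-and-seek": {"track-agent", "avoid-being-caught", "occlusion", "search"},
    "hide-objects":  {"occlusion", "search", "model-other-minds"},
}

# Skills the agent has already learned, tagged with the features they address.
learned_skills = {
    "run-toward-target":  {"chase", "track-agent"},
    "keep-distance":      {"avoid-being-caught"},
    "check-behind-cover": {"occlusion", "search"},
}

def transferable_skills(new_game):
    """Return learned skills whose feature tags overlap the new game's features."""
    features = games[new_game]
    return {skill: tags & features
            for skill, tags in learned_skills.items()
            if tags & features}

print(transferable_skills("hide-and-seek"))
# 'run-toward-target' transfers via 'track-agent', 'keep-distance' via
# 'avoid-being-caught', 'check-behind-cover' via 'occlusion'/'search'.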

This is stuff Joel may get to in 2009 in the OpenCog project, if things go
well... right now he's working on fixing up the PLN implementation...

-- Ben G





Re: [agi] Dangerous Knowledge

2008-10-01 Thread Jim Bromer
I had said:
The problem is that you are still missing what should be the main
focus of your efforts.  It's not whether or not your program does good
statistical models, or uses probability nets, or hybrid technology of
some sort, or that you have solved some mystery to analogy that was
not yet understood.
--
Ben answered:
I am getting really, really tired of a certain conversational pattern that
often occurs on this list!!

It goes like this...
Person A asks some question
[Ben goes on]...
Then, Person B says "You are focusing on the wrong thing,
which shows you don't understand the AGI problem."

Urrggghh...
--

I did not say that you do not understand the AGI problem.  I meant
that while you may gain a little traction with your project, I believe
you will be spinning your wheels until you deal directly and
effectively with the problem of how the program can learn about
ideological structure from within a highly ambiguous context.

To be candid I don't think that the emotionality of your response was
caused by "a certain conversational pattern," and it was not at all
reasonable for you to assume that I was saying that you "don't
understand the AGI problem," as you put it.  I was saying that most
people don't have any idea what I mean when I talk about things like
interrelated ideological structures in an ambiguous environment, and
that this issue was central to the contemporary problem, but that
still does not generalize to an opinion that anyone who does not
understand what I am talking about does not understand anything about
Artificial Intelligence or AGI.

It is interesting that while you made an exaggerated accusation that I
dissed you, your greatest exaggeration was actually directed at
yourself; i.e. that you don't understand the AGI problem.  At any rate,
that remark came from your mind, not mine.

If you ever change your mind, and wish to discuss this with me
sometime, let me know.
Jim Bromer




Re: [agi] Dangerous Knowledge

2008-10-01 Thread Ben Goertzel
  I was saying that most
> people don't have any idea what I mean when I talk about things like
> interrelated ideological structures in an ambiguous environment, and
> that this issue was central to the contemporary problem,



Maybe the reason people don't know what you mean is that your manner
of phrasing the issue is so unusual?

Could you elaborate the problem you refer to, perhaps using some
examples?

It's easier to explain how an AGI design would deal with a certain example
situation or issue, than how it will address some general,
hard-to-disambiguate
verbal description of a problem area...

-- Ben G





RE: [agi] Dangerous Knowledge - Update

2008-10-01 Thread John G. Rose
> From: Brad Paulsen [mailto:[EMAIL PROTECTED]
> 
> Sorry, but in my drug-addled state I gave the wrong URI for the
> Dangerous
> Knowledge videos on YouTube.  The one I gave was just to the first part
> of
> the Cantor segment.  All of the segments can be reached from the link
> below.  You can recreate this link by searching, in YouTube, on the key
> words Dangerous Knowledge.
> 
> http://www.youtube.com/results?search_query=Dangerous+Knowledge&search_type=&aq=-1&oq=
> 

Just watched this video, and I like the latter end of part 7, where they show 
Gödel's normally neat paperwork and then the sparse, sketchy pages from when he 
was trying to figure out the continuum hypothesis. And then the scene in his 
study when his hands started getting all stretched out and warped.

Let this be a lesson to people: working on the continuum hypothesis, 
incompleteness and potentially even AGI is dangerous to your health and could 
result in insanity or death. This should only be performed by qualified and 
highly trained individuals, unless of course you make a pact with Faust.

John




