Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
OK, understood...

On Dec 4, 2007 9:32 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Benjamin Goertzel wrote:
> >> Thus: building a NL parser, no matter how good it is, is of no use
> >> whatsoever unless it can be shown to emerge from (or at least fit with)
> >> a learning mechanism that allows the system itself to generate its own
> >> understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
> >> MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
> >> issue is dealt with, a NL parser will arise naturally, and any previous
> >> work on non-developmental, hand-built parsers will be completely
> >> discarded. You were trumpeting the importance of work that I know will
> >> be thrown away later, and in the mean time will be of no help in
> >> resolving the important issues.
> >
> > Richard, you discount the possibility that said NL parser will play a key
> > role in the adaptive emergence of a system that can generate its own
> > linguistic understanding.  I.e., you discount the possibility that, with the
> > right learning mechanism and instructional environment, hand-coded
> > rules may serve as part of the initial seed for a learning process that will
> > eventually generate knowledge obsoleting these initial hand-coded
> > rules.
> >
> > It's fine that you discount this possibility -- I just want to point out 
> > that
> > in doing so, you are making a bold and unsupported theoretical hypothesis,
> > rather than stating an obvious or demonstrated fact.
> >
> > Vaguely similarly, the "grammar" of child language is largely thrown
> > away in adulthood, yet it was useful as scaffolding in leading to the
> > emergence of adult language.
>
> The problem is that this discussion has drifted away from the original
> context in which I made the remarks.
>
> I do *not* discount the possibility that an ordinary NL parser may play
> a role in the future.
>
> What I was attacking was the idea that a NL parser that does a wonderful
> job today (but which is built on a formalism that ignores all the issues
> involved in getting an adaptive language-understanding system working)
> is IPSO FACTO going to be a valuable step in the direction of a full
> adaptive system.
>
> It was the linkage that I dismissed.  It was the idea that BECAUSE the
> NL parser did such a great job, therefore it has a very high probability
> of being a great step on the road to a full adaptive (etc) language
> understanding system.
>
> If the NL parser completely ignores those larger issues I am justified
> in saying that it is a complete crap shoot whether or not this
> particular parser is going to be of use in future, more complete
> theories of language.
>
> But that is not the same thing as making a blanket dismissal of all
> parsers, saying they cannot be of any use as (as you point out) seed
> material in the design of a complete system.
>
> I was objecting to Ed's pushing this particular NL parser in my face and
> insisting that I should respect it as a substantial step towards full
> AGI ... and my objection was that I find models like that all show
> and no deep substance precisely because they ignore the larger issues
> and go for the short-term gratification of a parser that works really well.
>
> So I was not taking the position you thought I was.
>
>
>
>
> Richard Loosemore
>
>
>
>
>



[agi] Re: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Benjamin Goertzel
> What makes anyone think OpenCog will be different?  Is it more
> understandable?  Will there be long-term aficionados who write
> books on how to build systems in OpenCog?  Will the developers
> have experience, or just adolescent enthusiasm?  I'm watching
> the experiment to find out.

Well, OpenCog has more than one possible development avenue
associated with it.

On the one hand, I have some quite specific AGI design ideas which
I intend to publish next year (major aspects of the Novamente AGI
design), and which are suited to implementation within OpenCog.
I believe these ideas are capable of leading to the development
of AGI at the human level and beyond (though there are many
moderate-sized research problems that must be solved along the way,
and yes, I realize the possibility that one of these blows up and
becomes a show-stopper, but I'm betting that won't happen, and
I've certainly thought about it a lot...).  Thus I believe OpenCog
has big potential in this regard, if folks choose to develop it in that
way.

On the other hand OpenCog may also be quite valuable as
a platform for the development of other folks' AGI ideas, potentially
ones quite different from my own.  I don't know what will develop,
and neither does anyone else, I would suppose...

-- Ben G



RE: [agi] None of you seem to be able ...

2007-12-04 Thread John G. Rose
> From: Benjamin Goertzel [mailto:[EMAIL PROTECTED]
> 
> As an example of a creative leap (that is speculative and may be wrong,
> but is certainly creative), check out my hypothesis of emergent
> social-psychological intelligence as related to mirror neurons and
> octonion algebras:
> 
> http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf
> 
> I happen to think the real subtlety of intelligence happens on the
> emergent level, and not on the level of the particulars of the system
> that gives rise to the emergent phenomena.  That paper conjectures some
> example phenomena that I believe occur on the emergent level of
> intelligent systems.
> 

This paper really takes the reader through a detailed walk of a really nice
application of octonionic structure to the mind.  The concept of
mirrorhouses is really creative and thought provoking, especially applied in
this way.  I like thinking about a mind in this sort of crystallographic
structure, yet there is no way I could comb through the details like this.
This type of methodology has many advantages, such as:

* being visually descriptive yet highly complex
* modular and building-block friendly
* computers love this sort of structure; it's what they do best
* an enormous amount of related math has already been worked out
* scalable, extremely extensible, systematic
* it fibrillates out to sociological systems
* etc.

Even if this phenomenon is not emergent, or is only partially emergent (I
favor "partially" at this point, as crystal-clear structure can be a
precursor of emergence), you can build AGI based on the optimal emergent
structures that the human brain might be coalescing toward in a perfect
world, and also come up with new and better ones that the human brain hasn't
gotten to yet, either by building them directly or by baking new ones into a
programmed complex system.
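
(An aside on the "math already worked out" point: the octonion algebra in
question is easy to put on a computer.  Below is a minimal Python sketch --
my own illustration, assuming nothing from the paper -- of the
Cayley-Dickson construction, which doubles reals into complexes,
quaternions, and then octonions, and exhibits the octonions' signature
non-associativity.)

def conj(x):
    # Conjugate: negate the second half of a pair; reals are self-conjugate.
    if isinstance(x, tuple):
        a, b = x
        return (conj(a), neg(b))
    return x

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    # Cayley-Dickson product: (a,b)(c,d) = (ac - d*b, da + bc*)
    if isinstance(x, tuple):
        a, b = x
        c, d = y
        return (sub(mul(a, c), mul(conj(d), b)),
                add(mul(d, a), mul(b, conj(c))))
    return x * y

def octonion(*coeffs):
    # Pack 8 real coefficients into nested pairs: an octonion is a pair of
    # quaternions, a quaternion a pair of complexes, a complex a pair of reals.
    assert len(coeffs) == 8
    def pair(xs):
        return xs[0] if len(xs) == 1 else (pair(xs[:len(xs)//2]), pair(xs[len(xs)//2:]))
    return pair(list(coeffs))

e = [octonion(*[1.0 if j == i else 0.0 for j in range(8)]) for i in range(8)]
# Non-associativity -- the property that distinguishes octonions from
# quaternions and complexes:
assert mul(mul(e[1], e[2]), e[4]) != mul(e[1], mul(e[2], e[4]))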

John
 



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
Matt,

Perhaps you are right.  

But one problem is that big Google-like compuplexes in the next five to ten
years will be powerful enough to do AGI, and they will be much better suited
to AGI search, because the physical closeness of their machines makes the
massive interconnection needed for powerful AGI far more efficient.

Ed Porter

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 9:18 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]


--- Ed Porter <[EMAIL PROTECTED]> wrote:

> >MATT MAHONEY=> My design would use most of the Internet (10^9 P2P
> nodes).
> ED PORTER=> That's ambitious.  Easier said than done unless you have a
> Google, Microsoft, or mass popular movement backing you.

It would take some free software that people find useful.  The Internet has
been transformed before.  Remember when there were no web browsers and no
search engines?  You can probably think of transformations that would make
the Internet more useful.  Centralized search is limited to a few big
players that can keep a copy of the Internet on their servers.  Google is
certainly useful, but imagine if it searched a space 1000 times larger and
if posts were instantly added to its index, without having to wait days for
its spider to find them.  Imagine your post going to persistent queries
posted days earlier.  Imagine your queries being answered by real human
beings in addition to other peers.

I probably won't be the one writing this program, but where there is a need,
I expect it will happen.


> In a message passing network, the critical parameter is the ratio of
> messages out to messages in.  The ratio cannot exceed 1 on average.
> ED PORTER=> Thanks for the info.  What do you mean by "unmaintainable"?
> 
> I don't understand why more messages coming in than going out creates a
> problem, unless most of what nodes do is relay messages, which is not
> what they do in my system.

I meant the other way, which would flood the network with duplicate
messages.  But I believe the network would be stable against this, even in
the face of spammers and malicious nodes, because most nodes would be
configured to ignore duplicates and any messages they deem irrelevant.


-- Matt Mahoney, [EMAIL PROTECTED]



RE: [agi] None of you seem to be able ...

2007-12-04 Thread Ed Porter
RICHARD LOOSEMORE> There is a high prima facie *risk* that intelligence
involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect),


ED PORTER=> Richard, "prima facie" means obvious on its face.  The above
statement and those that followed it below may be obvious to you, but it is
not obvious to a lot of us, and at least I have not seen (perhaps because of
my own ignorance, but perhaps not) any evidence that it is obvious.
Apparently Ben also does not find your position to be obvious, and Ben is no
dummy.

Richard, did you ever consider that it might be "turtles all the way
down" -- and by that I mean experiential patterns, such as those that could
be represented by Novamente atoms (nodes and links), in a
generalization/composition (gen/comp) hierarchy "all the way down"?  In such
a system each level is quite naturally derived from the levels below it by
learning from experience.  There is a lot of dynamic activity, but much of
it is quite orderly, like that in Hecht-Nielsen's Confabulation.  There is
no reason why there has to be a "GLOBAL-LOCAL DISCONNECT" of the type you
envision, i.e., one that is impossible to architect around until one has
totally explored global-local-disconnect space (just think how large an
exploration space that might be).
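
(To make the "patterns all the way down" picture concrete, here is a toy
Python sketch of one bottom-up learning step in a gen/comp hierarchy.  The
class and function names are hypothetical illustrations of the idea, not
Novamente's actual structures.)

from collections import Counter
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class PatternNode:
    parts: Tuple          # the lower-level nodes this pattern composes
    level: int

def learn_level(sequences, level, threshold=2):
    # Promote frequently co-occurring pairs of lower-level items to new
    # higher-level composition nodes, learned purely from experience.
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    return {pair: PatternNode(pair, level)
            for pair, n in counts.items() if n >= threshold}

# Level-0 "experience" is raw character sequences; "th" and "he" recur,
# so they become level-1 pattern nodes.  Re-encoding the sequences in
# terms of those nodes and repeating gives level 2, and so on upward.
level1 = learn_level(["thecat", "thedog"], level=1)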

So if you have prima facie evidence to support your claim (other than your
paper, which I read and which does not meet that standard), then present it.  If
you make me eat my words you will have taught me something sufficiently
valuable that I will relish the experience.

Ed Porter




-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 9:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Benjamin Goertzel wrote:
> On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Benjamin Goertzel wrote:
>> [snip]
>>> And neither you nor anyone else has ever made a cogent argument that
>>> emulating the brain is the ONLY route to creating powerful AGI.  The
closest
>>> thing to such an argument that I've seen
>>> was given by Eric Baum in his book "What Is
>>> Thought?", and I note that Eric has backed away somewhat from that
>>> position lately.
>> This is a pretty outrageous statement to make, given that you know full
>> well that I have done exactly that.
>>
>> You may not agree with the argument, but that is not the same as
>> asserting that the argument does not exist.
>>
>> Unless you were meaning "emulating the brain" in the sense of emulating
>> it ONLY at the low level of neural wiring, which I do not advocate.
> 
> > I don't find your nor Eric's nor anyone else's argument that
> > brain-emulation is the "golden path" very strongly convincing...
> 
> However, I found Eric's argument by reference to the compressed nature of
> > the genome more convincing than your argument via the hypothesis of
> irreducible emergent complexity...
> 
> > Sorry if my choice of words was not adequately politic.  I find your
> > argument interesting, but it's certainly just as speculative as the
> > various AGI theories you dismiss...  It basically rests on a big
> > assumption, which is that the complexity of human intelligence is
> > analytically irreducible within pragmatic computational constraints.
> > In this sense it's less an argument than a conjectural assertion,
> > albeit an admirably bold one.

Ben,

This is even worse.

The argument I presented was not a "conjectural assertion"; it made the 
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect), and

   2) Because of the unique and unusual nature of complexity there is 
only a vanishingly small chance that we will be able to find a way to 
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to 
ignore this risk and simply continue with an "engineering" approach 
(pretending that complexity is insignificant), then the *only* evidence 
we would ever get that irreducibility was preventing us from building a 
complete intelligence would be the fact that we would simply run around 
in circles all the time, wondering why, when we put large systems 
together, they didn't quite make it, and

   4) Therefore we need to adopt a "Precautionary Principle" and treat 
the problem as if irreducibility really is significant.


Whether you like it or not - whether you've got too much invested in the 
contrary point of view to admit it, or not - this is a perfectly valid 
and coherent argument, and your attempt to try to push it into some 
lesser realm of a "conjectural assertion" is profoundly insulting.




Richard Loosemore



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
The particular NL parser paper in question, Collins's "Convolution Kernels
for Natural Language"
(http://l2r.cs.uiuc.edu/~danr/Teaching/CS598-05/Papers/Collins-kernels.pdf),
is actually saying something quite important that extends way beyond parsers
and is highly applicable to AGI in general.

It actually shows that you can do something roughly equivalent to growing
neural gas (GNG) in a space with something approaching 500,000 dimensions,
but without normally having to deal with more than a few of those dimensions
at a time.  GNG is an algorithm I learned about from reading Peter Voss; it
allows one to learn to efficiently represent a distribution in a relatively
high-dimensional space in a totally unsupervised manner.  But there really
seems to be no reason why there should be any limit to the dimensionality of
the space in which Collins's algorithm works, because it does not use an
explicit vector representation, nor, if I recollect correctly, a Euclidean
distance metric, but rather a similarity metric, which is generally much
more appropriate for matching in very high-dimensional spaces.

But what he is growing are not just points representing where data has
occurred in a high-dimensional space, but sets of points that define
hyperplanes marking the boundaries between classes.  My recollection is that
this system learns automatically from both labeled data (instances of
correct parse trees) and randomly generated deviations from those instances.
His particular algorithm matches tree structures, but with modification it
would seem extendable to matching arbitrary nets.  Other versions of it
could be made to operate, like GNG, in an unsupervised manner.
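
(For readers who want the mechanism concretely: here is a minimal Python
sketch of the dynamic-programming recursion behind the Collins-Duffy tree
kernel.  The Tree class, the decay parameter lam, and the demo trees are my
own illustrative assumptions; the recursion itself is the published one.)

from dataclasses import dataclass, field
from typing import List

@dataclass
class Tree:
    label: str                              # nonterminal, POS tag, or word
    children: List["Tree"] = field(default_factory=list)

    def production(self):
        # The rewrite rule at this node, e.g. ("NP", ("DT", "NN")).
        return (self.label, tuple(c.label for c in self.children))

def nodes(t):
    yield t
    for c in t.children:
        yield from nodes(c)

def common_subtrees(n1, n2, lam=0.5):
    # C(n1, n2): weighted count of common subtrees rooted at n1 and n2.
    if not n1.children or not n2.children:   # word leaves anchor no subtrees
        return 0.0
    if n1.production() != n2.production():
        return 0.0
    prod = lam
    for c1, c2 in zip(n1.children, n2.children):
        prod *= 1.0 + common_subtrees(c1, c2, lam)
    return prod

def tree_kernel(t1, t2, lam=0.5):
    # K(T1, T2): an inner product in the implicit feature space of ALL
    # subtrees -- an astronomically high-dimensional space -- computed
    # without ever enumerating that space.
    return sum(common_subtrees(a, b, lam)
               for a in nodes(t1) for b in nodes(t2))

# Tiny demo: "the dog" vs "the cat" share the NP -> DT NN skeleton and "the".
np1 = Tree("NP", [Tree("DT", [Tree("the")]), Tree("NN", [Tree("dog")])])
np2 = Tree("NP", [Tree("DT", [Tree("the")]), Tree("NN", [Tree("cat")])])
print(tree_kernel(np1, np2))                 # 1.25 with lam = 0.5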

If you stop and think about what this is saying and generalize from it, it
provides an important possible component in an AGI tool kit.  What it shows
is not limited to parsing; it would seem applicable to virtually any
hierarchical or networked representation, including nets of semantic web RDF
triples, semantic nets, and predicate logic expressions.  At first glance it
appears it would even be applicable to kinkier net-matching algorithms, such
as augmented transition network (ATN) matching.

So if one reads this paper with a mind not only to what it specifically
shows, but to how what it shows could be extended, this paper says something
very important: that one can represent, learn, and classify things in very
high dimensional spaces -- spaces with hundreds of thousands of dimensions
or more -- and do it efficiently, provided the part of the space being
represented is sufficiently sparsely connected.

I had already assumed this before reading this paper, but the paper was
valuable to me because it provided mathematically rigorous support for my
prior models, and helped me better understand the mathematical foundations
of my own prior intuitive thinking.

It means that systems like Novamente can deal with very high dimensional
spaces relatively efficiently.  It does not mean that all processes that can
be performed in such spaces will be computationally cheap (for example,
combinatorial searches), but it means that many of them, such as GNG-like
recording of experience and simple index-based matching, can scale
relatively well in a sparsely connected world.

That is important, for those with the vision to understand.

Ed Porter

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 8:59 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

> Thus: building a NL parser, no matter how good it is, is of no use
> whatsoever unless it can be shown to emerge from (or at least fit with)
> a learning mechanism that allows the system itself to generate its own
> understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
> MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
> issue is dealt with, a NL parser will arise naturally, and any previous
> work on non-developmental, hand-built parsers will be completely
> discarded. You were trumpeting the importance of work that I know will
> be thrown away later, and in the mean time will be of no help in
> resolving the important issues.

Richard, you discount the possibility that said NL parser will play a key
role in the adaptive emergence of a system that can generate its own
linguistic understanding.  I.e., you discount the possibility that, with the
right learning mechanism and instructional environment, hand-coded
rules may serve as part of the initial seed for a learning process that will
eventually generate knowledge obsoleting these initial hand-coded
rules.

It's fine that you discount this possibility -- I just want to point out
that
in doing so, you are making a bold and unsupported theoretical hypothesis,
rather than stating an obvious or demonstrated fact.

Vaguely similarly, the "grammar" of child language is largely thrown
away in adulthood, yet it was useful as scaffolding in leading to the
emergence of adult language.

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)

> The argument I presented was not a "conjectural assertion"; it made the
> following coherent case:
>
>1) There is a high prima facie *risk* that intelligence involves a
> significant amount of irreducibility (some of the most crucial
> characteristics of a complete intelligence would, in any other system,
> cause the behavior to show a global-local disconnect), and

The above statement contains two fuzzy terms -- "high" and "significant" ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.

>2) Because of the unique and unusual nature of complexity there is
> only a vanishingly small chance that we will be able to find a way to
> assess the exact degree of risk involved, and
>
>3) (A corollary of (2)) If the problem were real, but we were to
> ignore this risk and simply continue with an "engineering" approach
> (pretending that complexity is insignificant),

The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.

Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...

> then the *only* evidence
> we would ever get that irreducibility was preventing us from building a
> complete intelligence would be the fact that we would simply run around
> in circles all the time, wondering why, when we put large systems
> together, they didn't quite make it, and

No.  Experimenting with AI systems could lead to evidence that would
support the irreducibility hypothesis more directly than that.  I doubt they
will, but it's possible.  For instance, we might discover that creating more
and more intelligent systems inevitably presents more and more complex
parameter-tuning problems, so that parameter-tuning appears to be the
bottleneck.  This would suggest that some kind of highly expensive
evolutionary or ensemble approach, as you're suggesting, might be necessary.

>4) Therefore we need to adopt a "Precautionary Principle" and treat
> the problem as if irreducibility really is significant.
>
>
> Whether you like it or not - whether you've got too much invested in the
> contrary point of view to admit it, or not - this is a perfectly valid
> and coherent argument, and your attempt to try to push it into some
> lesser realm of a "conjectural assertion" is profoundly insulting.

The form of the argument is coherent and valid; but the premises involve
fuzzy quantifiers whose values you are apparently setting by
intuition, and whose
specific values sensitively impact the truth value of the conclusion.

-- Ben



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Richard Loosemore

Benjamin Goertzel wrote:

Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows the system itself to generate its own
understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
issue is dealt with, a NL parser will arise naturally, and any previous
work on non-developmental, hand-built parsers will be completely
discarded. You were trumpeting the importance of work that I know will
be thrown away later, and in the mean time will be of no help in
resolving the important issues.


Richard, you discount the possibility that said NL parser will play a key
role in the adaptive emergence of a system that can generate its own
linguistic understanding.  I.e., you discount the possibility that, with the
right learning mechanism and instructional environment, hand-coded
rules may serve as part of the initial seed for a learning process that will
eventually generate knowledge obsoleting these initial hand-coded
rules.

It's fine that you discount this possibility -- I just want to point out that
in doing so, you are making a bold and unsupported theoretical hypothesis,
rather than stating an obvious or demonstrated fact.

Vaguely similarly, the "grammar" of child language is largely thrown
away in adulthood, yet it was useful as scaffolding in leading to the
emergence of adult language.


The problem is that this discussion has drifted away from the original 
context in which I made the remarks.


I do *not* discount the possibility that an ordinary NL parser may play 
a role in the future.


What I was attacking was the idea that a NL parser that does a wonderful 
job today (but which is built on a formalism that ignores all the issues 
involved in getting an adaptive language-understanding system working) 
is IPSO FACTO going to be a valuable step in the direction of a full 
adaptive system.


It was the linkage that I dismissed.  It was the idea that BECAUSE the 
NL parser did such a great job, therefore it has a very high probability 
of being a great step on the road to a full adaptive (etc) language 
understanding system.


If the NL parser completely ignores those larger issues I am justified 
in saying that it is a complete crap shoot whether or not this 
particular parser is going to be of use in future, more complete 
theories of language.


But that is not the same thing as making a blanket dismissal of all 
parsers, saying they cannot be of any use as (as you point out) seed 
material in the design of a complete system.


I was objecting to Ed's pushing this particular NL parser in my face and 
insisting that I should respect it as a substantial step towards full 
AGI ... and my objection was that I find models like that all show 
and no deep substance precisely because they ignore the larger issues 
and go for the short-term gratification of a parser that works really well.


So I was not taking the position you thought I was.




Richard Loosemore







RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Matt Mahoney

--- Ed Porter <[EMAIL PROTECTED]> wrote:

> >MATT MAHONEY=> My design would use most of the Internet (10^9 P2P
> nodes).
> ED PORTER=> That's ambitious.  Easier said than done unless you have a
> Google, Microsoft, or mass popular movement backing you.

It would take some free software that people find useful.  The Internet has
been transformed before.  Remember when there were no web browsers and no
search engines?  You can probably think of transformations that would make the
Internet more useful.  Centralized search is limited to a few big players that
can keep a copy of the Internet on their servers.  Google is certainly useful,
but imagine if it searched a space 1000 times larger and if posts were
instantly added to its index, without having to wait days for its spider to
find them.  Imagine your post going to persistent queries posted days earlier.
Imagine your queries being answered by real human beings in addition to
other peers.

I probably won't be the one writing this program, but where there is a need, I
expect it will happen.


> In a message passing network, the critical parameter is the ratio of
> messages out to messages in.  The ratio cannot exceed 1 on average.
> ED PORTER=> Thanks for the info.  What do you mean by "unmaintainable"?
> 
> I don't understand why more messages coming in than going out creates a
> problem, unless most of what nodes do is relay messages, which is not what
> they do in my system.

I meant the other way, which would flood the network with duplicate
messages.  But I believe the network would be stable against this, even in
the face of spammers and malicious nodes, because most nodes would be
configured to ignore duplicates and any messages they deem irrelevant.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Richard Loosemore

Benjamin Goertzel wrote:

On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Benjamin Goertzel wrote:
[snip]

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book "What Is
Thought?", and I note that Eric has backed away somewhat from that
position lately.

This is a pretty outrageous statement to make, given that you know full
well that I have done exactly that.

You may not agree with the argument, but that is not the same as
asserting that the argument does not exist.

Unless you were meaning "emulating the brain" in the sense of emulating
it ONLY at the low level of neural wiring, which I do not advocate.


I don't find your nor Eric's nor anyone else's argument that brain-emulation
is the "golden path" very strongly convincing...

However, I found Eric's argument by reference to the compressed nature of
the genome more convincing than your argument via the hypothesis of
irreducible emergent complexity...

Sorry if my choice of words was not adequately politic.  I find your argument
interesting, but it's certainly just as speculative as the various AGI theories
you dismiss...  It basically rests on a big assumption, which is that the
complexity of human intelligence is analytically irreducible within pragmatic
computational constraints.  In this sense it's less an argument than a
conjectural assertion, albeit an admirably bold one.


Ben,

This is even worse.

The argument I presented was not a "conjectural assertion"; it made the 
following coherent case:


  1) There is a high prima facie *risk* that intelligence involves a 
significant amount of irreducibility (some of the most crucial 
characteristics of a complete intelligence would, in any other system, 
cause the behavior to show a global-local disconnect), and


  2) Because of the unique and unusual nature of complexity there is 
only a vanishingly small chance that we will be able to find a way to 
assess the exact degree of risk involved, and


  3) (A corollary of (2)) If the problem were real, but we were to 
ignore this risk and simply continue with an "engineering" approach 
(pretending that complexity is insignificant), then the *only* evidence 
we would ever get that irreducibility was preventing us from building a 
complete intelligence would be the fact that we would simply run around 
in circles all the time, wondering why, when we put large systems 
together, they didn't quite make it, and


  4) Therefore we need to adopt a "Precautionary Principle" and treat 
the problem as if irreducibility really is significant.



Whether you like it or not - whether you've got too much invested in the 
contrary point of view to admit it, or not - this is a perfectly valid 
and coherent argument, and your attempt to try to push it into some 
lesser realm of a "conjectural assertion" is profoundly insulting.





Richard Loosemore




Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Benjamin Goertzel
> Thus: building a NL parser, no matter how good it is, is of no use
> whatsoever unless it can be shown to emerge from (or at least fit with)
> a learning mechanism that allows the system itself to generate its own
> understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
> MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
> issue is dealt with, a NL parser will arise naturally, and any previous
> work on non-developmental, hand-built parsers will be completely
> discarded. You were trumpeting the importance of work that I know will
> be thrown away later, and in the mean time will be of no help in
> resolving the important issues.

Richard, you discount the possibility that said NL parser will play a key
role in the adaptive emergence of a system that can generate its own
linguistic understanding.  I.e., you discount the possibility that, with the
right learning mechanism and instructional environment, hand-coded
rules may serve as part of the initial seed for a learning process that will
eventually generate knowledge obsoleting these initial hand-coded
rules.

It's fine that you discount this possibility -- I just want to point out that
in doing so, you are making a bold and unsupported theoretical hypothesis,
rather than stating an obvious or demonstrated fact.

Vaguely similarly, the "grammar" of child language is largely thrown
away in adulthood, yet it was useful as scaffolding in leading to the
emergence of adult language.

-- Ben G



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Richard Loosemore


Ed Porter wrote:

Richard,

It is not clear how valuable your 25 years of hard won learning is if
 it causes you to dismiss valuable scientific work that seems to have
 eclipsed the importance of anything I or you have published as 
"trivial exercises in public relations" without giving any reason 
whatsoever for the particular dismissal.


I welcome criticism in this forum provided it is well reasoned and 
without venom.  But to dismiss a list of examples I give to support 
an argument as "trivial exercises in public relations" without any 
justification other than the fact that in general a certain numbers 
of published papers are inaccurate and/or overblown, is every bit as 
dishonest as calling someone a liar with regard to a particular 
statement based on nothing more than the knowledge some people are 
liars.


In my past exchanges with you, sometimes your responses have been 
helpful. But I have noticed that although you are very quick to 
question me (and others), if I question you, rather than respond 
directly to my arguments you often don't respond to them at all -- 
such as your recent refusal to justify your allegation that my whole 
framework, presumably for understanding AGI, was wrong (a pretty 
insulting statement which should not be flung around without some 
justification).  Or if you do respond to challenges, you often 
dismiss them as invalid without any substantial evidence, or you 
substantially change the subject, such as by focusing on one small 
part of my argument that I have not yet fully supported, while 
refusing to acknowledge the major support I have shown for the major 
thrust of my argument.


When you argue like that there really is no purpose in continuing the
conversation.  What's the point?  Under those circumstances you're not
dealing with someone who is likely to tell you anything of worth.
Rather, you are only likely to hear lame defensive arguments from
somebody who is either incapable of properly defending or unwilling
to properly defend their arguments, and who is thus unlikely to
communicate anything of value in the exchange.


Your 25 years of experience doesn't mean squat about how much you 
truly understand AGI unless you are capable of being more 
intellectually honest, both with yourself and with others -- and 
unless you are capable of actually reasonably defending your 
understandings, head-on, against reasoned questioning and countering 
evidence.  To dismiss counter evidence cited against your arguments 
as "trivial exercises in public relations" without any specific 
justification is not a reasonable defense, and the fact that you so 
often result to such intellectually dishonest tactics to defend your

 stated understandings relating to AGI really does call into question
 the quality of those understandings.

In summary, don't go around attacking other people's statements 
unless you are willing to defend those attacks in an intellectually 
honest manner.


I confess, I would rather that I had not so quickly dismissed those
researchers you mentioned - mostly because my motivation at the time was
to dismiss the exaggerated value that *you* placed on these results.

But let me explain the reason why I still feel that it was valid to
dismiss them.

They are examples of a category of research that addresses issues that
are completely compromised by the lack of solutions to other issues.
Thus: building a NL parser, no matter how good it is, is of no use
whatsoever unless it can be shown to emerge from (or at least fit with)
a learning mechanism that allows the system itself to generate its own
understanding (or, at least, acquisition) of grammar IN THE CONTEXT OF A
MECHANISM THAT ALSO ACCOMPLISHES REAL UNDERSTANDING. When that larger
issue is dealt with, a NL parser will arise naturally, and any previous
work on non-developmental, hand-built parsers will be completely
discarded. You were trumpeting the importance of work that I know will
be thrown away later, and in the mean time will be of no help in
resolving the important issues.

Now, I am harsh about these researchers not because they in particular
were irresponsible, but because they are part of a tradition in which
everyone is looking for cheap results that superficially appear good to
peer reviewers, so they can get things published, so they can get more
research grants, so they can get higher salaries. There is an
appallingly high incidence of research that is carried out because it
fits the ideal paper-publication template, not because the work itself
addresses important issues. This is a kind of low-level academic 
corruption, and I will continue to call it what it is, even if you don't 
have the slightest idea that this corruption exists.


It was towards *that* issue that my criticism was directed.

I would have been perfectly happy to explain this to you before, but
instead of appreciating where I was coming from, you launched into a
tirade about my dishonesty and stupidity in rejecting papers.

Re: [agi] "How to tepresent things" problem

2007-12-04 Thread Richard Loosemore

Dennis Gorelik wrote:

Richard,


3) A way to represent things - and in particular, uncertainty - without
getting buried up to the eyeballs in (e.g.) temporal logics that nobody
believes in.


Conceptually, the way of representing things is described very well.
It's a neural network -- a set of nodes (concepts), where every node can be
connected to a set of other nodes.  Every connection has its own
weight.

Some nodes are connected to external devices.
For example, one node can be connected to one word in a text
dictionary (the dictionary being an external device).


Do you see any problems with such architecture?


Many, unfortunately.

Too many to list all of them.  A couple are:  you need special extra 
mechanisms to handle the difference between generic nodes and instance 
nodes (in a basic neural net there is no distinction between these two, 
so the system cannot represent even the most basic of situations), and 
you need extra mechanisms to handle the dynamic creation/assignment of 
new nodes, because new things are being experienced all the time.
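
(A schematic Python sketch of the generic/instance distinction being pointed
at here -- the class names and fields are entirely hypothetical
illustrations, not anyone's actual design:)

from dataclasses import dataclass, field
from itertools import count

_uids = count()

@dataclass
class Concept:
    # Generic node: "dog" in general.
    name: str
    instances: list = field(default_factory=list)

@dataclass
class Instance:
    # Instance node: *this* dog, in *this* situation -- a basic neural net
    # has no such second node type, which is the point being made above.
    concept: Concept
    bindings: dict = field(default_factory=dict)
    uid: int = field(default_factory=lambda: next(_uids))

def instantiate(concept, **bindings):
    # Dynamic node creation: each new experience mints a fresh instance
    # node instead of reusing the generic one.
    inst = Instance(concept, bindings)
    concept.instances.append(inst)
    return inst

dog = Concept("dog")
fido_here_now = instantiate(dog, location="park", time="t17")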


These extra mechanisms are so important that it is arguable that the 
behavior of the system is dominated by *them*, not by the mere fact that 
the design started out as a neural net.


Having said that, I believe in neural nets as a good conceptual starting 
point.


It is just that you need to figure out all that machinery - and no one 
has, so there is a "representation" problem in my previous list of problems.





Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
On Dec 4, 2007 8:38 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Benjamin Goertzel wrote:
> [snip]
> > And neither you nor anyone else has ever made a cogent argument that
> > emulating the brain is the ONLY route to creating powerful AGI.  The closest
> > thing to such an argument that I've seen
> > was given by Eric Baum in his book "What Is
> > Thought?", and I note that Eric has backed away somewhat from that
> > position lately.
>
> This is a pretty outrageous statement to make, given that you know full
> well that I have done exactly that.
>
> You may not agree with the argument, but that is not the same as
> asserting that the argument does not exist.
>
> Unless you were meaning "emulating the brain" in the sense of emulating
> it ONLY at the low level of neural wiring, which I do not advocate.

I don't find your nor Eric's nor anyone else's argument that brain-emulation
is the "golden path" very strongly convincing...

However, I found Eric's argument by reference to the compressed nature of
the genome more convincing than your argument via the hypothesis of
irreducible emergent complexity...

Sorry if my choice of words was not adequately politic.  I find your argument
interesting, but it's certainly just as speculative as the various AGI theories
you dismiss...  It basically rests on a big assumption, which is that the
complexity of human intelligence is analytically irreducible within pragmatic
computational constraints.  In this sense it's less an argument than a
conjectural assertion, albeit an admirably bold one.

-- Ben G



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
>MATT MAHONEY=> My design would use most of the Internet (10^9 P2P
nodes).
ED PORTER=> That's ambitious.  Easier said than done unless you have a
Google, Microsoft, or mass popular movement backing you.

>> ED PORTER=> I mean, what would motivate the average American, or even
the average computer geek, to turn over part of his computer to it?...
>MATT MAHONEY=> The value is the ability to post messages that can be
found by search, without having to create a website.  Information has
negative value; people will trade CPU resources for the ability to
advertise.
ED PORTER=> It sounds theoretically possible.  But actually making it
happen in a world with so much competition for mind and machine share might
be quite difficult.  Again, it is something that would probably require a
major force of the type I listed above to make it happen.


>> ED PORTER=> Are you saying that as a system becomes bigger it
naturally becomes unstable, or what?
>MATT MAHONEY=> 
When a system's Lyapunov exponent (or its discrete approximation) becomes
positive, it becomes unmaintainable.  This is solved by reducing its
interconnectivity.  For example, in software we use scope, data abstraction,
packages, protocols, etc. to reduce the degree to which one part of the
program can affect another.  This allows us to build larger programs.

In a message passing network, the critical parameter is the ratio of
messages
out to messages in.  The ratio cannot exceed 1 on average.
ED PORTER=> Thanks for the info.  What do you mean by "unmaintainable"?

I don't understand why more messages coming in than going out creates a
problem, unless most of what nodes do is relay messages, which is not what
they do in my system.

The unruly chaotic side of AGI is not something I have thought much about.
I have tried to design my system to largely avoid it.  So this is something
I don't know much about, although I have thought about net congestion a fair
amount, which can be very dynamic, and that sounds like it is related to
what you are talking about.

I have tried to design my system as a largely asynchronous messaging system
so most processes are relatively loosely linked, as browsers and servers
generally are on the internet.  As such, the major type of instability I
have worried about is that of network traffic congestion, such as if all of
a sudden many nodes want to talk to the same node, both for computer nodes
and pattern nodes.

I WOULD BE INTERESTED IN ANY THOUGHTS ON THE OTHER TYPES OF DYNAMIC
INSTABILITIES A HIERARCHICAL MEMORY SYSTEM -- WITH PROBABILISTIC INDEX-BASED
SPREADING ACTIVATION -- MIGHT HAVE.

Matt, it sounds as if, should OpenCog ever try to build a large P2P network,
"you the man".

Ed Porter


-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 7:42 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

--- Ed Porter <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> In my Mon 12/3/2007 8:17 PM post to John Rose, from which you are probably
> quoting below I discussed the bandwidth issues.  I am assuming nodes
> directly talk to each other, which is probably overly optimistic, but
still
> are limited by the fact that each node can only receive somewhere roughly
> around 100 128 byte messages a second.  Unless you have a really big P2P
> system, that just isn't going to give you much bandwidth.  If you had 100
> million P2P nodes it would.  Thus, a key issue is how many participants an
> AGI-at-Home P2P system is going to get.  

My design would use most of the Internet (10^9 P2P nodes).  Messages would
be
natural language text strings, making no distinction between documents,
queries, and responses.  Each message would have a header indicating the ID
and time stamp of the originator and any intermediate nodes through which
the
message was routed.  A message could also have attached files.  Each node
would have a cache of messages and its own policy on which messages it
decides
to keep or discard.

The goal of the network is to route messages to other nodes that store
messages with matching terms.  To route an incoming message x, it matches
terms in x to terms in stored messages and sends copies to nodes that appear
in those headers, appending its own ID and time stamp to the header of the
outgoing copies.  It also keeps a copy, so that the receiving nodes know
that it has a copy of x (at least temporarily).

The network acts as a distributed database with a distributed search
function.
 If X posts a document x and Y posts a query y with matching terms, then the
network acts to route x to Y and y to X.

> I mean, what would motivate the average American, or even the average
> computer geek, to turn over part of his computer to it?  It might not be an
> easy sell for more than several hundred or several thousand people, at least
> until it could do something cool, like index their videos for them, be a
> funny chat bot, or something like that.

Re: [agi] None of you seem to be able ...

2007-12-04 Thread Richard Loosemore

Benjamin Goertzel wrote:
[snip]

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book "What Is
Thought?", and I note that Eric has backed away somewhat from that
position lately.


This is a pretty outrageous statement to make, given that you know full 
well that I have done exactly that.


You may not agree with the argument, but that is not the same as 
asserting that the argument does not exist.


Unless you were meaning "emulating the brain" in the sense of emulating 
it ONLY at the low level of neural wiring, which I do not advocate.




Richard Loosemore



Re: [agi] Solution to "Grounding" problem

2007-12-04 Thread Richard Loosemore

Dennis Gorelik wrote:

Richard,


1) Grounding Problem (the *real* one, not the cheap substitute that
everyone usually thinks of as the symbol grounding problem).


Could you describe what the *real* grounding problem is?

It would be nice to consider an example.

Say, we are trying to build AGI for the purpose of running an intelligent
chat-bot.

What would be the grounding problem in this case?


I'll do my best.

The grounding problem has to do with exactly who is doing the 
interpreting of the AGI's internal symbols.


If the system is built in such a way that it builds its own symbols as 
part of the process of using them, then by definition it is grounded 
because it was the one that made the symbols.


But if we write down a bunch of symbols - deciding the format in which 
the symbols are represented, and stuff at least some of them with 
content - then there is a very big question about whether the mechanisms 
that browse on those symbols will actually be *using* them as if their 
meaning was the same as the meaning we originally intended.  Meaning, 
you see, is implicit in the way the symbols are used, so there is no 
particular reason why the way the symbols are actually used by the 
system should match up with the originally intended meaning that we 
impose when we look at the symbols.


The way this most often manifests itself is when the AI system delivers 
results in natural language that are simply an expression of our imposed 
meanings.


Main difficulty:  this entire problem is extremely subtle, and most 
people simply don't get what the problem is, so they think it is about 
connecting the AGI to its environment in some way.  It takes a fair bit 
of effort to get your head around the real problem (I have only sketched 
a pale shadow of it in this post, for example).


Hope that makes enough sense.



Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-04 Thread Benjamin Goertzel
> More generally, I don't perceive any readiness to recognize that  the brain
> has the answers to all the many unsolved problems of AGI  -

Obviously the brain contains answers to many of the unsolved problems of
AGI (not all -- e.g. not the problem of how to create a stable goal system
under recursive self-improvement).   However, current neuroscience does
NOT contain these answers.

And neither you nor anyone else has ever made a cogent argument that
emulating the brain is the ONLY route to creating powerful AGI.  The closest
thing to such an argument that I've seen
was given by Eric Baum in his book "What Is
Thought?", and I note that Eric has backed away somewhat from that
position lately.

> I think it
> should be obvious that AGI isn't going to happen - and none of the unsolved
> problems are going to be solved - without major creative leaps. Just look
> even at the ipod & iphone -  major new technology never happens without such
> leaps.

The above sentence is rather hilarious to me.

If the iPod and iPhone are your measure for "creative leaps", then there
have been loads and loads of major creative leaps in AGI and narrow-AI
research.

Anyway it seems to me that you're not just looking for creative leaps;
you're looking for creative leaps that match your personal intuition.
Perhaps the real problem is that your personal intuition about intelligence
is largely off-base ;-)

As an example of a creative leap (that is speculative and may be wrong, but is
certainly creative), check out my hypothesis of emergent social-psychological
intelligence as related to mirror neurons and octonion algebras:

http://www.goertzel.org/dynapsyc/2007/mirrorself.pdf

I happen to think the real subtlety of intelligence happens on the emergent
level, and not on the level of the particulars of the system that gives rise
to the emergent phenomena.  That paper conjectures some example phenomena
that I believe occur on the emergent level of intelligent systems.

Loosemore agrees with me on the importance of emergence, but he feels there
is a fundamental irreducibility that makes it pragmatically impossible to
figure out via science, math and intuition which concrete
structures/dynamics will give rise to the right emergent structures, without
doing a massive body of simulation experiments.  I think he overstates the
degree of irreducibility.

-- Ben G



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Matt Mahoney
--- Ed Porter <[EMAIL PROTECTED]> wrote:

> Matt,
> 
> In my Mon 12/3/2007 8:17 PM post to John Rose, from which you are probably
> quoting below I discussed the bandwidth issues.  I am assuming nodes
> directly talk to each other, which is probably overly optimistic, but still
> are limited by the fact that each node can only receive somewhere roughly
> around 100 128 byte messages a second.  Unless you have a really big P2P
> system, that just isn't going to give you much bandwidth.  If you had 100
> million P2P nodes it would.  Thus, a key issue is how many participants an
> AGI-at-Home P2P system is going to get.  
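
(A quick back-of-envelope check, taking the quoted 100 messages/s x 128
bytes per node as given -- these are assumed figures, not measurements:)

msgs_per_sec, msg_bytes = 100, 128
per_node = msgs_per_sec * msg_bytes            # 12,800 B/s, about 12.5 KiB/s
for n_nodes in (10**4, 10**8, 10**9):
    agg = per_node * n_nodes                   # aggregate network bandwidth
    print(f"{n_nodes:.0e} nodes -> {agg/1e9:,.1f} GB/s aggregate")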

My design would use most of the Internet (10^9 P2P nodes).  Messages would be
natural language text strings, making no distinction between documents,
queries, and responses.  Each message would have a header indicating the ID
and time stamp of the originator and any intermediate nodes through which the
message was routed.  A message could also have attached files.  Each node
would have a cache of messages and its own policy on which messages it decides
to keep or discard.

The goal of the network is to route messages to other nodes that store
messages with matching terms.  To route an incoming message x, it matches
terms in x to terms in stored messages and sends copies to nodes that appear
in those headers, appending its own ID and time stamp to the header of the
outgoing copies.  It also keeps a copy, so that the receiving nodes know that
it has a copy of x (at least temporarily).

The network acts as a distributed database with a distributed search function.
 If X posts a document x and Y posts a query y with matching terms, then the
network acts to route x to Y and y to X.
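
(A toy Python sketch of this routing rule as described -- the class names,
cache policy, and duplicate filter are my own assumptions about one way it
could work, not a specification:)

import time
from dataclasses import dataclass, field

@dataclass
class Message:
    text: str
    header: list = field(default_factory=list)  # [(node_id, timestamp), ...]

    def terms(self):
        return set(self.text.lower().split())

class Node:
    def __init__(self, node_id, cache_size=1000):
        self.id = node_id
        self.cache = []          # stored messages, newest last
        self.cache_size = cache_size
        self.seen = set()        # duplicate filter: the key to stability

    def route(self, msg, network):
        if msg.text in self.seen:            # ignore duplicates
            return
        self.seen.add(msg.text)
        # Find nodes that previously handled messages sharing terms with x.
        targets = set()
        for stored in self.cache:
            if msg.terms() & stored.terms():
                targets.update(nid for nid, _ in stored.header)
        # Keep a copy, so peers that later match on x can find us.
        self.cache = (self.cache + [msg])[-self.cache_size:]
        # Forward copies with our own ID and timestamp appended.
        for nid in targets - {self.id}:
            fwd = Message(msg.text, msg.header + [(self.id, time.time())])
            network[nid].route(fwd, network)

# Usage: X (node 0) posts document x; a later query y with matching terms
# is routed back toward X.
net = {i: Node(i) for i in range(3)}
net[1].route(Message("octonion algebra paper", [(0, time.time())]), net)
net[1].route(Message("algebra query", [(2, time.time())]), net)  # -> node 0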

> I mean, what would motivate the average American, or even the average
> computer geek, to turn over part of his computer to it?  It might not be an easy
> sell for more than several hundred or several thousand people, at least
> until it could do something cool, like index their videos for them, be a
> funny chat bot, or something like that.

The value is the ability to post messages that can be found by search, without
having to create a website.  Information has negative value; people will trade
CPU resources for the ability to advertise.

> In addition to my last email, I don't understand what your were saying below
> about complexity.  Are you saying that as a system becomes bigger it
> naturally becomes unstable, or what?

When a system's Lyapunov exponent (or its discrete approximation) becomes
positive, it becomes unmaintainable.  This is solved by reducing its
interconnectivity.  For example, in software we use scope, data abstraction,
packages, protocols, etc. to reduce the degree to which one part of the
program can affect another.  This allows us to build larger programs.

In a message passing network, the critical parameter is the ratio of messages
out to messages in.  The ratio cannot exceed 1 on average.  Each node can have
its own independent policy of prioritizing messages, but will probably send
messages at a nearly constant maximum rate regardless of the input rate.  This
reaches equilibrium at a ratio of 1, but it would also allow rare but
"important" messages to propagate to a large number of nodes.  All critically
balanced complex systems are subject to rare but significant events, for
example software (state changes and failures), evolution (population
explosions, plagues, and mass extinctions), and gene regulatory networks (cell
differentiation).
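
A toy simulation, under made-up arrival statistics, of the equilibrium claim:
each node emits at a fixed maximum rate no matter how bursty its input, so
the network-wide out/in ratio settles near 1:

import random

def simulate(nodes=100, steps=1000, max_out=5):
    queues = [[] for _ in range(nodes)]
    sent = received = 0
    for _ in range(steps):
        for q in queues:
            # Bursty arrivals whose long-run mean equals max_out.
            for _ in range(random.randint(0, 2 * max_out)):
                q.append(random.random())        # random "importance"
                received += 1
            # Emit the highest-priority messages at a fixed maximum rate.
            q.sort(reverse=True)
            sent += len(q[:max_out])
            del q[:max_out]
    return sent / received

print(simulate())   # approaches 1 from below as queues absorb the bursts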


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72111983-b0ec39


Re: [agi] None of you seem to be able ...

2007-12-04 Thread Mike Tintner


Dennis:
MT: none of you seem able to face this to my mind obvious truth.

Who do you mean by "you" in this context?
Do you think that everyone here agrees with Matt on everything?

Quite the opposite is true -- almost every AI researcher has his own
unique set of beliefs.


I'm delighted to be corrected, if wrong. My hypothesis was that in 
processing ideas - especially in searching for analogies - the brain will 
search through v. few examples in any given moment, all or almost all of 
them relevant, whereas computers will search blindly through vast numbers. 
(I'm just reading a neuroeconomics book which puts the ratio of computer 
communication speed to that of the brain at 30 million to one.) It seems to 
me that the brain's principles of search are fundamentally different to 
those of computers. My impression is that "none of you are able to face" 
that particular truth - correct me.


More generally, I don't perceive any readiness to recognize that the brain 
has the answers to all the many unsolved problems of AGI - answers which 
mostly if not entirely involve *very different kinds* of computation. I 
believe, for example, that the brain extensively uses direct shape-matching/ 
mappings to compare - and only some new form of analog computation will be 
able to handle that.  I don't see anyone who's prepared for that kind of 
creative leap - for revolutionary new kinds of hardware and software.  In 
general, everyone seems to be starting from the materials that exist, and 
praying to God that minor adaptations will work. (You too, no?) Even Richard, 
who just possibly may agree with me on the importance of emulating the 
brain, opines that the brain uses massive parallel computation, as above - 
because, I would argue, that's what fits his materials - that's what he 
*wants*, not knows, to be true.  I've argued about this with Ed - I think it 
should be obvious that AGI isn't going to happen - and none of the unsolved 
problems are going to be solved - without major creative leaps. Just look 
at the iPod & iPhone - major new technology never happens without such 
leaps. Whom do you see as a creative "high-jumper" here - even in their 
philosophy?








-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72098704-c6974e


Re: [agi] None of you seem to be able ...

2007-12-04 Thread Matt Mahoney
--- Dennis Gorelik <[EMAIL PROTECTED]> wrote:
> For example, I disagree with Matt's claim that AGI research needs
> special hardware with massive computational capabilities.

I don't claim you need special hardware.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72062645-1ca7c4


[agi] RE: A global approach to AI in virtual, artificial and real worlds

2007-12-04 Thread Ed Porter
Ken,

Wow.  I was going to say, this is one of the most interesting posts I have
read on the AGI list in a while, until I realized it wasn't on the AGI list.
Too bad.  I have copied this response and your original email (below) to the
AGI list to share the inspiration.

In the following I have copied certain parts of your post and followed them
by questions or comments.

>KEN LAWS=> And much of the advanced robotic planning software
developed at NASA Ames is based on particle filters,
a method of representing probability distributions
as they pass through various nonlinear transformations.
(It remains to be seen whether any of this software
will find use in actual missions, but I'm betting it will
be used in the next Mars rovers.)

ED PORTER=> Probability particle filters sound cool.  I assume this means
you only consider or transmit information about probability (or
probabilistic-implication) distributions, or changes in such distributions,
that exceed a certain concentration in a given portion of space/time, sending
it to the relevant locations in space/time.  Is that correct?  And what sort
of non-linear transforms are you talking about?
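
For readers unfamiliar with the technique, here is a minimal particle-filter
update step in Python; the dynamics function and noise model are made-up
placeholders, not anything from the NASA work Ken describes:

import math, random

def pf_step(particles, weights, observation, obs_noise=0.5):
    # 1. Pass each particle through a (made-up) nonlinear transform.
    particles = [math.sin(x) + 0.1 * x + random.gauss(0, 0.1)
                 for x in particles]
    # 2. Reweight by how well each particle explains the observation.
    weights = [w * math.exp(-(x - observation) ** 2 / (2 * obs_noise ** 2))
               for x, w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # 3. Resample so particles concentrate where the probability mass is.
    particles = random.choices(particles, weights=weights, k=len(particles))
    return particles, [1.0 / len(particles)] * len(particles)

particles = [random.uniform(-2, 2) for _ in range(1000)]
weights = [1.0 / 1000] * 1000
particles, weights = pf_step(particles, weights, observation=0.8)

The particle cloud itself is the representation of the distribution, which is
why arbitrary nonlinear transformations pose no special difficulty.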

 
>KEN LAWS=> Artificial neural networks, like humans, have a remarkable
ability to deal with noise inputs and under-constrained models,
but the learning is very slow.  That's why evolution has
provided us with a priori brain structuring, instead of
a tabula rasa mind.  It makes the learning tractable.

ED PORTER=> Other than the way sensory, homeostatic, body-sensation, and
other info is mapped into our brains, the cortico-basal-ganglia-thalamic
feedback loop, the cortico-cerebellar-thalamic feedback loop, and the other
pre-designed plug-and-play interfaces to the reptile brain (all of which
establish a certain type of architecture and control structure), what are
you talking about?

I would be very interested in knowing what types of constraint, other than
these basic architectural constraints, are involved.

I just attended a lecture this weekend where a Harvard researcher on the
unconscious mind said that one-day-old babies have been shown to be able to
mimic a few basic facial expressions, such as sticking out their tongue and
putting their mouth into a small circle, as when saying "who".  This is hard
to understand, because one would imagine that at this age a child fresh from
the fog of the womb would not have had time to build up the visual patterns
enabling it to recognize facial features, and furthermore would not have
had time to map the correspondence between that blob of pink sticking out of
the hole in somebody's face and the baby's own tongue, or own mouth.  (I
don't know how much evidence this study has.)

I have not had anybody explain to me how such instinctual programming could
be represented in advance of the learned, experientially derived patterns
out of which most mind patterns are built.  The one exception is Sam
Adams's explanation, in an off-stage discussion at the Singularity
Conference, of how newborns are designed to visually focus on the female
areola, because tests have shown their visual system is pre-wired to detect
circular patterns.

>KEN LAWS=>For those who prefer fine-scale brain/mind modeling,
look into the decades of theoretical and simulation work
by the SOAR community, and by the APEX community,
for human sensor/manipulator learning simulations.

ED PORTER=> I haven't read about SOAR for ten years.  It struck me as a
generalized expert system (if-then rules), but one with a relatively
enlightened goal structure and learning structure for an expert system.  Has
it grown into a real contender for human-level AGI, and what sorts of tasks
is it currently actually capable of?  Re APEX, I have never heard of it.
Have you any good URLs summarizing the nature and capabilities of each?

>KEN LAWS=>
 ...

Early pattern recognition researchers had high hopes
for statistical learning, but eventually realized
that the magic is almost always in feature extraction
rather than the statistical back end.  Represent a problem
well and it may be easy to solve; badly and you'll need
more computing power than you can afford.
..

ED PORTER=> I think most intelligent AGI models envision a system that
has many representations which compete for existence based on usefulness.
This is one way of addressing this problem.  Another is to understand that
for certain complex problems, such as those we who are trying to design AGI
often face, part of the problem is creating the proper novel representation,
and that can involve a lot of trial and error and exploration, which
hopefully tends to build up patterns representing partial or not-quite-right
solutions that over time probabilistically increase the chance of
synthesizing a better representation.
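
Ken's point about representation can be made concrete with a standard toy
example (nothing specific to the systems discussed here): XOR is unlearnable
for a linear-threshold model on raw inputs, but trivial once the
representation includes a single engineered feature:

# XOR with a linear-threshold model: no weights on the raw inputs
# (x1, x2) alone can separate the classes, but adding the engineered
# feature x1*x2 makes the problem linearly solvable.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def predict(x1, x2):
    w1, w2, w3, b = 1.0, 1.0, -2.0, -0.5   # hand-picked weights
    return int(w1 * x1 + w2 * x2 + w3 * (x1 * x2) + b > 0)

assert all(predict(*x) == y for x, y in data)   # classifies perfectly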

The Novamente-OpenCog approach should be able to use both of these methods
to find proper representations, although the system should be biased toward
learning how to learn, whi

Re: [agi] Solution to "Grounding" problem

2007-12-04 Thread Mike Tintner


Dennis: >> 1) Grounding Problem (the *real* one, not the cheap substitute
that everyone usually thinks of as the symbol grounding problem).

Say, we are trying to build AGI for the purpose of running an intelligent
chat-bot.

What would be the grounding problem in this case?


Example: understanding:

"Bush walks like a cowboy, doesn't he?"
"Dennis Gorelik is v. handsome, no?"
"You're getting v. emotional about this"


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=72028532-3f18a5


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread John G. Rose
> From: Ed Porter [mailto:[EMAIL PROTECTED]
> 
> But even with a current 32-bit PC with say 4G of RAM you should be able
> to build an AGI that would be a meaningful proof of concept.  Let's say
> 3G is for representation, at say 60 bytes per atom (less than my usual
> 100 bytes/atom because of using 32-bit pointers); that would allow you
> roughly 50 million atoms.  Over 1 million seconds (very roughly two
> weeks 24/7) that would allow an average of 50 atoms a second of
> representation.  Of course your short-term memory would record at a much
> higher frequency, and over time more and more of your representation
> would go into models rather than episodic recording.  But as this
> happened the vocabulary of patterns would grow, and thus one atom, on
> average, would be able to represent more.
> But it seems to me such an AGI should be able to have meaningful world
> knowledge about certain simple worlds, or certain simple subparts of the
> world.  For example, it should be able to have a pretty good model of
> the world of many early video games, such as pong and perhaps even
> pac-man (it's been so long since I've seen pac-man I don't know how
> complex it is, but I am assuming 50 million atoms, many of which, over
> time, would represent complex patterns, would be able to capture most of
> the meaningful generalizations of pac-man, including its control
> mechanisms and the results they cause).


Yes, I can imagine this. But how much information would be in each 60-byte
atom? Is it a pointer to a pattern stored on disk, or is it some sort of
index, or is it a portion of a pattern, or is it a full pattern in a simple
pac-man-type world?
 

> As I said in an earlier email, if we want AGI-at-Home to catch on it
> would be valuable to think of some sort of application that would either
> inspire through importance or entice by usefulness or amusement, to
> cause people to let it use a substantial part of their machine cycles.


Well, I can't elaborate publicly, but I actually have this application
running, still in pre-alpha mode... ahh... but I have to sell this thing,
enabling me to buy R&D time to potentially convert it to a proto-AGI... so no
open source on that one :(

BUT there are many other applications that could be the delivery mechanism.
There are a number of ways to do it... one way was discussed earlier where
you sell your PC resources. That is a good idea!

 
> You mention an interest in intelligent indexing.  Of course,
> hierarchical memory provides a fairly good form of intelligent indexing,
> in the sense that it automatically promotes indexing through learned
> combinations of indices, and can easily be made to have probabilistic
> and importance weights on its index links to more efficiently allocate
> index activations.
> 
> How does your intelligent indexing work?

Well, I can describe it briefly. There are two basic types of virtual
indexing; for the actual disk-based indexing I'm still trying to use a DBMS,
since they do it so well. The first type is based on algebraic structure
decomposition. I see everything as algebraic structure; an AGI computer can
do the same, but way better. When everything is converted to algebraic
structure, things become very index-friendly -- in fact so friendly it looks
like many things collapse or telescope down. The other type of indexing,
which I just started working on, is CA-based universal symbolistic
generation/indexing. Algebraic structure is good for the "skeltoidal" part,
but you need some filler, and CAs seem like they can do the trick. The thing
with CAs is that they can be indexed based on uncalculated values. If a CA
structure is so darn complex, why waste the cycles calculating it? CAs have
infinite symbolistic properties, of which only a portion need be calculated
(take up resources). The linking of the algebraic structure indexing with
the CA indexing I'm trying to smooth out with group semiautomata, but a lot
of magic still happens there :)

So that's it without getting too into details. Very primitive still ...
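
A generic illustration of the "uncalculated values" idea -- not John's
actual design, which is proprietary and only sketched above -- is lazy
evaluation of a 1-D cellular automaton, where a query computes only the
triangle of ancestor cells it actually depends on:

# Lazily evaluate an elementary CA: each queried cell is computed on
# demand from its three parents and memoized; the rest of the (infinite)
# grid is never touched.
from functools import lru_cache

RULE = 110  # any elementary CA rule number

def initial(col):
    return 1 if col == 0 else 0   # single live cell at the origin

@lru_cache(maxsize=None)
def cell(row, col):
    """Value of the CA at (row, col), computed only when asked for."""
    if row == 0:
        return initial(col)
    neighborhood = (cell(row - 1, col - 1) << 2 |
                    cell(row - 1, col) << 1 |
                    cell(row - 1, col + 1))
    return (RULE >> neighborhood) & 1

print(cell(20, 3))   # touches ~a triangle of ancestors, not the whole grid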

John


> 
> -Original Message-
> From: John G. Rose [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, December 04, 2007 2:17 PM
> To: agi@v2.listbox.com
> Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI
> research]
> 
> > From: Ed Porter [mailto:[EMAIL PROTECTED]
> > John,
> >
> > I am sure there is interesting stuff that can be done.  It would be
> > interesting just to see what sort of an agi could be made on a PC.
> 
> Yes it would be interesting to see what could be done on a small cluster
> of
> modern server grade computers. I like to think about the newer Penryn
> 45nm,
> SSE4, quadcore quadproc servers with lots of FB DDR3 800mhz RAM running
> 64
> bit OS (sorry I prefer coding in Windows) using standard gigabit
> Ethernet
> quad NICs, with solid state drives, and 15,000 RPM SAS for the slower
> stuff,
> and take maybe 10 of these servers. There HAS to be enough resource
> there
> to get some small prototype going.
> 
> 

[agi] "How to tepresent things" problem

2007-12-04 Thread Dennis Gorelik
Richard,

> 3) A way to represent things - and in particular, uncertainty - without
> getting buried up to the eyeballs in (e.g.) temporal logics that nobody
> believes in.

Conceptually, the way of representing things is already described very well.
It's a neural network -- a set of nodes (concepts), where every node can be
connected to a set of other nodes. Every connection has its own weight.

Some nodes are connected to external devices.
For example, one node can be connected to one word in a text dictionary
(the dictionary being an external device).


Do you see any problems with such an architecture?
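
A minimal sketch of the architecture described above, assuming nothing about
learning rules or activation dynamics -- just named nodes, weighted
connections, and bindings to external devices:

# Concept nodes with weighted links; some nodes are bound to an
# "external device" (here, a word in a text dictionary).
class Node:
    def __init__(self, name, device=None):
        self.name = name
        self.device = device          # e.g. a dictionary word, or None
        self.links = {}               # other Node -> connection weight

    def connect(self, other, weight):
        self.links[other] = weight

cat = Node("cat", device="cat")       # grounded in a dictionary word
pet = Node("pet")
animal = Node("animal")
cat.connect(pet, 0.8)
cat.connect(animal, 0.9)
print([(n.name, w) for n, w in cat.links.items()])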


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71970053-638180


[agi] Solution to "Grounding" problem

2007-12-04 Thread Dennis Gorelik
Richard,

> 1) Grounding Problem (the *real* one, not the cheap substitute that
> everyone usually thinks of as the symbol grounding problem).

Could you describe what the *real* grounding problem is?

It would be nice to consider an example.

Say, we are trying to build AGI for the purpose of running an intelligent
chat-bot.

What would be the grounding problem in this case?


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71959916-5302dd


[agi] None of you seem to be able ...

2007-12-04 Thread Dennis Gorelik
Mike,

> Matt::  The whole point of using massive parallel computation is to do the
> hard part of the problem.

> The whole idea of massive parallel computation here, surely has to be wrong.
> And yet none of you seem able to face this to my mind obvious truth.

Who do you mean by "you" in this context?
Do you think that everyone here agrees with Matt on everything?
:-)

Quite the opposite is true -- almost every AI researcher has his own
unique set of beliefs. Some beliefs are shared with one set of
researchers, others with another set. Some beliefs may even be
unique.

For example, I disagree with Matt's claim that AGI research needs
special hardware with massive computational capabilities.

However, I agree with Matt on quite a large set of other issues.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71955617-a244b4


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
John,

As you say, the hardware is just going to get better and better.  In five
years the PCs of most of the people on this list will probably have at
least 8 cores and 16 GB of RAM.

But even with a current 32-bit PC with say 4G of RAM you should be able to
build an AGI that would be a meaningful proof of concept.  Let's say 3G is
for representation, at say 60 bytes per atom (less than my usual 100
bytes/atom because of using 32-bit pointers); that would allow you roughly
50 million atoms.  Over 1 million seconds (very roughly two weeks 24/7) that
would allow an average of 50 atoms a second of representation.  Of course
your short-term memory would record at a much higher frequency, and over
time more and more of your representation would go into models rather than
episodic recording.  But as this happened the vocabulary of patterns would
grow, and thus one atom, on average, would be able to represent more.

But it seems to me such an AGI should be able to have meaningful world
knowledge about certain simple worlds, or certain simple subparts of the
world.  For example, it should be able to have a pretty good model of the
world of many early video games, such as pong and perhaps even pac-man (it's
been so long since I've seen pac-man I don't know how complex it is, but I
am assuming 50 million atoms, many of which, over time, would represent
complex patterns, would be able to capture most of the meaningful
generalizations of pac-man, including its control mechanisms and the results
they cause).
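
Ed's arithmetic, spelled out using his own assumptions (3G for
representation, 60 bytes per atom, a million seconds of operation):

ram_for_representation = 3 * 2**30      # 3 GB, per Ed's assumption
bytes_per_atom = 60                     # down from 100 via 32-bit pointers
atoms = ram_for_representation // bytes_per_atom
seconds = 1_000_000                     # very roughly two weeks of 24/7
print(atoms)                            # ~53.7 million atoms
print(atoms / seconds)                  # ~54 atoms/second average rate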

As I said in an earlier email, if we want AGI-at-Home to catch on it would
be valuable to think of some sort of application that would either inspire
through importance or entice by usefulness or amusement, to cause people to
let it use a substantial part of their machine cycles.

You mention an interest in intelligent indexing.  Of course, hierarchical
memory provides a fairly good form of intelligent indexing, in the sense
that it automatically promotes indexing through learned combinations of
indices, and can easily be made to have probabilistic and importance
weights on its index links to more efficiently allocate index activations.

How does your intelligent indexing work?

Ed Porter


-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 2:17 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

> From: Ed Porter [mailto:[EMAIL PROTECTED]
> John,
> 
> I am sure there is interesting stuff that can be done.  It would be
> interesting just to see what sort of an agi could be made on a PC.

Yes, it would be interesting to see what could be done on a small cluster of
modern server-grade computers. I like to think about the newer Penryn 45nm,
SSE4, quad-core quad-proc servers with lots of FB DDR3 800MHz RAM running a
64-bit OS (sorry, I prefer coding in Windows), using standard gigabit
Ethernet quad NICs, with solid state drives and 15,000 RPM SAS for the
slower stuff, and take maybe 10 of these servers. There HAS to be enough
resource there to get some small prototype going.

And look at next year's 8-core Nehalem procs coming out...

Interserver messaging should make heavy use of IP multicasting. Then another
messaging channel with the new USB 3.0... Supposedly USB 3.0 is 4.8
gigabits.


> I would be interested in your ideas for how to make a powerful AGI
> without a vast amount of interconnect.  The major schemes I know about
> for reducing interconnect involve allocating what interconnect you have
> to the links with the highest probability or importance, varying those
> measures of probability and importance in a context-specific way, and
> being guided by prior similar experiences.

Well, I actually don't have the theory far enough along to calculate
interconnect metrics. But I try to minimize interconnect through storage
structure: what gets stored, how it gets stored, where it's stored, how
systems are modeled, what a model is, what a system of models is, how
systems of models are stored... don't store dupes, store diffs... mixing
code and data, collapsing data into code -- what is code and what is data?
Basically a lot of intelligent indexing, like real intelligent indexing...

I'm working on using CAs as universal symbolistic indexers and generators --
IOW exploring a theory of uncalculated precalcs for computational-complexity
indexing using CAs, in order to control uncertainty and manage complexity...

Lots of addicting brain-candy stuff...

John



 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71938578-e534ed

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread John G. Rose
> From: Ed Porter [mailto:[EMAIL PROTECTED]
> John,
> 
> I am sure there is interesting stuff that can be done.  It would be
> interesting just to see what sort of an agi could be made on a PC.

Yes, it would be interesting to see what could be done on a small cluster of
modern server-grade computers. I like to think about the newer Penryn 45nm,
SSE4, quad-core quad-proc servers with lots of FB DDR3 800MHz RAM running a
64-bit OS (sorry, I prefer coding in Windows), using standard gigabit
Ethernet quad NICs, with solid state drives and 15,000 RPM SAS for the
slower stuff, and take maybe 10 of these servers. There HAS to be enough
resource there to get some small prototype going.

And look at next year's 8-core Nehalem procs coming out...

Interserver messaging should make heavy use of IP multicasting. Then another
messaging channel with the new USB 3.0... Supposedly USB 3.0 is 4.8
gigabits.


> I would be interested in your ideas for how to make a powerful AGI
> without a vast amount of interconnect.  The major schemes I know about
> for reducing interconnect involve allocating what interconnect you have
> to the links with the highest probability or importance, varying those
> measures of probability and importance in a context-specific way, and
> being guided by prior similar experiences.

Well, I actually don't have the theory far enough along to calculate
interconnect metrics. But I try to minimize interconnect through storage
structure: what gets stored, how it gets stored, where it's stored, how
systems are modeled, what a model is, what a system of models is, how
systems of models are stored... don't store dupes, store diffs... mixing
code and data, collapsing data into code -- what is code and what is data?
Basically a lot of intelligent indexing, like real intelligent indexing...

I'm working on using CAs as universal symbolistic indexers and generators --
IOW exploring a theory of uncalculated precalcs for computational-complexity
indexing using CAs, in order to control uncertainty and manage complexity...

Lots of addicting brain-candy stuff...

John



 

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71921419-8e0002


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
Richard,

It is not clear how valuable your 25 years of hard-won learning is if it
causes you to dismiss valuable scientific work -- work that seems to have
eclipsed the importance of anything I or you have published -- as "trivial
exercises in public relations" without giving any reason whatsoever for that
particular dismissal.

I welcome criticism in this forum provided it is well reasoned and without
venom.  But to dismiss a list of examples I gave to support an argument as
"trivial exercises in public relations", with no justification other than
the fact that a certain number of published papers are in general inaccurate
and/or overblown, is every bit as dishonest as calling someone a liar with
regard to a particular statement based on nothing more than the knowledge
that some people are liars.

In my past exchanges with you, sometimes your responses have been helpful.
But I have noticed that although you are very quick to question me (and
others), if I question you, rather than respond directly to my arguments you
often don't respond to them at all -- such as your recent refusal to justify
your allegation that my whole framework, presumably for understanding AGI,
was wrong (a pretty insulting statement, which should not be flung around
without some justification).  Or if you do respond to challenges, you often
dismiss them as invalid without any substantial evidence, or you
substantially change the subject, such as by focusing on one small part of
my argument that I have not yet fully supported, while refusing to
acknowledge the major support I have shown for the major thrust of my
argument.

When you argue like that there really is no purpose in continuing the
conversation.  What's the point?  Under those circumstances you're not
dealing with someone who is likely to tell you anything of worth.  Rather,
you are only likely to hear lame defensive arguments from somebody who is
either incapable of properly defending, or unwilling to properly defend, his
arguments, and who is thus unlikely to communicate anything of value in the
exchange.

Your 25 years of experience don't mean squat about how much you truly
understand AGI unless you are capable of being more intellectually honest,
both with yourself and with others -- and unless you are capable of actually
defending your understandings, head-on, against reasoned questioning and
countering evidence.  To dismiss counter-evidence cited against your
arguments as "trivial exercises in public relations" without any specific
justification is not a reasonable defense, and the fact that you so often
resort to such intellectually dishonest tactics to defend your stated
understandings relating to AGI really does call into question the quality of
those understandings.

In summary, don't go around attacking other people's statements unless you
are willing to defend those attacks in an intellectually honest manner.

Ed Porter

P.S. This is my last response in this thread.  You can have the last say if
you so wish.  

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 9:58 AM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed Porter wrote:
> 
>> RICHARD LOOSEMORE=> You have no idea of the context in which I made
> that sweeping dismissal. 
>   If you have enough experience of research in this area you will know 
> that it is filled with bandwagons, hype and publicity-seeking.  Trivial 
> models are presented as if they are fabulous achievements when, in fact, 
> they are just engineered to look very impressive but actually solve an 
> easy problem.  Have you had experience of such models?  Have you been 
> around long enough to have seen something promoted as a great 
> breakthrough even though it strikes you as just a trivial exercise in 
> public relations, and then watch history unfold as the "great 
> breakthrough" leads to  absolutely nothing at all, and is then 
> quietly shelved by its creator?  There is a constant ebb and flow of 
> exaggeration and retreat, exaggeration and retreat.  You are familiar 
> with this process, yes?
> 
> ED PORTER=> Richard, the fact that a certain percent of theories and
> demonstrations are false and/or misleading does not give you the right to
> dismiss any theory or demonstration that counters your position in an
> argument as 
> 
>   "trivial exercises in public relations, designed to look
> really impressive, and filled with hype designed to attract funding, which
> actually accomplish very little"
> 
> without at least giving some supporting argument for your dismissal.
> Otherwise you could deny any aspect of scientific, mathematical, or
> technological knowledge, no matter how sound, that proved inconvenient to
> whatever argument you were making.  
> 
> There are people who argue in that dishonest fashion, but it is
> questionable how much time one should spend conversing with them.  Do
> you want to be such a person?

[agi] A question for J Storrs Hall re SIGMA's

2007-12-04 Thread Mike Tintner

Josh,

A pen-pal  - an AI/robotics guy - has been waxing enthusiastic about your 
book. For him:


"the basic idea in his book is to devise what is essentially the "basic 
computational unit - BCU" [this is my term, btw] that can be extended 
indefinitely horizontally [in modules], and vertically [in hierarchical 
levels] to larger and larger systems, in order to be able to handle general 
AI problems. The problem is to get past the roadblock of scalability that 
all previous AI systems have hit.


He calls his BCU thingie a SIGMA = sigma servo, which is an IAM 
[interpolating associative memory] and a controller. You can spawn these 
things as needed to handle larger problems. SIGMAs higher in the hierarchy 
will deal with higher-level abstractions by taking outputs from SIGMAs lower 
down. You can see the influence of object oriented programming here, and 
also parallels to real brain organization.


He also mentions how the SIGMA would perform the "analogical quadrature" 
operation of Hofstadter's Copycat system, which I'm not familiar with. I'm 
not sure how well this scheme would work, but thought I'd mention it to you"


If it's not too much trouble - which it may be - perhaps you could expand a 
little on SIGMAs, with an example or two, their importance as you see it, 
and any links for further reading? Many thanks - and comments from anyone 
else also gratefully received. 
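
Not having read the book, one can only guess at the flavor of an
interpolating associative memory; a common reading is distance-weighted
interpolation over stored input/output pairs, sketched below. The controller
half of the SIGMA is omitted, and none of this is necessarily Hall's actual
design:

# Guess at an IAM: recall by inverse-distance interpolation over
# stored (input, output) pairs with numeric values.
def iam_recall(memory, query, eps=1e-9):
    weights = [1.0 / (abs(x - query) + eps) for x, _ in memory]
    total = sum(weights)
    return sum(w * y for w, (_, y) in zip(weights, memory)) / total

memory = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
print(iam_recall(memory, 1.5))   # ~2.57, pulled toward the nearest pairs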



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71829167-e4dda4


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Richard Loosemore

Ed Porter wrote:



RICHARD LOOSEMORE=> You have no idea of the context in which I made
that sweeping dismissal. 
  If you have enough experience of research in this area you will know 
that it is filled with bandwagons, hype and publicity-seeking.  Trivial 
models are presented as if they are fabulous achievements when, in fact, 
they are just engineered to look very impressive but actually solve an 
easy problem.  Have you had experience of such models?  Have you been 
around long enough to have seen something promoted as a great 
breakthrough even though it strikes you as just a trivial exercise in 
public relations, and then watch history unfold as the "great 
breakthrough" leads to  absolutely nothing at all, and is then 
quietly shelved by its creator?  There is a constant ebb and flow of 
exaggeration and retreat, exaggeration and retreat.  You are familiar 
with this process, yes?


ED PORTER=> Richard, the fact that a certain percent of theories and
demonstrations are false and/or misleading does not give you the right to
dismiss any theory or demonstration that counters your position in an
argument as 


"trivial exercises in public relations, designed to look
really impressive, and filled with hype designed to attract funding, which
actually accomplish very little"

without at least giving some supporting argument for your dismissal.
Otherwise you could deny any aspect of scientific, mathematical, or
technological knowledge, no matter how sound, that proved inconvenient to
whatever argument you were making.  


There are people who argue in that dishonest fashion, but it is questionable
how much time one should spend conversing with them.  Do you want to be such
a person?

The fact that one of the pieces of evidence you so rudely dismissed is a
highly functional program that has been used by many other researchers,
shows the blindness with which you dismiss the arguments of others.


Ed,

You are misunderstanding this situation.  You repeatedly make extremely 
strong statements about the subject matter of AGI, but you do not have 
enough knowledge of the issues to understand the replies you get.


Now, there is nothing wrong with not understanding, but what happens 
next is quite intolerable:  you argue back as if your opinion was just 
as valid as the hard-won knowledge that someone else took 25 years to 
acquire.


Not only that, but you go on to sprinkle your comments with instructions 
to that person to "open their mind", as if they were somehow being 
closed-minded.


AND not only that, but when I display some impatience with this behavior 
and decline to write a massive essay to explain stuff that you should be 
learning for yourself, you decide to fling out accusations, such as that 
I am arguing in a "dishonest" manner, or that I am dismissing an 
argument or theory just because it counters my position.


If you look at the broad sweep of my postings on these lists you will 
notice that I spend much more time than I should writing out 
explanations when people say that they find something I wrote confusing 
or incomplete.  When someone starts behaving rudely, however, I lose 
patience.  What you are experiencing now is lost patience, that is all.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71815518-2fa3ba


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
John, 

I am sure there is interesting stuff that can be done.  It would be
interesting just to see what sort of an agi could be made on a PC.

I would be interested in your ideas for how to make a powerful AGI without a
vast amount of interconnect.  The major schemes I know about for reducing
interconnect involve allocating what interconnect you have to the links with
the highest probability or importance, varying those measures of probability
and importance in a context-specific way, and being guided by prior similar
experiences.

Ed Porter

-Original Message-
From: John G. Rose [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, December 04, 2007 1:42 AM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed,

Well it'd be nice having a supercomputer but P2P is a poor man's
supercomputer and beggars can't be choosy.

Honestly, the type of AGI that I have been formulating in my mind has not
been at all closely related to simulating neural activity through
orchestrating partial and mass activations at low frequencies, and I had
been avoiding those contagious cog-sci memes on purpose. But your exposition
of the subject is quite interesting, and I wasn't aware that that is how
things have been done.

But getting more than a few thousand P2P nodes is difficult. Going from 10K
to 20K nodes and up, it gets more difficult, to the point of being
prohibitively expensive, impossible, or a matter of extreme luck.  There are
ways to do it, but according to your calculations the supercomputer may be
the wiser choice, as going out and scrounging up funding for one would be
easier.

Still, though (besides working on my group-theory-heavy design), exploring
the crafting and chiseling of the activation model you are talking about
onto the P2P network could be fruitful. I feel that through a number of
up-front and unfortunately complicated design changes/adaptations, the
activation orchestrations could be improved, thus bringing down the
message-rate requirements and reducing activation requirements, depths, and
frequencies, through a sort of computational-resource-topology consumption,
a self-organizational design molding.

You do indicate some dynamic resource adaptation and things like
"intelligent inference guiding schemes" in your description, but it doesn't
seem to melt enough into the resource space. But having a design be less
static risks excessive complications...

A major problem, though, with P2P and the activation methodology is that
there are so many variances in the latencies and availability that serious
synchronicity/simultaneity issues would exist, and even more messaging might
be required. Since there are so many variables in public P2P, empirical data
would also be necessary to get a gander at feasibility.

I still feel strongly that the way to do AGI P2P (with public P2P as the
core, not as an augmentation) is to understand the grid, and build the AGI
design based on that and on what it will be in a few years, instead of
taking a design and morphing it to the resource space. That said, there are
finitely many designs that will work, so the number of choices is few.

John


_
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 6:17 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi]
Funding AGI research]


John, 

You raised some good points.  The problem is that the total
number of messages/sec that can be received is relatively small.  It is not
as if you are dealing with a multidimensional grid or toroidal net, in which
spreading tree activation can take advantage of the fact that the total
parallel bandwidth for regional messaging can be much greater than the
cross-sectional bandwidth.

In a system where each node is a server-class node with
multiple processors and 32 or 64 GB of RAM, much of which is allocable to
representation, sending messages to local indices on each machine could
fairly efficiently activate all occurrences of something in a 32 to 64 TB
knowledge base with a max of 1K internode messages, if there were only 1K
nodes.

But in a PC-based P2P system the ratio of nodes to
representation space is high, and the total number of 128-byte messages/sec
that can be received is limited to about 100, so neither method of trying
to increase the number of patterns that can be activated with the given
interconnect of the network buys you as much.
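
The constraint is easy to quantify with Ed's own figures (a sketch of his
numbers, not a measurement):

msgs_per_sec = 100           # messages a home node can receive per second
msg_bytes = 128
per_node = msgs_per_sec * msg_bytes       # 12,800 bytes/sec per node
print(per_node)                           # 12800

# Aggregate inbound capacity at various network sizes:
for nodes in (1_000, 10_000, 100_000_000):
    print(nodes, nodes * per_node / 1e9, "GB/s total inbound")
# Only at ~100 million nodes does the aggregate reach ~1280 GB/s,
# which is Ed's point about needing a really big P2P system.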

Human-level context sensitivity arises because a large
number of things that can depend on a large number of things in the current
context are made aware of those dependencies.  This takes a lot of
messaging, and I don't see how a P2P system where each node can only receive
about 100 relatively short messages a second is going to make this possible
unless you had a huge number of nodes.

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter


>RICHARD LOOSEMORE=> You have no idea of the context in which I made
that sweeping dismissal. 
  If you have enough experience of research in this area you will know 
that it is filled with bandwagons, hype and publicity-seeking.  Trivial 
models are presented as if they are fabulous achievements when, in fact, 
they are just engineered to look very impressive but actually solve an 
easy problem.  Have you had experience of such models?  Have you been 
around long enough to have seen something promoted as a great 
breakthrough even though it strikes you as just a trivial exercise in 
public relations, and then watch history unfold as the "great 
breakthrough" leads to  absolutely nothing at all, and is then 
quietly shelved by its creator?  There is a constant ebb and flow of 
exaggeration and retreat, exaggeration and retreat.  You are familiar 
with this process, yes?

ED PORTER=> Richard, the fact that a certain percent of theories and
demonstrations are false and/or misleading does not give you the right to
dismiss any theory or demonstration that counters your position in an
argument as 

"trivial exercises in public relations, designed to look
really impressive, and filled with hype designed to attract funding, which
actually accomplish very little"

without at least giving some supporting argument for your dismissal.
Otherwise you could deny any aspect of scientific, mathematical, or
technological knowledge, no matter how sound, that proved inconvenient to
whatever argument you were making.  

There are people who argue in that dishonest fashion, but it is questionable
how much time one should spend conversing with them.  Do you want to be such
a person?

The fact that one of the pieces of evidence you so rudely dismissed is a
highly functional program that has been used by many other researchers,
shows the blindness with which you dismiss the arguments of others.


>RICHARD LOOSEMORE=>This entire discussion baffles me.  Does it matter
at all to you that I 
have been working in this field for decades?  Would you go up to someone 
at your local university and tell them how to do their job?  Would you 
listen to what they had to say about issues that arise in their field of 
expertise, or would you consider your own opinion entirely equal to 
theirs, with only a tiny fraction of their experience?

ED PORTER=> No matter how many years you have been studying something, if
your argumentative and intellectual approach is to dismiss evidence contrary
to your position on clearly false bases, as you did with your dismissal of
my evidence in your above-quoted insult, a serious question is raised as to
whether you are worth listening to or conversing with.


ED PORTER





-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 10:47 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed Porter wrote:
> 
> 
>I'm sorry, but this is not addressing the actual
> issues involved.
> 
> You are implicitly assuming a certain framework for solving the problem 
> of representing knowledge ... and then all your discussion is about 
> whether or not it is feasible to implement that framework (to overcome 
> various issues to do with searches that have to be done within that 
> framework).
> 
> But I am not challenging the implementation issues, I am challenging the 
> viability of the framework itself.
> 
> 
> ED PORTER=> So what is wrong with my framework?  What is wrong with a
> system of recording patterns, and a method for developing compositions and
> generalities from those patterns, in multiple hierarchical levels, and for
> indicating the probabilities of certain patterns given certain other
> patterns, etc.?
> 
> I know it doesn't genuflect before the altar of complexity.  But what is
> wrong with the framework other than the fact that it is at a high level,
> and thus does not explain every little detail of how to actually make an
> AGI work?
> 
> 
> 
>> RICHARD LOOSEMORE=> These models you are talking about are trivial
> exercises in public relations, designed to look really impressive, and
> filled with hype designed to attract funding, which actually accomplish
> very little.
> 
> Please, Ed, don't do this to me.  Please don't try to imply that I need 
> to open my mind any more.  The implication seems to be that I do not 
> understand the issues in enough depth, and need to do some more work to 
> understand your points.  I can assure you this is not the case.
> 
> 
> 
> ED PORTER=> Shastri's Shruti is a major piece of work.  Although it is
> a highly simplified system, for its degree of simplification it is
> amazingly powerful.  It has been very helpful to my thinking about AGI.
> Please give me some justification for calling it a "trivial exercise in
> public relations."  I certainly have not published anything as important.
> Have you?
> 
> The same for Mike Col

RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-04 Thread Ed Porter
Bryan, The name grub sounds familiar.  That is probably it.  Ed

-Original Message-
From: Bryan Bishop [mailto:[EMAIL PROTECTED] 
Sent: Monday, December 03, 2007 10:47 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

On Thursday 29 November 2007, Ed Porter wrote:
> Somebody (I think it was David Hart) told me there is a shareware
> distributed web crawler already available, but I don't know the
> details, such as how good or fast it is.

http://grub.org/
Previous owner went by the name of 'kordless'. I found him on Slashdot.

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71801708-39700e


Re: [agi] "AGI" first mention on NPR!

2007-12-04 Thread Richard Loosemore

Joshua Fox wrote:
I actually thought that that was one of the more positive pieces I've 
found. Listeners may come out with a bad (mis-)impression, but NPR did 
nothing to abet that.


Agreed.

It is just that the baseline is so low that I suppose we feel gratified 
when they only miss the point and insert just a few bogeymen.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71799027-9fb069


Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-04 Thread Mike Dougherty
On Dec 3, 2007 11:03 PM, Bryan Bishop <[EMAIL PROTECTED]> wrote:
> On Monday 03 December 2007, Mike Dougherty wrote:
> Another method of doing search agents, in the mean time, might be to
> take neural tissue samples (or simple scanning of the brain) and try to
> simulate a patch of neurons via computers so that when the simulated
> neurons send good signals, the search agent knows that there has been a
> good match that excites the neurons, and then tells the wetware human
> what has been found. The problem that immediately comes to mind is that
> neurons for such searching are probably somewhere deep in the
> prefrontal cortex ... does anybody have any references to studies done
> with fMRI on people forming Google queries?

...and a few dozen brains from which we can extract the useful parts?  :)

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71797586-08a419


Re: [agi] "AGI" first mention on NPR!

2007-12-04 Thread Joshua Fox
I actually thought that that was one of the more positive pieces I've found.
Listeners may come out with a bad (mis-)impression, but NPR did nothing to
abet that.

Joshua

2007/12/3, Bob Mottram <[EMAIL PROTECTED]>:
>
> Perhaps a good word of warning is that it will be really easy to
> satirise/lampoon/misrepresent AGI and its proponents until such time
> as one is actually created.
>
>
>
>
> On 03/12/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> >
> > Yesterday I heard the phrase "Artificial General Intelligence" on the
> > radio for the first time ever:
> >
> > http://www.npr.org/templates/story/story.php?storyId=16816185
> >
> > 
> > Weekend Edition Sunday, December 2, 2007 · The idea of what Artificial
> > Intelligence should be has evolved over the past 50 years -- from solving
> > puzzles and playing chess to emulating the abilities of a child:
> > walking, recognizing objects. A recent conference brought together those
> > who invent the future.
> >
> > A recent "Singularity Summit" brought together those who imagine -- and
> > invent -- the future.
> > 
> >
> >
> > Unfortunately, most of the report was filled with sound bites that were,
> > to my mind, ridiculously naive extrapolations and speculations, like:
> >
> > Paul Saffo: "The optimistic scenario is they will treat us like
> pets"
> >
> > most of which were calculated to horrify the audience.
> >
> >
> >
> >
> >
> > Richard Loosemore
> >
> > -
> > This list is sponsored by AGIRI: http://www.agiri.org/email
> > To unsubscribe or change your options, please go to:
> > http://v2.listbox.com/member/?&;
> >
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71796167-f4551c

Re: [agi] RE:P2P and/or communal AGI development [WAS Hacker intelligence level...]

2007-12-04 Thread Russell Wallace
On Dec 3, 2007 7:19 PM, Ed Porter <[EMAIL PROTECTED]> wrote:
> Perhaps one aspect of the AGI-at-home project would be to develop a good
> generalized architecture for wedding various classes of narrow AI and AGI in
> such a learning environment.

Yes, I think this is the key aspect, the meta-problem whose solution
would enable piecemeal solutions to the other problems: Create an
architecture within which procedural knowledge can flow like water,
the way text does on the Web. It would also need to scale well across
a slow, unreliable network the way the Web does; once that's in hand,
P2P [EMAIL PROTECTED] follows fairly naturally.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=71759601-2c1ca5