Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner


Ben: To publish your ideas in academic journals, you need to ground them
in the existing research literature, not in your own personal
introspective observations.


Big mistake. Think what would have happened if Freud had omitted the 40-odd
examples of slips in The Psychopathology of Everyday Life (if I've got the
right book!). The scientific heavyweights are the people who are heavily
grounded. The big difference between Darwin and Wallace is all those
examples and all that research, not the creative idea.


And what I didn't explain in my simple, but I believe important, two-stage
theory of creative development is that there's an immense psychological
resistance to moving onto the second stage. You have enough psychoanalytical
understanding, I think, to realise that the unusual length of your reply to
me may possibly be a reflection of that resistance and an inner conflict.
The resistance occurs in part because you have to privilege a normally
underprivileged level of the mind - the level that provides and seeks
actual, historical examples of generalisations, as opposed to the normally
more privileged level that provides hypothetical, made-up examples. Look at
philosophers and you will see virtually an entire profession/field that has
not moved beyond providing hypothetical examples. It's much harder to deal
in actual examples/evidence - things that have actually happened - because
they take longer to locate in memory. You have to be patient while your
brain drags them out. But you can normally make up examples almost
immediately. (If only Richard's massively parallel cerebral computation were
true!)


But BTW an interesting misunderstanding on your part is the idea that evidence
here means *introspective* observations. Freud's evidence for the unconscious
consisted entirely of publicly observable events - the slips. You must do
similarly for your multiple selves - not tell me, say, how fragmented you
feel! Try to produce such evidence and I think you'll find you will rapidly
lose enthusiasm for your idea. Stick to the same single but divided self
described with extraordinary psychological consistency by every great
religion over thousands of years and by a whole string of humanist
psychologists including Freud - and make sure your AGI has something similar.


P.S. Just recalling a further difference between the original and the 
creative thinker - the creative one has greater *complexes* of ideas - it 
usually doesn't take just one idea to produce major creative work, as people 
often think, but a whole interdependent network of them. That, too, is v. 
hard.







Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner


JVPB: You seem to have missed what many A(G)I people (Ben, Richard, etc.)
mean by 'complexity' (as opposed to the common usage of complex meaning
difficult).


Well, I, as an ignoramus, was wondering about this - so thank you. And it
wasn't clear at all to me from Richard's paper what he meant. What I'm
taking from your account is that it involves random inputs...? Is there
a fuller account of it? Is it the random dimension that he/others hope will
produce emergent/human-like behaviour? (Because if so, I'd disagree - I'd
argue the complications of human behaviour flow from conflict/conflicting
goals - which happens to be signally missing from his (and cognitive
science's) ideas about emotions.)





Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner


ATM: http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
has just gone through a major bug-solving update, and is now much
better at maintaining chains of continuous thought -- after the
user has entered sufficient knowledge for the AI to think about.

It doesn't have - you didn't try to give it - independent curiosity (like an 
infant)? 





Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 Ben:  To publish your ideas
  in academic journals, you need to ground them in the existing research
  literature,
  not in your own personal introspective observations.

 Big mistake. Think what would have happened if Freud had omitted the 40-odd
 examples of slips in The Psychopathology of Everyday Life (if I've got the
 right book!)

Obviously, Freud's reliance on introspection and qualitative experience had
plusses and minuses.  He generated a lot of nonsense as well as some
brilliant ideas.

But anyway, I was talking about style of exposition, not methodology of
doing work.  If Freud were a professor today, he would write in a different
style in order to get journal publications; though he might still write some
books in a more expository style as well.

I was pointing out that, due to the style of exposition required in
contemporary academic culture, one can easily get a false impression that no
one in academia is doing original thinking -- but the truth is that, even if
you DO original thinking, you are required, in writing your ideas up for
publication, to give them the appearance of minimal originality by grounding
them exorbitantly in the prior literature (even if in fact their conception
had nothing, or very little, to do with the prior literature). I'm not saying
I like this -- I'm just describing the reality. Also, in the psych
literature, grounding an idea in your own personal observations is not
acceptable and is not going to get you published -- unless of course you're a
clinical psychologist, which I am not.

 The scientific heavyweights are the people who are heavily
 grounded. The big difference between Darwin and Wallace is all those
 examples/research, and not the creative idea.

That is an unwarranted overgeneralization.

Anyway, YOU were the one who was harping on the lack of creativity in AGI.
Now you've changed your tune and are harping on the lack of {creativity
coupled with a lot of empirical research}.

Ever consider that this research is going on RIGHT NOW?  I don't know why you
think it should be instantaneous.  A number of us are doing concrete research
work aimed at investigating our creative ideas about AGI.  Research is hard.
It takes time.  Darwin's research took time.  The Manhattan Project took
time.  Etc.

 And what I didn't explain in my simple, but I believe important, two-stage
 theory of creative development is that there's an immense psychological
 resistance to moving onto the second stage. You have enough psychoanalytical
 understanding, I think, to realise that the unusual length of your reply to
 me may possibly be a reflection of that resistance and an inner conflict.

What is bizarre to me, in this psychoanalysis of Ben Goertzel that you
present, is that you overlook the fact that I am spending most of my time on
concrete software projects, not on abstract psychological/philosophical
theory -- including the Novamente Cognition Engine project, which is aimed
precisely at taking some of my creative ideas about AGI and realizing them in
useful software.

As it happens, my own taste IS more for theory, math and creative arts than
software development -- but I decided some time ago that the most IMPORTANT
thing I could do would be to focus a lot of attention on implementation and
detailed design rather than just generating more and more funky ideas.  It is
always tempting to me to consider my role as being purely that of a thinker,
and to leave all practical issues to others who like that sort of thing
better -- but I consider the creation of AGI *so* important that I've been
willing to devote the bulk of my time to activities that run against my
personal taste and inclination, for some years now.  And fortunately I have
found some great software engineers as collaborators.


 P.S. Just recalling a further difference between the original and the
 creative thinker - the creative one has greater *complexes* of ideas - it
 usually doesn't take just one idea to produce major creative work, as people
 often think, but a whole interdependent network of them. That, too, is v.
 hard.

Mike, you can make a lot of valid criticisms against me, but I don't think
you can claim I have not originated an interdependent network of creative
ideas.  I certainly have done so.  You may not like or believe my various
ideas, but for sure they form an interdependent network.  Read The Hidden
Pattern for evidence.

-- Ben Goertzel



Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Mark Waser
THE KEY POINT I WAS TRYING TO GET ACROSS WAS ABOUT NOT HAVING TO 
EXPLICITLY DEAL WITH 500K TUPLES


And I asked -- Do you believe that this is some sort of huge conceptual 
breakthrough?






Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Dougherty
On Dec 6, 2007 8:23 AM, Benjamin Goertzel [EMAIL PROTECTED] wrote:
 On Dec 5, 2007 6:23 PM, Mike Tintner [EMAIL PROTECTED] wrote:
  resistance to moving onto the second stage. You have enough psychoanalytical
  understanding, I think, to realise that the unusual length of your reply to
  me may possibly be a reflection of that resistance and an inner conflict.

 What is bizarre to me, in this psychoanalysis of Ben Goertzel that you 
 present,
 is that you overlook [snip]

 Mike, you can make a lot of valid criticisms against me, but I don't
 think you can
 claim I have not originated an interdependent network of creative ideas.
 I certainly have done so.  You may not like or believe my various ideas, but
 for sure they form an interdependent network.  Read The Hidden Pattern
 for evidence.

I just wanted to comment on how well Ben accepted Mike's 'analysis.'
Personally, I was offended by Mike's inconsiderate use of language.
Apparently we have different ideas of etiquette, so that's all I'll
say about it (rather than be drawn into a completely off-topic
pissing contest over who is right to say what, etc.).



RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Jean-Paul,

Although complexity is one of the areas associated with AI where I have less
knowledge than many on the list, I was aware of the general distinction you
are making.  

What I was pointing out in my email to Richard Loosemore was that the
definitions in his paper Complex Systems, Artificial Intelligence and
Theoretical Psychology for computational irreducibility and global-local
disconnect are not themselves totally clear about this distinction. As a
result, when Richard says that those two issues are an unavoidable part of
AGI design that must be much more deeply understood before AGI can advance,
then by the looser definitions, which would cover the types of complexity
involved in large matrix calculations and the design of a massive
supercomputer, of course those issues would arise in AGI design, but it's no
big deal, because we have a long history of dealing with them.

But in my email to Richard I said I was assuming he was not using these
looser definitions of the words, because if he were, they would not present
the unexpected difficulties of the type he has been predicting. I said I
thought he was dealing more with the potentially unruly type of complexity,
which I assume you were talking about.

I am aware of that type of complexity being a potential problem, but I have
designed my system to hopefully control it. A modern-day well-functioning
economy is complex (people at the Santa Fe Institute often cite economies as
examples of complex systems), but it is often amazingly unchaotic considering
how loosely it is organized, how many individual entities it has in it, and
how many transitions it is constantly undergoing. Usually, unless something
bangs on it hard (such as having the price of a major commodity all of a
sudden triple), it has a fair amount of stability, while constantly creating
new winners and losers (which is a productive form of mini-chaos). Of course,
in the absence of regulation it is naturally prone to boom and bust cycles.

So the system would need regulation.

Most of my system operates on a message-passing basis with little concern
for synchronization; it does not require low latencies, and most of its units
operate under fairly similar code. But hopefully when you get it all working
together it will be fairly dynamic, and that dynamism will be under multiple
controls.

I think we are going to have to get such systems up and running to find out
just how hard or easy they will be to control, which I acknowledged in my
email to Richard. I think that once we do, we will be in a much better
position to think about what is needed to control them. I believe such
control will be one of the major intellectual challenges in getting AGI to
function at a human level. The issue is not only preventing runaway
conditions; it is also optimizing the intelligence of the inferencing, which
I think will be even more important and difficult. (There are all sorts of
damping mechanisms and selective biasing mechanisms that should be able to
prevent many types of chaotic behavior.) But I am quite confident that, with
multiple teams working on it, these control problems could be largely
overcome in several years, with the systems themselves doing most of the
learning.

Even a little OpenCog AGI on a PC could be an interesting first indication of
the extent to which complexity will present control problems. As I said, if
you had 3GB of RAM for representation, that should allow about 50 million
atoms. Over time you would probably end up with at least hundreds of
thousands of complex patterns, and it would be interesting to see how easy it
would be to properly control them and get them to work together as a properly
functioning thought economy in whatever small interactive world they
developed their self-organizing pattern base. Of course, on such a PC-based
system you would only, on average, be able to do about 10 million
pattern-to-pattern activations a second, so you would be talking about a
fairly trivial system; but with, say, 100K patterns, it would be a good first
indication of how easy or hard AGI systems will be to control.
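
(As a back-of-envelope check, those figures are at least self-consistent; a
few lines of Python, purely illustrative, make the budgets explicit:

    ram_bytes = 3 * 10**9                   # ~3GB of RAM for representation
    atoms = 50 * 10**6                      # estimate of 50 million atoms
    print(ram_bytes / atoms)                # -> 60.0 bytes per atom
    activations_per_sec = 10 * 10**6        # ~10 million activations/second
    patterns = 100 * 10**3                  # say 100K learned patterns
    print(activations_per_sec / patterns)   # -> 100.0 activations/pattern/sec

i.e. roughly 60 bytes per atom and about a hundred activations per pattern
per second, which is why such a system would be fairly trivial.)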

Ed Porter

-Original Message-
From: Jean-Paul Van Belle [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 1:34 AM
To: agi@v2.listbox.com
Subject: RE: [agi] None of you seem to be able ...

Hi Ed

You seem to have missed what many A(G)I people (Ben, Richard, etc.) mean by
'complexity' (as opposed to the common usage of complex meaning difficult).
It is not the *number* of calculations or interconnects that gives rise to
complexity or chaos, but their nature. E.g. calculating the eigenvalues of
an n=10^1 matrix is *very* difficult but not complex. So the large matrix
calculations, map-reduces or BlueGene configurations are very simple. A
map-reduce or matrix calculation is typically one line of code (at least in
Python - which is where Google probably gets the idea from :)
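
A minimal Python sketch of the difficult-but-not-complex point (with a small
n standing in for the huge matrices above):

    import numpy as np
    from functools import reduce

    # Computationally *difficult* (eigenvalues cost O(n^3) work) but not
    # *complex*: the answer follows deterministically from the input, with
    # no emergent surprises along the way.
    m = np.random.rand(1000, 1000)
    eigenvalues = np.linalg.eigvals(m)

    # And a map-reduce really is about one line:
    total = reduce(lambda a, b: a + b, map(lambda x: x * x, range(10**6)))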

To make them complex, you need to go beyond. 
E.g. a 500K-node 3 layer 

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems.  Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a major problem.  The question is whether
one has an adequate theory of one's system to allow one to do this
without an intractable amount of trial and error.  Loosemore -- if I
interpret him correctly -- seems to be suggesting that for powerful
AGI systems no such theory can exist, on principle.  I doubt very much
this is correct.

-- Ben G

On Dec 6, 2007 9:40 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Jean-Paul,

 Although complexity is one of the areas associated with AI where I have less
 knowledge than many on the list, I was aware of the general distinction you
 are making. [snip]

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:

Richard: Now, interpreting that result is not easy,

Richard, I get the feeling you're getting understandably tired with all 
your correspondence today. Interpreting *any* of the examples of *hard* 
cog sci that you give is not easy. They're all useful, stimulating 
stuff, but they don't add up to a hard pic. of the brain's cognitive 
architecture. Perhaps Ben will back me up on this - it's a rather 
important point - our overall *integrated* picture of the brain's 
cognitive functioning is really v. poor, although certainly we have a 
wealth of details about, say, which part of the brain is somehow 
connected to a given operation.


You make an important point, but in your haste to make it you may have 
overlooked the fact that I really agree with you ... and have gone on to 
say that I am trying to fix that problem.


What I mean by that:  if you look at cog psy/cog sci in a superficial
way you might come away with the strong impression that they don't add
up to a hard pic. of the brain's cognitive architecture.  Sure.  But
that is what I meant when I said that cog sci has a huge amount of
information stashed away, but it is in a format that makes it very hard
for someone trying to build an intelligent system to actually use.


I believe I can see deeper into this problem, and I think that cog sci 
can be made to add up to a consistent picture, but it requires an extra 
organizational ingredient that I am in the process of adding right now.


The root of the problem is that the cog sci and AI communities both have 
extremely rigid protocols about how to do research, which are 
incompatible with each other.  In cog sci you are expected to produce a 
micro-theory for every experimental result, and efforts to work on 
larger theories or frameworks without introducing new experimental 
results that are directly explained are frowned upon.  The result is a 
style of work that produces local patch theories that do not have any 
generality.


The net result of all this is that when you say that our overall
*integrated* picture of the brain's cognitive functioning is really v.
poor, I would point out that this is only true if you replace the "our"
with "the AI community's".



Richard: I admit that I am confused right
now:  in the above paragraphs you say that your position is that the
human mind is 'rational' and then later that it is 'irrational' - was
the first one of those a typo?

Richard, no typo whatsoever if you just reread. V. clear. I say and
said: *scientific psychology* and *cog sci* treat the mind as rational. I
am the weirdo who is saying this is nonsense - the mind is
irrational/crazy/creative - rationality is a major *achievement*, not
something that comes naturally. Mike Tintner = crazy/irrational -
somehow, I don't think you'll find that hard to remember.


The problem here is that I am not sure in what sense you are using the 
word rational.  There are many usages.  One of those usages is very 
common in cog sci, and if I go with *that* usage your claim is 
completely wrong:  you can pick up an elementary cog psy textbook and 
find at least two chapters dedicated to a discussion about the many ways 
that humans are (according to the textbook) irrational.


I suspect what is happening is that you are using the term in a
different way, and that this is the cause of the confusion.  Since you
are making the claim, I think the ball is in your court:  please try to
explain why this discrepancy arises so I can understand your claim.  Take
a look at e.g. Eysenck and Keane (Cognitive Psychology) and try to
reconcile what you say with what they say.


Richard Loosemore



Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Ed Porter wrote:
Richard, 


I quickly reviewed your paper, and you will be happy to note that I
had underlined and highlighted it, so such skimming was more valuable than it
otherwise would have been.

With regard to COMPUTATIONAL IRREDUCIBILITY, I guess a lot depends
on definition. 


Yes, my vision of a human AGI would be a very complex machine.  Yes,
a lot of its outputs could only be made with human-level reasonableness
after a very large amount of computation.  I know of no shortcuts around the
need to do such complex computation.  So it arguably falls into what you
say Wolfram calls computational irreducibility.


But the same could be said for any of many types of computations,
such as large matrix equations or Google's map-reduces, which are routinely
performed on supercomputers.

So if that is how you define irreducibility, it's not that big a
deal.  It just means you have to do a lot of computing to get an answer,
which I have assumed all along for AGI (remember I am the one pushing for
breaking the small hardware mindset).  But it doesn't mean we don't know how
to do such computing, or that we have to do a lot more complexity research,
of the type suggested in your paper, before we can successfully design
AGIs.

With regard to GLOBAL-LOCAL DISCONNECT, again it depends what you
mean.  


You define it as

The GLD merely signifies that it might be difficult or
impossible to derive analytic explanations of global regularities that we
observe in the system, given only a knowledge of the local rules that drive
the system. 

I don't know what this means.  Even the game of Life referred to in
your paper can be analytically explained.  It is just that some of the
things that happen are rather complex and would take a lot of computing to
analyze.  So does the global-local disconnect apply to anything where an
explanation requires a lot of analysis?  If that is the case, then any large
computation, of the type which mankind does and designs every day, would
have a global-local disconnect.
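
(To make the Life example concrete, the entire local rule fits in a few
lines -- a minimal NumPy sketch, purely illustrative, of the standard rules:

    import numpy as np

    def life_step(grid):
        # Each cell sees only its eight neighbours: count them by summing
        # the eight shifted copies of the grid.
        neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0))
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

    grid = np.random.randint(0, 2, (64, 64))
    for _ in range(100):
        grid = life_step(grid)

The rule is fully known and trivially simple; the only question is how much
computing it takes to say what a given pattern will do.)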

If that is the case, the global-local disconnect is no big deal.  We
deal with it every day.


Forgive me, but I am going to have to interrupt at this point.

Ed, what is going on here is that my paper is about complex systems,
but you are taking that phrase to mean something like complicated
systems rather than the real meaning -- and the real meaning is very much
not complicated systems.  It has to do with a particular class of
systems that are labelled complex BECAUSE they show overall behavior
that appears to be disconnected from the mechanisms out of which the
systems are made up.


The problem is that complex systems has a specific technical meaning.
 If you look at the footnote in my paper (I think it is on page one),
you will find that the very first time I use the word complex I make
sure that my audience does not take it the wrong way, by explaining that
it does not refer to complicated systems.


Everything you are saying here in this post is missing the point, so 
could I request that you do some digging around to figure out what 
complex systems are, and then make a second attempt?  I am sorry:  I do 
not have the time to write a long introductory essay on complex systems 
right now.


Without this understanding, the whole of my paper will seem like 
gobbledegook.  I am afraid this is the result of skimming through the 
paper.  I am sure you would have noticed the problem if you had gone 
more slowly.




Richard Loosemore.



I don't know exactly what you mean by regularities in the above
definition, but I think you mean something equivalent to patterns or
meaningful generalizations.  In many types of computing commonly done, you
don't know what the regularities will be without tremendous computing.  For
example, in principal component analysis you often don't know what the major
dimensions of a distribution will be until you do a tremendous amount of
computation.  Does that mean there is a GLD in that problem?  If so, it
doesn't seem to be a big deal.  PCA is done all the time, as are all sorts
of other complex matrix computations.
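
(A minimal PCA sketch, purely illustrative of that point:

    import numpy as np

    # The principal directions of a data cloud are global regularities
    # that only appear after substantial computation over the whole set.
    x = np.random.randn(10000, 50)            # 10,000 samples, 50 dimensions
    x -= x.mean(axis=0)                       # centre the data
    cov = (x.T @ x) / (len(x) - 1)            # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # symmetric eigendecomposition
    top_component = eigvecs[:, -1]            # direction of greatest variance

Nothing about the local generation of the data tells you the top component
in advance; you simply have to compute it.)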

But you have implied multiple times that you think the global-local
disconnect is a big, big deal.  You have implied multiple times that it
presents a major problem to developing AGI.  If I interpret your prior
statements, taken in conjunction with your paper, correctly, I am guessing
your major thrust is that it will be very difficult to design AGIs where the
desired behavior is to be the result of many causal relations between a vast
number of active elements, because in such systems the causality is so
non-linear and complex that we cannot currently properly think and design in
terms of them.


Although this proposition is not obviously true on its face, it is
arguably also not obviously false on its face.

Although it is easy to design systems where the

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:


JVPB: You seem to have missed what many A(G)I people (Ben, Richard, etc.)
mean by 'complexity' (as opposed to the common usage of complex meaning
difficult).


Well, I, as an ignoramus, was wondering about this - so thank you. And it
wasn't clear at all to me from Richard's paper what he meant.


Well, to be fair to me, I pointed out in a footnote at the very
beginning of the paper that the term complex system was being used in
the technical sense, and then shortly afterwards I gave some references
for anyone who needed to figure out what that technical sense actually was...


Could I have done more?

Look up the Waldrop book that I gave as a reference:  at least that is a 
nice non-technical read.




Richard Loosemore




Re: [agi] None of you seem to be able ...

2007-12-06 Thread A. T. Murray
Mike Tintner wrote on Thu, 6 Dec 2007:

 ATM:
 http://mentifex.virtualentity.com/mind4th.html -- an AGI prototype --
 has just gone through a major bug-solving update, and is now much
 better at maintaining chains of continuous thought -- after the
 user has entered sufficient knowledge for the AI to think about.

 It doesn't have - you didn't try to give it - 
 independent curiosity (like an infant)?
 
No, sorry, but the Forthmind does have an Ask module at
http://mentifex.virtualentity.com/ask.html for asking questions --
which, come to think of it, may be a form of innate curiosity.

Meanwhile a year and a half after receiving a bug report, 
the current bug-solving update has been posted at 
http://tech.groups.yahoo.com/group/win32forth/message/13048
as follows FYI:

 OK, the audRecog subroutine is not totally bugfree
 when it comes to distinguishing certain sequences 
 of ASCII characters. It may be necessary to not use
 MACHINES or SERVE if these words confuse the AI.
 In past years I have spent dozens of painful
 hours fiddling with the audRecog subroutine, 
 and usually the slightest change breaks it worse
 than it was before. It works properly probably 
 eighty percent of the time, if not more.
 Even though the audRecog module became suspect 
 to me over time, I pressed on for True AI.

On 14 June 2006 I responded above to a post by FJR.
Yesterday -- a year and a half later -- I finally 
tracked down and eliminated the bug in question.

http://mind.sourceforge.net/audrecog.html -- 
the auditory recognition audRecog module -- 
was sometimes malfunctioning by misrecognizing 
one word of input as the word of a different 
concept, usually if both words ended the same. 

The solution was to base the selection of an 
auditory recognition upon finding the candidate 
word-match with the highest incremental activation, 
rather than merely taking the most recent match. 

By what is known as serendipity or sheer luck, 
the present solution to the old audRecog problem 
opens up a major new possibility for a far more 
advanced version of the audRecog module -- one 
that can recognize the concept of, say, book 
as input of either the word book or books. 
Since audRecog now recognizes a word by using 
incremental activation, it should not be too 
hard to switch the previous pattern-recognition 
algorithm into one that no longer insists upon 
dealing only with entire words, but can instead 
recognize less than an entire word because so 
much incremental activation has built up.
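
In rough outline -- a Python paraphrase of the idea only, with invented
names; the Forth source linked above remains the authority:

    def recognize(input_word, known_words):
        # Select the candidate whose activation has accumulated the most
        # over the input, rather than whichever candidate matched last.
        best_word, best_activation = None, 0
        for word in known_words:
            # incremental activation: one unit per character matching in order
            activation = sum(1 for a, b in zip(input_word, word) if a == b)
            if activation > best_activation:
                best_word, best_activation = word, activation
        return best_word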

The above message may not be crystal clear,
and so it is posted here mainly as a show of
hope and as a forecast of what may yet come.

http://mind.sourceforge.net/mind4th.html is 
the original Mind.Forth with the new audRecog.

http://AIMind-I.com is FJR's AI Mind in Forth.
(Sorry I can't help in the matter of timers.)

ATM
-- 
http://mentifex.virtualentity.com/mind4th.html 



RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Ed Porter
Mark,

First you attack me for making a statement which you falsely claimed
indicated I did not understand the math in the Collins article (and
potentially discredited everything I said on this list).  Once it was shown
that that attack was unfair, rather than apologizing sufficiently for the
unfair attack, you now seem to be coming back with another swing.  Now you
are implicitly attacking me for implying it is new to think you could deal
with vectors in some sort of compressed representation.

I was aware that there were previous methods for dealing with vectors in
high-dimensional spaces using various compression schemes, although I had
only heard of a few examples.  I personally had been planning for years
prior to reading the Collins paper to score matches based mainly on the
number of similar features, and not all the dissimilar features (except in
certain cases), to avoid the curse of high dimensionality.

But I was also aware of many discussions, such as one in a current
best-selling AI textbook, which imply that a certain problem easily becomes
intractable because it is assumed one is saddled with dealing with the full
possible dimensionality of the problem space being represented, when it is
clear you can accomplish a high percentage of the same thing with a GNG-type
approach by only placing representation where there are significant
probabilities.

So, although it may not be new to you, it seems to be new to some that the
curse of high dimensionality can often be avoided in many classes of
problems.  I was citing the Collins paper as one example showing that AI
systems have been able to deal well with high dimensionality.  I attended a
lecture at MIT a few years after the Collins paper came out where the major
thrust of the speech was that great headway was recently being made in many
fields of AI because people were beginning to realize all sorts of efficient
hacks that avoid many of the problems of combinatorial explosion of high
dimensionality that had previously thwarted their efforts.  The Collins
paper is an example of that movement.

When it was relatively new, the Collins paper was treated by several people
I talked to as quite a breakthrough, because in conjunction with the work of
people like Haussler it showed a relatively simple way to apply the kernel
trick to graph mapping.  As you may be aware, the kernel trick not only
allows one to score matches, but also allows many of the analytical tools of
linear algebra to be applied through the kernel, greatly reducing the
complexity of applying such tools in the much higher-dimensional space
represented by the kernel mapping.  I am not a historian of this field of
math, but in its day the kernel trick was getting a lot of buzz from many
people in the field.  I attended an NL conference at CMU in the early '90s.
The use of support vector classifiers using the kernel trick was all the
rage at the conference, and the kernels they were using seemed much less
appropriate than the one the Collins paper discloses.
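
For anyone unfamiliar with it, the essence of the trick fits in a few lines
-- shown here with a textbook polynomial kernel as a stand-in, not the tree
kernel of the Collins paper itself:

    import numpy as np

    def poly_kernel(x, y, d=3):
        # Equals the inner product <phi(x), phi(y)> in the implicit feature
        # space of all monomials up to degree d, without constructing phi.
        return (1.0 + np.dot(x, y)) ** d

    x, y = np.random.randn(20), np.random.randn(20)
    score = poly_kernel(x, y)  # a match score over a huge implicit space

This is exactly the sense in which one can score matches without explicitly
dealing with the exploded set of feature tuples.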

Ed Porter


-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 9:09 AM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 THE KEY POINT I WAS TRYING TO GET ACROSS WAS ABOUT NOT HAVING TO 
 EXPLICITLY DEAL WITH 500K TUPLES

And I asked -- Do you believe that this is some sort of huge conceptual 
breakthrough?




Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Ed Porter wrote:

Jean-Paul,

[snip]

I am aware of that type of complexity being a potential problem, but I have
designed my system to hopefully control it. A modern-day well-functioning
economy is complex (people at the Santa Fe Institute often cite economies as
examples of complex systems), but it is often amazingly unchaotic considering
how loosely it is organized, how many individual entities it has in it, and
how many transitions it is constantly undergoing. Usually, unless something
bangs on it hard (such as having the price of a major commodity all of a
sudden triple), it has a fair amount of stability, while constantly creating
new winners and losers (which is a productive form of mini-chaos). Of course,
in the absence of regulation it is naturally prone to boom and bust cycles.


Ed,

I now understand that you have indeed heard of complex systems before,
but I must insist that in your summary above you have summarized what
they are in such a way that completely contradicts what they are!


A complex system such as the economy can and does have modes in which it
appears to be stable.  This does not contradict the complexity at all.  A
system is not complex because it is unstable.


I am struggling here, Ed.  I want to go on to explain exactly what I 
mean (and what complex systems theorists mean) but I cannot see a way to 
do it without writing half a book this afternoon.


Okay, let me try this.

Imagine that we got a bunch of computers and connected them with a 
network that allowed each one to talk to (say) the ten nearest machines.


Imagine that each one is running a very simple program:  it keeps a 
handful of local parameters (U, V, W, X, Y) and it updates the values of 
its own parameters according to what the neighboring machines are doing 
with their parameters.


How does it do the updating?  Well, imagine some really messy and 
bizarre algorithm that involves looking at the neighbors' values, then 
using them to cross reference each other, and introduce delays and 
gradients and stuff.


On the face of it, you might think that the result will be that the U V 
W X Y values just show a random sequence of fluctuations.


Well, we know two things about such a system.

1) Experience tells us that even though some systems like that are just 
random mush, there are some (a noticeably large number in fact) that 
have overall behavior that shows 'regularities'.  For example, much to 
our surprise we might see waves in the U values.  And every time two 
waves hit each other, a vortex is created for exactly 20 minutes, then 
it stops.  I am making this up, but that is the kind of thing that could 
happen.


2) The algorithm is so messy that we cannot do any math to analyse and
predict the behavior of the system.  All we can do is say that we have
absolutely no techniques that will allow us to make mathematical progress on
the problem today, and we do not know if at ANY time in future history
there will be a mathematics that will cope with this system.


What this means is that the waves and vortices we observed cannot be
explained in the normal way.  We see them happening, but we do not
know why they do.  The bizarre algorithm is the low-level mechanism
and the waves and vortices are the high-level behavior, and when I say
there is a Global-Local Disconnect in this system, all I mean is that
we are completely stuck when it comes to explaining the high level in
terms of the low level.
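
To make this concrete, here is a toy sketch in Python of the kind of system
just described (the particular messy rule is arbitrary -- which is exactly
the point):

    import random

    N = 100
    nodes = [{k: random.random() for k in "UVWXY"} for _ in range(N)]

    def step(nodes):
        new = []
        for i, node in enumerate(nodes):
            left, right = nodes[i - 1], nodes[(i + 1) % len(nodes)]
            # a deliberately funky local rule: cross-referenced and nonlinear
            new.append({
                "U": (left["V"] * right["W"] + node["X"]) % 1.0,
                "V": abs(node["U"] - right["Y"]) ** 0.5,
                "W": (node["W"] + left["U"] * right["V"]) % 1.0,
                "X": min(left["X"], right["U"]) + 0.01 * node["Y"],
                "Y": (node["Y"] + right["X"]) % 1.0,
            })
        return new

    for _ in range(1000):
        nodes = step(nodes)  # the U values may (or may not) show waves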


Believe me, it is childishly easy to write down equations/algorithms for 
a system like this that are so profoundly intractable that no 
mathematician would even think of touching them.  You have to trust me 
on this.  Call your local Math department at Harvard or somewhere, and 
check with them 

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Ben,
Your below email is a much more concise statement of the basic point
I was trying to make.
Ed Porter

-Original Message-
From: Benjamin Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 9:45 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

There is no doubt that complexity, in the sense typically used in
dynamical-systems-theory, presents a major issue for AGI systems.  Any
AGI system with real potential is bound to have a lot of parameters
with complex interdependencies between them, and tuning these
parameters is going to be a major problem.  The question is whether
one has an adequate theory of one's system to allow one to do this
without an intractable amount of trial and error.  Loosemore -- if I
interpret him correctly -- seems to be suggesting that for powerful
AGI systems no such theory can exist, on principle.  I doubt very much
this is correct.

-- Ben G

On Dec 6, 2007 9:40 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Jean-Paul,

 Although complexity is one of the areas associated with AI where I have less
 knowledge than many on the list, I was aware of the general distinction you
 are making. [snip]

RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Richard,

I read your core definitions of computational irreducibility and
global-local disconnect, and by themselves they really don't distinguish
very well between complicated and complex.

But I did assume from your paper and other writings that you meant complex,
although your core definitions are not very clear about the distinction.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 10:31 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:
 [snip]

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Ed Porter wrote:

Richard,

I read your core definitions of computational irreducibility and
global-local disconnect, and by themselves they really don't distinguish
very well between complicated and complex.


That paper was not designed to be a "complex systems for absolute
beginners" paper, so these definitions work very well for anyone who has
even a little background knowledge of complex systems.




Richard Loosemore





RE: [agi] None of you seem to be able ...

2007-12-06 Thread Ed Porter
Richard,

You will be happy to note that I have copied the text of your reply to my
Valuable Clippings From AGI Mailing List file.  Below are some comments.

RICHARD LOOSEMORE= I now understand that you have indeed heard of
complex systems before, but I must insist that in your summary above you
have summarized what they are in such a way that completely contradicts what
they are!

A complex system such as the economy can and does have modes in which it
appears to be stable.  This does not contradict the complexity at all.  A
system is not complex because it is unstable.

ED PORTER= Richard, I was citing relatively stable economies as
exactly what you say they are: an example of a complex system that is
relatively stable.  So why is it that my summary summarized what they are
in such a way that completely contradicts what they are!?   I implied that
economies have traditionally had instabilities, such as boom and bust
cycles, and I am aware that even with all our controls, other major
instabilities could strike, in much the same way that people can have
nervous breakdowns.

ED PORTER= With regard to the rest of your paper, I find it one of your
better-reasoned discussions of the problem of complexity.  Like Ben, I agree
it is a potential problem.  I said that in the email you were responding to.
My intuition, like Ben's, tells me we will probably be able to deal with it,
but your paper is correct to point out that such intuitions are really
largely guesses.

RICHARD LOOSEMORE= how can someone know how much impact the
complexity is going to have, when in the same breath they will admit that
NOBODY currently understands just how much of an impact the complexity has.
the best that anyone can do is point to other systems in which there is a
small amount of complexity and say:  "Well, these folks managed to
understand their systems without getting worried about complexity, so why
don't we assume that our problem is no worse than theirs?"  For example,
someone could point to the dynamics of planetary systems and say that there
is a small bit of complexity there, but it is a relatively small effect in
the grand scheme of things.

ED PORTER= A better example would be the world economy.  It's got 6
billion highly autonomous players.  It has all sorts of non-linearities and
complex connections.  Although it has fits and starts, it has surprising
stability considering everything that is thrown at it (not clear how far
this stability will hold into the singularity future), but still it is an
instructive example of how extremely complex things, with lots of
non-linearities, can work relatively well if there are the proper
motivations and controls.

RICHARD LOOSEMORE=Problem with that line of argument is that there are
NO other examples of an engineering system with as much naked funkiness in
the interactions between the low level components.

ED PORTER= The key is to try to avoid and/or control funkiness in your
components.  Remember that an experiential system derives most of its
behavior by re-enacting, largely through substitutions and
probabilistic-transition-based synthesis, from past experience, with a bias
toward past experiences that have worked in some sense meaningful to the
machine.  This creates a tremendous bias toward desirable, vs. funky,
behaviors.

So, net, net, Richard, re-reading your paper and reading your long post
below have increased my respect for your arguments.  I am somewhat more
afraid of complexity gotchas than I was two days ago.  But I still am pretty
confident (without anything beginning to approach proof) that such gotchas
will not prevent us from making useful human-level AGI within the decade if
AGI got major funding.

But I have been afraid for a long time that even the other type of
complexity (i.e., complication, which often involves some risk of
complexity) means that it may be very difficult for us humans to keep
control of superhuman-level AGIs for very long, so I have always worried
about that sort of complexity gotcha.

But with regard to the complexity problem, it seems to me that we should
design systems with an eye to reducing their gnarliness, including planning
multiple types of control systems, and then, once we get an initial such
system up and running, try to find out what sort of complexity problems we
have.

Ed Porter



-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 11:46 AM
To: agi@v2.listbox.com
Subject: Re: [agi] None of you seem to be able ...

Ed Porter wrote:
 Jean-Paul,
 
 Although complexity is one of the areas associated with AI where I have
less
 knowledge than many on the list, I was aware of the general distinction
you
 are making.  
 
 What I was pointing out in my email to Richard Loosemore was that the
 definitions in his paper "Complex Systems, Artificial Intelligence and
 Theoretical Psychology", for irreducible computability and global-local
 interconnect 

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Mike Tintner
Richard,  The problem here is that I am not sure in what sense you are using
the word "rational".  There are many usages.  One of those usages is very 
common in cog sci, and if I go with *that* usage your claim is completely 
wrong:  you can pick up an elementary cog psy textbook and find at least 
two chapters dedicated to a discussion about the many ways that humans are 
(according to the textbook) irrational.


This is a subject of huge importance, and it shouldn't be hard to reach a 
mutual understanding at least. "Rational" in general means that a system or 
agent follows a coherent and systematic set of steps in solving a problem.


The social sciences treat humans as rational agents maximising or boundedly 
satisficing their utilities in taking decisions - coherently, systematically 
finding solutions for their needs (there is much controversy about this - 
everyone knows it ain't right, but no substitute has been offered).


Cognitive science treats the human mind as basically a programmed 
computational machine much like actual programmed computers - and programs 
are normally conceived of as rational: coherent sets of steps, etc.


Both cog sci and sci psych generally endlessly highlight irrationalities in 
our decisionmaking/problemsolving processes - but these are only in *parts* 
of those processes, not the processes as a whole. They're like bugs in the 
program, but the program and mind as a whole are basically rational - 
following coherent sets of steps - it's just that the odd heuristic/ 
attitude/ assumption is wrong (or perhaps they have a neurocognitive 
deficit).


Thousands of years of philosophy have also treated human beings as 
fundamentally rational creatures.


The reality is two-sided. Let's start with why the mind is in fact 
irrational.


The mind is actually designed to deal with problematic, divergent problems, 
otherwise known as wicked, ill-structured problems.  A simple example is 
writing an essay. Write an essay (or a post) on the future/evils/flaws of 
AI. An even simpler example is: would you like to watch this or that TV 
program?  The literature on wicked problems acknowledges (sotto voce) that 
these are extraordinarily abundant and more or less continuous. Eysenck 
acknowledges this too.


What characterises these problems is that they are indeed ill-structured - 
put it another way: there is *no such thing as a rational solution or way of 
solving them*. There is no "rational" essay on the future of AI or the 
causes of the French revolution. No rational beginning, middle, ending or 
any step at all. There is no rational way to think about them - incl. about 
which program you want to watch. It would be quite reasonable from a purely 
logical point of view to spend eternity debating that question. These are in 
fact infinite problems with infinite solutions or, at least (with the TV 
program decision), infinite ways of solving them.


The mind has no coherent structure or inner systematic programming for 
dealing with these problems -no coherent set of steps at all to follow -  it 
has to find and achieve a structure - as you do for an essay or a post.


Consequently the mind can be regarded as systematically irrational. Look 
at how people actually write essays or posts and you will find that they can 
and will depart at each and any stage from what might be regarded as an 
ideal process. They don't even define the problem - (I don't think there's a 
single person engaged in an AGI project who has yet defined the problem) - 
they don't answer the problem but answer something else entirely - they 
don't look at the evidence - they don't have ideas but endlessly redefine 
the problem - they don't order or organize their ideas - or do any checking. 
They actually write a confused mix of three essays rather than one, etc. etc. 
They always jump to conclusions to some extent, because it's actually 
impossible to do otherwise.  And they may or may not make these errors on 
different occasions. Everyone's practice is highly variable. IOW these 
errors have nothing to do with bugs or deficits.


Put it another way: humans are systematically more or less unfocussed, 
disordered, disorganized, poorly concentrated and applied, uncritical, 
unimaginative, sloppy, etc. etc. in their thinking. But since there is never 
world enough and time to think about divergent problems, this is more or 
less inevitable (except when an AGI-er doesn't define the problem, which is 
unforgivable :) )


Sci psych and cog sci v. largely ignore all this.

Scientific psychology does not pay any serious attention at all to divergent 
problems - as Michael Eysenck acknowledges. (Why? Because psychologists like 
convergent problems with nice right, rational answers that can be easily 
studied and marked).


IQ focusses on convergent problems, even though essaywriting and similar 
projects constitute a good half - and by far the most important half - of 
educational and real-world problemsolving 

Re: [agi] None of you seem to be able ...

2007-12-06 Thread Benjamin Goertzel
 Conclusion:  there is a danger that the complexity that even Ben agrees
 must be present in AGI systems will have a significant impact on our
 efforts to build them.  But the only response to this danger at the
 moment is the bare statement made by people like Ben that "I do not
 think that the danger is significant."  No reason given, no explicit
 attack on any component of the argument I have given, only a statement
 of intuition, even though I have argued that intuition cannot in
 principle be a trustworthy guide here.

But Richard, your argument ALSO depends on intuitions ...

I'll try, though, to more concisely frame the reason I think your argument
is wrong.

I agree that AGI systems contain a lot of complexity in the dynamical-
systems-theory sense.

And I agree that tuning all the parameters of an AGI system externally
is likely to be intractable, due to this complexity.

However, part of the key to intelligence is **self-tuning**.

I believe that if an AGI system is built the right way, it can effectively
tune its own parameters, hence adaptively managing its own complexity.

Now you may say there's a problem here: If AGI component A2 is to
tune the parameters of AGI component A1, and A1 is complex, then
A2 has got to also be complex ... and who's gonna tune its parameters?

So the answer has got to be that to effectively tune the parameters
of an AGI component of complexity X requires an AGI component of
complexity a bit less than X.  Then one can build a self-tuning AGI system,
if one does the job right.
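
To make the logic concrete, here is a toy sketch (entirely illustrative, in
Python; not Novamente's actual mechanism): a component with many parameters
is tuned by a meta-component with far fewer parameters of its own, which
could in turn be tuned by something simpler still.

# Toy sketch only: a 50-parameter component tuned by a 1-parameter
# meta-component (the search step size), via simple hill climbing.
import random

random.seed(0)

def performance(params):
    # Stand-in for any measurable behavior of the tuned component.
    return -sum((p - 0.5) ** 2 for p in params)

def tune(params, step, trials=200):
    best = list(params)
    for _ in range(trials):
        candidate = [p + random.uniform(-step, step) for p in best]
        if performance(candidate) > performance(best):
            best = candidate
    return best

component = [random.random() for _ in range(50)]  # complexity "X"
step_size = 0.1                                   # complexity "a bit less than X"
component = tune(component, step_size)
print(round(performance(component), 4))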

Now, I'm not saying that Novamente (for instance) is explicitly built
according to this architecture: it doesn't have N components wherein
component A_N tunes the parameters of component A_(N+1).

But in many ways, throughout the architecture, it relies on this sort of
fundamental logic.

Obviously it is not the case that every system of complexity X can
be parameter-tuned by a system of complexity less than X.  The question
however is whether an AGI system can be built of such components.
I suggest the answer is yes -- and furthermore suggest that this is
pretty much the ONLY way to do it...

Your intuition is that this is not possible, but you don't have a proof
of this...

And yes, I realize the above argument of mine is conceptual only -- I haven't
given a formal definition of complexity.  There are many, but that would
lead into a mess of math that I don't have time to deal with right now,
in the context of answering an email...

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73243865-194e0e


RE: [agi] None of you seem to be able ...

2007-12-06 Thread Derek Zahn
Richard Loosemore writes: "Okay, let me try this.  Imagine that we got a 
bunch of computers [...]"
 
Thanks for taking the time to write that out.  I think it's the most 
understandable version of your argument that you have written yet.  Put it on 
the web somewhere and link to it whenever the issue comes up again in the 
future.
 
If you are right, you may have to resort to "told you so" when other projects 
fail to produce the desired emergent intelligence.  No matter what you do, 
system builders can and do and will say that either their system is probably 
not heavily impacted by the issue, or that the issue itself is overstated for 
AGI development, and I doubt that most will be convinced otherwise.  By making 
such a clear exposition, at least the issue is out there for people to think 
about.
 
I have no position myself on whether Novamente (for example) is likely to be 
slain by its own complexity, but it is interesting to ponder.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73249587-454993

Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Benjamin Goertzel
 Show me ONE other example of the reverse engineering of a system in
 which the low level mechanisms show as many complexity-generating
 characteristics as are found in the case of intelligent systems, and I
 will gladly learn from the experience of the team that did the job.

 I do not believe you can name a single one.

Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer  a bird -- though I
do imagine the latter will eventually be possible.

 You know, I sympathize with you in a way.  You are trying to build an
 AGI system using a methodology that you are completely committed to.
 And here am I coming along like Bertrand Russell writing his letter to
 Frege, just as poor Frege was about to publish his Grundgesetze der
 Arithmetik, pointing out that everything in the new book was undermined
 by a paradox.  How else can you respond except by denying the idea as
 vigorously as possible?

It's a deeply flawed analogy.

Russell's paradox is a piece of math and once Frege
was confronted with it he got it.  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73249230-63bddf


Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Benjamin Goertzel wrote:

Richard,

Well, I'm really sorry to have offended you so much, but you seem to be
a mighty easy guy to offend!

I know I can be pretty offensive at times; but this time, I wasn't
even trying ;-)


The argument I presented was not a conjectural assertion, it made the
following coherent case:

   1) There is a high prima facie *risk* that intelligence involves a
significant amount of irreducibility (some of the most crucial
characteristics of a complete intelligence would, in any other system,
cause the behavior to show a global-local disconnect), and


The above statement contains two fuzzy terms -- "high" and "significant" ...

You have provided no evidence for any particular quantification of
these terms...
your evidence is qualitative/intuitive, so far as I can tell...

Your quantification of these terms seems to me a conjectural assertion
unsupported by evidence.


[This is going to cross over your parallel response to a different post. 
No time to address that other argument, but the comments made here are 
not affected by what is there.]


I have answered this point very precisely on many occasions, including 
in the paper.  Here it is again:


If certain types of mechanisms do indeed give rise to complexity (as all 
the complex systems theorists agree), then BY DEFINITION it will never be 
possible to quantify the exact relationship between:


   1)  The precise characteristics of the low-level mechanisms (both 
the type and the quantity) that would lead us to expect complexity, and


   2)  The amount of complexity thereby caused in the high-level behavior.

Even if the complex systems effect were completely real, the best we 
could ever do would be to come up with suggestive characteristics that 
lead to complexity.  Nevertheless, there is a long list of such 
suggestive characteristics, and everyone (including you) agrees that all 
those suggestive characteristics are present in the low level mechanisms 
that must be in an AGI.


So the one most important thing we know about complex systems is that if 
complex systems really do exist, then we CANNOT say "Give me precise 
quantitative evidence that we should expect complexity in this 
particular system."


And what is your response to this most important fact about complex systems?

Your response is: "Give me precise quantitative evidence that we should 
expect complexity in this particular system."


And then, when I explain all of the above (as I have done before, many 
times), you go on to conclude:


"[You are giving] a conjectural assertion unsupported by evidence."

Which is, in the context of my actual argument, a serious little bit of 
sleight-of-hand (to be as polite as possible about it).





   2) Because of the unique and unusual nature of complexity there is
only a vanishingly small chance that we will be able to find a way to
assess the exact degree of risk involved, and

   3) (A corollary of (2)) If the problem were real, but we were to
ignore this risk and simply continue with an engineering approach
(pretending that complexity is insignificant),


The engineering approach does not pretend that complexity is
insignificant.  It just denies that the complexity of intelligent systems
leads to the sort of irreducibility you suggest it does.


It denies it?  Based on what?  My argument above makes it crystal clear 
that if the engineering approach is taking that attitude, then it does 
so purely on the basis of wishful thinking, whilst completely ignoring 
the above argument.  The engineering approach would be saying:  "We 
understand complex systems well enough to know that there isn't a 
problem in this case" -- a nonsensical position when by definition it 
is not possible for anyone to really understand the connection, and the 
best evidence we can get is actually pointing to the opposite conclusion.


So this comes back to the above argument:  the engineering approach has 
to address that first, before it can make any such claim.




Some complex systems can be reverse-engineered in their general
principles even if not in detail.  And that is all one would need to do
in order to create a brain emulation (not that this is what I'm trying
to do) --- assuming one's goal was not to exactly emulate some
specific human brain based on observing the behaviors it generates,
but merely to emulate the brainlike character of the system...


This has never been done, but that is exactly what I am trying to do.

Show me ONE other example of the reverse engineering of a system in 
which the low level mechanisms show as many complexity-generating 
characteristics as are found in the case of intelligent systems, and I 
will gladly learn from the experience of the team that did the job.


I do not believe you can name a single one.




then the *only* evidence
we would ever get that irreducibility was preventing us from building a
complete intelligence would be the fact that we would simply run around
in circles all the time, 

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Mark Waser

Ed,

   Get a grip.  Try to write with complete words in complete sentences 
(unless "discreted" means a combination of "excreted" and "discredited" -- 
which works for me :-).


   I'm not coming back for a second swing.  I'm still pursuing the first 
one.  You just aren't oriented well enough to realize it.


"Now you are implicitly attacking me for implying it is new to think you 
could deal with vectors in some sort of compressed representation."


   Nope.  First of all, "compressed representation" is *absolutely* the wrong 
term for what you're looking for.


   Second, I actually am still trying to figure out what *you* think you 
ARE gushing about.  (And my quest is not helped by such gems as "all though 
[sic] it may not be new to you, it seems to be new to some".)


   Why don't you just answer my question?  Do you believe that this is some 
sort of huge conceptual breakthrough?  For NLP (as you were initially 
pushing) or just for some nice computational tricks?


   I'll also note that you've severely changed the focus of this away from 
the NLP that you were initially raving about as such quality work -- and 
while I'll agree that kernel mapping is a very elegant tool -- Collins's work 
is emphatically *not* what I would call a shining example of it (I mean, 
*look* at his results -- they're terrible).  Yet you were touting it because 
of your 500,000-dimension fantasies and your belief that it's good NLP 
work.


   So, in small words -- and not whining about an attack -- what precisely 
are you saying?



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73247008-aecb7f


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Ed Porter
Mark,

You claimed I made a particular false statement about the Collins paper.
(That by itself could have just been a misunderstanding or an honest
mistake.) But then you added an insult to that by implying I had probably
made the alleged error because I was incapable of understanding the
mathematics involved.  As if that wasn't enough in the way of gratuitous
insults, you suggested my alleged error called into question the validity of
the other things I have said on this list.  

That is a pretty deep, purposely and unnecessarily insulting put-down.

I think I have shown that I did understand the math in question, perhaps
better than you, since you initially totally ignored the part of the paper
that supported my statement.  I have shown that my statement was in fact
correct by a reasonable interpretation of my words.  Thus, not only was your
accusation of error unjustified, but also, even more so, the two insults
placed on top of it.

You have not apologized for your unjustified accusation of error and the two
additional unnecessary insults (unless your statement "Ok. I'll bite." is
considered an appropriate apology for such an improper set of deep insults).
Instead you have continued in an even more insulting tone, including
starting one subsequent email with a comment about something I had said that
went as follows: 

<HeavySarcasm>Wow.  Is that what dot products
are?</HeavySarcasm>

I don't mind people questioning me, or pointing out errors when I make them.
I even have a fair amount of tolerance for people mistakenly accusing me of
making an error, if they make the false accusation honestly and not in a
purposely insulting manner, as you did.

Why should I waste more time conversing with someone who wants to converse
in such an insulting tone?

Mark, you have been quick to publicly call other people on this list
"trolls", in effect to their face, in front of the whole list.  This is a
behavior most people would consider very hurtful.  So what do you call
people on this list who not only falsely accuse other people of errors, add
several unnecessary insults based on the false accusation, and then, when
shown to be in error, continue addressing comments to the falsely accused
person in a <HeavySarcasm> style?  

How about "mean-spirited"?

Mark, you are an intelligent person, and I have found some of your posts
valuable.  That day a few weeks ago when you and Ben were riffing back and
forth, I was offended by your tone, but I thought many of your questions
were valuable.  If you wish to continue any sort of communication with me,
feel free to question and challenge, but please lay off the <HeavySarcasm>
and insults, which do nothing to further the exchange and clarification of
ideas.

With regard to your questions below: if you actually took the time to read
my prior responses, I think you will see I have substantially answered them.
Ed Porter

-Original Message-
From: Mark Waser [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 1:24 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Ed,

    Get a grip.  Try to write with complete words in complete sentences 
(unless "discreted" means a combination of "excreted" and "discredited" -- 
which works for me :-).

I'm not coming back for a second swing.  I'm still pursuing the first 
one.  You just aren't oriented well enough to realize it.

 Now you are implicitly attacking me for implying it is new to think you 
 could deal with vectors in some sort of compressed representation.

    Nope.  First of all, "compressed representation" is *absolutely* the
wrong term for what you're looking for.

    Second, I actually am still trying to figure out what *you* think you 
ARE gushing about.  (And my quest is not helped by such gems as "all though 
[sic] it may not be new to you, it seems to be new to some".)

    Why don't you just answer my question?  Do you believe that this is some
sort of huge conceptual breakthrough?  For NLP (as you were initially 
pushing) or just for some nice computational tricks?

    I'll also note that you've severely changed the focus of this away from 
the NLP that you were initially raving about as such quality work -- and 
while I'll agree that kernel mapping is a very elegant tool -- Collins's work
is emphatically *not* what I would call a shining example of it (I mean, 
*look* at his results -- they're terrible).  Yet you were touting it because
of your 500,000-dimension fantasies and your belief that it's good NLP 
work.

So, in small words -- and not whining about an attack -- what precisely 
are you saying?


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney

--- Ed Porter [EMAIL PROTECTED] wrote:

 I have a lot of respect for Google, but I don't like monopolies, whether it
 is Microsoft or Google.  I think it is vitally important that there be
 several viable search competitors.  
  
 I wish this wiki one luck.  As I said, it sounds a lot like your idea.

Partly.  The main difference is that I am also proposing a message posting
service, where messages become instantly searchable and are also directed to
persistent queries.

Wikia has a big hurdle to get over.  People will ask "how is this better than
Google?" before they bother to download the software.  For example, Grub
(distributed spider) uses a lot of bandwidth and disk without providing much
direct benefit to the user.  The major benefit of Wikia seems to be that users
provide feedback on relevance to query responses, which in theory ought to
provide a better ranking algorithm than something like Google's PageRank.  But
assuming they get enough users to get to this level, spammers could still game
the system by flooding the network with high rankings for their websites.

In a distributed message posting service, each peer would have its own policy
regarding which messages to relay, keep in its cache, or ignore.  If a
document is valuable, then lots of peers would keep a copy.  A client could
then rank query responses by the number of copies received weighted by the
peer's reputation.  Spammers could try to game the system by adding lots of
peers and flooding the network with advertising, but this would fail because
most other peers would be configured to ignore peers that don't provide
reciprocal services by routing their own outgoing messages.  Any peer not so
configured would quickly be abused and isolated from the network in the same
way that open relay SMTP servers get abused by spammers and blacklisted by
spam filters.
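
A minimal sketch of the ranking rule just described (in Python; the names
and numbers are illustrative, not part of any existing implementation): a
query response is scored by the number of copies received, each copy
weighted by the reputation of the peer that sent it.

# Illustrative sketch: rank documents by reputation-weighted copy count.
from collections import defaultdict

def rank_responses(copies, reputation):
    # copies: list of (document_id, peer_id) pairs received for a query.
    # reputation: mapping from peer_id to a trust weight.
    scores = defaultdict(float)
    for doc, peer in copies:
        scores[doc] += reputation.get(peer, 0.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

copies = [("doc1", "A"), ("doc1", "B"), ("doc2", "spammer"), ("doc2", "spammer")]
reputation = {"A": 1.0, "B": 0.8, "spammer": 0.05}
print(rank_responses(copies, reputation))  # doc1 outranks the flooded doc2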

Of course a message posting service would have a big hurdle too.  Initially,
the service would have to be well integrated with the existing Internet. 
Client queries would have to go to the major search engines, and there would
have to be websites set up as peers without the user having to install
software.  Most computers are not configured to run as servers (dynamic IP,
behind firewalls, slow upload, etc), so peers will probably need to allow
message passing over client HTTP (website polling), by email, and over instant
messaging protocols.

File sharing networks became popular because they offered a service not
available elsewhere (free music).  But I don't intend for the message posting
service to be used to evade copyright or censorship (although it probably
could be).  The protocol requires that the message's originator and
intermediate routers all be identified by a reply address and time stamp.  It
won't work otherwise.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73286384-77b385


RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Ed Porter
Matt,

Does a PC become more vulnerable to viruses, worms, Trojan horses, rootkits,
and other web attacks if it becomes part of a P2P network?  And if so, why
and how much?  

Ed Porter

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 3:01 PM
To: agi@v2.listbox.com
Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS Re:
[agi] Funding AGI research])


--- Ed Porter [EMAIL PROTECTED] wrote:

 I have a lot of respect for Google, but I don't like monopolies, whether it
 is Microsoft or Google.  I think it is vitally important that there be
 several viable search competitors.  
  
 I wish this wiki one luck.  As I said, it sounds a lot like your idea.

Partly.  The main difference is that I am also proposing a message posting
service, where messages become instantly searchable and are also directed to
persistent queries.

Wikia has a big hurdle to get over.  People will ask "how is this better than
Google?" before they bother to download the software.  For example, Grub
(distributed spider) uses a lot of bandwidth and disk without providing much
direct benefit to the user.  The major benefit of Wikia seems to be that users
provide feedback on relevance to query responses, which in theory ought to
provide a better ranking algorithm than something like Google's PageRank.  But
assuming they get enough users to get to this level, spammers could still game
the system by flooding the network with high rankings for their websites.

In a distributed message posting service, each peer would have its own
policy
regarding which messages to relay, keep in its cache, or ignore.  If a
document is valuable, then lots of peers would keep a copy.  A client could
then rank query responses by the number of copies received weighted by the
peer's reputation.  Spammers could try to game the system by adding lots of
peers and flooding the network with advertising, but this would fail because
most other peers would be configured to ignore peers that don't provide
reciprocal services by routing their own outgoing messages.  Any peer not so
configured would quickly be abused and isolated from the network in the same
way that open relay SMTP servers get abused by spammers and blacklisted by
spam filters.

Of course a message posting service would have a big hurdle too.  Initially,
the service would have to be well integrated with the existing Internet. 
Client queries would have to go to the major search engines, and there would
have to be websites set up as peers without the user having to install
software.  Most computers are not configured to run as servers (dynamic IP,
behind firewalls, slow upload, etc), so peers will probably need to allow
message passing over client HTTP (website polling), by email, and over
instant
messaging protocols.

File sharing networks became popular because they offered a service not
available elsewhere (free music).  But I don't intend for the message
posting
service to be used to evade copyright or censorship (although it probably
could be).  The protocol requires that the message's originator and
intermediate routers all be identified by a reply address and time stamp.
It
won't work otherwise.


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73293460-0b3fcd

[agi] Re: Hacker intelligence level

2007-12-06 Thread Mark Waser
With regard to your questions below, If you actually took the time to 
read
my prior responses, I think you will see I have substantially answered 
them.


No, Ed.  I don't see that at all.  All I see is you refusing to answer them 
even when I repeatedly ask them.  That's why I asked them again.


All I've seen is you ranting on about how insulted you are and *many* 
divergences from your initial statements.  Why don't you just answer the 
questions instead of whining about how unfairly you're being treated?


Hint:  Answers are most effective when you directly address the question 
*before* rampaging down apparently unrelated tangents.





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73301324-a28b1f


RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney
--- Ed Porter [EMAIL PROTECTED] wrote:

 Matt,
 
 Does a PC become more vulnerable to viruses, worms, Trojan horses, root
 kits, and other web attacks if it becomes part of a P2P network? And if so
 why and how much.  

It does if the P2P software has vulnerabilities, just like any other server or
client.  Worms would be especially dangerous because they could spread quickly
without user intervention, but slowly spreading viruses that are well hidden
can be dangerous too.  There is no foolproof defense, but it helps to keep the
protocol and software as simple as possible, to run the P2P software as a
nonprivileged process, to use open source code, and not to depend to any large
extent on a single source of software.

The protocol I have in mind is that a message contains searchable natural
language text, possibly some nonsearchable attached files, and a header with
the reply address and timestamp of the originator and any intermediate peers
through which the message was routed.  The protocol is not dangerous except
for the attached files, but these have to be included because they are a
useful service.  If you don't include them, people will figure out how to
embed arbitrary data in the message text, which would make the protocol more
dangerous because it wasn't planned for.
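
A minimal rendering of that message format (in Python; the field names are
my own guesses at what the description implies, not a spec):

# Hypothetical sketch of the message format described above: searchable
# text, optional opaque attachments, and a header recording the reply
# address and timestamp of the originator and every relaying peer.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Message:
    text: str                                                      # searchable natural language
    attachments: List[bytes] = field(default_factory=list)        # nonsearchable files
    route: List[Tuple[str, float]] = field(default_factory=list)  # (reply address, timestamp)

    def stamp(self, reply_address: str, timestamp: float) -> None:
        # Each peer that originates or relays the message appends itself.
        self.route.append((reply_address, timestamp))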

In theory, you could use the P2P network to spread information about malicious
peers and deliver software patches.  But I think this would introduce more
problems than it solves, because it would also introduce a mechanism for
spreading false information and patches containing trojans.  Peers should have
defenses that operate independently of the network, including disconnecting
themselves if they detect anomalies in their own behavior.

Of course the network is vulnerable even if the peers behave properly. 
Malicious peers could forge headers, for example, to hide the true source of
messages or to force replies to be directed to unintended targets.  Some
attacks could be very complex depending on the idiosyncratic behavior of
particular peers.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73321137-bba914


Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:
Richard,  The problem here is that I am not sure in what sense you are 
using the word "rational".  There are many usages.  One of those usages is very 
common in cog sci, and if I go with *that* usage your claim is 
completely wrong:  you can pick up an elementary cog psy textbook and 
find at least two chapters dedicated to a discussion about the many 
ways that humans are (according to the textbook) irrational.


This is a subject of huge importance, and it shouldn't be hard to reach 
a mutual understanding at least. "Rational" in general means that a 
system or agent follows a coherent and systematic set of steps in 
solving a problem.


The social sciences treat humans as rational agents maximising or 
boundedly satisficing their utilities in taking decisions - coherently, 
systematically finding solutions for their needs (there is much 
controversy about this - everyone knows it ain't right, but no 
substitute has been offered).


Cognitive science treats the human mind as basically a programmed 
computational machine much like actual programmed computers - and 
programs are normally conceived of as rational: coherent sets of 
steps, etc.


Both cog sci and sci psych generally endlessly highlight 
irrationalities in our decisionmaking/problemsolving processes - but 
these are only in *parts* of those processes, not the processes as a 
whole. They're like bugs in the program, but the program and mind as a 
whole are basically rational - following coherent sets of steps - it's 
just that the odd heuristic/ attitude/ assumption is wrong (or perhaps 
they have a neurocognitive deficit).


Mike,

What is happening here is that you have gotten an extremely 
oversimplified picture of what cognitive science is claiming.  This 
particular statement of yours focusses on the key misunderstanding:


 Cognitive science treats the human mind as basically a programmed
 computational machine much like actual programmed computers - and
 programs are normally conceived of as rational: coherent sets of
 steps, etc.

Programs IN GENERAL are not rational; it is just that the folks who 
tried to do AI and build models of mind in the very early years 
started out with simple programs that tried to do reasoning-like 
computations, and as a result you have seen this as everything that 
computers do.


This would be analogous to someone saying "Paint is used to build 
pictures that directly represent objects in the world."  This would not 
be true:  paint is completely neutral and can be used to either 
represent real things, or represent non-real things, or represent 
nothing at all.  In the same way computer programs are completely 
neutral and can be used to build systems that are either rational or 
irrational.  My system is not rational in that sense at all.


Just because some paintings represent things, that does not mean that 
paint only does that.  Just because some people tried to use computers 
to build rational-looking models of mind, that does not mean that 
computers in general do that.





Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73344123-2104e3


Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Derek Zahn wrote:

Richard Loosemore writes:

  Okay, let me try this.
 
  Imagine that we got a bunch of computers [...]
 
Thanks for taking the time to write that out.  I think it's the most 
understandable version of your argument that you have written yet.  Put 
it on the web somewhere and link to it whenever the issue comes up again 
in the future.


Thanks:  I will do that very soon.

If you are right, you may have to resort to "told you so" when other 
projects fail to produce the desired emergent intelligence.  No matter 
what you do, system builders can and do and will say that either their 
system is probably not heavily impacted by the issue, or that the issue 
itself is overstated for AGI development, and I doubt that most will be 
convinced otherwise.  By making such a clear exposition, at least the 
issue is out there for people to think about.


True.  I have to go further than that if I want to get more people 
involved in working on this project though.  People with money listen to 
the mainstream voice and want nothing to do with an idea so heavily 
criticised, no matter that the criticism comes from those with a vested 
interest in squashing it.



I have no position myself on whether Novamente (for example) is likely 
to be slain by its own complexity, but it is interesting to ponder.


I would rather it did not, and I hope Ben is right in being so 
optimistic.  I just know that it is a dangerous course to follow if you 
actually don't want to run the risk of another 50 years of running 
around in circles.



Richard Loosemore.



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73348560-68439c


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread James Ratcliff
Richard,
  What is your specific complaint about the 'viability of the framework'?


Ed,
  This line of data gathering is very interesting to me as well, though I 
quickly found that using all web sources devolved into insanity.
By using scanned text novels, I was able to extract lots of relational 
information on a range of topics. 
   With a well defined ontology system, and some human overview, a large amount 
of information can be extracted and many probabilities learned.
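
As a toy illustration of the kind of extraction described above (in Python;
the single pattern below stands in for a real ontology-guided extractor and
is purely my own example):

# Crude relational extraction from running text: one "X is/was a Y"
# pattern plus counts from which rough probabilities could be estimated.
import re
from collections import Counter

PATTERN = re.compile(r"\b([A-Z][a-z]+) (?:is|was) an? ([a-z]+)\b")

def extract(text):
    return PATTERN.findall(text)

corpus = "Holmes was a detective. Watson was a doctor. Holmes is a genius."
relations = Counter(extract(corpus))
print(relations)  # e.g. Counter({('Holmes', 'detective'): 1, ...})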

James


Ed Porter [EMAIL PROTECTED] wrote: 
RICHARD LOOSEMORE=
You are implicitly assuming a certain framework for solving the problem of 
representing knowledge ... and then all your discussion is about whether or not 
it is feasible to implement that framework (to overcome various issues to do 
with searches that have to be done within that framework).

But I am not challenging the implementation issues, I am challenging the 
viability of the framework itself.

JAMES--- What e


ED PORTER= So what is wrong with my framework?  What is wrong with a
system of recording patterns, and a method for developing compositions and
generalities from those patterns, in multiple hierarchical levels, and for
indicating the probabilities of certain patterns given certain other
patterns, etc.?  
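
(One way to read that framework as concrete data structures -- a purely
illustrative Python sketch, not Ed's actual design:)

# Illustrative only: patterns composed of lower-level patterns, with
# conditional probabilities linking them across hierarchical levels.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Pattern:
    name: str
    level: int                                              # hierarchy level
    parts: List["Pattern"] = field(default_factory=list)    # composition
    given: Dict[str, float] = field(default_factory=dict)   # P(this | other)

wheel = Pattern("wheel", level=0)
body = Pattern("car body", level=0)
car = Pattern("car", level=1, parts=[wheel, body], given={"road": 0.7})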

I know it doesn't genuflect before the altar of complexity.  But what is
wrong with the framework, other than the fact that it is at a high level and
thus does not explain every little detail of how to actually make an AGI
work?



RICHARD LOOSEMORE= These models you are talking about are trivial
exercises in public 
relations, designed to look really impressive, and filled with hype 
designed to attract funding, which actually accomplish very little.

Please, Ed, don't do this to me. Please don't try to imply that I need 
to open my mind any more.  The implication seems to be that I do not 
understand the issues in enough depth, and need to do some more work to 
understand your points.  I can assure you this is not the case.



ED PORTER= Shastri's Shruti is a major piece of work.  Although it is
a highly simplified system, for its degree of simplification it is amazingly
powerful.  It has been very helpful to my thinking about AGI.  Please give
me some excuse for calling it a "trivial exercise in public relations".  I
certainly have not published anything as important.  Have you?

The same goes for Mike Collins's parsers, which, at least several years ago,
I was told by multiple people at MIT were considered among the most accurate
NL parsers around.  Is that just a "trivial exercise in public relations"?  

With regard to Hecht-Nielsen's work, if it does half of what he says it does
it is pretty damned impressive.  It is also a work I think about often when
thinking how to deal with certain AI problems.  

Richard, if you insultingly dismiss such valid work as "trivial exercises in
public relations", it sure as hell seems as if either you are quite lacking
in certain important understandings -- or you have a closed mind -- or both.



Ed Porter

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;


___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73349390-542055

Re: Last word for the time being [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Benjamin Goertzel wrote:

Show me ONE other example of the reverse engineering of a system in
which the low level mechanisms show as many complexity-generating
characteristics as are found in the case of intelligent systems, and I
will gladly learn from the experience of the team that did the job.

I do not believe you can name a single one.


Well, I am not trying to reverse engineer the brain.  Any more than
the Wright Brothers were trying to reverse engineer  a bird -- though I
do imagine the latter will eventually be possible.


You know, I sympathize with you in a way.  You are trying to build an
AGI system using a methodology that you are completely committed to.
And here am I coming along like Bertrand Russell writing his letter to
Frege, just as poor Frege was about to publish his Grundgesetze der
Arithmetik, pointing out that everything in the new book was undermined
by a paradox.  How else can you respond except by denying the idea as
vigorously as possible?


It's a deeply flawed analogy.

Russell's paradox is a piece of math and once Frege
was confronted with it he got it.  The discussion between the two of them
did not devolve into long, rambling dialogues about the meanings of terms
and the uncertainties of various intuitions.


Believe me, I know:  which is why I envy Russell for the positive 
response he got from Frege.  You could help the discussion enormously by 
not pushing it in the direction of long rambling dialogues, and by not 
trying to argue about the meanings of terms and the uncertainties of 
various intuitions, which have nothing to do with the point that I made.


I for one hate that kind of pointless discussion, which is why I keep 
trying to make you address the key point.


Unfortunately, you never do address the key point:  in the above, you 
ignored it completely!  (Again!)


At least Frege did actually get it.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73346948-931def


RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Ed Porter
Matt,  
So if it is perceived as something that increases a machine's vulnerability,
it seems to me that would be one more reason for people to avoid using it.
Ed Porter

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 4:06 PM
To: agi@v2.listbox.com
Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS Re:
[agi] Funding AGI research])

--- Ed Porter [EMAIL PROTECTED] wrote:

 Matt,
 
 Does a PC become more vulnerable to viruses, worms, Trojan horses, root
 kits, and other web attacks if it becomes part of a P2P network? And if so
 why and how much.  

It does if the P2P software has vulnerabilities, just like any other server
or
client.  Worms would be especially dangerous because they could spread
quickly
without user intervention, but slowly spreading viruses that are well hidden
can be dangerous too.  There is no foolproof defense, but it helps to keep
the
protocol and software as simple as possible, to run the P2P software as a
nonprivileged process, use open source code, and not to depend to any large
extent on a single source of software.

The protocol I have in mind is that a message contain searchable natural
language text, possibly some nonsearchable attached files, and a header with
the reply address and timestamp of the originator and any intermediate peers
through which the message was routed.  The protocol is not dangerous except
for the attached files, but these have to be included because it is a useful
service.  If you don't include it, people will figure out how to embed
arbitrary data in the message text, which would make the protocol more
dangerous because it wasn't planned for.

In theory, you could use the P2P network to spread information about
malicious
peers and deliver software patches.  But I think this would introduce more
problems than it solves because it would also introduce a mechanism for
spreading false information and patches containing trojans.  Peers should
have
defenses that operate independently of the network, including disconnecting
itself if it detects anomalies in its own behavior.

Of course the network is vulnerable even if the peers behave properly. 
Malicious peers could forge headers, for example, to hide the true source of
messages or to force replies to be directed to unintended targets.  Some
attacks could be very complex depending on the idiosyncratic behavior of
particular peers.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73357661-483045

Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread William Pearson
On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
 Matt,
 So if it is perceived as something that increases a machine's vulnerability,
 it seems to me that would be one more reason for people to avoid using it.
 Ed Porter


Why are you having this discussion on an AGI list?

  Will Pearson

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73366106-264b25


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Vladimir Nesov
On Dec 7, 2007 1:20 AM, Ed Porter [EMAIL PROTECTED] wrote:

  This is something I have been telling people for years: that you should be
  able to extract a significant amount (but probably far from all) of world
 knowledge by scanning large corpora of text.  I would love to see how well
 it actually works for a given size of corpora, and for a given level of
 algorithmic sophistication.


But what's knowledge?


-- 
Vladimir Nesovmailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73373961-20dc54


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Ed Porter
James,

 

Do you have any description or examples of your results?  

 

This is something I have been telling people for years: that you should be
able to extract a significant amount (but probably far from all) of world
knowledge by scanning large corpora of text.  I would love to see how well
it actually works for a given size of corpora, and for a given level of
algorithmic sophistication.

 

Ed Porter

 

-Original Message-
From: James Ratcliff [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 4:51 PM
To: agi@v2.listbox.com
Subject: RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

 

Richard,
  What is your specific complaint about the 'viability of the framework'?


Ed,
  This line of data gathering is very interesting to me as well, though I
quickly found that using all web sources devolved into insanity.
By using scanned text novels, I was able to extract lots of relational
information on a range of topics. 
   With a well defined ontology system, and some human overview, a large
amount of information can be extracted and many probabilities learned.

James


Ed Porter [EMAIL PROTECTED] wrote:


RICHARD LOOSEMORE=
You are implicitly assuming a certain framework for solving the problem of
representing knowledge ... and then all your discussion is about whether or
not it is feasible to implement that framework (to overcome various issues
to do with searches that have to be done within that framework).

But I am not challenging the implementation issues, I am challenging the
viability of the framework itself.

JAMES--- What e


ED PORTER= So what is wrong with my framework? What is wrong with a
system of recording patterns, and a method for developing compositions and
generalities from those patterns, in multiple hierarchical levels, and for
indicating the probabilities of certain patterns given certain other
patterns, etc.? 

I know it doesn't genuflect before the altar of complexity. But what is
wrong with the framework, other than the fact that it is at a high level and
thus does not explain every little detail of how to actually make an AGI
work?



RICHARD LOOSEMORE= These models you are talking about are trivial
exercises in public 
relations, designed to look really impressive, and filled with hype 
designed to attract funding, which actually accomplish very little.

Please, Ed, don't do this to me. Please don't try to imply that I need 
to open my mind any more. The implication seems to be that I do not 
understand the issues in enough depth, and need to do some more work to 
understand your points. I can assure you this is not the case.



ED PORTER= Shastri's Shruti is a major piece of work. Although it is
a highly simplified system, for its degree of simplification it is amazingly
powerful. It has been very helpful to my thinking about AGI. Please give
me some excuse for calling it a "trivial exercise in public relations". I
certainly have not published anything as important. Have you?

The same goes for Mike Collins's parsers, which, at least several years ago,
I was told by multiple people at MIT were considered among the most accurate
NL parsers around. Is that just a "trivial exercise in public relations"? 

With regard to Hecht-Nielsen's work, if it does half of what he says it does
it is pretty damned impressive. It is also a work I think about often when
thinking how to deal with certain AI problems. 

Richard, if you insultingly dismiss such valid work as "trivial exercises in
public relations", it sure as hell seems as if either you are quite lacking
in certain important understandings -- or you have a closed mind -- or both.



Ed Porter

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;




___
James Ratcliff - http://falazar.com
Looking for something...

  


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?
http://v2.listbox.com/member/?;


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73371326-7ffb17

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Mike Tintner

Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/systembuilders who try to create programs (and 
what are the programs/systems) that are either irrational or 
non-rational (and described as such)?


When I have proposed (in different threads) that the mind is not rationally, 
algorithmically programmed, I have been met with uniform and often fierce 
resistance both on this and another AI forum. My argument re the philosophy 
of mind of cog sci & other sciences is of course not based on such 
reactions, but they do confirm my argument. And the position you at first 
appear to be adopting is unique both in my experience and my reading.


2) How is your system not rational? Does it not use algorithms?

And could you give a specific example or two of the kind of problem that it 
deals with - non-rationally?  (BTW I don't think I've seen any problem 
examples for your system anywhere, period  - for all I know, it could be 
designed to read children's stories, bomb Iraq, do syllogisms, work out your 
domestic budget, or work out the meaning of life - or play and develop in 
virtual worlds).



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73382084-a9590d


RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Ed Porter
It was part of a discussion of using a P2P network with OpenCog to develop
distributed AGI's.

-Original Message-
From: William Pearson [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 5:20 PM
To: agi@v2.listbox.com
Subject: Re: Distributed search (was RE: Hacker intelligence level [WAS Re:
[agi] Funding AGI research])

On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
 Matt,
 So if it is perceived as something that increases a machine's
vulnerability,
 it seems to me that would be one more reason for people to avoid using it.
 Ed Porter


Why are you having this discussion on an AGI list?

  Will Pearson

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73390249-cd905b

Re: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney

--- William Pearson [EMAIL PROTECTED] wrote:

 On 06/12/2007, Ed Porter [EMAIL PROTECTED] wrote:
  Matt,
  So if it is perceived as something that increases a machine's
 vulnerability,
  it seems to me that would be one more reason for people to avoid using it.
  Ed Porter
 
 
 Why are you having this discussion on an AGI list?

Because this is an AGI design.  The intelligence comes from having a lot of
specialized experts on narrow topics and a distributed infrastructure that
directs your queries to the right experts.  The P2P protocol is natural
language text.  I will write up the proposal so it will make more sense than
the current collection of posts.
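
A minimal sketch of the routing idea, assuming hypothetical peer names and a
crude word-overlap score (the actual proposal routes on natural language text,
so this is an illustration of the concept, not the design itself):

# Hypothetical sketch: route a natural-language query to the narrow expert
# peer whose advertised specialty shares the most words with the query.
experts = {
    "peer_a": "chess openings endgames tactics",
    "peer_b": "python programming debugging syntax",
    "peer_c": "cooking recipes baking bread",
}

def route(query):
    words = set(query.lower().split())
    # Score each peer by word overlap between the query and its specialty.
    scores = {peer: len(words & set(topics.split()))
              for peer, topics in experts.items()}
    return max(scores, key=scores.get)

print(route("how do I debug a python syntax error"))  # -> peer_b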


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73390737-69c951


RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Ed Porter
Are you saying the increase in vulnerability would be no more than that?

-Original Message-
From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 6:17 PM
To: agi@v2.listbox.com
Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS Re:
[agi] Funding AGI research])


--- Ed Porter [EMAIL PROTECTED] wrote:

 Matt,  
 So if it is perceived as something that increases a machine's
vulnerability,
 it seems to me that would be one more reason for people to avoid using it.
 Ed Porter

A web browser and email increase your computer's vulnerability, but that
doesn't stop people from using them.

 
 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 06, 2007 4:06 PM
 To: agi@v2.listbox.com
 Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS
Re:
 [agi] Funding AGI research])
 
 --- Ed Porter [EMAIL PROTECTED] wrote:
 
  Matt,
  
  Does a PC become more vulnerable to viruses, worms, Trojan horses, root
  kits, and other web attacks if it becomes part of a P2P network? And if
so
  why and how much.  
 
 It does if the P2P software has vulnerabilities, just like any other
server
 or
 client.  Worms would be especially dangerous because they could spread
 quickly
 without user intervention, but slowly spreading viruses that are well
hidden
 can be dangerous too.  There is no foolproof defense, but it helps to keep
 the
 protocol and software as simple as possible, to run the P2P software as a
 nonprivileged process, use open source code, and not to depend to any
large
 extent on a single source of software.
 
 The protocol I have in mind is that a message contain searchable natural
 language text, possibly some nonsearchable attached files, and a header
with
 the reply address and timestamp of the originator and any intermediate
peers
 through which the message was routed.  The protocol is not dangerous
except
 for the attached files, but these have to be included because it is a
useful
 service.  If you don't include it, people will figure out how to embed
 arbitrary data in the message text, which would make the protocol more
 dangerous because it wasn't planned for.
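
A minimal sketch of a message under the protocol just described, with
hypothetical field names (the description above specifies only searchable
text, optional attached files, and a header carrying the reply address,
timestamp, and routing trace):

# Hypothetical sketch of the message structure: searchable natural language
# text, optional nonsearchable attachments, and a header recording the
# originator and every intermediate peer the message passed through.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Header:
    reply_address: str                              # originator's address
    timestamp: float                                # originator's timestamp
    route: List[str] = field(default_factory=list)  # intermediate peers

@dataclass
class Message:
    header: Header
    text: str                                       # searchable text
    attachments: List[bytes] = field(default_factory=list)  # attached files

msg = Message(Header("peer_a", 1196985600.0), "what is the capital of France?")
msg.header.route.append("peer_b")  # a relaying peer appends itself en route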
 
 In theory, you could use the P2P network to spread information about
 malicious
 peers and deliver software patches.  But I think this would introduce more
 problems than it solves because it would also introduce a mechanism for
 spreading false information and patches containing trojans.  Peers should
 have
 defenses that operate independently of the network, including
disconnecting
 itself if it detects anomalies in its own behavior.
 
 Of course the network is vulnerable even if the peers behave properly. 
 Malicious peers could forge headers, for example, to hide the true source
of
 messages or to force replies to be directed to unintended targets.  Some
 attacks could be very complex depending on the idiosyncratic behavior of
 particular peers.
 
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73394329-17b2b6

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney

--- Ed Porter [EMAIL PROTECTED] wrote:

 Matt,  
 So if it is perceived as something that increases a machine's vulnerability,
 it seems to me that would be one more reason for people to avoid using it.
 Ed Porter

A web browser and email increase your computer's vulnerability, but that
doesn't stop people from using them.

 
 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 06, 2007 4:06 PM
 To: agi@v2.listbox.com
 Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS Re:
 [agi] Funding AGI research])
 
 --- Ed Porter [EMAIL PROTECTED] wrote:
 
  Matt,
  
  Does a PC become more vulnerable to viruses, worms, Trojan horses, root
  kits, and other web attacks if it becomes part of a P2P network? And if so
  why and how much.  
 
 It does if the P2P software has vulnerabilities, just like any other server
 or
 client.  Worms would be especially dangerous because they could spread
 quickly
 without user intervention, but slowly spreading viruses that are well hidden
 can be dangerous too.  There is no foolproof defense, but it helps to keep
 the
 protocol and software as simple as possible, to run the P2P software as a
 nonprivileged process, use open source code, and not to depend to any large
 extent on a single source of software.
 
 The protocol I have in mind is that a message contain searchable natural
 language text, possibly some nonsearchable attached files, and a header with
 the reply address and timestamp of the originator and any intermediate peers
 through which the message was routed.  The protocol is not dangerous except
 for the attached files, but these have to be included because it is a useful
 service.  If you don't include it, people will figure out how to embed
 arbitrary data in the message text, which would make the protocol more
 dangerous because it wasn't planned for.
 
 In theory, you could use the P2P network to spread information about
 malicious
 peers and deliver software patches.  But I think this would introduce more
 problems than it solves because it would also introduce a mechanism for
 spreading false information and patches containing trojans.  Peers should
 have
 defenses that operate independently of the network, including disconnecting
 itself if it detects anomalies in its own behavior.
 
 Of course the network is vulnerable even if the peers behave properly. 
 Malicious peers could forge headers, for example, to hide the true source of
 messages or to force replies to be directed to unintended targets.  Some
 attacks could be very complex depending on the idiosyncratic behavior of
 particular peers.
 
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73388768-0927ef


Re: [agi] None of you seem to be able ...

2007-12-06 Thread Scott Brown
Hi Richard,

On Dec 6, 2007 8:46 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Try to think of some other example where we have tried to build a system
 that behaves in a certain overall way, but we started out by using
 components that interacted in a completely funky way, and we succeeded
 in getting the thing working in the way we set out to.  In all the
 history of engineering there has never been such a thing.


I would argue that, just as we don't have to fully understand the complexity
posed by the interaction of subatomic particles to make predictions about
the way molecular systems behave, we don't have to fully understand the
complexity of interactions between neurons to make predictions about how
cognitive systems behave.  Many researchers are attempting to create
cognitive models that don't necessarily map directly back to low-level
neural activity in biological organisms.  Doesn't this approach mitigate
some of the risk posed by complexity in neural systems?

-- Scott

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73399933-fcedd2

Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Vladimir Nesov
Edward,

It's certainly a trick question, since if you don't define semantics
for this knowledge thing, it can turn out to be anything from the simplest
do-nothings to full-blown physically-infeasible superintelligences. So
your assertion doesn't cut against the viability of knowledge extraction for
various purposes, and without that it's not clear what you actually
mean.


On Dec 7, 2007 1:20 AM, Ed Porter [EMAIL PROTECTED] wrote:
 This is something I have been telling people for years.   That you should be
 able to extract a significant amount (but probably far from all) world
 knowledge by scanning large corpora of text.  I would love to see how well
 it actually works for a given size of corpora, and for a given level of
 algorithmic sophistication.


-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73400395-303d49


Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Vladimir Nesov
Yes, it's what triggered my nitpicking reflex; I am sorry about that.

Your comment sounds fine as a point about the viability of teaching an AGI
in a text-only mode without too much manual assistance, but the semantics
of what it was given is quite different.


On Dec 7, 2007 3:13 AM, Ed Porter [EMAIL PROTECTED] wrote:
 Vlad,

 My response was to the following message

 ==
 Ed,
   This line of data gathering is very interesting to me as well, though I
 found quickly that using all web sources quickly devolved into insanity.
 By using scanned text novels, I was able to extract lots of relational
 information on a range of topics.
With a well defined ontology system, and some human overview, a large
 amount of information can be extracted and many probabilities learned.

 James
 =
 so I was asking what sort of knowledge he had extracted as part of the lots
 of relational information on a range of topics.

 Ed Porter



 -Original Message-
 From: Vladimir Nesov [mailto:[EMAIL PROTECTED]
 Sent: Thursday, December 06, 2007 7:02 PM
 To: agi@v2.listbox.com
 Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]


 Edward,

 It's certainly a trick question, since if you don't define semantics
 for this knowledge thing, it can turn out to be anything from the simplest
 do-nothings to full-blown physically-infeasible superintelligences. So
 your assertion doesn't cut against the viability of knowledge extraction for
 various purposes, and without that it's not clear what you actually
 mean.


 On Dec 7, 2007 1:20 AM, Ed Porter [EMAIL PROTECTED] wrote:
  This is something I have been telling people for years.   That you should
 be
  able to extract a significant amount (but probably far from all) world
  knowledge by scanning large corpora of text.  I would love to see how well
  it actually works for a given size of corpora, and for a given level of
  algorithmic sophistication.


 --
 Vladimir Nesov mailto:[EMAIL PROTECTED]

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;

 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;



-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73408474-ba1629


RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

2007-12-06 Thread Ed Porter
Vlad,

My response was to the following message

==
Ed,
  This line of data gathering is very interesting to me as well, though I
found quickly that using all web sources quickly devolved into insanity.
By using scanned text novels, I was able to extract lots of relational
information on a range of topics. 
   With a well defined ontology system, and some human overview, a large
amount of information can be extracted and many probabilities learned.

James
=
so I was asking what sort of knowledge he had extracted as part of the lots
of relational information on a range of topics.

Ed Porter
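
A minimal sketch of the kind of relational extraction James describes,
assuming a deliberately naive subject-verb-object split over plain sentences
(real extractors built on full parsers are far more sophisticated):

# Hypothetical sketch: crude subject-verb-object triple extraction, the
# simplest form of "relational information" pulled from running text.
import re

def extract_triples(text):
    triples = []
    for sentence in re.split(r"[.!?]", text):
        words = sentence.strip().split()
        # Extremely naive: first word = subject, second = verb, rest = object.
        if len(words) >= 3:
            triples.append((words[0], words[1], " ".join(words[2:])))
    return triples

print(extract_triples("John opened the door. Mary saw the cat."))
# -> [('John', 'opened', 'the door'), ('Mary', 'saw', 'the cat')]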



-Original Message-
From: Vladimir Nesov [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 06, 2007 7:02 PM
To: agi@v2.listbox.com
Subject: Re: Hacker intelligence level [WAS Re: [agi] Funding AGI research]

Edward,

It's certainly a trick question, since if you don't define semantics
for this knowledge thing, it can turn out to be anything from the simplest
do-nothings to full-blown physically-infeasible superintelligences. So
your assertion doesn't cut against the viability of knowledge extraction for
various purposes, and without that it's not clear what you actually
mean.


On Dec 7, 2007 1:20 AM, Ed Porter [EMAIL PROTECTED] wrote:
 This is something I have been telling people for years.   That you should
be
 able to extract a significant amount (but probably far from all) world
 knowledge by scanning large corpora of text.  I would love to see how well
 it actually works for a given size of corpora, and for a given level of
 algorithmic sophistication.


-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73401551-1f6d58

[agi] Viability of the framework [WAS Re: Hacker intelligence level]

2007-12-06 Thread Richard Loosemore

James Ratcliff wrote:

Richard,
  What is your specific complaint about the 'viability of the framework'?


I was referring mainly to my complex systems problem (currently being 
hashed to death on a parallel thread, and many times before).



Richard Loosemore



Ed,
  This line of data gathering is very interesting to me as well, though 
I found quickly that using all web sources quickly devolved into insanity.
By using scanned text novels, I was able to extract lots of relational 
information on a range of topics.
   With a well defined ontology system, and some human overview, a large 
amount of information can be extracted and many probabilities learned.


James


*/Ed Porter [EMAIL PROTECTED]/* wrote:


 RICHARD LOOSEMORE=
You are implicitly assuming a certain framework for solving the
problem of representing knowledge ... and then all your discussion
is about whether or not it is feasible to implement that framework
(to overcome various issues to do with searches that have to be done
within that framework).

But I am not challenging the implementation issues, I am challenging
the viability of the framework itself.


[snipped]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73423342-cd44d9


Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:

Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/ systembuilders who try to create programs 
(and what are the programs/ systems) that are either irrational or 
non-rational  (and described  as such)?


I'm a little partied out right now, so all I have time for is to 
suggest:  Hofstadter's group builds all kinds of programs that do things 
without logic.  Phil Johnson-Laird (and students) used to try to model 
reasoning ability using systems that did not do logic.  All kinds of 
language processing people use various kinds of neural nets:  see my 
earlier research papers with Gordon Brown et al, as well as folks like 
Mark Seidenberg, Kim Plunkett etc.  Marslen-Wilson and Tyler used 
something called a Cohort Model to describe some aspects of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  none of these people use 
systems that do any kind of logical processing.  I could go on 
indefinitely.  There are probably hundreds of them.  They do not try to 
build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform and 
often fierce resistance both on this and another AI forum. 


Hey, join the club!  You have read my little brouhaha with Yudkowsky 
last year I presume?  A lot of AI people have their heads up their 
asses, so yes, they believe that rationality is God.


It does depend how you put it though:  sometimes you use rationality to 
not mean what they mean, so that might explain the ferocity.



My argument
re the philosophy of mind of cog sci & other sciences is of course not 
based on such reactions, but they do confirm my argument. And the 
position you at first appear to be adopting is unique both in my 
experience and my reading.


2) How is your system not rational? Does it not use algorithms?


It uses dynamic relaxation in a generalized neural net.  Too much to 
explain in a hurry.



And could you give a specific example or two of the kind of problem that 
it deals with - non-rationally?  (BTW I don't think I've seen any 
problem examples for your system anywhere, period  - for all I know, it 
could be designed to read children's stories, bomb Iraq, do syllogisms, 
work out your domestic budget, or work out the meaning of life - or play 
and develop in virtual worlds).


I am playing this close, for the time being, but I have released a small 
amount of it in a forthcoming neuroscience paper.  I'll send it to you 
tomorrow if you like, but it does not go into a lot of detail.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73425500-35e13a


Re: [agi] None of you seem to be able ...

2007-12-06 Thread Richard Loosemore

Scott Brown wrote:

Hi Richard,

On Dec 6, 2007 8:46 AM, Richard Loosemore [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


Try to think of some other example where we have tried to build a system
that behaves in a certain overall way, but we started out by using
components that interacted in a completely funky way, and we succeeded
in getting the thing working in the way we set out to.  In all the
history of engineering there has never been such a thing.


I would argue that, just as we don't have to fully understand the 
complexity posed by the interaction of subatomic particles to make 
predictions about the way molecular systems behave, we don't have to 
fully understand the complexity of interactions between neurons to make 
predictions about how cognitive systems behave.  Many researchers are 
attempting to create cognitive models that don't necessarily map 
directly back to low-level neural activity in biological organisms.  
Doesn't this approach mitigate some of the risk posed by complexity in 
neural systems?


I completely agree that the neural-level stuff does not have to impact 
cognitive-level stuff:  that is why I work at the cognitive level and do 
not bother too much with exact neural architecture.


The only problem with your statement was the last sentence:  when I say 
that there is a complex systems problem, I only mean complexity at the 
cognitive level, not complexity at the neural level.


I am not too worried about any complexity that might exist down at the 
neural level because as far as I can tell that level is not *dominated* 
by complex effects.  At the cognitive level, on the other hand, there is 
a strong possibility that, when the mind builds a model of 
some situation, it gets a large number of concepts to come together and 
try to relax into a stable representation, and that relaxation process 
is potentially sensitive to complex effects (some small parameter in the 
design of the concepts could play a crucial role in ensuring that the 
relaxation process goes properly, for example).
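
A minimal sketch of such a relaxation process, assuming a generic
Hopfield-style settling network (an illustration of the general mechanism
only, not the actual design being discussed):

# Hypothetical sketch: Hopfield-style dynamic relaxation. Units repeatedly
# update toward consistency with their neighbors until the state stabilizes.
import random

def relax(weights, state, steps=100):
    n = len(state)
    for _ in range(steps):
        i = random.randrange(n)  # pick a random unit
        # Net input to unit i from all other units through symmetric weights.
        net = sum(weights[i][j] * state[j] for j in range(n) if j != i)
        state[i] = 1 if net >= 0 else -1  # settle toward local consistency
    return state

# Two mutually supporting units and a third opposed to both.
w = [[ 0,  1, -1],
     [ 1,  0, -1],
     [-1, -1,  0]]
print(relax(w, [1, -1, 1]))  # typically settles to [1, 1, -1] or [-1, -1, 1]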


I am being rather terse here due to lack of time, but that is the short 
answer.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73430502-7926e9


Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Mike Tintner
Well, I'm not sure if not doing logic necessarily means a system is 
irrational, i.e. if rationality equates to logic.  Any system consistently 
followed can be classified as rational. If, for example, a program consistently 
does Freudian free association and produces nothing but a chain of 
associations with some connection:


bird - - feathers - four..tops 

or on the contrary, a 'nonsense' chain where there is NO connection..

logic.. sex... ralph .. essence... pi... Loosemore...

then it is rational - it consistently follows a system with a set of rules. 
And the rules could, for argument's sake, specify that every step is 
illogical - as in breaking established rules of logic - or that steps are 
alternately logical and illogical.  That too would be rational. Neural nets 
from the little I know are also rational inasmuch as they follow rules. 
Ditto Hofstadter & Johnson-Laird, who from the little I know also seem 
rational - Johnson-Laird's jazz improvisation program from my cursory 
reading seemed rational and not truly creative.
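
A minimal sketch of such a rule-bound association chain, assuming a
hypothetical association table (the point being that even apparently random
or nonsense output here is produced by a fixed, consistently followed rule):

# Hypothetical sketch: a free-association chain generator. Whether the
# output reads as sensible or as nonsense, the procedure is strictly
# rule-bound, which is the sense of "rational" at issue.
import random

associations = {
    "bird": ["feathers", "nest", "sky"],
    "feathers": ["pillow", "four"],
    "four": ["tops", "square"],
}

def chain(word, length=4):
    out = [word]
    for _ in range(length - 1):
        word = random.choice(associations.get(word, ["..."]))  # fixed rule
        out.append(word)
    return " - ".join(out)

print(chain("bird"))  # e.g. "bird - feathers - four - tops"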


I do not know enough to pass judgment on your system, but you do strike me 
as a rational kind of guy (although probably philosophically much closer to 
me than most here, as you seem to indicate).  Your attitude to emotions 
seems to me rational, and your belief that you can produce an AGI that will 
almost definitely be cooperative also bespeaks rationality.


In the final analysis, irrationality = creativity (although I'm using the 
word with a small c, rather than the social kind, where someone produces a 
new idea that no one in society has had or published before). If a system 
can change its approach and rules of reasoning at literally any step of 
problem-solving, then it is truly crazy/ irrational (think of a crazy 
path). And it will be capable of producing all the human irrationalities 
that I listed previously - like not even defining or answering the problem. 
It will by the same token have the capacity to be truly creative, because it 
will ipso facto be capable of lateral thinking at any step of 
problem-solving. Is your system capable of that? Or anything close? Somehow 
I doubt it, or you'd already be claiming the solution to both AGI and 
computational creativity.


But yes, please do send me your paper.

P.S. I hope you won't - I actually don't think you will - get all 
pedantic on me like so many AI-ers and say ah but we already have programs 
that can modify their rules. Yes, but they do that according to metarules - 
they are still basically rulebound. A crazy/ creative program is 
rulebreaking (and rulecreating) - can break ALL the rules, incl. metarules. 
Rulebound/rulebreaking is one of the most crucial differences between narrow 
AI/AGI.



Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/ systembuilders who try to create programs 
(and what are the programs/ systems) that are either irrational or 
non-rational  (and described  as such)?


I'm a little partied out right now, so all I have time for is to suggest: 
Hofstadter's group builds all kinds of programs that do things without 
logic.  Phil Johnson-Laird (and students) used to try to model reasoning 
ability using systems that did not do logic.  All kinds of language 
processing people use various kinds of neural nets:  see my earlier 
research papers with Gordon Brown et al, as well as folks like Mark 
Seidenberg, Kim Plunkett etc.  Marslen-Wilson and Tyler used something 
called a Cohort Model to describe some aspects of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  none of these people use 
systems that do any kind of logical processing.  I could go on 
indefinitely.  There are probably hundreds of them.  They do not try to 
build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform and 
often fierce resistance both on this and another AI forum.


Hey, join the club!  You have read my little brouhaha with Yudkowsky last 
year I presume?  A lot of AI people have their heads up their asses, so 
yes, they believe that rationality is God.


It does depend how you put it though:  sometimes you use rationality to 
not mean what they mean, so that might explain the ferocity.



My argument
re the philosophy of mind of cog sci & other sciences is of course not 
based on such reactions, but they do confirm my argument. And the 
position you at first appear to be adopting is unique both in my 
experience and my reading.


2) How is your system not rational? Does it not use algorithms?


It uses dynamic relaxation in a generalized neural net.

RE: Distributed search (was RE: Hacker intelligence level [WAS Re: [agi] Funding AGI research])

2007-12-06 Thread Matt Mahoney

--- Ed Porter [EMAIL PROTECTED] wrote:

 Are you saying the increase in vulnerability would be no more than that?

Yes, at least in the short term, if we are careful with the design.  But then
again, you can't predict what AGI will do, or else it wouldn't be intelligent.
I can't say for certain that long term (2040s?) it wouldn't launch a
singularity, or even that it wouldn't create an intelligent worm that would
eat the Internet.  I don't think anyone is smart enough to get it right, but
it is going to happen in one form or another.

I wrote up a quick description of my AGI proposal at
http://www.mattmahoney.net/agi.html
basically summarizing what I posted over the last several emails, including
various attack scenarios.  I'm sure I didn't think of everything.  It is kind
of sketchy because it's not an area I am actively pursuing.  It should be a
useful service at least in the short term before it destroys us.


 
 -Original Message-
 From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
 Sent: Thursday, December 06, 2007 6:17 PM
 To: agi@v2.listbox.com
 Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS Re:
 [agi] Funding AGI research])
 
 
 --- Ed Porter [EMAIL PROTECTED] wrote:
 
  Matt,  
  So if it is perceived as something that increases a machine's
 vulnerability,
  it seems to me that would be one more reason for people to avoid using it.
  Ed Porter
 
 A web browser and email increase your computer's vulnerability, but that
 doesn't stop people from using them.
 
  
  -Original Message-
  From: Matt Mahoney [mailto:[EMAIL PROTECTED] 
  Sent: Thursday, December 06, 2007 4:06 PM
  To: agi@v2.listbox.com
  Subject: RE: Distributed search (was RE: Hacker intelligence level [WAS
 Re:
  [agi] Funding AGI research])
  
  --- Ed Porter [EMAIL PROTECTED] wrote:
  
   Matt,
   
   Does a PC become more vulnerable to viruses, worms, Trojan horses, root
   kits, and other web attacks if it becomes part of a P2P network? And if
 so
   why and how much.  
  
  It does if the P2P software has vulnerabilities, just like any other
 server
  or
  client.  Worms would be especially dangerous because they could spread
  quickly
  without user intervention, but slowly spreading viruses that are well
 hidden
  can be dangerous too.  There is no foolproof defense, but it helps to keep
  the
  protocol and software as simple as possible, to run the P2P software as a
  nonprivileged process, use open source code, and not to depend to any
 large
  extent on a single source of software.
  
  The protocol I have in mind is that a message contain searchable natural
  language text, possibly some nonsearchable attached files, and a header
 with
  the reply address and timestamp of the originator and any intermediate
 peers
  through which the message was routed.  The protocol is not dangerous
 except
  for the attached files, but these have to be included because it is a
 useful
  service.  If you don't include it, people will figure out how to embed
  arbitrary data in the message text, which would make the protocol more
  dangerous because it wasn't planned for.
  
  In theory, you could use the P2P network to spread information about
  malicious
  peers and deliver software patches.  But I think this would introduce more
  problems than it solves because it would also introduce a mechanism for
  spreading false information and patches containing trojans.  Peers should
  have
  defenses that operate independently of the network, including
 disconnecting
  itself if it detects anomalies in its own behavior.
  
  Of course the network is vulnerable even if the peers behave properly. 
  Malicious peers could forge headers, for example, to hide the true source
 of
  messages or to force replies to be directed to unintended targets.  Some
  attacks could be very complex depending on the idiosyncratic behavior of
  particular peers.
  
  
  
  -- Matt Mahoney, [EMAIL PROTECTED]
  
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
  
  -
  This list is sponsored by AGIRI: http://www.agiri.org/email
  To unsubscribe or change your options, please go to:
  http://v2.listbox.com/member/?;
 
 
 -- Matt Mahoney, [EMAIL PROTECTED]
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;
 
 -
 This list is sponsored by AGIRI: http://www.agiri.org/email
 To unsubscribe or change your options, please go to:
 http://v2.listbox.com/member/?;


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73450735-649fdc


RE: [agi] None of you seem to be able ...

2007-12-06 Thread Jean-Paul Van Belle
Interesting - after drafting three replies I have come to realize that it is 
possible to hold two contradictory views and live or even run with it. Looking 
at their writings, both Ben & Richard know damn well what complexity means and 
entails for AGI. 
Intuitively, I side with Richard's stance that, if the current state of 'the 
new kind of science' cannot even understand simple chaotic systems - the 
toy-problems of three-variable quadratic differential equations and 2-D Alife - 
then what hope is there to find a theoretical solution for a really complex 
system. The way forward is by experimental exploration of part of the solution 
space. I don't think we'll find general complexity theories any time soon.
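
A minimal sketch of such a toy system, assuming the classic Lorenz equations
(a three-variable ODE with quadratic terms) integrated by forward Euler with
the standard chaotic parameter values:

# Hypothetical sketch: the Lorenz system. Two almost identical starting
# states diverge wildly -- the signature difficulty of chaotic systems.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)  # perturbed in the sixth decimal place
for _ in range(5000):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
print(a)
print(b)  # long since diverged from the first trajectory
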
On the other hand, practically I think that it *is* (or may be) possible to 
build an AGI system up carefully and systematically from the ground up, i.e. 
inspired by a sound (or at least plausible) theoretical framework or by 
modelling it on real-world complex systems that seem to work (because that's 
the way I proceed too), fine-tuning the system parameters and managing emerging 
complexity as we go along and move up the complexity scale. (Just like 
engineers can build pretty much anything without having a GUT.)
Both paradigmatic approaches have their merits and are in fact complementary: 
explore, simulate, genetically evolve etc. from the top down to get a bird's 
eye view of the problem space, versus incrementally build up from the bottom up 
following a carefully charted path/ridge in between the chasms of the unknown, 
based on a strong conceptual theoretical foundation. It is done all the time in 
other sciences - even maths!
Interestingly, I started out wanting to use a simulation tool to check the 
behaviour (read: fine-tune the parameters) of my architectural designs but then 
realised that the simulation of a complex system is actually a complex system 
itself and it'd be easier and more efficient to prototype than to simulate. But 
that's just because of the nature of my architecture. Assuming Ben's theories 
hold, he is adopting the right approach. Given Richard's assumptions or 
intuitions, he is following the right path too. I doubt that they will converge 
on a common solution but the space of conceivably possible AGI architectures is 
IMHO extremely large. In fact, my architectural approach is a bit of a poor 
cousin/hybrid: having neither Richard's engineering skills nor Ben's 
mathematical understanding, I am hoping to take a scruffy alternative path :)
-- 

Research Associate: CITANDA
Post-Graduate Section Head 
Department of Information Systems
Phone: (+27)-(0)21-6504256
Fax: (+27)-(0)21-6502280
Office: Leslie Commerce 4.21


 On 2007/12/07 at 03:06, in message [EMAIL PROTECTED],
 Conclusion:  there is a danger that the complexity that even Ben agrees
 must be present in AGI systems will have a significant impact on our
 efforts to build them.  But the only response to this danger at the
 moment is the bare statement made by people like Ben that I do not
 think that the danger is significant.  No reason given, no explicit
 attack on any component of the argument I have given, only a statement
 of intuition, even though I have argued that intuition cannot in
 principle be a trustworthy guide here.
 But Richard, your argument ALSO depends on intuitions ...
 I agree that AGI systems contain a lot of complexity in the dynamical-
 systems-theory sense.
 And I agree that tuning all the parameters of an AGI system externally
 is likely to be intractable, due to this complexity.
 However, part of the key to intelligence is **self-tuning**.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73455082-621f89

Re[2]: [agi] How to represent things problem

2007-12-06 Thread Dennis Gorelik
Richard,

 It's a Neural Network -- a set of nodes (concepts), where every node can be
 connected with a set of other nodes. Every connection has its own
 weight.
 
 Some nodes are connected with external devices.
 For example, one node can be connected with one word in a text
 dictionary (that is an external device).

 you need special extra mechanisms to handle the difference between
 generic nodes and instance nodes (in a basic neural net there is no
 distinction between these two, so the system cannot represent even the most 
 basic of situations),

1) Are you talking about problems of a basic neural net or problems of
the Neural Net that I described?

2) The human brain is more complex than a basic neural net and probably
works similarly to what I described.

3) Extra mechanisms would add additional features to instance nodes.
(I prefer to call such nodes peripheral or surface.)
Surface nodes have the same abilities as regular nodes, but they are
also heavily affected by a special device.

4) Are you saying that developing such a special device is a problem?


 and you need extra mechanisms to handle the dynamic creation/assignment of
 new nodes, because new things are being experienced all the time.

That's correct. Such a mechanism that creates new nodes is required.
Is that a problem?


 These extra mechanisms are so important that is arguable that the
 behavior of the system is dominated by *them*, not by the mere fact that
 the design started out as a neural net.

It doesn't matter what part of the system dominates. If we are able to solve
the How to represent things problem with such an architecture -- it's good
enough, right?


 Having said that, I believe in neural nets as a good conceptual starting
 point.


Are you saying that what I described is not exactly a Neural Net?
What would you call it then?
Blend Neural Net?


 It is just that you need to figure out all that machinery - and no one
 has, so there is a representation problem in my previous list of problems.

We can talk about the machinery in all its details.
I agree that the system would be complex, but it would have
manageable complexity.
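
A minimal sketch of the machinery described above, with hypothetical class
names: weighted concept nodes, surface nodes bound to an external device
(here, dictionary words), and a mechanism that creates new nodes dynamically:

# Hypothetical sketch of the described design: concept nodes with weighted
# links, surface nodes driven by an external device, dynamic node creation.
class Node:
    def __init__(self, name):
        self.name = name
        self.links = {}  # neighbor Node -> connection weight

    def connect(self, other, weight):
        self.links[other] = weight
        other.links[self] = weight

class SurfaceNode(Node):
    """Same abilities as a regular node, but also bound to an external
    device -- here, a word in a text dictionary."""
    def __init__(self, name, word):
        super().__init__(name)
        self.word = word  # the external-device binding

class Network:
    def __init__(self):
        self.nodes = []

    def add_node(self, name, word=None):
        # The extra mechanism: a new (possibly surface) node is created
        # whenever something new is experienced.
        node = SurfaceNode(name, word) if word else Node(name)
        self.nodes.append(node)
        return node

net = Network()
cat = net.add_node("cat-concept", word="cat")  # surface node
animal = net.add_node("animal-concept")        # regular node
cat.connect(animal, 0.8)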



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=73456976-acd60e