RE: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Ed Porter

===Colin said==

The tacit assumption is that the models thus implemented on a computer
will/can 'behave' indistinguishably from the real thing, when what you are
observing is a model of the real thing, not the real thing.

===ED's reply===

I was making no assumption that the model would behave indistinguishably
from the real thing, but instead only that there were meaningful --- and,
from a cross-fertilization standpoint, informative --- levels of description
at which the computer model and the corresponding brain behavior were
similar.

 


===Colin said==

There's a boundary to cross - when you claim to have access to human level
intellect - then you are demanding an equivalence with a real human, not a
model of a human. 

===ED's reply===

When I, and presumably many other AGIers, say human-level AGI, we do not
mean an exact functional replica of the human brain or mind.  Rather we mean
an AGI that can do things like speak and understand natural language, see
and understand the meaning of its visual surroundings, reason from the rough
equivalent of human-level world knowledge, have common sense, do creative
problem solving, and other mental tasks --- substantially as well as most
people.  Its methods of computation do not have to be exactly like those
used in the mind; the major issue is that its competencies be at least
roughly as good over a range of talents. 

 

===Colin said==

I don't think there's any real issue here. Mostly semantics being mixed a
bit.

Gotta get back to xmas! Yuletide stuff to you. 

===ED's reply===

Agreed.

 

Ed Porter

 

 

-Original Message-
From: Colin Hales [mailto:c.ha...@pgrad.unimelb.edu.au] 
Sent: Tuesday, December 23, 2008 7:55 PM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
machine that can learn from experience

 

Ed,
Comments interspersed below:

Ed Porter wrote: 

Colin,

 

Here are my comments re  the following parts of your below post:

 

===Colin said==

I merely point out that there are fundamental limits as to how computer
science (CS) can inform/validate basic/physical science - (in an AGI
context, brain science). Take the Baars/Franklin IDA project. It predicts
nothing neuroscience can poke a stick at.

 

===ED's reply===

Different AGI models can have different degrees of correspondence to, and
different explanatory relevance to, what is believed to take place in the
brain.  For example, Thomas Serre's PhD thesis, Learning a Dictionary of
Shape-Components in Visual Cortex: Comparison with Neurons, Humans and
Machines, at
http://cbcl.mit.edu/projects/cbcl/publications/ps/MIT-CSAIL-TR-2006-028.pdf,
is a computer simulation which is rather similar to my concept of how a
Novamente-like AGI could perform certain tasks in visual perception, and yet
it is designed to model the human visual system to a considerable degree.
It shows that a certain model of how Serre and Poggio think a certain aspect
of the human brain works does in fact work surprisingly well when simulated
in a computer.

 

A surprisingly large number of brain science papers are based on computer
simulations, many of which are substantially simplified models, but they do
give neuroscientists a way to poke a stick at various theories they might
have for how the brain operates at various levels of organization.  Some of
these papers are directly relevant to AGI.  And some AGI papers are directly
relevant to providing answers to certain brain science questions.

You are quite right! Realistic models can be quite informative and feed back
- suggesting new empirical approaches. There can be great
cross-fertilisation.

However, the point is irrelevant to the discussion at hand.

The phrase "does in fact work surprisingly well when simulated in a
computer" illustrates the confusion. 'Work'? According to whom?
'Surprisingly well'? By what criteria? The tacit assumption is that the
models thus implemented on a computer will/can 'behave' indistinguishably
from the real thing, when what you are observing is a model of the real
thing, not the real thing.

HERE: If you are targeting AGI with a benchmark/target of human intellect or
problem-solving skills, then the claim made on any/all models is that models
can attain that goal. A computer implements a model. To make a claim that a
model completely captures the reality upon which it was based, you need to
have a solid theory of the relationships between models and reality that is
not wishful thinking or assumption, but solid science. Here's where you run
into the problematic issue that basic physical sciences have with models.  

There's a boundary to cross - when you claim to have access to human level
intellect - then you are demanding an equivalence with a real human, not a
model of a human. 

 

===Colin said==

I agree with your :

At the other end of things, 

Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a machine that can learn from experience

2008-12-24 Thread Richard Loosemore



Why is it that people who repeatedly resort to personal abuse like this 
are still allowed to participate in the discussion on the AGI list?





Richard Loosemore





Ed Porter wrote:

Richard,

 

You originally totally trashed Tononi's paper, including its central 
core, by saying:

It is, for want of a better word, nonsense.  And since people take me to
task for being so dismissive, let me add that it is the central thesis
of the paper that is nonsense:  if you ask yourself very carefully
what it is he is claiming, you can easily come up with counterexamples
that make a mockery of his conclusion.

 


When asked to support your statement that

you can easily come up with counterexamples that make a mockery of his 
conclusion 

you refused.  You did so by grossly mis-describing Tononi’s paper (for 
example it does not include “pages of …math”, of any sort, and 
particularly not “pages of irrelevant math”) and implying its 
mis-described faults so offended your delicate sense of AGI propriety 
that re-reading it enough to find support for your extremely critical 
(and perhaps totally unfair) condemnation would be either too much work 
or too emotionally painful.


 

You said the counterexamples to the core of this paper were easy to come 
up with, but you can’t seem to come up with any.


 


Such stunts have the appearance of being those of a pompous windbag.

 


Ed Porter

 

P.S. Your postscript is not sufficiently clear to provide much support 
for your position.


P.P.S. You below  

 

 


-Original Message-
From: Richard Loosemore [mailto:r...@lightlink.com]
Sent: Tuesday, December 23, 2008 9:53 AM
To: agi@v2.listbox.com
Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a 
machine that can learn from experience


 


Ed Porter wrote:

 Richard,

 Please describe some of the counterexamples, that you can easily come up
 with, that make a mockery of Tononi's conclusion.

 Ed Porter


Alas, I will have to disappoint.  I put a lot of effort into
understanding his paper first time around, but the sheer agony of
reading (/listening to) his confused, shambling train of thought, the
non sequiturs, and the pages of irrelevant math --- that I do not need
to experience a second time.  All of my original effort only resulted in
the discovery that I had wasted my time, so I have no interest in
wasting more of my time.

With other papers that contain more coherent substance, but perhaps what
looks like an error, I would make the effort.  But not this one.

It will have to be left as an exercise for the reader, I'm afraid.


Richard Loosemore

 

 


P.S.   A hint.  All I remember was that he started talking about
multiple regions (columns?) of the brain exchanging information with one
another in a particular way, and then he asserted a conclusion which, on
quick reflection, I knew would not be true of a system resembling the
distributed one that I described in my consciousness paper (the
molecular model).  Knowing that his conclusion was flat-out untrue for
that one case, and for a whole class of similar systems, his argument
was toast.

 -Original Message-
 From: Richard Loosemore [mailto:r...@lightlink.com]
 Sent: Monday, December 22, 2008 8:54 AM
 To: agi@v2.listbox.com
 Subject: Re: [agi] SyNAPSE might not be a joke ---- was ---- Building a
 machine that can learn from experience

 Ed Porter wrote:
 I don't think this AGI list should be so quick to dismiss a $4.9 million
 dollar grant to create an AGI.  It will not necessarily be vaporware.
 I think we should view it as a good sign.

 Even if it is for a project that runs the risk, like many DARPA projects
 (like most scientific funding in general) of not necessarily placing its
 money where it might do the most good --- it is likely to at least
 produce some interesting results --- and it just might make some very
 important advances in our field.

 The article from http://www.physorg.com/news148754667.html said:

 ...a $4.9 million grant...for the first phase of DARPA's Systems of
 Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) project.

 Tononi and scientists from Columbia University and IBM will work on the
 software for the thinking computer, while nanotechnology and
 supercomputing experts from Cornell, Stanford and the University of
 California-Merced will create the hardware. Dharmendra Modha of IBM is
 the principal investigator.

 The idea is to create a computer capable of sorting through multiple
 streams of changing data, to look for patterns and make logical
 decisions.

 There's another requirement: The finished cognitive computer should be
 as small as the brain of a small mammal and use as little power as a
 100-watt light bulb. It's a major 

[agi] Levels of Self-Awareness?

2008-12-24 Thread Steve Richfield
This is more of a question than a statement.

There appear to be several levels of self-awareness, e.g.

1.  Knowing that you are an individual in a group, have a name, etc. Even
kittens and puppies quickly learn their names, know to watch others when
their names are called, etc.

2.  Understanding that they have some (limited) ability to modify their own
behavior, reactions, etc., so that you can explain to them how something
they did was inappropriate, and they can then modify their behavior. Can
filter what they say, etc. I know one lady with two Master's degrees who
apparently has NOT reached this level.

3.  Understanding that the process of thinking itself is a skill that no
one has completely mastered, that there are advanced techniques to be
learned, that there are probably as-yet undiscovered techniques for really
advanced capabilities, etc. Further, being capable of internalizing new
thinking techniques. There appear to be several people on this list who
have apparently NOT reached this level.

4.  Any theories as to what the next level might be?

Note that the above relates to the soul, especially in that an individual at
a higher level might look upon individuals at a lower level as soulless
creatures. Given that various people span several levels, wouldn't this
consign much of the human race to being soulless creatures?

Clearly, it would seem that no AGI researcher can program a level of
self-awareness that they themselves have not reached, tried and failed to
reach, etc. Hence, this may impose a cap on a future AGI's potential
abilities, especially if the gold is in #4, #5, etc.

Has someone already looked into this?

Steve Richfield





Re: [agi] Levels of Self-Awareness?

2008-12-24 Thread Philip Hunt
2008/12/24 Steve Richfield steve.richfi...@gmail.com:

 Clearly, it would seem that no AGI researcher can program a level of
 self-awareness that they themselves have not reached, tried and failed to
 reach, etc.

This is not at all clear to me. It is certainly possible for
programmers to program computers to do tasks better than they can (e.g.
play chess), and I see no reason why it shouldn't be possible for
self-awareness. Indeed it would be rather trivial to give an AGI access to
its source code.

-- 
Philip Hunt, cabala...@googlemail.com
Please avoid sending me Word or PowerPoint attachments.
See http://www.gnu.org/philosophy/no-word-attachments.html




Re: [agi] Levels of Self-Awareness?

2008-12-24 Thread Steve Richfield
Philip,

On 12/24/08, Philip Hunt cabala...@googlemail.com wrote:

 2008/12/24 Steve Richfield steve.richfi...@gmail.com:
 
  Clearly, it would seem that no AGI researcher can program a level of
  self-awareness that they themselves have not reached, tried and failed to
  reach, etc.

 This is not at all clear to me. It is certainly possible for
 programmers to program computers to do tasks better than they can (e.g.
 play chess)


Yes, but these programmers already know how to play chess. They (probably)
can't program a game in which they themselves don't have any skill at all.

In the case of higher forms of self-awareness, programmers in effect don't
even know the rules of the game to be programmed, yet the game will have
a vast overall effect on everything the AGI thinks.

To illustrate, much human thought goes into dispute resolution - a field
rich with advanced concepts that are generally unknown to the general
population and AGI programmers. Since this has so much to do with the
subtleties of common errors in human thinking, there is no practical way for
an AGI to figure this out for itself short of participating in thousands of
disputes - something humans would simply not tolerate.

Once these concepts are understood, the very act of thinking is changed
forever. Someone who is highly trained and experienced in dispute resolution
thinks quite differently than you probably do, and probably regards your
thinking as immature and generally low-level. In short, their idea of
self-awareness is quite different than yours.

Regardless of tools, I don't see how such a thing could be programmed except
by someone who is already able to think at that level.

Then, how about the NEXT level, whatever that might be?


 and I see no reason why it shouldn't be possible for self
 awareness.


My point is that lower-level self-awareness is MUCH simpler to contemplate
than is higher-level, and further, that different people (and AGI
researchers) function at various levels.

Indeed it would be rather trivial to give an AGI access to
 its source code.


Why should it be any better at modifying its source code than we would be at
writing it? The problem of levels still remains.

 Steve Richfield





[agi] Introducing Steve's Theory of Everything in cognition.

2008-12-24 Thread Steve Richfield
Ben, et al,

After ~5 months of delay for theoretical work, here are the basic ideas as
to how really fast and efficient automatic learning could be made almost
trivial. I decided NOT to post the paper (yet), but rather, to just discuss
some of the underlying ideas in AGI-friendly terms.

Suppose for a moment that a NN or AGI program (they can be easily mapped
from one form to the other), instead of operating on objects (in an
object-oriented sense), operates on the rates of change of the
probabilities of objects, or dp/dt. Presuming sufficient bandwidth to
generally avoid superstitious coincidences, fast unsupervised learning then
becomes completely trivial, as like objects cause simultaneous
like-patterned changes in the inputs WITHOUT the overlapping effects of the
many other objects typically present in the input (with numerous minor
exceptions).
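
To make that concrete, here is a minimal numpy sketch (toy data, sizes and
names of my own choosing, not from the paper): a static clutter of
overlapping objects cancels out of the first-differenced input, so the one
object whose probability is actually changing shows up essentially alone in
the dp/dt stream.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_steps = 50, 20

clutter = rng.random(n_inputs)       # many static objects superimposed
new_object = rng.random(n_inputs)    # the one object whose probability changes

# probability of the new object ramps up during the second half of the run
p = np.zeros(n_steps)
p[n_steps // 2:] = np.linspace(0.0, 1.0, n_steps - n_steps // 2)

# object-space input at each time step: clutter plus p(t) times the object
x = clutter[None, :] + p[:, None] * new_object[None, :]

# dp/dt space: successive differences remove the static clutter entirely
dx = np.diff(x, axis=0)

# what survives is (a scaled copy of) the new object's pattern
step = n_steps // 2 + 1              # a step where p is actually changing
cos = dx[step] @ new_object / (np.linalg.norm(dx[step]) * np.linalg.norm(new_object))
print(round(cos, 3))                 # ~1.0: the changing object, isolated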

But, what would Bayesian equations or NN neuron functionality look like in
dp/dt space? NO DIFFERENCE (math upon request). You could trivially
differentiate the inputs to a vast and complex existing AGI or NN, integrate
the outputs, and it would perform *identically* (except for some little
details discussed below). Of course, while the transforms would be
identical, unsupervised learning would be quite a different matter, as now
the nearly-impossible becomes trivially simple.
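
A quick numerical check of the differentiate-then-integrate wrapper, for the
linear case only (the nonlinear case is the part that needs the math offered
above; the variable names here are mine):

import numpy as np

rng = np.random.default_rng(1)
T, n_in, n_out = 100, 8, 3

X = rng.random((T, n_in))        # object-space inputs over time
W = rng.random((n_in, n_out))    # any linear transform (one NN layer, no nonlinearity)

direct = X @ W                   # ordinary object-space operation

# dp/dt version: differentiate the inputs, apply the same transform,
# then integrate the outputs, keeping X[0] @ W as the integration constant
dX = np.diff(X, axis=0)
y0 = X[0] @ W
via_dpdt = np.vstack([y0, y0 + np.cumsum(dX @ W, axis=0)])

print(np.allclose(direct, via_dpdt))   # True: identical transform in the linear case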

For some things (like short-term memory) you NEED an integrated
object-oriented result. Very simple - just integrate the signal. How about
muscle movements? Note that muscle actuation typically causes acceleration,
which doubly integrates the driving signal, so the dp/dt signal must be
differentiated once more; that twice-differentiated signal, when doubly
integrated by the mechanical system, produces movement to the desired
location.
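
In discrete time that looks like the following toy sketch (unit time steps
and a perfectly responsive "plant" are my own simplifying assumptions): the
drive is the second difference of the desired position, and double summation
by the mechanics reproduces the trajectory.

import numpy as np

t = np.linspace(0.0, 1.0, 50)
desired = np.sin(2 * np.pi * t)                 # desired limb position over time

# drive = second difference of the desired position (acceleration command)
drive = np.diff(desired, n=2)

# the mechanical system doubly integrates (here: two cumulative sums),
# with initial position and initial velocity as the integration constants
velocity = desired[1] - desired[0] + np.cumsum(drive)
position = np.concatenate(([desired[0], desired[1]],
                           desired[1] + np.cumsum(velocity)))

print(np.allclose(position, desired))           # True up to float error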

Note that once input values are stored in a matrix for processing, the baby
has already been thrown out with the bathwater. You must START with
differentiated input values and NOT static measured values. THIS is what the
PCA folks have been missing in their century-long quest for an efficient
algorithm to identify principal components, as their arrays had already
discarded exactly what they needed. Of course you could simply subtract
successive samples from one another - at some considerable risk, since you
are now sampling at only half the Nyquist-required speed to make your AGI/NN
run at its intended speed. In short, if inputs are not being electronically
differentiated, then sampling must proceed at least twice as fast as the
NN/AGI cycles.

But - how about the countless lost constants of integration? They all come
out in the wash - except for where actual integration at the outputs is
needed. Then, clippers and leaky integrators, techniques common to
electrical engineering, will work fine and produce many of the same
artifacts (like visual extinction) seen in natural systems.
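
A minimal sketch of the leaky-integrator-plus-clipper idea (the leak rate
and clip limits are arbitrary choices of mine, not from the paper):
reconstructing the percept from the dp/dt stream with a leak makes a
stimulus that appears and then never changes fade back toward zero, which is
the extinction-like artifact mentioned above.

import numpy as np

T = 200
signal = np.zeros(T)
signal[20:] = 1.0                    # a stimulus that appears and then just sits there

dsig = np.diff(signal, prepend=0.0)  # the dp/dt stream the network actually carries

leak = 0.97                          # per-step retention; lost constants "leak" away
lo, hi = -1.5, 1.5                   # clipper limits

y = np.zeros(T)
for t in range(1, T):
    y[t] = np.clip(leak * y[t - 1] + dsig[t], lo, hi)

# The reconstructed percept jumps when the stimulus appears, then fades
# toward zero even though the stimulus is still present.
print(round(y[20], 3), round(y[100], 3))   # 1.0 at onset, ~0.09 much later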

It all sounds SO simple, but I couldn't find any prior work in this
direction using Google. However, the collective memory of this group is
pretty good, so perhaps someone here knows of some prior effort that did
something like this. I would sure like to put SOMETHING in the References
section of my paper.

Loosemore: THIS is what I was talking about when I explained that there is
absolutely NO WAY to understand a complex system through direct observation,
except by its useless anomalies. An entire AGI or NN shifted to operate on
derivatives instead of object values works *almost* (the operative word in
this statement) exactly the same as one working in object-oriented space,
except that learning is transformed from the nearly-impossible to the
trivially simple. Do YOU see any observation-based way to tell how we are
operating behind our eyeballs, object-oriented or dp/dt? While there are
certainly other explanations for visual extinction, this is the only one
that I know of that is absolutely impossible to engineer around. No one has
(yet) proposed any value to visual extinction, and it is a real problem for
hunters, so if it were avoidable, then I suspect that ~200 million years of
evolution would have eliminated it long ago.

From this come numerous interesting corollaries.

Once the dp/dt signals are in array form, it would become simple to
automatically recognize patterns representing complex phenomena at the level
of the neurons/equations in question. Of course, putting it in this array
form is effectively a transformation from AGI equations to NN construction,
a transformation that has been discussed in prior postings. In short, if you
want your AGI to learn at anything approaching biological speeds, it appears
that you absolutely MUST transform your AGI structure to a NN-like
representation, regardless of the structure of the processor on which it
runs.

Unless I am missing something really important here, this should COMPLETELY
transform the AGI field, regardless of the particular approach taken.

Any thoughts?

Steve Richfield




Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-24 Thread Vladimir Nesov
On Thu, Dec 25, 2008 at 9:33 AM, Steve Richfield
steve.richfi...@gmail.com wrote:

 Any thoughts?


I can't tell this note from nonsense. You need to work on
presentation, if your idea can actually hold some water. If you think
you understand the idea enough to express it as math, by all means do
so, it'll make your own thinking clearer if nothing else.

-- 
Vladimir Nesov
robot...@gmail.com
http://causalityrelay.wordpress.com/




Re: [agi] Introducing Steve's Theory of Everything in cognition.

2008-12-24 Thread J. Andrew Rogers


On Dec 24, 2008, at 10:33 PM, Steve Richfield wrote:
Of course you could simply subtract successive samples from one  
another - at some considerable risk, since you are now sampling at  
only half the Nyquist-required speed to make your AGI/NN run at its  
intended speed. In short, if inputs are not being electronically  
differentiated, then sampling must proceed at least twice as fast as  
the NN/AGI cycles.



Or... you could be using something like compressive sampling, which  
safely ignores silly things like the Nyquist limit.
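
For anyone curious, here is a toy compressed-sensing sketch (my own toy
setup, using scikit-learn's orthogonal matching pursuit; nothing specific to
SyNAPSE or the dp/dt scheme): a k-sparse signal is recovered from far fewer
random measurements than its length, which is the sense in which compressive
sampling sidesteps uniform Nyquist-rate sampling.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(42)

n, m, k = 256, 64, 8                       # signal length, measurements, sparsity

# a k-sparse signal: only k of the n coefficients are nonzero
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# random Gaussian measurement matrix: m incoherent "samples" of the signal
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# sparse recovery via orthogonal matching pursuit
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # ~0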


Cheers,

J. Andrew Rogers



