Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Bob Mottram
2008/12/11 Mike Tintner <[EMAIL PROTECTED]>:
> *"Ben Goertzel" is a continuously changing reality. At 10.05 pm he will be
> different from 10.00pm, and so on. He is in fact many individuals.


Based on some of the stuff I've been doing with SLAM algorithms,
I'd agree with this sort of interpretation.  You can only really tell
what Ben was doing after the fact, and even then with uncertainty
still attached.  Ben is traversing a multiverse of possible paths,
which collapse progressively over time as more data becomes available
to the observer.
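The "collapsing paths" picture above maps directly onto how a particle filter in SLAM keeps many hypotheses alive and prunes them as observations arrive. Here is a minimal 1-D sketch of that idea (all names and parameters are illustrative, not taken from any SLAM library): hypotheses about a position start widely spread and concentrate as noisy measurements accumulate.

```python
import math
import random
import statistics

def particle_filter_step(particles, observation, sensor_sigma, rng):
    """Weight each hypothesis by how well it explains the observation,
    then resample: unlikely paths 'collapse' away."""
    weights = [math.exp(-((p - observation) ** 2) / (2 * sensor_sigma ** 2))
               for p in particles]
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(42)
true_position = 5.0
particles = [rng.uniform(0.0, 10.0) for _ in range(500)]  # many possible "Bens"

spread_before = statistics.pstdev(particles)
for _ in range(5):  # each new observation prunes unlikely hypotheses
    observation = true_position + rng.gauss(0.0, 0.5)
    particles = particle_filter_step(particles, observation,
                                     sensor_sigma=1.0, rng=rng)
spread_after = statistics.pstdev(particles)
```

With resampling, hypotheses far from the observations receive negligible weight and disappear; the remaining spread reflects sensor noise rather than the initial ignorance.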

If Ben were a poet or author of popular fiction it might be
advantageous for him to use strategies which prevent the possible
paths from collapsing too far, so that the observer may exercise a
degree of fuzzy creative interpretation about him and his works.
Novelists often seem to be amused by the numerous and occasionally
unexpected possible interpretations of their stories by readers.


> *A movie of Ben chatting from 10.00pm to 10.05pm will be subject to
> extremely few possible interpretations, compared with a verbal statement
> about him.


True, but still a movie contains a high degree of uncertainty.  Movie
directors exploit uncertainty to convey a particular impression or
mood within the storyline.  When you back-project the rays of light
from each pixel (aka "picture element") within the movie, you'll find
that what's actually being depicted is very fuzzy and uncertain, and
it's only through integration over time together with dodgy heuristics
(subject to errors illustrated by well-known visual illusions) that
this uncertainty is reduced.


---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Mike Tintner
Steve,

I thought someone would come up with the kind of objection you have put 
forward. Your objection is misplaced. (And here we have an example of what I 
meant by "knowing [metacognitively about] imaginative intelligence" - 
understanding metacognitively how imagination works.)

Consider what an image sequence/ movie of Ben talking to you would show. Ben 
talking to you, uttering certain words, making certain gestures; his face 
showing certain expressions.

What you're bringing up here is not what the movie shows, but what went on in 
your mind - your reactions - as you, in a sense, watched and listened to that 
movie - and wondered about what was going on in Ben's mind *behind the scenes*. 
"What does he really think and feel about this, that and the other (as distinct 
from the limited amount of information he is actually expressing)?" We all 
react and think similarly to other people all the time and wonder about what's 
going on *invisibly* in their minds.

But an image/movie can only be compared with a verbal statement in terms of 
what it *actually shows*  - the *surface, visible action.* His actual, 
observable dialogue and gestures and expressions - that and only that is what a 
movie records with high fidelity.

In terms of those actual actions, an image/ image sequence is *infinitely* 
superior to any verbal description. No verbal description could *begin* to 
convey to you his particular smile and the curl of his lips, the drone of his 
voice, the precise music of his statements, the tangled state of his hair, the 
way he tosses his hair etc. etc.  No words could tell or show you his state of 
body - his posture and attitude - at 10.05.01-to-10.05.02  pm.

[Words (along with other symbols, like numbers and algebraic symbols) can only 
provide the *formulae* for objects - their constituents. They can never 
provide you, as images do, with actual maps of the whole *forms* of objects - 
and therefore the layout of those constituents. They can never give us the 
picture - we can never "get the picture" - of those objects.

Consider a verbal statement:

"The Alsatian suddenly bit the man on the face."

What you have there is a complex action decomposed into a set of words - a 
verbal formula. How much do you now know about the action being referred to?

Now try an image of the same:

http://www.youtube.com/watch?v=nk9yHKQRE94&feature=related 

Notice the difference? Now you've seen the whole dog engaging in the whole 
action - and you've got the picture - the whole form, not just the formula.]

And here's the reason I talk about understanding metacognitively about 
imaginative intelligence. (I don't mean to be disparaging - I understand 
comparably little re logic, say.) If you were a filmmaker, say, and had thought 
about the problems of filmmaking, you would probably be alive to the difference 
between what images show - people's actual faces and voices - and what they 
can't show - what lies behind - their hidden thoughts and emotions.  And you 
wouldn't have posed your objection.


  Mike,


  MT:: 
*Even words for individuals are generalisations.
*"Ben Goertzel" is a continuously changing reality. At 10.05 pm he will be 
different from 10.00pm, and so on. He is in fact many individuals.
*Any statement about an individual, like "Ben Goertzel", is also vague and 
open-ended.

*The only way to refer to and capture individuals with high (though not 
perfect) precision is with images.
*A movie of Ben chatting from 10.00pm to 10.05pm will be subject to 
extremely few possible interpretations, compared with a verbal statement about 
him.

  Even better than a movie: I had some opportunity to observe and interact with 
Ben during CONVERGENCE08, and I dispute the above statement!

  I had sought to extract just a few specific bits of information from/about 
Ben. Using VERY specific examples:

  Bit#1: Did Ben understand that AI/AGI code and NN representation were 
interchangeable, at the prospective cost of some performance one way or the 
other? Bit#1 = TRUE.

  Bit#2: Did Ben realize that there were prospectively ~3 orders of magnitude 
in speed available by running NN instead of AI/AGI representation on an array 
processor instead of a scalar (x86) processor? Bit#2 was affected by the 
question itself; now TRUE, but its utility is disputed by the apparent 
unavailability of array processors.

  Bit#3: Did Ben realize that the prospective emergence of array processors 
(e.g. as I have been promoting) would obsolete much of his present work, 
because its structure isn't vectorizable, so he is in effect betting on 
continued stagnation in processor architecture, and may in fact be a small 
component in a large industry failure by denying the market? Bit#3 = probably 
FALSE.

  As always, I attempted to "get the measure of the man", but as so often 
happens with leaders, there just isn't a "bin" to toss them in. Without an 
appropriate bin, I got lots of raw data (e.g., he has a LOT of hair), but not 
all that much usa

FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Ed Porter
An article related to how changes in the epigenome could affect learning
and memory (the subject which started this thread a week ago):

http://www.technologyreview.com/biomedicine/21801/



FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Ed Porter
To save you the trouble, the most relevant language from the cited article is:

"While scientists don't yet know exactly how epigenetic regulation affects
memory, the theory is that certain triggers, such as exercise, visual
stimulation, or drugs, unwind DNA, allowing expression of genes involved in
neural plasticity. That increase in gene expression might trigger
development of new neural connections and, in turn, strengthen the neural
circuits that underlie memory formation. "Maybe our brains are using these
epigenetic mechanisms to allow us to learn and remember things, or to
provide sufficient plasticity to allow us to learn and adapt," says John
Satterlee, program director of epigenetics at the National Institute on Drug 
Abuse, in Bethesda, MD.

"We have solid evidence that HDAC inhibitors massively promote growth of
dendrites and increase synaptogenesis [the creation of connections between
neurons]," says Tsai. The process may boost memory or allow mice to regain
access to lost memories by rewiring or repairing damaged neural circuits.
"We believe the memory trace is still there, but the animal cannot retrieve
it due to damage to neural circuits," she adds. "

 

-Original Message-
From: Ed Porter [mailto:[EMAIL PROTECTED] 
Sent: Thursday, December 11, 2008 10:28 AM
To: 'agi@v2.listbox.com'
Subject: FW: [agi] Lamarck Lives!(?)

 

An article related to how changes in the epigenome could affect learning
and memory (the subject which started this thread a week ago):

http://www.technologyreview.com/biomedicine/21801/






Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Terren Suydam

After talking to an old professor of mine, it bears mentioning that epigenetic 
mechanisms such as methylation and histone remodeling are not the only means of 
altering transcription. A long established mechanism involves phosphorylation 
of transcription factors in the neuron (phosphorylation is a way of chemically 
enabling or disabling the function of a particular enzyme).

In light of that I think there is some fuzziness around the use of "epigenetic" 
here because you could conceivably consider the above phosphorylation mechanism 
as "epigenetic" - functionally speaking, the effect is the same - an increase 
or decrease in transcription. The only difference between that and methylation 
etc. is transience: phosphorylation of transcription factors is less "permanent" 
than altering the DNA.

He also shed some light on the effects on synapses due to epigenetic 
mechanisms. Ed, you were wondering how synapse-specific changes could occur in 
response to transcription mechanisms (which are central to the neuron). 
Specifically: "There are 2 possible answers to that puzzle 
(that I am aware of): 1) evidence of mRNA and translation machinery 
present in dendrites at the site of synapses (see papers published by Oswald 
Steward), or 2) activity causes a specific synapse to be 'tagged' so that 
newly synthesized proteins in the cell body are targeted specifically to the 
tagged synapses."

Terren

--- On Thu, 12/11/08, Ed Porter <[EMAIL PROTECTED]> wrote:
From: Ed Porter <[EMAIL PROTECTED]>
Subject: FW: [agi] Lamarck Lives!(?)
To: agi@v2.listbox.com
Date: Thursday, December 11, 2008, 10:32 AM



Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Eric Burton
It's all a big vindication for genetic memory, that's for certain. I
was comfortable with the notion of certain templates, archetypes,
being handed down as aspects of brain design via natural selection,
but this really clears the way for organisms' life experiences to
simply be copied in some form to their offspring. DNA form!

It is scary to imagine memes scribbling on your genome in this way.
Food for thought! :O

On 12/11/08, Terren Suydam  wrote:
>
> After talking to an old professor of mine, it bears mentioning that
> epigenetic mechanisms such as methylation and histone remodeling are not the
> only means of altering transcription. A long established mechanism involves
> phosphorylation of transcription factors in the neuron (phosphorylation is a
> way of chemically enabling or disabling the function of a particular
> enzyme).
>
> In light of that I think there is some fuzziness around the use of
> "epigenetic" here because you could conceivably consider the above
> phosphorylation mechanism as "epigenetic" - functionally speaking, the
> effect is the same - an increase or decrease in transcription. The only
> difference between that and methylation etc is transience: phosphorylation
> of transcription factors is less "permanent" then altering the DNA.
>
> He also shed some light on the effects on synapses due to epigenetic
> mechanisms. Ed, you were wondering how synapse-specific changes could occur
> in response to transcription mechanisms (which are central to the neuron).
> Specifically: "There are 2 possible answers to that puzzle
> (that I am aware of);  1) evidence of mRNA and translation machinery
> present in dendrites at the site of synapses (see papers published by Oswald
> Steward or 2) activity causes a specific synapse to be 'tagged' so that
> newly synthesized proteins in the cell body are targeted specifically to the
> tagged synapses."
>
> Terren
>
> --- On Thu, 12/11/08, Ed Porter  wrote:
> From: Ed Porter 
> Subject: FW: [agi] Lamarck Lives!(?)
> To: agi@v2.listbox.com
> Date: Thursday, December 11, 2008, 10:32 AM




Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Richard Loosemore

Eric Burton wrote:

It's all a big vindication for genetic memory, that's for certain. I
was comfortable with the notion of certain templates, archetypes,
being handed down as aspects of brain design via natural selection,
but this really clears the way for organisms' life experiences to
simply be copied in some form to their offspring. DNA form!

It is scary to imagine memes scribbling on your genome in this way.
Food for thought! :O


Well, no: that was not the conclusion that we came to during this thread.

I think we all agreed that although we could imagine ways in which some 
acquired information could be passed on through the DNA, the *current* 
evidence does not indicate that large scale transfer of memories is 
happening.


In effect, the recent discoveries might conceivably allow nature to hand 
over to the next generation a 3.5 inch floppy disk (remember those?) 
with some data on it, whereas the implication in what you just said was 
that this floppy disk could be used to transfer the contents of the 
Googleplex :-).  Not so fast, I say.





Richard Loosemore

Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Matt Mahoney
--- On Thu, 12/11/08, Eric Burton  wrote:

> It's all a big vindication for genetic memory, that's for certain. I
> was comfortable with the notion of certain templates, archetypes,
> being handed down as aspects of brain design via natural selection,
> but this really clears the way for organisms' life experiences to
> simply be copied in some form to their offspring. DNA form!

No it's not. 

1. There is no experimental evidence that learned memories are passed to 
offspring in humans or any other species.

2. If memory is encoded by DNA methylation as proposed in 
http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
then how is the memory encoded in 10^11 separate neurons (not to mention 
connectivity information) transferred to a single egg or sperm cell with fewer 
than 10^5 genes? The proposed mechanism is to activate one gene and turn off 
another -- 1 or 2 bits.
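The scale mismatch in point 2 (combined with the per-neuron connectivity figure from point 4) can be made concrete with a back-of-envelope calculation; the figures below are the ones quoted in this post, with the per-gene capacity taken at its most generous 2 bits:

```python
neurons = 10**11              # separate neurons holding memory state
state_bits_per_neuron = 1     # one methylation on/off state each
conn_bits_per_neuron = 10**6  # connectivity information (see point 4)
genes = 10**5                 # upper bound on genes in an egg or sperm cell
bits_per_gene = 2             # activate/deactivate: at most ~2 bits

memory_bits = neurons * (state_bits_per_neuron + conn_bits_per_neuron)
germline_bits = genes * bits_per_gene
shortfall = memory_bits // germline_bits
print(f"{shortfall:.1e}-fold shortfall")  # on the order of 10^11
```

Even granting every assumption in Lamarckian memory's favor, the germ line is short of the required capacity by roughly eleven orders of magnitude.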

3. The article at http://www.technologyreview.com/biomedicine/21801/ says 
nothing about where memory is encoded, only that memory might be enhanced by 
manipulating neuron chemistry. There is nothing controversial here. It is well 
known that certain drugs affect learning.

4. The memory mechanism proposed in 
http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
is distinct from (2). It proposes protein regulation at the mRNA level near 
synapses (consistent with the Hebbian model) rather than DNA in the nucleus. 
Such changes could not make their way back to the nucleus unless there was a 
mechanism to chemically distinguish the tens of thousands of synapses and 
encode this information, along with the connectivity information (about 10^6 
bits per neuron) back to the nuclear DNA.

Last week I showed how learning could occur in neurons rather than synapses in 
randomly and sparsely connected neural networks where all of the outputs of a 
neuron are constrained to have identical weights. The network is trained by 
tuning neurons toward excitation or inhibition to reduce the output error. In 
general an arbitrary X to Y bit binary function with N = Y 2^X bits of 
complexity can be learned using about 1.5N to 2N neurons with ~ N^1/2 synapses 
each and ~N log N training cycles. As an example I posted a program that learns 
a 3 by 3 bit multiplier in about 20 minutes on a PC using 640 neurons with 36 
connections each.
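A minimal sketch of the neuron-centered scheme described above, as I read it (a reconstruction under stated assumptions, not the program actually posted): a fixed, random, sparse feed-forward net in which all of a neuron's outgoing connections share one weight, trained by nudging each neuron toward excitation (weight up) or inhibition (weight down) whenever that reduces error on the target function. Sizes here are toy values, far below the 640-neuron multiplier example.

```python
import random

def make_network(n_in, n_units, fan_in, seed=1):
    """Neuron j (j >= n_in) reads from fan_in randomly chosen earlier units."""
    rng = random.Random(seed)
    sources = {j: [rng.randrange(j) for _ in range(fan_in)]
               for j in range(n_in, n_units)}
    weights = [0.0] * n_units   # ONE shared weight per neuron's outputs
    return sources, weights

def forward(bits, sources, weights, n_out):
    act = list(bits) + [0.0] * (len(weights) - len(bits))
    for j in sorted(sources):   # threshold unit over weighted inputs
        act[j] = 1.0 if sum(weights[i] * act[i] for i in sources[j]) > 0 else 0.0
    return act[-n_out:]

def total_error(cases, sources, weights, n_out):
    return sum(abs(o - t)
               for x, y in cases
               for o, t in zip(forward(x, sources, weights, n_out), y))

def train(cases, sources, weights, n_out, step=0.5, max_passes=50):
    """Greedy tuning: keep a +/- step to a neuron's weight only if error drops."""
    best = total_error(cases, sources, weights, n_out)
    for _ in range(max_passes):
        improved = False
        for j in range(len(weights)):
            for d in (step, -step):
                weights[j] += d
                e = total_error(cases, sources, weights, n_out)
                if e < best:
                    best, improved = e, True
                    break
                weights[j] -= d  # revert a non-improving change
        if not improved:
            break
    return best

# XOR as a tiny X=2, Y=1 target function
cases = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]
sources, weights = make_network(n_in=2, n_units=20, fan_in=4)
err0 = total_error(cases, sources, weights, n_out=1)
err1 = train(cases, sources, weights, n_out=1)
```

By construction the loop only ever accepts error-reducing changes, so the final error never exceeds the initial one; whether it reaches zero depends on the random wiring, which is the price of fixing the topology and tuning only one scalar per neuron.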

This is slower than Hebbian learning by a factor of O(N^1/2) on sequential 
computers, as well as being inefficient because sparse networks cannot be 
simulated efficiently using typical vector processing parallel hardware or 
memory optimized for sequential access. However this architecture is what we 
actually observe in neural tissue, which nevertheless does everything in 
parallel. The presence of neuron-centered learning does not preclude Hebbian 
learning occurring at the same time (perhaps at a different rate). However, the 
number of neurons (10^11) is much closer to Landauer's estimate of human long 
term memory capacity (10^9 bits) than the number of synapses (10^15).

However, I don't mean to suggest that memory in either form can be inherited. 
There is no biological evidence for such a thing.

-- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Bob Mottram
2008/12/11 Mike Tintner :
> But an image/movie can only be compared with a verbal statement in terms of
> what it *actually shows*  - the *surface, visible action.* His actual,
> observable dialogue and gestures and expressions - that and only that is
> what a movie records with high fidelity..

But does the movie really record those things?  What the movie
actually records is a particular pattern of photons hitting a receptor
over a period of time.  Things such as gestures and expressions are
concepts which you're superimposing into the pattern of photons that
you're observing.  To a large extent you're able to perform this
superposition because you yourself have similar and familiar dynmaics.

If you prefer you can consider the process of interpreting the movie
as a sort of error correction, similar to the way that error
correcting codes are able to fill in and restore missing or corrupted
information.
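The error-correction analogy can be made concrete with the classic Hamming(7,4) code, where three parity bits let the receiver reconstruct any single corrupted bit, much as the visual system fills in missing detail. (This is a standard textbook construction, included only to illustrate the analogy.)

```python
def hamming_encode(d):
    """Encode 4 data bits as 7 bits; parity bits sit at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4    # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4    # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4    # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Recompute the parity checks; the syndrome names the flipped position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    c = list(c)
    if syndrome:                 # nonzero syndrome = 1-indexed error position
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

codeword = hamming_encode([1, 0, 1, 1])
codeword[3] ^= 1                       # corrupt one bit in transit
recovered = hamming_decode(codeword)   # -> [1, 0, 1, 1], restored
```

Back-projecting pixels is of course far messier than a block code, but the principle is the same: redundancy plus prior structure lets the observer restore detail that was never cleanly transmitted.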




Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Eric Burton
You can see, though, how genetic memory encoding opens the door to
acquired phenotype changes over an organism's life, and those
could become communicable. I think Lysenko was onto something like
this. Let us hope all those Soviet farmers wouldn't have just starved!
;3

On 12/11/08, Matt Mahoney  wrote:
> --- On Thu, 12/11/08, Eric Burton  wrote:
>
>> It's all a big vindication for genetic memory, that's for certain. I
>> was comfortable with the notion of certain templates, archetypes,
>> being handed down as aspects of brain design via natural selection,
>> but this really clears the way for organisms' life experiences to
>> simply be copied in some form to their offspring. DNA form!
>
> No it's not.
>
> 1. There is no experimental evidence that learned memories are passed to
> offspring in humans or any other species.
>
> 2. If memory is encoded by DNA methylation as proposed in
> http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
> then how is the memory encoded in 10^11 separate neurons (not to mention
> connectivity information) transferred to a single egg or sperm cell with
> less than 10^5 genes? The proposed mechanism is to activate one gene and
> turn off another -- 1 or 2 bits.
>
> 3. The article at http://www.technologyreview.com/biomedicine/21801/ says
> nothing about where memory is encoded, only that memory might be enhanced by
> manipulating neuron chemistry. There is nothing controversial here. It is
> well known that certain drugs affect learning.
>
> 4. The memory mechanism proposed in
> http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
> is distinct from (2). It proposes protein regulation at the mRNA level near
> synapses (consistent with the Hebbian model) rather than DNA in the nucleus.
> Such changes could not make their way back to the nucleus unless there was a
> mechanism to chemically distinguish the tens of thousands of synapses and
> encode this information, along with the connectivity information (about 10^6
> bits per neuron) back to the nuclear DNA.
>
> Last week I showed how learning could occur in neurons rather than synapses
> in randomly and sparsely connected neural networks where all of the outputs
> of a neuron are constrained to have identical weights. The network is
> trained by tuning neurons toward excitation or inhibition to reduce the
> output error. In general an arbitrary X to Y bit binary function with N = Y
> 2^X bits of complexity can be learned using about 1.5N to 2N neurons with ~
> N^1/2 synapses each and ~N log N training cycles. As an example I posted a
> program that learns a 3 by 3 bit multiplier in about 20 minutes on a PC
> using 640 neurons with 36 connections each.
>
> This is slower than Hebbian learning by a factor of O(N^1/2) on sequential
> computers, as well as being inefficient because sparse networks cannot be
> simulated efficiently using typical vector processing parallel hardware or
> memory optimized for sequential access. However this architecture is what we
> actually observe in neural tissue, which nevertheless does everything in
> parallel. The presence of neuron-centered learning does not preclude Hebbian
> learning occurring at the same time (perhaps at a different rate). However,
> the number of neurons (10^11) is much closer to Landauer's estimate of human
> long term memory capacity (10^9 bits) than the number of synapses (10^15).
>
> However, I don't mean to suggest that memory in either form can be
> inherited. There is no biological evidence for such a thing.
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
>




Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Matt Mahoney
--- On Thu, 12/11/08, Eric Burton  wrote:

> You can see though how genetic memory encoding opens the door to
> acquired phenotype changes over an organism's life, though, and those
> could become communicable. I think Lysenko was onto something like
> this. Let us hope all those Soviet farmers wouldn't have just starved!
> ;3

No, apparently you didn't understand anything I wrote.

Please explain how the memory encoded separately as one bit each in 10^11 
neurons through DNA methylation (the mechanism for cell differentiation, not 
genetic changes) is all collected together and encoded into genetic changes in 
a single egg or sperm cell, and back again to the brain when the organism 
matures.

And please explain why you think that Lysenko's work should not have been 
discredited. http://en.wikipedia.org/wiki/Trofim_Lysenko
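The counting mismatch above can be put in rough numbers. The counts come from this thread; the bits-per-gene figure is a deliberately generous assumption added here for the estimate.

```python
# Back-of-envelope check of the counting argument above.  NEURONS and
# GAMETE_GENES come from the thread; BITS_PER_GENE = 2 is a generous
# assumption (an on/off methylation state plus change).

NEURONS       = 10**11   # ~1 methylation bit per neuron to move
GAMETE_GENES  = 10**5    # genes available in a single egg/sperm cell
BITS_PER_GENE = 2

brain_bits  = NEURONS * 1
gamete_bits = GAMETE_GENES * BITS_PER_GENE

print(f"bits to transfer : {brain_bits:.0e}")
print(f"gamete capacity  : {gamete_bits:.0e}")
print(f"shortfall        : {brain_bits // gamete_bits:,}x")  # 500,000x
```

And this ignores connectivity information (about 10^6 bits per neuron, per the thread), which would widen the gap by several more orders of magnitude.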

-- Matt Mahoney, matmaho...@yahoo.com


> On 12/11/08, Matt Mahoney 
> wrote:
> > --- On Thu, 12/11/08, Eric Burton
>  wrote:
> >
> >> It's all a big vindication for genetic memory,
> that's for certain. I
> >> was comfortable with the notion of certain
> templates, archetypes,
> >> being handed down as aspects of brain design via
> natural selection,
> >> but this really clears the way for organisms'
> life experiences to
> >> simply be copied in some form to their offspring.
> DNA form!
> >
> > No it's not.
> >
> > 1. There is no experimental evidence that learned
> memories are passed to
> > offspring in humans or any other species.
> >
> > 2. If memory is encoded by DNA methylation as proposed
> in
> >
> http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
> > then how is the memory encoded in 10^11 separate
> neurons (not to mention
> > connectivity information) transferred to a single egg
> or sperm cell with
> > less than 10^5 genes? The proposed mechanism is to
> activate one gene and
> > turn off another -- 1 or 2 bits.
> >
> > 3. The article at
> http://www.technologyreview.com/biomedicine/21801/ says
> > nothing about where memory is encoded, only that
> memory might be enhanced by
> > manipulating neuron chemistry. There is nothing
> controversial here. It is
> > well known that certain drugs affect learning.
> >
> > 4. The memory mechanism proposed in
> >
> http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
> > is distinct from (2). It proposes protein regulation
> at the mRNA level near
> > synapses (consistent with the Hebbian model) rather
> than DNA in the nucleus.
> > Such changes could not make their way back to the
> nucleus unless there was a
> > mechanism to chemically distinguish the tens of
> thousands of synapses and
> > encode this information, along with the connectivity
> information (about 10^6
> > bits per neuron) back to the nuclear DNA.
> >
> > Last week I showed how learning could occur in neurons
> rather than synapses
> > in randomly and sparsely connected neural networks
> where all of the outputs
> > of a neuron are constrained to have identical weights.
> The network is
> > trained by tuning neurons toward excitation or
> inhibition to reduce the
> > output error. In general an arbitrary X to Y bit
> binary function with N = Y
> > 2^X bits of complexity can be learned using about 1.5N
> to 2N neurons with ~
> > N^1/2 synapses each and ~N log N training cycles. As
> an example I posted a
> > program that learns a 3 by 3 bit multiplier in about
> 20 minutes on a PC
> > using 640 neurons with 36 connections each.
> >
> > This is slower than Hebbian learning by a factor of
> O(N^1/2) on sequential
> > computers, as well as being inefficient because sparse
> networks cannot be
> > simulated efficiently using typical vector processing
> parallel hardware or
> > memory optimized for sequential access. However this
> architecture is what we
> > actually observe in neural tissue, which nevertheless
> does everything in
> > parallel. The presence of neuron-centered learning
> does not preclude Hebbian
> > learning occurring at the same time (perhaps at a
> different rate). However,
> > the number of neurons (10^11) is much closer to
> Landauer's estimate of human
> > long term memory capacity (10^9 bits) than the number
> of synapses (10^15).
> >
> > However, I don't mean to suggest that memory in
> either form can be
> > inherited. There is no biological evidence for such a
> thing.
> >
> > -- Matt Mahoney, matmaho...@yahoo.com





Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Steve Richfield
Mike and Bob,

There seems to be a massive confusion between data and information here. To
illustrate:
1.  A movie is just data until it is analyzed to extract some (if any)
*useful* information.
2.  A verbal description is typically somewhere in between, as it contains
bits of ???, some of which may be useful and some of which may NOT be.
3.  A personal interaction typically contains more *useful* information,
because it can be directed to extract that information.

From Shannon's information theory we get the effect of the signal-to-noise
ratio (S/N), which determines just how much real information is in a bit of
data. However, there are some commonly missed sources of noise:
1.  A parameter cannot be accurately extracted, e.g. is the dog happy? Sure
we can guess, but the result will probably contain substantially less than
one bit of information.
2.  Do we have any use for the "information"? If not, then it is just more
noise to discard, like the news report on an adjacent channel to the one we
are listening to. In this example, what decisions hinge on the dog's
prospective happiness? This might be really important to know if I plan to
stick my hand into his mouth, but uninteresting if I am just going to feed
him some dog food.

Hence, while there is a LOT more data in a movie than in a verbal
description, there may well be more useful information in a verbal
description.
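The sub-bit point above can be quantified with standard information theory (this sketch is mine, not from the thread): model the noisy judgement "is the dog happy?" as a binary symmetric channel that is wrong with probability e; the information actually delivered per observation is the mutual information 1 - H(e), where H is the binary entropy function.

```python
from math import log2

# A noisy binary judgement carries less than one bit: model it as a
# binary symmetric channel with error rate e.  Information delivered
# per observation is I = 1 - H(e), with H the binary entropy function.

def H(p):                      # binary entropy in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def info_per_guess(e):
    return 1 - H(e)            # capacity of a binary symmetric channel

for e in (0.0, 0.1, 0.2, 0.5):
    print(f"error rate {e:.1f}: {info_per_guess(e):.3f} bits")
```

A guess that is wrong 20% of the time delivers only about 0.28 bits, and a coin-flip guess delivers exactly zero — substantially less than one bit, as claimed.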

Continuing with your postings...

On 12/11/08, Mike Tintner  wrote:
>
>
> I thought someone would come up with the kind of objection you have put
> forward. Your objection is misplaced. ( And here we have an example of what
> I meant by "knowing [metacognitively about] imaginative intelligence" -
> understanding metacognitively how imagination works).
>

The important part here is the "meta", namely, the information that would
NOT appear even on a movie, e.g. the time, place, people's names, context,
etc.

>
> Consider what an image sequence/ movie of Ben talking to you would show.
> Ben talking to you, uttering certain words, making certain gestures; his
> face showing certain expressions.
>

Not very interesting unless Ben is talking about something that I am
interested in.

>
> What you're bringing up here is not what the movie shows, but what went on
> in your mind - your reactions - as you, in a sense, watched and listened to
> that movie - and wondered about what was going on in Ben's mind *behind the
> scenes*. "What does he really think and feel about this, that and the other
> (as distinct from the limited amount of information he is actually
> expressing)?" We all react and think similarly to other people all the time
> and wonder about what's going on *invisibly* in their minds.
>

I suspect that this process fails when applied to either Ben or myself.


> But an image/movie can only be compared with a verbal statement in terms of
> what it *actually shows*  - the *surface, visible action.* His actual,
> observable dialogue and gestures and expressions - that and only that is
> what a movie records with high fidelity..
>

You are presuming that the verbal statement is ONLY based on the movie.
However, if I were to produce a report on Ben at Convergence08, it would
include lots of things that I probably would NOT have captured if I had a
video recorder, some of which would have been tested and refined in
conversation with Ben.


>
> In terms of those actual actions, an image/ image sequence is *infinitely*
> superior to any verbal description. No verbal description could *begin* to
> convey to you his particular smile and the curl of his lips, the drone of
> his voice, the precise music of his statements, the tangled state of his
> hair, the way he tosses his hair etc. etc.  No words could tell or show you
> his state of body - his posture and attitude - at 10.05.01-to-10.05.02  pm.
>

Lots of refined data, but little useful information. The nice thing about
verbal communication is that it typically only includes prospectively useful
information.


> [Words (along with other symbols, like numbers and algebraic symbols) can
> only provide the *formulae* for objects - their constitutents. They can
> never provide you, as images do, with actual maps of the whole *forms* of
> objects - and therefore the layout of those constituents. They can never
> give us the whole - we can never "get the picture" - of those objects.]
>

There is a BIG difference between a surface image and a complete ontological
understanding.

>
> Consider a verbal statement:
>
> "The Alsatian suddenly bit the man on the face."
>
> What you have there is a complex action decomposed into a set of words - a
> verbal formula. How much do you now know about the action being referred to?
>
>

> Now try an image of the same:
>
> http://www.youtube.com/watch?v=nk9yHKQRE94&feature=related
>
> Notice the difference? Now you've seen the whole dog engaging in the whole
> action - and you've got the picture - the whole form, not ju

[agi] Vector processing and AGI

2008-12-11 Thread Ben Goertzel
Steve wrote:

> Bit#3: Did Ben realize that the prospective emergence of array processors
> (e.g. as I have been promoting) would obsolete much of his present
> work, because its structure isn't vectorizable, so he is in effect betting
> on continued stagnation in processor architecture, and may in fact be a
> small component in a large industry failure by denying market? Bit#3=
> probably FALSE.

Well, the conceptual and mathematical algorithms of NCE and OCP
(my AI systems under development) would go more naturally on MIMD
parallel systems than on SIMD (e.g. vector) or SISD systems.

I played around a bunch with MIMD parallel code on the Connection Machine
at ANU, back in the 90s

However, indeed the specific software code we've written for NCE and OCP
is intended for contemporary {distributed networks of multiprocessor machines}
rather than vector machines or Connection Machines or whatever...

If vector processing were to become a superior practical option for AGI,
what would happen to the code in OCP or NCE?

That would depend heavily on the vector architecture, of course.

But one viable possibility is: the AtomTable, ProcedureRepository and
other knowledge stores remain the same ... and the math tools like the
PLN rules/formulas and Reduct rules remain the same ... but the MindAgents
that use the former to carry out cognitive processes get totally rewritten...

This would be a big deal, but not the kind of thing that means you have to
scrap all your implementation work and go back to ground zero

OO and generic design patterns do buy you *something* ...

Vector processors aside, though ... it would be a much *smaller*
deal to tweak my AI systems to run on the 100-core chips Intel
will likely introduce within the next decade.

-- Ben G
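The SIMD-vs-MIMD distinction above can be illustrated with two tiny functions (hypothetical code, not OpenCog's): an elementwise array operation maps cleanly onto vector lanes, while activation spreading over a graph-shaped store like the AtomTable has data-dependent fan-out that a vector unit cannot schedule in advance.

```python
# Sketch of the vectorizability contrast discussed above.  Names are
# hypothetical, not OpenCog code.

# vectorizable: one operation applied uniformly across contiguous data
def scale(weights, k):
    return [w * k for w in weights]        # maps directly onto SIMD lanes

# hard to vectorize: the traversal pattern depends on the data itself
def spread_activation(graph, start, depth):
    frontier, seen = {start}, {start}
    for _ in range(depth):
        nxt = set()
        for node in frontier:              # irregular fan-out per node
            for nbr in graph.get(node, ()):
                if nbr not in seen:
                    seen.add(nbr)
                    nxt.add(nbr)
        frontier = nxt                     # frontier size unknown until runtime
    return seen

g = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(sorted(spread_activation(g, "A", 2)))   # ['A', 'B', 'C', 'D']
```

The first function is what SIMD hardware is built for; the second is the MIMD-friendly shape, which is why rewriting the MindAgents rather than the knowledge stores would be the likely port.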




Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Mike Tintner

Bob,

I think you've been blinded by science here :).  You don't actually see - 
and science hasn't, in all its history, seen - photons hitting receptors. 
What you're talking about there is very sophisticated, and not at all 
immediately-obvious/evident inferences made from scientific experiments, 
about theoretical entities, i.e. photons.


There is no problem though seeing the entities and movements in a movie - 
Ben, say, raising his hand, or shaking Steve's hand, or laughing or making 
some other facial expression. Sure, we can argue and/or be confused about 
the significance and classification of what exactly is going on. Is he 
really laughing, and is it spontaneous, or slightly sarcastic etc? But there 
can be little to no confusion about the exact movements Ben is making - how 
his hand is grasping Steve's, say, how his lips have moved. We can agree 
pretty scientifically there.


By contrast, if you and I read a verbal description of Ben and Steve's 
exchange, we could form radically different pictures of what was going on, 
including the simplest movements  - and both our pictures could be far off 
the truth visible in a movie.


Note the consequences of this philosophical position - if we are really 
interested in understanding social exchanges like Ben and Steve's, (or 
indeed the science and scientific experiments re photons), we should, if 
possible, first look at movies, rather than verbal reports. (And clearly, in 
law, a video of a conversation will be vastly preferable to a verbal 
report/summary from a witness).


But it's only possible to put this philosophy into practice now - now that
the world is being flooded for the first time with personally editable
movies, a la YouTube (where you can also find scientific videos)...



Bob:


2008/12/11 Mike Tintner :
But an image/movie can only be compared with a verbal statement in terms 
of

what it *actually shows*  - the *surface, visible action.* His actual,
observable dialogue and gestures and expressions - that and only that is
what a movie records with high fidelity..


But does the movie really record those things?  What the movie
actually records is a particular pattern of photons hitting a receptor
over a period of time.  Things such as gestures and expressions are
concepts which you're superimposing into the pattern of photons that
you're observing.  To a large extent you're able to perform this
superposition because you yourself have similar and familiar dynamics.

If you prefer you can consider the process of interpreting the movie
as a sort of error correction, similar to the way that error
correcting codes are able to fill in and restore missing or corrupted
information.
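Bob's error-correction analogy has a minimal concrete instance: the textbook 3x repetition code, where redundancy lets the receiver restore a corrupted bit by majority vote (a generic sketch, nothing specific to vision).

```python
# Simplest error-correcting code: send each bit three times and decode
# by majority vote.  Redundancy in the signal plays the role that prior
# knowledge plays for a viewer "filling in" a noisy image.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def flip(bits, i):                    # corrupt one transmitted bit
    out = list(bits)
    out[i] ^= 1
    return out

def decode(coded):
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

msg = [1, 0, 1, 1, 0]
damaged = flip(encode(msg), 4)        # the error lands inside the second triple
print(decode(damaged) == msg)         # True: the vote repairs it
```

The code corrects any single error per triple, but fails when two of three copies are hit — much as interpretation fails when the corruption exceeds what the viewer's priors can fill in.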










Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Bob Mottram
2008/12/11 Mike Tintner :
> There is no problem though seeing the entities and movements in a movie -
> Ben, say, raising his hand, or shaking Steve's hand, or laughing or making
> some other facial expression. Sure, we can argue and/or be confused about
> the significance and classification of what exactly is going on. Is he
> really laughing, and is it spontaneous, or slightly sarcastic etc? But there
> can be little to no confusion about the exact movements Ben is making - how
> his hand is grasping Steve's, say, how his lips have moved. We can agree
> pretty scientifically there.


This only comes courtesy of a long evolutionary history, such that the
ability to interpret such things is for us nearly effortless and
effectively built in as firmware.  My main point is that the
information doesn't really exist in any intrinsic sense within the
movie, but that you contain a lot of information (of both a learned
and inherited variety) which you're then using to interpret particular
types of optical pattern.

That video is a higher bandwidth communication channel than language
is undoubtedly true.  From an engineering point of view we have high
bandwidth inputs (vision, hearing, touch, smell, taste) but very low
bandwidth outputs (movement, speech).

The apparent unambiguousness of video evidence given within a court
room is only really because of the shared embodiment of the
protagonists, endowing them with similar electrochemical machinery
dedicated to the analysis of optical patterns together with a similar
developmental process.  However, if the jury were to consist of
different species (or AGI) the lack of ambiguity might break down.




Re: [agi] Seeking CYC critiques PS

2008-12-11 Thread Mike Tintner

Bob: That video is a higher bandwidth communication channel than language
is undoubtedly true.

Bob,

Aaargh! You're repeating the primary fallacy of seemingly everyone here - 
i.e. all images and symbols are just different forms of "data" - 
"bandwidth."


(I should respond at length but I can't resist a brief response). No. Each 
sign system has its own unique properties. That's why we have so many, and 
don't just use one universal form.


Visual images, in particular, uniquely provide *isomorphic maps of objects.*

If you try and reduce those maps to any other form, e.g. some mathematical 
or program form,  you *lose the object.* It's equivalent to taking a jigsaw 
puzzle to pieces - all you have are the pieces, and you've lost the 
picture - the whole.


Produce a program for the Mona Lisa. Now show that program to someone else. 
Or another computer. By itself, it's meaningless.


Like a recipe - "Take two cups of flour, three spoonfuls of sugar, mix in a 
pan, add an egg, fry for two mins. etc... etc."  Now, with just those 
instructions, tell me what the recipe is for. The recipe, like a program, by 
itself is meaningless.


You need the whole picture and the whole map to see and recognize the 
object - and to compare that object with other objects. (Similarly you need 
the whole cake and not just the recipe).


AI'ers can't distinguish - intellectually/metacognitively - between the
parts and the whole, the jigsaw pieces and the puzzle picture as a whole (or
the recipe and the cake).


Look at any analysis of left brain/right brain types. It's standard - 
rational (AI) types are analytic, take things to pieces, and have 
considerable difficulty "looking at the big picture." Very crudely: 
Images - wholes. Symbols - words/logical symbols/numbers - parts (or 
features/ properties of wholes).


P.S. As a roboticist, you especially should be able to understand that an 
agent moving through a world of objects, needs images/maps of those objects 
(and not just symbolic formulae) in order to keep minutely and precisely 
aligning itself with those objects.




2008/12/11 Mike Tintner :

There is no problem though seeing the entities and movements in a movie -
Ben, say, raising his hand, or shaking Steve's hand, or laughing or 
making

some other facial expression. Sure, we can argue and/or be confused about
the significance and classification of what exactly is going on. Is he
really laughing, and is it spontaneous, or slightly sarcastic etc? But 
there
can be little to no confusion about the exact movements Ben is making - 
how

his hand is grasping Steve's, say, how his lips have moved. We can agree
pretty scientifically there.



This only comes courtesy of a long evolutionary history, such that the
ability to interpret such things is for us nearly effortless and
effectively built in as firmware.  My main point is that the
information doesn't really exist in any intrinsic sense within the
movie, but that you contain a lot of information (of both a learned
and inherited variety) which you're then using to interpret particular
types of optical pattern.

That video is a higher bandwidth communication channel than language
is undoubtedly true.  From an engineering point of view we have high
bandwidth inputs (vision, hearing, touch, smell, taste) but very low
bandwidth outputs (movement, speech).

The apparent unambiguousness of video evidence given within a court
room is only really because of the shared embodiment of the
protagonists, endowing them with similar electrochemical machinery
dedicated to the analysis of optical patterns together with a similar
developmental process.  However, if the jury were to consist of
different species (or AGI) the lack of ambiguity might break down.










Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Eric Burton
I don't think that each inheritor receives a full set of the
original's memories. But there may have *evolved*, in spite of the
obvious barriers, a means of transferring primary or significant
experience from one organism to another in genetic form... we can
imagine such a thing given this news!

On 12/11/08, Matt Mahoney  wrote:
> --- On Thu, 12/11/08, Eric Burton  wrote:
>
>> You can see though how genetic memory encoding opens the door to
>> acquired phenotype changes over an organism's life, though, and those
>> could become communicable. I think Lysenko was onto something like
>> this. Let us hope all those Soviet farmers wouldn't have just starved!
>> ;3
>
> No, apparently you didn't understand anything I wrote.
>
> Please explain how the memory encoded separately as one bit each in 10^11
> neurons through DNA methylation (the mechanism for cell differentiation, not
> genetic changes) is all collected together and encoded into genetic changes
> in a single egg or sperm cell, and back again to the brain when the organism
> matures.
>
> And please explain why you think that Lysenko's work should not have been
> discredited. http://en.wikipedia.org/wiki/Trofim_Lysenko
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
>> On 12/11/08, Matt Mahoney 
>> wrote:
>> > --- On Thu, 12/11/08, Eric Burton
>>  wrote:
>> >
>> >> It's all a big vindication for genetic memory,
>> that's for certain. I
>> >> was comfortable with the notion of certain
>> templates, archetypes,
>> >> being handed down as aspects of brain design via
>> natural selection,
>> >> but this really clears the way for organisms'
>> life experiences to
>> >> simply be copied in some form to their offspring.
>> DNA form!
>> >
>> > No it's not.
>> >
>> > 1. There is no experimental evidence that learned
>> memories are passed to
>> > offspring in humans or any other species.
>> >
>> > 2. If memory is encoded by DNA methylation as proposed
>> in
>> >
>> http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
>> > then how is the memory encoded in 10^11 separate
>> neurons (not to mention
>> > connectivity information) transferred to a single egg
>> or sperm cell with
>> > less than 10^5 genes? The proposed mechanism is to
>> activate one gene and
>> > turn off another -- 1 or 2 bits.
>> >
>> > 3. The article at
>> http://www.technologyreview.com/biomedicine/21801/ says
>> > nothing about where memory is encoded, only that
>> memory might be enhanced by
>> > manipulating neuron chemistry. There is nothing
>> controversial here. It is
>> > well known that certain drugs affect learning.
>> >
>> > 4. The memory mechanism proposed in
>> >
>> http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
>> > is distinct from (2). It proposes protein regulation
>> at the mRNA level near
>> > synapses (consistent with the Hebbian model) rather
>> than DNA in the nucleus.
>> > Such changes could not make their way back to the
>> nucleus unless there was a
>> > mechanism to chemically distinguish the tens of
>> thousands of synapses and
>> > encode this information, along with the connectivity
>> information (about 10^6
>> > bits per neuron) back to the nuclear DNA.
>> >
>> > Last week I showed how learning could occur in neurons
>> rather than synapses
>> > in randomly and sparsely connected neural networks
>> where all of the outputs
>> > of a neuron are constrained to have identical weights.
>> The network is
>> > trained by tuning neurons toward excitation or
>> inhibition to reduce the
>> > output error. In general an arbitrary X to Y bit
>> binary function with N = Y
>> > 2^X bits of complexity can be learned using about 1.5N
>> to 2N neurons with ~
>> > N^1/2 synapses each and ~N log N training cycles. As
>> an example I posted a
>> > program that learns a 3 by 3 bit multiplier in about
>> 20 minutes on a PC
>> > using 640 neurons with 36 connections each.
>> >
>> > This is slower than Hebbian learning by a factor of
>> O(N^1/2) on sequential
>> > computers, as well as being inefficient because sparse
>> networks cannot be
>> > simulated efficiently using typical vector processing
>> parallel hardware or
>> > memory optimized for sequential access. However this
>> architecture is what we
>> > actually observe in neural tissue, which nevertheless
>> does everything in
>> > parallel. The presence of neuron-centered learning
>> does not preclude Hebbian
>> > learning occurring at the same time (perhaps at a
>> different rate). However,
>> > the number of neurons (10^11) is much closer to
>> Landauer's estimate of human
>> > long term memory capacity (10^9 bits) than the number
>> of synapses (10^15).
>> >
>> > However, I don't mean to suggest that memory in
>> either form can be
>> > inherited. There is no biological evidence for such a
>> thing.
>> >
>> > -- Matt Mahoney, matmaho...@yahoo.com
>
>
>

Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Terren Suydam

Evolution is not magic. You haven't addressed the substance of Matt's questions 
at all. What you're suggesting is magical unless you can talk about specific 
mechanisms, as Richard did last week. Richard's idea - though it is extremely 
unlikely and lacks empirical evidence to support it - is technically plausible. 
He proposed a logical chain of ideas, which can be supported and/or criticized, 
something you need to do if you expect to be taken seriously. 

There are obvious parallels here with AGI. It's very easy to succumb to magical 
or pseudo-explanations of intelligence. So talk specifically and technically 
about *mechanisms* (even if extremely unlikely) and you're not wasting anyone's 
time.

Terren

--- On Thu, 12/11/08, Eric Burton  wrote:

> From: Eric Burton 
> Subject: Re: FW: [agi] Lamarck Lives!(?)
> To: agi@v2.listbox.com
> Date: Thursday, December 11, 2008, 6:33 PM
> I don't think that each inheritor receives a full set of
> the
> original's memories. But there may have *evolved* in
> spite of the
> obvious barriers, a means of transferring primary or
> significant
> experience from one organism to another in genetic form...
> we can
> imagine such a thing given this news!
> 
> On 12/11/08, Matt Mahoney 
> wrote:
> > --- On Thu, 12/11/08, Eric Burton
>  wrote:
> >
> >> You can see though how genetic memory encoding
> opens the door to
> >> acquired phenotype changes over an organism's
> life, though, and those
> >> could become communicable. I think Lysenko was
> onto something like
> >> this. Let us hope all those Soviet farmers
> wouldn't have just starved!
> >> ;3
> >
> > No, apparently you didn't understand anything I
> wrote.
> >
> > Please explain how the memory encoded separately as
> one bit each in 10^11
> > neurons through DNA methylation (the mechanism for
> cell differentiation, not
> > genetic changes) is all collected together and encoded
> into genetic changes
> > in a single egg or sperm cell, and back again to the
> brain when the organism
> > matures.
> >
> > And please explain why you think that Lysenko's
> work should not have been
> > discredited.
> http://en.wikipedia.org/wiki/Trofim_Lysenko
> >
> > -- Matt Mahoney, matmaho...@yahoo.com
> >
> >
> >> On 12/11/08, Matt Mahoney
> 
> >> wrote:
> >> > --- On Thu, 12/11/08, Eric Burton
> >>  wrote:
> >> >
> >> >> It's all a big vindication for
> genetic memory,
> >> that's for certain. I
> >> >> was comfortable with the notion of
> certain
> >> templates, archetypes,
> >> >> being handed down as aspects of brain
> design via
> >> natural selection,
> >> >> but this really clears the way for
> organisms'
> >> life experiences to
> >> >> simply be copied in some form to their
> offspring.
> >> DNA form!
> >> >
> >> > No it's not.
> >> >
> >> > 1. There is no experimental evidence that
> learned
> >> memories are passed to
> >> > offspring in humans or any other species.
> >> >
> >> > 2. If memory is encoded by DNA methylation as
> proposed
> >> in
> >> >
> >>
> http://www.newscientist.com/article/mg20026845.000-memories-may-be-stored-on-your-dna.html
> >> > then how is the memory encoded in 10^11
> separate
> >> neurons (not to mention
> >> > connectivity information) transferred to a
> single egg
> >> or sperm cell with
> >> > less than 10^5 genes? The proposed mechanism
> is to
> >> activate one gene and
> >> > turn off another -- 1 or 2 bits.
> >> >
> >> > 3. The article at
> >> http://www.technologyreview.com/biomedicine/21801/
> says
> >> > nothing about where memory is encoded, only
> that
> >> memory might be enhanced by
> >> > manipulating neuron chemistry. There is
> nothing
> >> controversial here. It is
> >> > well known that certain drugs affect
> learning.
> >> >
> >> > 4. The memory mechanism proposed in
> >> >
> >>
> http://www.ncbi.nlm.nih.gov/pubmed/16822969?ordinalpos=14&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DefaultReportPanel.Pubmed_RVDocSum
> >> > is distinct from (2). It proposes protein
> regulation
> >> at the mRNA level near
> >> > synapses (consistent with the Hebbian model)
> rather
> >> than DNA in the nucleus.
> >> > Such changes could not make their way back to
> the
> >> nucleus unless there was a
> >> > mechanism to chemically distinguish the tens
> of
> >> thousands of synapses and
> >> > encode this information, along with the
> connectivity
> >> information (about 10^6
> >> > bits per neuron) back to the nuclear DNA.
> >> >
> >> > Last week I showed how learning could occur
> in neurons
> >> rather than synapses
> >> > in randomly and sparsely connected neural
> networks
> >> where all of the outputs
> >> > of a neuron are constrained to have identical
> weights.
> >> The network is
> >> > trained by tuning neurons toward excitation
> or
> >> inhibition to reduce the
> >> > output error. In general an arbitrary X to Y
> bit
> >> binary function with N = Y
> >> > 2^X bits of complexity can be learned using
> about 1.5N
> >> to 2N neurons with ~
> >> > N^1/2 synapses e

Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Matt Mahoney
--- On Thu, 12/11/08, Eric Burton  wrote:

> I don't think that each inheritor receives a full set of the
> original's memories. But there may have *evolved* in spite of the
> obvious barriers, a means of transferring primary or significant
> experience from one organism to another in genetic form...
> we can imagine such a thing given this news!

Well, we could, if there was any evidence whatsoever for Lamarckian evolution, 
and if we thought with our reproductive organs.

To me, it suggests that AGI could be implemented with a 10^4 speedup over whole 
brain emulation -- maybe. Is it possible to emulate a sparse neural network 
with 10^11 adjustable neurons and 10^15 fixed, random connections using a 
non-sparse neural network with 10^11 adjustable connections?

-- Matt Mahoney, matmaho...@yahoo.com



---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=8660244&id_secret=123753653-47f84b
Powered by Listbox: http://www.listbox.com


Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Eric Burton
I don't know how you derived the value 10^4, Matt, but that seems
reasonable to me. Terren, let me go back to the article and try to
understand what exactly it says is happening. Certainly that's my
editorial's crux

On 12/11/08, Matt Mahoney  wrote:
> --- On Thu, 12/11/08, Eric Burton  wrote:
>
>> I don't think that each inheritor receives a full set of the
>> original's memories. But there may have *evolved* in spite of the
>> obvious barriers, a means of transferring primary or significant
>> experience from one organism to another in genetic form...
>> we can imagine such a thing given this news!
>
> Well, we could, if there was any evidence whatsoever for Lamarckian
> evolution, and if we thought with our reproductive organs.
>
> To me, it suggests that AGI could be implemented with a 10^4 speedup over
> whole brain emulation -- maybe. Is it possible to emulate a sparse neural
> network with 10^11 adjustable neurons and 10^15 fixed, random connections
> using a non-sparse neural network with 10^11 adjustable connections?
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
>




Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Eric Burton
Ok.

>"We think we're seeing short-term memories forming in the hippocampus and 
>slowly turning into
>long-term memories in the cortex," says Miller, who presented the results last 
>week at the Society
>for Neuroscience meeting in Washington DC.

It certainly sounds like the genetic changes are limited to the brain
itself. Perhaps there is some kind of extra DNA scratch space allotted
to cranial nerve cells. I understand that psilocybin, a phosphorylated
serotonin-like neurotransmitter found in fungal mycelia, may have
evolved as a phosphorous bank for all the DNA needed in spore
production. The structure of fungal mycelia closely approximates that
of the brains found in the animal kingdom, which may have evolved from
the same or some shared point. Then we see how the brain can be viewed
as a qualified, indeed purpose-built DNA recombination factory!

Fungal mycelia could be approaching all this from the opposite
direction, doing DNA computation incidentally so as to perform
short-term weather forecasts and other environmental calculations,
simply because there is so much of it about for the next sporulation.
A really compelling avenue for investigation

>"The cool idea here is that the brain could be borrowing a form of cellular 
>memory from
>developmental biology to use for what we think of as memory," says Marcelo 
>Wood, who
>researches long-term memory at the University of California, Irvine.

Yes. It is

Eric B

On 12/11/08, Eric Burton  wrote:
> I don't know how you derived the value 10^4, Matt, but that seems
> reasonable to me. Terren, let me go back to the article and try to
> understand what exactly it says is happening. Certainly that's my
> editorial's crux
>
> On 12/11/08, Matt Mahoney  wrote:
>> --- On Thu, 12/11/08, Eric Burton  wrote:
>>
>>> I don't think that each inheritor receives a full set of the
>>> original's memories. But there may have *evolved* in spite of the
>>> obvious barriers, a means of transferring primary or significant
>>> experience from one organism to another in genetic form...
>>> we can imagine such a thing given this news!
>>
>> Well, we could, if there was any evidence whatsoever for Lamarckian
>> evolution, and if we thought with our reproductive organs.
>>
>> To me, it suggests that AGI could be implemented with a 10^4 speedup over
>> whole brain emulation -- maybe. Is it possible to emulate a sparse neural
>> network with 10^11 adjustable neurons and 10^15 fixed, random connections
>> using a non-sparse neural network with 10^11 adjustable connections?
>>
>> -- Matt Mahoney, matmaho...@yahoo.com
>>
>>
>>
>> ---
>> agi
>> Archives: https://www.listbox.com/member/archive/303/=now
>> RSS Feed: https://www.listbox.com/member/archive/rss/303/
>> Modify Your Subscription:
>> https://www.listbox.com/member/?&;
>> Powered by Listbox: http://www.listbox.com
>>
>




Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Terren Suydam

That made almost no sense to me. I'm not trying to be rude here, but that 
sounded like the ramblings of one who doesn't have the necessary grasp of the 
key ideas required to speculate intelligently about these things. The fact that 
you once again managed to mention psilocybin does nothing to help your cause, 
either... and that's coming from someone who believes that psychedelics can be 
valuable, if used properly.

Terren

--- On Thu, 12/11/08, Eric Burton  wrote:

> From: Eric Burton 
> Subject: Re: FW: [agi] Lamarck Lives!(?)
> To: agi@v2.listbox.com
> Date: Thursday, December 11, 2008, 9:11 PM
> Ok.
> 
> >"We think we're seeing short-term memories
> forming in the hippocampus and slowly turning into
> >long-term memories in the cortex," says Miller,
> who presented the results last week at the Society
> >for Neuroscience meeting in Washington DC.
> 
> It certainly sounds like the genetic changes are limited to
> the brain
> itself. Perhaps there is some kind of extra DNA scratch
> space allotted
> to cranial nerve cells. I understand that psilocybin, a
> phosphorylated
> serotonin-like neurotransmitter found in fungal mycelia,
> may have
> evolved as a phosphorous bank for all the DNA needed in
> spore
> production. The structure of fungal mycelia closely
> approximates that
> of the brains found in the animal kingdom, which may have
> evolved from
> the same or some shared point. Then we see how the brain
> can be viewed
> as a qualified, indeed purpose-built DNA recombination
> factory!
> 
> Fungal mycelia could be approaching all this from the
> opposite
> direction, doing DNA computation incidentally so as to
> perform
> short-term weather forecasts and other environmental
> calculations,
> simply because there is so much of it about for the next
> sporulation.
> A really compelling avenue for investigation
> 
> >"The cool idea here is that the brain could be
> borrowing a form of cellular memory from
> >developmental biology to use for what we think of as
> memory," says Marcelo Wood, who
> >researches long-term memory at the University of
> California, Irvine.
> 
> Yes. It is
> 
> Eric B
> 
> On 12/11/08, Eric Burton  wrote:
> > I don't know how you derived the value 10^4, Matt,
> but that seems
> > reasonable to me. Terren, let me go back to the
> article and try to
> > understand what exactly it says is happening.
> Certainly that's my
> > editorial's crux
> >
> > On 12/11/08, Matt Mahoney 
> wrote:
> >> --- On Thu, 12/11/08, Eric Burton
>  wrote:
> >>
> >>> I don't think that each inheritor receives
> a full set of the
> >>> original's memories. But there may have
> *evolved* in spite of the
> >>> obvious barriers, a means of transferring
> primary or significant
> >>> experience from one organism to another in
> genetic form...
> >>> we can imagine such a thing given this news!
> >>
> >> Well, we could, if there was any evidence
> whatsoever for Lamarckian
> >> evolution, and if we thought with our reproductive
> organs.
> >>
> >> To me, it suggests that AGI could be implemented
> with a 10^4 speedup over
> >> whole brain emulation -- maybe. Is it possible to
> emulate a sparse neural
> >> network with 10^11 adjustable neurons and 10^15
> fixed, random connections
> >> using a non-sparse neural network with 10^11
> adjustable connections?
> >>
> >> -- Matt Mahoney, matmaho...@yahoo.com
> >>
> >>
> >>


  




Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Matt Mahoney
--- On Thu, 12/11/08, Eric Burton  wrote:

> I don't know how you derived the value 10^4, Matt, but that seems
> reasonable to me. Terren, let me go back to the article and try to
> understand what exactly it says is happening. Certainly that's my
> editorial's crux

A simulation of a neural network with 10^15 synapses requires 10^15 operations 
to update the activation levels of the neurons. If we assume 100 ms resolution, 
that is 10^16 operations per second.

If memory is stored in neurons rather than synapses, as suggested in the 
original paper (see http://www.cell.com/neuron/retrieve/pii/S0896627307001420 ) 
then the brain has a memory capacity of at most 10^11 bits, which could be 
simulated by a neural network with 10^11 connections (or 10^12 operations per 
second).

This assumes that (1) the networks are equivalent and (2) that there isn't any 
secondary storage in synapses in addition to neurons. The program I posted last 
week was intended to show (1). However (2) has not been shown. The fact that 
DNA methylation occurs in the cortex does not exclude the possibility of more 
than one memory mechanism. As a counter argument, the cortex has about 10^4 
times as much storage as the hippocampus (10^4 days vs. 1 day), but is not 10^4 
times larger.
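For what it's worth, the arithmetic behind the 10^4 figure can be checked mechanically (all numbers taken from the estimates above):

```python
# Back-of-envelope version of the estimate (numbers from the post).
synapses = 1e15            # synapse-based simulation: one op per synapse per update
neurons = 1e11             # neuron-based memory: ~1 bit per neuron
updates_per_sec = 10       # 100 ms time resolution

synapse_ops = synapses * updates_per_sec   # ops/sec for whole brain emulation
neuron_ops = neurons * updates_per_sec     # ops/sec for a 10^11-connection net

print(f"whole-brain emulation: {synapse_ops:.0e} ops/sec")
print(f"neuron-level network:  {neuron_ops:.0e} ops/sec")
print(f"speedup: ~{synapse_ops / neuron_ops:.0e}")
```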

-- Matt Mahoney, matmaho...@yahoo.com





Re: FW: [agi] Lamarck Lives!(?)

2008-12-11 Thread Eric Burton
I've actually got a pretty solid grasp on the underpinnings of this
stuff, Terren. I was agreeing with you: memory formation via gene
modification may be local to the brain alone. Probably not all cells, and
certainly not the reproductive cells, have their nuclei written to by
every, or any, given stimulus. Yet there are arguments from ancestral
memory, morphogenetic fields, and stranger things to explain.

What I see here is a blurring of the mechanisms of thought, memory,
and genetic storage, that I think is hinted at in our evolutionary
past. I could have expressed that a lot better. I apologise ;o

On 12/11/08, Matt Mahoney  wrote:
> --- On Thu, 12/11/08, Eric Burton  wrote:
>
>> I don't know how you derived the value 10^4, Matt, but that seems
>> reasonable to me. Terren, let me go back to the article and try to
>> understand what exactly it says is happening. Certainly that's my
>> editorial's crux
>
> A simulation of a neural network with 10^15 synapses requires 10^15
> operations to update the activation levels of the neurons. If we assume 100
> ms resolution, that is 10^16 operations per second.
>
> If memory is stored in neurons rather than synapses, as suggested in the
> original paper (see
> http://www.cell.com/neuron/retrieve/pii/S0896627307001420 ) then the brain
> has a memory capacity of at most 10^11 bits, which could be simulated by a
> neural network with 10^11 connections (or 10^12 operations per second).
>
> This assumes that (1) the networks are equivalent and (2) that there isn't
> any secondary storage in synapses in addition to neurons. The program I
> posted last week was intended to show (1). However (2) has not been shown.
> The fact that DNA methylation occurs in the cortex does not exclude the
> possibility of more than one memory mechanism. As a counter argument, the
> cortex has about 10^4 times as much storage as the hippocampus (10^4 days
> vs. 1 day), but is not 10^4 times larger.
>
> -- Matt Mahoney, matmaho...@yahoo.com
>
>
>




Re: [agi] Vector processing and AGI

2008-12-11 Thread Steve Richfield
Ben,

Before I comment on your reply, note that my former posting was about my
PERCEPTION rather than the REALITY of your understanding, with the
difference being taken up in the answer being less than 1.00 bit of
information.

Anyway, that said, on with a VERY interesting (to me) subject.

On 12/11/08, Ben Goertzel  wrote:
>
> Well, the conceptual and mathematical algorithms of NCE and OCP
> (my AI systems under development) would go more naturally on MIMD
> parallel systems than on SIMD (e.g. vector) or SISD systems.


There isn't much that an MIMD machine can do better than a similar-sized
SIMD machine. The usual problem is in finding a way to make such a large
SIMD machine. Anyway, my proposed architecture (now under consideration at
AMD) also provides for limited MIMD operation, where the processors could be
at different places in a single complex routine.

Anyway, I was looking at a 10,000:1 speedup over SISD, and then giving up
~10:1 to go from probabilistic logic equations to matrices that do the same
things, which is how I came up with the 1000:1 from the prior posting.

> I played around a bunch with MIMD parallel code on the Connection Machine
> at ANU, back in the 90s


The challenge is in geometry - figuring out how to get the many processors
to communicate and coordinate with each other without spending 99% of their
cycles in coordination and communication.

> However, indeed the specific software code we've written for NCE and OCP
> is intended for contemporary {distributed networks of multiprocessor
> machines}
> rather than vector machines or Connection Machines or whatever...
>
> If vector processing were to become a superior practical option for AGI,
> what would happen to the code in OCP or NCE?
>
> That would depend heavily on the vector architecture, of course.
>
> But one viable possibility is: the AtomTable, ProcedureRepository and
> other knowledge stores remain the same ... and the math tools like the
> PLN rules/formulas and Reduct rules remain the same ... but the MindAgents
> that use the former to carry out cognitive processes get totally
> rewritten...


I presume that everything is table driven, so the code could be completely
vectorized to execute the table on any sort of architecture, including SIMD.

However, if you are actually executing CODE, e.g. as compiled from a reality
representation, then things would be difficult for a SIMD architecture,
though again, you could also interpret tables containing the same
information at the usual 10:1 slowdown, which is what I was expecting
anyway.
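A toy illustration of the table-driven point (hypothetical code, not OCP/NCE internals): a table of probabilistic-logic operations can be executed in one vectorized, SIMD-friendly pass, where arbitrary per-item compiled code forces a scalar loop.

```python
import numpy as np

# Toy example: a batch of probabilistic-logic operations stored as a
# table. The same work is done two ways: a scalar loop (code-driven
# style) and one vectorized pass (table-driven, SIMD-friendly style).

rng = np.random.default_rng(1)
n = 100_000
a = rng.random(n)            # truth values of first arguments
b = rng.random(n)            # truth values of second arguments
op = rng.integers(0, 2, n)   # operation table: 0 = AND, 1 = OR

def scalar_pass(a, b, op):
    # Code-driven style: one branch per element.
    out = np.empty_like(a)
    for i in range(len(a)):
        # P(A and B) = ab; P(A or B) = a + b - ab (assuming independence)
        out[i] = a[i] * b[i] if op[i] == 0 else a[i] + b[i] - a[i] * b[i]
    return out

def vector_pass(a, b, op):
    # Table-driven style: compute both ops over the whole array and
    # select by the table; this maps directly onto SIMD hardware.
    return np.where(op == 0, a * b, a + b - a * b)

# Both styles agree on a sample of the batch.
assert np.allclose(scalar_pass(a[:1000], b[:1000], op[:1000]),
                   vector_pass(a[:1000], b[:1000], op[:1000]))
```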

> This would be a big deal, but not the kind of thing that means you have to
> scrap all your implementation work and go back to ground zero


That's what I figured.

> OO and generic design patterns do buy you *something* ...


OO is often impossible to vectorize.

> Vector processors aside, though ... it would be a much *smaller*
> deal to tweak my AI systems to run on the 100-core chips Intel
> will likely introduce within the next decade.


There is an 80-core chip due out any time now. Intel has had BIG problems
finding anything to run on them, so I suspect that they would be more than
glad to give you a few if you promise to do something with them.

I listened to an inter-processor communications plan for the 80 core chip
last summer, and it sounded SLOW - like there was no reasonable plan for
global memory. I suspect that your plan in effect requires FAST global
memory (to avoid crushing communications bottlenecks), and this is NOT
entirely simple on MIMD architectures.

My SIMD architecture will deliver equivalent global memory speeds of ~100x
the clock speed, which still makes it a high-overhead operation on a machine
that peaks out at ~20K operations per clock cycle.

Steve Richfield





Re: [agi] Vector processing and AGI

2008-12-11 Thread Ben Goertzel
Hi,

> There isn't much that an MIMD machine can do better than a similar-sized
> SIMD machine.

Hey, that's just not true.

There are loads of math theorems disproving this assertion...

>>
>> OO and generic design patterns do buy you *something* ...
>
>
> OO is often impossible to vectorize.

The point is that we've used OO design to wrap up all
processor-intensive code inside specific objects, which could then be
rewritten to be vector-processing friendly...

> There is an 80-core chip due out any time now. Intel has had BIG problems
> finding anything to run on them, so I suspect that they would be more than
> glad to give you a few if you promise to do something with them.

Indeed, AGI and physics simulation may be two of the app areas that have
the easiest times making use of these 80-core chips...

> I listened to an inter-processor communications plan for the 80 core chip
> last summer, and it sounded SLOW - like there was no reasonable plan for
> global memory.

I haven't put in the time to assess this for myself

> I suspect that your plan in effect requires FAST global
> memory (to avoid crushing communications bottlenecks),

True

>and this is NOT
> entirely simple on MIMD architectures.

True also

> My SIMD architecture will deliver equivalent global memory speeds of ~100x
> the clock speed, which still makes it a high-overhead operation on a machine
> that peaks out at ~20K operations per clock cycle.

Well, we're writing our code to run on the hardware we have now, while
making the design as flexible & modular as possible to as to minimize
the pain and suffering that will be incurred if/when radically
different hardware becomes the smartest option to use...

ben g




[agi] Images read from human brain-Old ground, new thoughts?

2008-12-11 Thread Robert Swaine
fMRI scanner reconstructing images seen by subjects, etc

http://www.yomiuri.co.jp/dy/features/science/20081211TDY01306.htm
copied below - Anyone read the actual article in "Neuron"?


//
Images read from human brain
The Yomiuri Shimbun

OSAKA--In a world first, a research group in Kyoto Prefecture has succeeded in 
processing and displaying optically received images directly from the human 
brain. 

The group of researchers at Advanced Telecommunications Research Institute 
International, including Yukiyasu Kamitani and Yoichi Miyawaki, from its 
NeuroInformatics Department, said about 100 million images can be read, adding 
that dreams as well as mental images are likely to be visualized in the future 
in the same manner. 

The research will be published Thursday in the U.S. scientific journal 
"Neuron." 

Optically received images are converted to electrical signals in the retina and 
treated in the brain's visual cortex. 

In the recent experiment, the research group asked two people to look at 440 
different still images one by one on a 100-pixel screen. Each of the images 
comprised random gray sections and flashing sections. 

The research group measured subtle differences in brain activity patterns in 
the visual cortexes of the two people with a functional magnetic resonance 
imaging (fMRI) scanner. They then subdivided the images and recorded the 
subjects' recognition patterns. 

The research group later measured the visual cortexes of the two people who 
were looking at the word "neuron" and five geometric figures such as a square 
and a cross. Based on the stored brain patterns, the research group analyzed 
the brain activities and reconstructed the images of Roman letters and other 
figures, succeeding in recreating optically received images. 

(Dec. 11, 2008)
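For readers curious how such decoding works in principle, here is a heavily simplified sketch on synthetic data (the actual Miyawaki/Kamitani method in "Neuron" combines many local image decoders; the single ridge-regression decoder and simulated voxel responses below are illustrative assumptions, not their procedure):

```python
import numpy as np

# Simplified sketch of image reconstruction from brain activity: learn a
# linear map from voxel patterns back to pixel values using random
# training stimuli, then reconstruct a held-out figure. All data here
# is synthetic; real fMRI responses are far noisier and nonlinear.

rng = np.random.default_rng(2)
PIXELS, VOXELS, TRIALS = 100, 300, 440   # 10x10 images, 440 stimuli as reported

images = rng.integers(0, 2, (TRIALS, PIXELS)).astype(float)  # random stimuli

# Pretend each voxel responds linearly to the image, plus noise.
response = rng.normal(size=(PIXELS, VOXELS))
voxels = images @ response + 0.5 * rng.normal(size=(TRIALS, VOXELS))

# Fit decoder pixels ~ voxels by ridge regression (closed form).
lam = 1.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(VOXELS),
                    voxels.T @ images)

# Reconstruct a held-out cross shape from its simulated voxel response.
cross = np.zeros((10, 10))
cross[4:6, :] = 1
cross[:, 4:6] = 1
test_vox = cross.ravel() @ response
recon = (test_vox @ W).reshape(10, 10)

corr = np.corrcoef(recon.ravel(), cross.ravel())[0, 1]
print("reconstruction/stimulus correlation:", round(corr, 3))
```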




Re: [agi] Vector processing and AGI

2008-12-11 Thread Steve Richfield
Ben,

On 12/11/08, Ben Goertzel  wrote:
>
> > There isn't much that an MIMD machine can do better than a similar-sized
> > SIMD machine.
>
> Hey, that's just not true.
>
> There are loads of math theorems disproving this assertion...


Oops, I left out the presumed adjective "real-world". Of course there are
countless Diophantine equations and other mathematical trivia that aren't
vectorizable.

However, anything resembling a brain, in that the processing can be done by
billions of slow components, must by its very nature be vectorizable. Hence,
in the domain of our discussions, I think my statement still holds.

>>
> >> OO and generic design patterns do buy you *something* ...
> >
> >
> > OO is often impossible to vectorize.
>
> The point is that we've used OO design to wrap up all
> processor-intensive code inside specific objects, which could then be
> rewritten to be vector-processing friendly...


As long as the OO is at a high enough level so as not to gobble up a bunch
of time in the SISD control processor, then no problem.

> > There is an 80-core chip due out any time now. Intel has had BIG problems
> > finding anything to run on them, so I suspect that they would be more
> than
> > glad to give you a few if you promise to do something with them.
>
> Indeed, AGI and physics simulation may be two of the app areas that have
> the easiest times making use of these 80-core chips...


I don't think Intel is even looking at these. They are targeting embedded
applications.

Steve Richfield


