Derek,

On 7/21/08, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
>
> > > I have attached an earlier 2006 paper with *_pictures_* of the learned
> > > transfer functions, which look a LOT like what is seen in a cat's and
> > > monkey's visual processing.
> >
> > ... which is so low-level that it counts as peripheral wiring.
>
> True.  Still, it is kind of cool stuff for folks interested in how neural
> systems might self-organize from sensory data.  The visual world has edges
> and borders at various scales and degrees of sharpness and it is interesting
> to see how that can be learned.  Unfortunately, although the linearity
> assumptions of PCA might just barely allow this sort of "proto-V1" as in the
> paper, it doesn't seem likely to extend further up in a feature abstraction
> hierarchy where more complex relationships would seem to require
> nonlinearities.
>

THIS is a big question. Remembering that absolutely ANY function can be
performed by passing the inputs through a suitable non-linearity, adding
them up, and running the result through another suitable non-linearity, it
isn't clear what the limitations of "linear" operations are, given a suitable
"translation" of units or point of view. Certainly, all fuzzy logical
functions can be performed this way. I even presented a paper at the very
first NN conference in San Diego showing that one of the two inhibitory
synapses ever to be characterized was precisely what was needed to perform
an AND NOT on the logarithms of the probabilities of assertions being true,
right down to the discontinuity at 1.
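
To make that concrete (purely as an illustration, assuming the two assertions
are independent, and NOT as a claim about the actual synaptic mechanism): an
excitatory input carrying log P(A) plus an inhibitory input carrying
log(1 - P(B)) sums to log P(A AND NOT B), and the inhibitory term dives toward
minus infinity as P(B) approaches 1, which is the discontinuity I was
referring to. A few lines of Python show the shape of it:

import numpy as np

def log_and_not(log_p_a, p_b):
    # log P(A and not B) = log P(A) + log(1 - P(B)), assuming A and B are
    # independent. The inhibitory term log(1 - P(B)) dives to -infinity as
    # P(B) -> 1, which is the "discontinuity at 1".
    return log_p_a + np.log1p(-p_b)

p_a = 0.9
with np.errstate(divide="ignore"):   # log(0) at P(B) = 1 is exactly the point
    for p_b in (0.0, 0.5, 0.9, 0.99, 0.999999, 1.0):
        lp = log_and_not(np.log(p_a), p_b)
        print(f"P(B) = {p_b:<10} log P(A and not B) = {lp: .4f}   P = {np.exp(lp):.6f}")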

>
> Assuming the author's analysis is correct, the observation that the
> discovered eigenvectors form groups that can express rotations of edge (etc)
> filters at various frequencies is kind of nifty, even if it turns out not to
> be biologically plausible.
>

Did you see anything there that was not biologically plausible?
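
For anyone who hasn't seen this kind of result, the flavor of it is easy to
reproduce. The sketch below is my own toy illustration (assuming numpy and
scikit-learn; it is not the author's code): take random patches from a natural
image, remove each patch's mean, and the leading principal components come out
as oriented, edge-like filters at a few scales.

import numpy as np
from sklearn.datasets import load_sample_image
from sklearn.decomposition import PCA

# Grayscale a natural image that ships with scikit-learn.
img = load_sample_image("china.jpg").mean(axis=2)

# Sample random 8x8 patches and remove each patch's mean (DC component).
rng = np.random.default_rng(0)
n_patches, size = 20000, 8
rows = rng.integers(0, img.shape[0] - size, n_patches)
cols = rng.integers(0, img.shape[1] - size, n_patches)
patches = np.stack([img[r:r + size, c:c + size].ravel() for r, c in zip(rows, cols)])
patches -= patches.mean(axis=1, keepdims=True)

# The leading components tend to come out as oriented, edge/bar-like filters
# at a few scales; view filters[i] with imshow to see the structure.
pca = PCA(n_components=16).fit(patches)
filters = pca.components_.reshape(-1, size, size)
print(filters.shape)                                # (16, 8, 8)
print(pca.explained_variance_ratio_.round(3))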

> I don't see any broad generality for AGI beyond very low-level sensory
> processing given the limits of PCA
>

Make that present-day PCA. Several people are working on its limitations,
and there seems to be some reason for hope of much better things to come.
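
One direction that already exists is kernel PCA, which does the same
eigen-decomposition after a nonlinear mapping of the inputs. A toy sketch (my
own illustration, assuming scikit-learn): two concentric rings, which no
linear projection can separate, come apart in the kernel-PCA projection.

import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric rings: both classes are centered on the origin, so no
# linear projection separates them.
X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

lin = PCA(n_components=2).fit_transform(X)
rbf = KernelPCA(n_components=2, kernel="rbf", gamma=10.0).fit_transform(X)

def separation(Z):
    # Gap between the class means in units of pooled standard deviation,
    # taking the better of the two projected components.
    scores = []
    for j in range(Z.shape[1]):
        a, b = Z[y == 0, j], Z[y == 1, j]
        scores.append(abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var())))
    return max(scores)

print("linear PCA separation:", round(separation(lin), 2))   # expect roughly 0
print("kernel PCA separation:", round(separation(rbf), 2))   # expect clearly larger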

> and the sheer volume of training data required to sort out the principal
> components of high-dimensional inputs.
>

Given crummy shitforbrains Hebbian neurons that aren't smart enough to
continuously normalize their synaptic weights, etc. This too needs MUCH more
work.
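
For what a smarter neuron buys you: Oja's rule is ordinary Hebbian learning
with a continuous normalizing term folded into the update, and a single such
neuron converges, one sample at a time, to the first principal component of
its input stream. A minimal sketch (numpy only, my own illustration):

import numpy as np

rng = np.random.default_rng(0)

# A correlated 2-D input stream with one dominant direction.
cov = np.array([[3.0, 1.0],
                [1.0, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Oja's rule:  dw = eta * y * (x - y * w),  with  y = w . x
# The -y*w term keeps |w| near 1 and drives w toward the leading
# eigenvector of the input covariance, i.e. the first principal component.
w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

true_pc = np.linalg.eigh(cov)[1][:, -1]      # leading eigenvector
print("Oja weight vector:", np.round(w / np.linalg.norm(w), 3))
print("true first PC:    ", np.round(true_pc, 3))   # same direction up to sign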

>
> For a much more detailed, capable, and perhaps more neurally plausible
> model of similar stuff, the work of Risto Miikkulainen's group is a lot of
> fun.
>

Do you have a hyperlink?

Thanks.

Steve Richfield


