Steve Richfield wrote:
Richard,

On 7/21/08, Richard Loosemore <[EMAIL PROTECTED]> wrote:

    Principal component analysis is not new; it has a long history,

Yes, as I have just discovered. What I do NOT understand is why anyone bothers with clustering (except through ignorance - my own excuse), which seems on its face to be greatly inferior.
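
(To make the comparison concrete, here is a minimal sketch, assuming NumPy and scikit-learn and made-up data. Clustering presumes the data falls in discrete clumps; PCA instead names the continuous directions of variation.)

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# One continuous factor spread along a line in 2-D: nothing here
# forms discrete clumps for a clusterer to find.
t = rng.normal(size=(500, 1))
X = t @ np.array([[3.0, 1.0]]) + 0.1 * rng.normal(size=(500, 2))

print(PCA(n_components=1).fit(X).components_)   # ~ the [3, 1] direction, unit length
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)   # three arbitrary cuts of the same continuum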

    and so far it is a very long way from being the basis for a complete
    AGI,

Maybe not "complete" AGI, but a good chunk of one.

Mercy me! It is not even a gleam in the eye of something that would be half adequate.



    let alone a theory of everything in computer science.

OK, so that may be a bit of an exaggeration, but nonetheless it looks like there is SOMETHING big out there that could potentially do the particular jobs that I have outlined.

    Is there any concrete reason to believe that this particular PCA
    paper is doing something that is some kind of quantum leap beyond
    what can be found in the (several thousand?) other PCA papers that
    have already been written?

Do you have any favorites?

No.  The ones I have seen are not worth a second look.


I have attached an earlier 2006 paper with *pictures* of the learned transfer functions, which look a LOT like what is seen in a cat's and a monkey's visual processing.

... which is so low-level that it counts as peripheral wiring.


Note that in the last section, where they consider multi-layer applications, they apparently suggest using *only one* PCA layer!

Of course they do: that is what all these magic-bullet people say. They can't figure out how to do things in more than one layer, and they do not really understand that it is *necessary* to do things in more than one layer, so, guess what, they suggest that we do not *need* more than one layer.

Sigh.  Programmer Error.
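
(For what it's worth, there is a precise linear-algebra reason the one-layer suggestion keeps appearing: stacking a second *linear* PCA on the first buys nothing, because PCA outputs are already decorrelated, so the second layer can only truncate. A minimal sketch, assuming scikit-learn; depth only becomes meaningful once a nonlinearity sits between the layers.)

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50)) @ rng.normal(size=(50, 50))

# "Deep" linear PCA: a 5-component PCA stacked on a 20-component one...
h = PCA(n_components=20, svd_solver="full").fit_transform(X)
z_stacked = PCA(n_components=5, svd_solver="full").fit_transform(h)

# ...collapses to a single 5-component PCA (up to sign flips): the
# second linear layer merely truncates what the first produced.
z_single = PCA(n_components=5, svd_solver="full").fit_transform(X)
print(np.allclose(np.abs(z_stacked), np.abs(z_single), atol=1e-6))   # True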



    To give you an idea of what I am looking for, does the algorithm go
    beyond single-level encoding patterns?

Many of the articles, including the one above, make it clear that they are up against a computing "brick wall". It seems that algorithmic honing is necessary to prove whether the algorithms are any good. Hence, no one has shown any practical application (yet), though they note that JPEG encoding is a sort of grossly degenerate example of their approach. Of course, the present computational difficulties are NO indication that this isn't the right and best way to go, though I agree that this is yet to be proven.
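
(The JPEG remark has a precise form worth spelling out: the fixed 8-point DCT used in JPEG closely approximates the PCA/Karhunen-Loeve basis of highly correlated signals. A minimal sketch, assuming NumPy/SciPy and the textbook AR(1) stand-in for pixel rows:)

import numpy as np
from scipy.fft import dct

n, rho = 8, 0.95
# Covariance of a first-order Markov (AR(1)) process, the textbook
# stand-in for neighboring-pixel correlation in images.
C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
w, V = np.linalg.eigh(C)
klt = V[:, ::-1]           # PCA/KLT basis, largest variance first

D = dct(np.eye(n), norm="ortho", axis=0)   # row k = k-th DCT-II basis vector

# Each fixed DCT vector nearly matches the corresponding learned KLT
# vector: |cosine similarity| is close to 1 for every k.
for k in range(n):
    print(k, round(abs(D[k] @ klt[:, k]), 4))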

Hmm... you did not really answer the question here.



    Can it find patterns of patterns, up to arbitrary levels of depth?
     And is there empirical evidence that it really does find a set of
    patterns comparable to those found by the human cognitive mechanism,
    without missing any obvious cases?

Again, take a look at the pictures and provide your own opinion. It sounds like you are a LOT more familiar with this than I am.

    Bloated claims for the effectiveness of some form of PCA turn up
    frequently in cog sci, NN and AI.  It can look really impressive
    until you realize how limited and non-extensible it is.

Curiously, there were NO such claims in any of these articles. Just lots of murky math. The attached article is the least opaque of the bunch. I was just pointing out that if this ever really DOES come together, then WOW. Further, disparate people seem to be coming up with different pieces of the puzzle. Does your response indicate that you are willing to take a shot at explaining some of the math murk in more recent articles? I could certainly use any help that I can get. So far, it appears that a PCA and matrix-algebra glossary of terms and abbreviations would go a LONG way toward understanding these articles. I wonder if one already exists?

I'd like to help (and I could), but do you realise how pointless it is? I have so many other things to do that I am not getting on with seriously important tasks as it is, never mind explaining PCA minutiae.


All this brings up another question to consider: Suppose that a magical processing method were discovered that did everything that AGIs needed, but took WAY more computing power than is presently available. What would people here do?
1.  Go work on better hardware.
2.  Work on faster/crummier approximations.
3.  Ignore it completely and look for some other breakthrough.

Steve, you raise a deeply interesting question, at one level, because of the answer that it provokes: if you did not have the computing power to prove that the "magical processing method" actually was capable of solving the problems of AGI, then you would not be in any position to *know* that it was capable of solving the problems of AGI.

Your question answers itself, in other words.




Richard Loosemore

There is a NN parallel in electric circuit simulation programs like SPICE. There, the execution time goes up as roughly the *square* of the circuit complexity, yet NNs scale linearly with complexity by ignoring Thevenin's Theorem (which might provide better back propagation than conventional forms do); a rough timing sketch of that contrast follows below.

Thanks for your comments.

Steve Richfield
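
(A rough timing sketch of the scaling contrast, assuming NumPy. Dense matrices are the worst case, and real SPICE exploits sparsity, so treat the constants as illustrative only: a circuit-style step needs a linear solve, while an NN-style step is a single matrix-vector pass.)

import time
import numpy as np

rng = np.random.default_rng(0)
for n in (250, 500, 1000, 2000):
    # Diagonally dominant stand-in for a nodal conductance matrix.
    A = rng.random((n, n)) + n * np.eye(n)
    b = rng.random(n)
    t0 = time.perf_counter(); np.linalg.solve(A, b)
    t_solve = time.perf_counter() - t0
    t0 = time.perf_counter(); A @ b
    t_pass = time.perf_counter() - t0
    print(f"n={n:5d}   circuit-style solve {t_solve:.4f}s   NN-style pass {t_pass:.5f}s")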
================
Steve Richfield wrote:

        Y'all,
         I have long predicted a coming "Theory of Everything" (TOE) in
        CS that would, among other things, be the "secret sauce" that
        AGI so desperately needs. This year at WORLDCOMP I saw two
        presentations that seem to be running in the right direction. An
        earlier IEEE article by one of the authors seems to be right on
        target. Here is my own take on this...
        Form: The TOE would provide a way for unsupervised learning to
        rapidly form productive NNs, would provide a subroutine into
        which AGI programs could throw observations and have SIGNIFICANT
        patterns identified, would be the key to excellent video
        compression, and, indirectly, would yield the "perfect"
        encryption that nearly perfect compression implies.
         Some video compression folks in Germany have come up with
        "Principal Component Analysis" that works a little like
        clustering, only it also includes temporal consideration, so
        that things that come and go together are presumed to be
        related, thereby eliminating the "superstitious clustering"
        problem of static cluster analysis. There is just one "catch":
        This is buried in array transforms and compression jargon that
        baffles even me, a former in-house numerical analysis consultant
        to the physics and astronomy departments of a major university.
        Further, it is computationally intensive.
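
(For concreteness, one plausible reading of that "temporal consideration" is to run PCA over short stacks of consecutive frames rather than single frames, so pixels that come and go together load on the same component. A minimal sketch, assuming NumPy and scikit-learn; the German group's actual formulation may differ:)

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Hypothetical video: 1000 frames of 16x16 pixels, flattened.
frames = rng.normal(size=(1000, 16 * 16))

# Stack T consecutive frames into one sample, so pixels that co-vary
# across time end up loading on the same principal component.
T = 4
windows = np.stack([frames[i:i + T].ravel()
                    for i in range(len(frames) - T + 1)])

pca = PCA(n_components=32).fit(windows)
print(pca.components_.shape)   # (32, 1024): each component is a 4-frame pattern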
         Teaser: Their article is entitled "A new method for Principal
        Component Analysis of high-dimensional data using Compressive
        Sensing" and applies methods that *_benefit_* from having many
        dimensions, rather than being plagued by them (e.g. as in
        cluster analysis).
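
(One way such methods can *benefit* from many dimensions: compress the data with random projections first, then do exact linear algebra in the small compressed space; random directions behave better as the ambient dimension grows. The sketch below is the Halko-Martinsson-Tropp randomized range finder in plain NumPy, offered as an analogy rather than as the paper's actual algorithm:)

import numpy as np

def randomized_pca(X, k, oversample=10, seed=0):
    # Approximate the top-k principal components by compressing the
    # data with a random projection first, then doing an exact SVD in
    # the small compressed space.
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    Omega = rng.normal(size=(Xc.shape[1], k + oversample))
    Q, _ = np.linalg.qr(Xc @ Omega)        # orthonormal basis for the range
    _, s, Vt = np.linalg.svd(Q.T @ Xc, full_matrices=False)
    return Vt[:k], s[:k]                   # components, singular values

# 200 samples in a 10,000-dimensional space: expensive to attack via a
# 10,000 x 10,000 covariance matrix, cheap via the projection.
X = np.random.default_rng(1).normal(size=(200, 10_000))
components, svals = randomized_pca(X, k=5)
print(components.shape)                    # (5, 10000)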
         Enter a retired math professor who has come up with some clever
        "simplifications" (to the computer, but certainly not to me) to
        make these sorts of computations tractable for real-world use.
        It looks like this could be quickly put to use, if only someone
        could translate this stuff from linear algebra to English for us
        mere mortals. He also authored a textbook that Amazon provides
        peeks into, but in addition to its 3-digit price tag, it was
        also rather opaque.
        It's been ~40 years since I last had my head in matrix
        transforms, so I have ordered up some books to hopefully help me
        through it. Is there someone here who is fresh in this area who
        would like to take a shot at "translating" some abstruse
        mathematical articles into English - or at least providing a few
        pages of prosaic footnotes to explain their terminology?
         I will gladly forward the articles that seem to be relevant to
        anyone who wants to take a shot at this.
         Any takers?
         Steve Richfield
         