Steve Richfield wrote:
Richard,
You are confusing what PCA now is with what it might become. I am more interested in the dream than in the present reality. Detailed comments follow... On 7/21/08, *Richard Loosemore* <[EMAIL PROTECTED]> wrote:

    Steve Richfield wrote:

         Maybe not "complete" AGI, but a good chunk of one.


    Mercy me!  It is not even a gleam in the eye of something that would
    be half adequate.

Who knows what the building blocks of the first successful AGI will be.

Ummm.... some of us believe we do. We see the ingredients, we see the obvious traps and pitfalls, we see the dead ends. We may even see the beginnings of a complete theory. But that is by the by: we see the dead ends clearly enough, and PCA is one of them.




Remember that YOU are made of wet neurons, and who knows, maybe they work by some as-yet-to-be-identified mathematics that will be uncovered in the quest for a better PCA.

         Do you have any favorites?


    No.  The ones I have seen are not worth a second look.

I had the same opinion.

        I have attached an earlier 2006 paper with *_pictures_* of the
        learned transfer functions, which look a LOT like what is seen
        in a cat's and monkey's visual processing.


    ... which is so low-level that it counts as peripheral wiring.

Agreed, but there is little difference between GOOD compression and understanding, so if these guys are truly able to (eventually) perform good compression, then maybe we are on the way to understanding.

Now that, I'm afraid, is simply not true. What would make you say that "there is little difference between GOOD compression and understanding"? That statement is unsupportable.
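
Just so it is clear what is being claimed, the sense in which single-layer PCA is a compressor can be made concrete. A minimal numpy sketch (illustrative only: the random `patches` array is a stand-in for real 8x8 image patches, and the variable names are mine, not the papers'):

    import numpy as np

    # PCA-as-compression on (stand-in) image patches.  With natural
    # images instead of noise, the rows of `basis` come out as smooth,
    # grating-like filters -- plain PCA's version of the "learned
    # transfer functions" pictured in such papers.
    rng = np.random.default_rng(0)
    patches = rng.standard_normal((10000, 64))   # stand-in for 8x8 patches

    mean = patches.mean(axis=0)
    X = patches - mean

    # Top-k principal directions via SVD of the centred data.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    k = 8
    basis = Vt[:k]

    codes = X @ basis.T               # "compress": 64 numbers -> 8 per patch
    restored = codes @ basis + mean   # "decompress": best rank-k reconstruction

    err = np.mean((patches - restored) ** 2)
    print(f"8:1 compression, mean squared error: {err:.4f}")

Keeping a fixed truncated basis per 8x8 block is a close cousin of JPEG's DCT, which is presumably the "grossly degenerate example" mentioned further down.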



        Note that in the last section, where they consider multi-layer
        applications, they apparently suggest using *_only one_*
        PCA layer!


    Of course they do:  that is what all these magic bullet people say.
    They can't figure out how to do things in more than one layer, and
    they do not really understand that it is *necessary* to do things in
    more than one layer, so, guess what, they suggest that we do not *need*
    more than one layer.

    Sigh.  Programmer Error.

I noted this comment because it didn't ring true for me either. However, my take on this is that a real/future/good PCA will work for many layers, and not just the first.

Well, there is a sense in which I would agree with this, but the problem is that by the time it has been extended sufficiently to become multilayer, it will no longer be recognisable as PCA, and it will not necessarily retain any of the original good features of PCA. There may be other mechanisms that do the same multi-layer understanding much better than this hypothetical PCA+++, and, finally, those other mechanisms may be much easier to discover than PCA+++ itself.

I simply do not believe that you can get there by starting from here. That is why I describe PCA as a dead end.
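
One concrete way to see the problem with a naive multilayer extension: two stacked linear PCA layers compose into a single linear map, so stacking adds no representational power unless a nonlinearity is inserted between the layers, and at that point the closed-form machinery that makes PCA attractive is gone. A minimal numpy sketch (illustrative only; random data stands in for real inputs):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 32))   # stand-in for real data
    Xc = X - X.mean(axis=0)

    def pca_basis(data, k):
        """Top-k principal directions of already-centred data, via SVD."""
        _, _, Vt = np.linalg.svd(data, full_matrices=False)
        return Vt[:k]

    B1 = pca_basis(Xc, 16)   # "layer 1": 32 -> 16 dims
    Y1 = Xc @ B1.T           # layer-1 codes (already zero-mean)
    B2 = pca_basis(Y1, 8)    # "layer 2": 16 -> 8 dims
    Y2 = Y1 @ B2.T

    # The two layers collapse into the single linear map B2 @ B1.
    print(np.allclose(Y2, Xc @ (B2 @ B1).T))   # True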




Note that the extensive training was LESS than what a baby sees during its first hour in the real world.

           To give you an idea of what I am looking for, does the
           algorithm go beyond single-level encoding patterns?

         Many of the articles, including the one above, make it clear
        that they are up against a computing "brick wall". It seems that
        algorithmic honing is necessary to prove whether the algorithms
        are any good. Hence, no one has shown any practical application
        (yet), though they note that JPEG encoding is a sort of grossly
        degenerate example of their approach.
         Of course, the present computational difficulties are NO
        indication that this isn't the right and best way to go, though
        I agree that this is yet to be proven.


    Hmm... you did not really answer the question here.

Increasing bluntness: How are they supposed to test multiple-layer methods when they have to run their computers for days just to test a single layer? PCs just don't last that long, and Microsoft has provided no checkpoint capability to support year-long executions.

What I meant was: does anyone have any reason to believe that the scalability of the multilevel PCA will be such that we can get human-level intelligence out of it, using a computer smaller than the size of the entire universe? In that context, it would not be an answer to say ... "well, those folks haven't been able to get powerful enough computers yet, so we don't know...." :-)




         Does your response indicate that you are willing to take a shot
        at explaining some of the math murk in more recent articles? I
        could certainly use any help that I can get. So far, it appears
        that a PCA and matrix algebra glossary of terms and
        abbreviations would go a LONG way to understanding these
        articles. I wonder if one already exists?


    I'd like to help (and I could), but do you realise how pointless it is?

Not yet. I agree that it hasn't gone anywhere yet. Please make your case that this will never go anywhere.
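
(On the glossary question above: most of that vocabulary is standard linear algebra. A minimal glossary sketch, using textbook definitions rather than any one article's notation:

    data matrix:        X in R^{n x d}, one sample per row, columns centred
    covariance:         C = (1/n) X^T X
    eigendecomposition: C = V \Lambda V^T -- the columns of V are the
                        eigenvectors ("principal components"); the diagonal
                        of \Lambda holds the eigenvalues, i.e. the variance
                        captured by each component
    SVD:                X = U \Sigma V^T, which yields the same V more
                        stably, since C = V (\Sigma^2 / n) V^T
    encoding:           Y_k = X V_k, projection onto the first k columns of V
    reconstruction:     X_hat = Y_k V_k^T, the least-squares-optimal rank-k
                        linear reconstruction of X

The articles' own abbreviations vary, but they are mostly built on these pieces.)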

My case is as follows: PCA shows signs of being either flat (intrinsically single-level) or multi-level but completely unscalable. Until someone in the PCA community can show good reason to believe that it can scale up while at the same time discovering the kinds of higher-level regularities that humans are capable of, there is no reason to believe that it will scale up. Just ask someone in the PCA community for such a demonstration: they cannot provide it, and until they do, it is pure speculation to suppose that it will scale.




        All this brings up another question to consider: Suppose that a
        magical processing method were discovered that did everything
        that AGIs needed, but took WAY more computing power than is
        presently available. What would people here do?
        1.  Go work on better hardware.
        2.  Work on faster/crummier approximations.
        3.  Ignore it completely and look for some other breakthrough.


    Steve, you raise a deeply interesting question, at one level,
    because of the answer that it provokes:  if you did not have the
    computing power to prove that the "magical processing method"
    actually was capable of solving the problems of AGI, then you would
    not be in any position to *know* that it was capable of solving the
    problems of AGI.

This all depends on the underlying theoretical case. Early Game Theory applications were also limited by compute power, but, holding the proof that this was as good as could be done, people pushed for more compute power rather than walking away and looking for some other approach. I remember when the RAND Corp required 5 hours just to solve a 5x5 non-zero-sum game.

    Your question answers itself, in other words.

Only in the absence of theoretical support/proof of optimality. PCA looked as though such a proof might be in its future.

Aye, but there's the rub. I see no such proof in the near future. I see no principle on which such a proof could even be based.



Richard Loosemore

