Sorry, meant lossy of course.

James

Mark Waser <[EMAIL PROTECTED]> wrote:

>> I think that generalization via lossless compression could more readily be
>> a requirement for an AGI.

Human beings don't do lossless compression, so lossless compression clearly
*isn't* a requirement.  Lossless compression also clearly requires more
resources than generalization that is allowed to lose the odd example.
  
>> Also I must agree with Matt that you can't have knowledge separate from
>> other knowledge; everything is intertwined, and that is the problem.

You're missing the point.  Yes, knowledge is intertwined; however, look at
how it works when humans argue/debate.  Knowledge is divided into a small
number of concepts that both humans understand (although they may debate the
truth value of those concepts).  Arguments are normally easily resolved (even
if the resolution is "agree to disagree") because the humans quickly reach
the concepts at the root of the pyramid supporting the concept under question
-- and that pyramid is *never* very large because humans simply don't work
that way.  Take any debate (even the truly fiery ones) and you'll find that
the number of concepts involved is *well* under 100 (if it even reaches
twenty).
  
>> It is very difficult to teach a computer something without it knowing ALL
>> other things related to that, because then some inference it tries to make
>> will be wrong, regardless.
  
But this is *precisely* how children are taught.  You have to start
somewhere, and you start by saying that certain concepts are just true (even
though they may not *always* be true) and that it's not worthwhile to examine
the concepts underneath them unless there's a *really* good reason.  The way
you and Matt are arguing, I would need to always know *and* use General
Relativity even for things that are adequately handled by Newtonian Physics.
Yes, there *will* be errors when you reach edge cases (very high speeds, in
the Physics case), but there is *absolutely* no way to avoid this because you
virtually never know when you're going to wander over a phase change when
you're in the realm of new experiences.
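
A rough illustration of the edge-case point (Python; the chosen speeds are
arbitrary examples): the relativistic correction gamma = 1/sqrt(1 - v^2/c^2)
is indistinguishable from 1 at everyday speeds, so the "wrong" Newtonian
theory serves fine until the high-speed edge cases.

    import math

    C = 299792458.0  # speed of light, m/s
    for v in (30.0, 3.0e4, 3.0e7, 2.9e8):  # car, orbital speed, 10% c, ~97% c
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        print("v = %9.3g m/s  gamma = %.9f" % (v, gamma))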
  
>> There is nothing, that I know of, that humans know that is not in terms of
>> something else; that is one thing that adds to the complexity of the issue.

Yes, but I believe that there *is* a reasonably effective cognitive closure
that contains a reasonably small number of concepts, which can then apply
external lookups and learning for everything else that it needs.
  
>> But that means that an architecture for AI will have to have a method for
>> finding these inconsistencies and correcting them with good efficiency.

Yes!  Exactly and absolutely!  In fact, I would almost argue that this is
*all* that intelligence does . . .
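
A toy sketch of the inconsistency-finding step (Python; the beliefs and atoms
are invented examples, and real systems need far better than brute force --
which is exactly the efficiency problem under discussion): exhaustively test
truth assignments over a small belief set and report whether it is
satisfiable.

    from itertools import product

    # Each belief maps a truth assignment to True/False (all hypothetical).
    beliefs = [
        lambda a: a["bird(tweety)"],                              # tweety is a bird
        lambda a: (not a["bird(tweety)"]) or a["flies(tweety)"],  # birds fly
        lambda a: not a["flies(tweety)"],                         # tweety can't fly
    ]
    atoms = ["bird(tweety)", "flies(tweety)"]

    consistent = any(
        all(b(dict(zip(atoms, vals))) for b in beliefs)
        for vals in product([True, False], repeat=len(atoms))
    )
    print("consistent:", consistent)  # False: the belief set contradicts itself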
 

  
  
----- Original Message -----
From: James Ratcliff
To: agi@v2.listbox.com
Sent: Friday, November 17, 2006 9:13 AM
Subject: Re: [agi] A question on the symbol-system hypothesis
   

I think that generalization via lossless compression could more readily be a
requirement for an AGI.

Also I must agree with Matt that you can't have knowledge separate from other
knowledge; everything is intertwined, and that is the problem.
There is nothing, that I know of, that humans know that is not in terms of
something else; that is one thing that adds to the complexity of the issue.
It is very difficult to teach a computer something without it knowing ALL
other things related to that, because then some inference it tries to make
will be wrong, regardless.
But that means that an architecture for AI will have to have a method for
finding these inconsistencies and correcting them with good efficiency.

James Ratcliff

Mark Waser <[EMAIL PROTECTED]> wrote:
>> I don't believe it is true that better compression implies higher
>> intelligence (by these definitions) for every possible agent, environment,
>> universal Turing machine and pair of guessed programs.

Which I take to agree with my point.

>> I also don't believe Hutter's paper proved it to be a general trend (by
>> some reasonable measure).

Again, which I take to be agreement.

>> But I wouldn't doubt it.

Depending upon what you mean by compression, I would strongly doubt it.  I
believe that lossless compression is emphatically *not* part of higher
intelligence in most real-world conditions and, in fact, that the gains
provided by "losing" a lot of data make a much higher intelligence possible
with the same limited resources than an intelligence that is constrained by
the requirement not to lose data.
      
----- Original Message -----
From: Matt Mahoney
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 2:17 PM
Subject: Re: [agi] A question on the symbol-system hypothesis
       

In the context of AIXI, intelligence is measured by an accumulated reward
signal, and compression is defined by the size of a program (with respect to
some fixed universal Turing machine) guessed by the agent that is consistent
with the observed interaction with the environment.  I don't believe it is
true that better compression implies higher intelligence (by these
definitions) for every possible agent, environment, universal Turing machine
and pair of guessed programs.  I also don't believe Hutter's paper proved it
to be a general trend (by some reasonable measure).  But I wouldn't doubt it.
        
-- Matt Mahoney, [EMAIL PROTECTED]        

----- Original Message ----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Thursday, November 16, 2006 12:18:46 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

>> 1. The fact that AIXI^tl is intractable is not relevant to the proof that
>> compression = intelligence, any more than the fact that AIXI is not
>> computable.  In fact it is supporting because it says that both are hard
>> problems, in agreement with observation.

Wrong.  Compression may (and, I might even be willing to admit, does) equal
intelligence under the conditions of perfect and total knowledge.  It is my
contention, however, that without those conditions compression does not equal
intelligence, and AIXI does absolutely nothing to disprove my contention
since it assumes (and requires) those conditions -- which emphatically do not
exist.
       
>> 2. Do not confuse the two compressions.  AIXI proves that the optimal
>> behavior of a goal-seeking agent is to guess the shortest program
>> consistent with its interaction with the environment so far.  This is
>> lossless compression.  A typical implementation is to perform some pattern
>> recognition on the inputs to identify features that are useful for
>> prediction.  We sometimes call this "lossy compression" because we are
>> discarding irrelevant data.  If we anthropomorphise the agent, then we say
>> that we are replacing the input with perceptually indistinguishable data,
>> which is what we typically do when we compress video or sound.
        
I haven't confused anything.  Under perfect conditions, and only under
perfect conditions, does AIXI prove anything.  You don't have perfect
conditions, so AIXI proves absolutely nothing.

----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, November 15, 2006 7:20 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

       

1. The fact that AIXI^tl is intractable is not relevant to the proof that
compression = intelligence, any more than the fact that AIXI is not
computable.  In fact it is supporting because it says that both are hard
problems, in agreement with observation.

2. Do not confuse the two compressions.  AIXI proves that the optimal
behavior of a goal-seeking agent is to guess the shortest program consistent
with its interaction with the environment so far.  This is lossless
compression.  A typical implementation is to perform some pattern recognition
on the inputs to identify features that are useful for prediction.  We
sometimes call this "lossy compression" because we are discarding irrelevant
data.  If we anthropomorphise the agent, then we say that we are replacing
the input with perceptually indistinguishable data, which is what we
typically do when we compress video or sound.
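
A brute-force caricature of the "shortest consistent program" idea (Python;
the three-op mini-language and the observation sequence are invented for
illustration -- AIXI's actual construction is over a universal Turing
machine): enumerate programs shortest-first, keep the first one that
reproduces the observed history, then use it to predict.

    from itertools import product

    OPS = "+*-"  # toy ops: add 1, double, subtract 1 (a hypothetical language)

    def run(program, steps):
        out, x = [], 1
        for op in (program * (steps // len(program) + 1))[:steps]:
            if op == "+": x += 1
            elif op == "*": x *= 2
            else: x -= 1
            out.append(x)
        return out

    def shortest_consistent(observed, max_len=5):
        for length in range(1, max_len + 1):          # shortest first
            for program in map("".join, product(OPS, repeat=length)):
                if run(program, len(observed)) == observed:
                    return program
        return None

    observed = [2, 4, 5, 10, 11, 22]                  # interaction so far
    best = shortest_consistent(observed)              # finds "+*"
    print("shortest consistent program:", best)
    print("predicted next observation:", run(best, len(observed) + 1)[-1])  # 23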
 
-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Mark Waser <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 3:48:37 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

>> The connection between intelligence and compression is not obvious.

The connection between intelligence and compression *is* obvious -- but
compression, particularly lossless compression, is clearly *NOT*
intelligence.

Intelligence compresses knowledge to ever simpler rules because that is an
effective way of dealing with the world.  Discarding ineffective/unnecessary
knowledge to make way for more effective/necessary knowledge is an effective
way of dealing with the world.  Blindly maintaining *all* knowledge at
tremendous costs is *not* an effective way of dealing with the world (i.e. it
is *not* intelligent).

>> 1. What Hutter proved is that the optimal behavior of an agent is to guess
>> that the environment is controlled by the shortest program that is
>> consistent with all of the interaction observed so far.  The problem of
>> finding this program is known as AIXI.
>> 2. The general problem is not computable [11], although Hutter proved that
>> if we assume time bounds t and space bounds l on the environment, then
>> this restricted problem, known as AIXItl, can be solved in O(t*2^l) time

Very nice -- except that O(t*2^l) time is basically equivalent to
incomputable for any real scenario.  Hutter's proof is useless because it
relies upon the assumption that you have adequate resources (i.e. time) to
calculate AIXI -- which you *clearly* do not.  And like any other proof, once
you invalidate the assumptions, the proof becomes equally invalid.  Except as
an interesting but unobtainable edge case, why do you believe that Hutter has
any relevance at all?
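
A back-of-envelope sketch of the blow-up (Python; the sample values of l are
arbitrary): even modest environment descriptions push the 2^l factor past any
physically available number of computation steps.

    for l in (10, 40, 100, 300):
        print("l = %3d: 2^l = %.3e steps per time unit" % (l, 2.0 ** l))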


----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Wednesday, November 15, 2006 2:54 PM
Subject: Re: [agi] A question on the symbol-system hypothesis


Richard, what is your definition of "understanding"?  How would you test
whether a person understands art?

Turing offered a behavioral test for intelligence.  My understanding of
"understanding" is that it is something that requires intelligence.  The
connection between intelligence and compression is not obvious.  I have
summarized the arguments here:
http://cs.fit.edu/~mmahoney/compression/rationale.html

-- Matt Mahoney, [EMAIL PROTECTED]

----- Original Message ----
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Wednesday, November 15, 2006 2:38:49 PM
Subject: Re: [agi] A question on the symbol-system hypothesis

Matt Mahoney wrote:
> Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> "Understanding" 10^9 bits of information is not the same as storing 10^9
>> bits of information.
>
> That is true.  "Understanding" n bits is the same as compressing some
> larger training set that has an algorithmic complexity of n bits.  Once you
> have done this, you can use your probability model to make predictions
> about unseen data generated by the same (unknown) Turing machine as the
> training data.  The closer to n you can compress, the better your
> predictions will be.
>
> I am not sure what it means to "understand" a painting, but let's say that
> you understand art if you can identify the artists of paintings you haven't
> seen before with better accuracy than random guessing.  The relevant
> quantity of information is not the number of pixels and resolution, which
> depend on the limits of the eye, but the (much smaller) number of features
> that the high level perceptual centers of the brain are capable of
> distinguishing and storing in memory.  (Experiments by Standing and
> Landauer suggest it is a few bits per second for long term memory, the same
> rate as language).  Then you guess the shortest program that generates a
> list of feature-artist pairs consistent with your knowledge of art and use
> it to predict artists given new features.
>
> My estimate of 10^9 bits for a language model is based on 4 lines of
> evidence, one of which is the amount of language you process in a lifetime.
> This is a rough estimate of course.  I estimate 1 GB (8 x 10^9 bits)
> compressed to 1 bpc (Shannon) and assume you remember a significant
> fraction of that.
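
(A quick check of the quoted arithmetic in Python; the 1 GB and 1 bpc figures
are the estimate's own assumptions: ~1 GB of lifetime text is ~10^9
characters, 8 x 10^9 bits raw, and compressing to ~1 bit per character leaves
~10^9 bits.)

    chars = 10 ** 9               # ~1 GB of text = ~10^9 characters
    raw_bits = chars * 8          # 8 x 10^9 bits uncompressed
    entropy_bits = chars * 1      # ~1 bit per character after compression
    print(raw_bits, entropy_bits) # 8000000000 1000000000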

Matt,

So long as you keep redefining "understand" to mean something trivial (or at
least, something different in different circumstances), all you do is
reinforce the point I was trying to make.

In your definition of "understanding" in the context of art, above, you
specifically choose an interpretation that enables you to pick a particular
bit rate.  But if I chose a different interpretation (and I certainly would -
an art historian would never say they understood a painting just because they
could tell the artist's style better than a random guess!), I might come up
with a different bit rate.  And if I chose a sufficiently subtle concept of
"understand", I would be unable to come up with *any* bit rate, because that
concept of "understand" would not lend itself to any easy bit rate analysis.

The lesson?  Talking about bits and bit rates is completely pointless ...
which was my point.

You mainly identify the meaning of "understand" as a variant of the meaning
of "compress".  I completely reject this - this is the most idiotic
development in AI research since the early attempts to do natural language
translation using word-by-word lookup tables - and I challenge you to say why
anyone could justify reducing the term in such an extreme way.  Why have you
thrown out the real meaning of "understand" and substituted another meaning?
What have we gained by dumbing the concept down?

As I said previously, this is as crazy as redefining the complex concept of
"happiness" to be "a warm puppy".


Richard Loosemore



> Landauer, Tom (1986), “How much do people remember? Some estimates of the
> quantity of learned information in long term memory”, Cognitive Science
> (10), pp. 477-493.
>
> Shannon, Claude E. (1950), “Prediction and Entropy of Printed English”,
> Bell Sys. Tech. J. (30), pp. 50-64.
>
> Standing, L. (1973), “Learning 10,000 Pictures”, Quarterly Journal of
> Experimental Psychology (25), pp. 207-222.
>
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> ----- Original Message ----
> From: Richard Loosemore <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Wednesday, November 15, 2006 9:33:04 AM
> Subject: Re: [agi] A question on the symbol-system hypothesis
>
> Matt Mahoney wrote:
>> I will try to answer several posts here. I said that the knowledge base of
>> an AGI must be opaque because it has 10^9 bits of information, which is
>> more than a person can comprehend. By opaque, I mean that you can't do any
>> better by examining or modifying the internal representation than you
>> could by examining or modifying the training data. For a text based AI
>> with natural language ability, the 10^9 bits of training data would be
>> about a gigabyte of text, about 1000 books. Of course you can sample it,
>> add to it, edit it, search it, run various tests on it, and so on. What
>> you can't do is read, write, or know all of it. There is no internal
>> representation that you could convert it to that would allow you to do
>> these things, because you still have 10^9 bits of information. It is a
>> limitation of the human brain that it can't store more information than
>> this.
>
> "Understanding" 10^9 bits of information is not the same as storing 10^9
> bits of information.
>
> A typical painting in the Louvre might be 1 meter on a side.  At roughly 16
> pixels per millimeter, and a perceivable color depth of about 20 bits, that
> would be about 10^8 bits.  If an art specialist knew all about, say, 1000
> paintings in the Louvre, that specialist would "understand" a total of
> about 10^11 bits.
>
> You might be inclined to say that not all of those bits count, that many
> are redundant to "understanding".
>
> Exactly.
>
> People can easily comprehend 10^9 bits.  It makes no sense to argue about
> degree of comprehension by quoting numbers of bits.
>
>
> Richard Loosemore
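
(An aside on the painting arithmetic quoted above, as a quick Python check;
reading "16 pixels per millimeter" as an areal density is an assumption -- a
linear reading would give roughly 5 x 10^9 bits per painting instead.)

    area_mm2 = 1000 * 1000           # a 1 m x 1 m painting
    pixels = 16 * area_mm2           # ~1.6 x 10^7 pixels
    bits_per_painting = pixels * 20  # ~3.2 x 10^8, on the order of 10^8
    print("%.1e bits per painting" % bits_per_painting)
    print("%.1e bits for 1000 paintings" % (bits_per_painting * 1000))  # ~10^11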












       


_______________________________________
James Ratcliff - http://falazar.com



_______________________________________
James Ratcliff - http://falazar.com
 

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303
