> However, I think that a lossless model can reasonably derive this information by observing that p(x, x') is approximately equal to p(x) or p(x').  In other words, knowing both x and x' does not tell you any more than x or x' alone, or CDM(x, x') ~ 0.5.  I think this is a reasonable way to model lossy behavior in humans.
How does a lossless model observe that "Jim is extremely fat" and "James continues to be morbidly obese" are approximately equal?  I would assume it would have to be via the same world model that a lossy model would use -- which is way above the bitstream level.
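(For concreteness, here is a minimal sketch of what the CDM(x, x') ~ 0.5 criterion in the quoted paragraph would look like in practice.  I'm assuming the usual compression-based dissimilarity measure, CDM(x, y) = C(xy) / (C(x) + C(y)), with zlib standing in as the compressor; both the exact formula and the choice of compressor are my assumptions, not anything Matt specified.)

import zlib

def compressed_size(s: bytes) -> int:
    # Compressed length of s in bytes, using zlib at its highest level.
    return len(zlib.compress(s, 9))

def cdm(x: bytes, y: bytes) -> float:
    # Compression-based dissimilarity measure:
    # near 0.5 when y adds almost nothing beyond x (near-duplicates),
    # near 1.0 when x and y share little that the compressor can exploit.
    return compressed_size(x + y) / (compressed_size(x) + compressed_size(y))

x  = b"Jim is extremely fat."
x2 = b"James continues to be morbidly obese."

print(cdm(x, x))   # well below 1.0: the second copy is redundant at the byte level
print(cdm(x, x2))  # much closer to 1.0: a byte-level compressor sees little overlap

On strings this short the numbers are blurred by compressor overhead, and, as the paraphrase pair shows, the two sentences only come out as redundant if the compressor's model already encodes the relevant world knowledge -- which is exactly the level my question above is pointing at.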
 
Also, I don't think a probability model is the right way to approach this.
 
> knowing both x and x' does not tell you any more than x or x' alone
 
Can't you rephrase this as either of the following roughly equivalent statements:
  1. You need to discard either x or x' to reach a canonical form, or
  2. Discarding either x or x' is not a lossy operation?

Mark
 
----- Original Message -----
From: "Matt Mahoney" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Sunday, August 27, 2006 10:32 PM
Subject: Re: [agi] Lossy *&* lossless compression

> In showing that compression implies AI, I first make the simplifying assumption that everyone shares the same language model.  Then I relax that assumption and argue that this makes it easier for a machine to pass the Turing test.
>
> But I see your point.  I argued that a lossless model knows everything that a lossy model does, plus more, because the lossless model knows p(x) and p(x'), while a lossy model only knows p(x) + p(x').  However I missed that the lossy model knows that x and x' are equivalent, while the lossless model does not.
>
> However, I think that a lossless model can reasonably derive this information by observing that p(x, x') is approximately equal to p(x) or p(x').  In other words, knowing both x and x' does not tell you any more than x or x' alone, or CDM(x, x') ~ 0.5.  I think this is a reasonable way to model lossy behavior in humans.

> -- Matt Mahoney, [EMAIL PROTECTED]
>
> ----- Original Message -----
> From: Philip Goetz <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Sunday, August 27, 2006 9:23:25 PM
> Subject: Re: [agi] Lossy *&* lossless compression
>
> On 8/25/06, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>> As I stated earlier, the fact that there is normal variation in human language models makes it easier for a machine to pass the Turing test.  However, a machine with a lossless model will still outperform one with a lossy model because the lossless model has more knowledge.
>
> That would be true only if there were one correct language model, AND
> you knew what it was.
> Besides which, every human has a lossy model.  It seems to me that by
> your argument, a machine with a lossless model would "out-perform" a
> human, and thus /fail/ the Turing test.
>
> - Phil
>