With consciousness I'm merely observing functional aspects and using them in 
building an engineering model of general intelligence based on >1 agent. I feel 
consciousness improves communication; it is an important component. And even 
with just one agent it's important, IMO.

If you think about it that way and then think about "perceptually lossless" it 
starts getting interesting.

At first blush, one may say perceptually lossless means a lossily compressed 
picture of a mountain that looks exactly like the uncompressed original. 
Sure, that’s fine.

But given only the output, you don't know whether something is lossless or 
merely perceptually lossless.
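That distinction can be made concrete with a toy sketch (hypothetical sample
values and a hypothetical perceptual threshold, not any real codec): a lossy
round trip can pass a perceptual test while failing a bit-exact one, and a
perceiver who sees only the decoded output has no way to run the bit-exact
test against the original.

```python
# Toy sketch: a "picture" as a list of 8-bit samples (hypothetical data).
original = [12, 13, 200, 201, 90, 91]

# A toy lossy "codec" that drops the least-significant bit of each sample.
compressed = [v >> 1 for v in original]
decoded = [v << 1 for v in compressed]

# Bit-exact losslessness: every sample must match the original.
bit_exact = decoded == original

# "Perceptually lossless" for an assumed perceiver who cannot distinguish
# samples closer than some threshold (2 levels here is an assumption).
THRESHOLD = 2
perceptually_lossless = all(abs(a - b) < THRESHOLD
                            for a, b in zip(original, decoded))

print(bit_exact)              # False: information was discarded
print(perceptually_lossless)  # True: this perceiver can't tell
```

The point of the sketch is that both verdicts describe the same decoded file;
which one you can even check depends on whether you hold the original.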

And the questions begin:

If I give you a lossless file, will you always perceive it as lossless? Can a 
lossless file be recompressed to eliminate non-perceptible information? Is it 
still lossless then?

Who is doing the perception? Decompressors and perceivers of the decompressed?

Are there different perceivers of different capabilities? Can a compressed file 
hold various stages or types of perceptibility for different perceivers?

With perceptual compression you start getting into third parties, which brings 
in multiparty communication complexity. Typically, it is assumed that a 
compressor targets one decompressor type. 
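The "different perceivers of different capabilities" question above can be
sketched the same way: one decoded signal, several hypothetical perceivers,
each with its own just-noticeable-difference threshold (all numbers here are
assumptions for illustration). The same file is perceptually lossless for
some perceivers and lossy for others.

```python
# Toy sketch: one lossy round trip, judged by perceivers of different
# capabilities (hypothetical thresholds).
original = [12, 13, 200, 201, 90, 91]
decoded  = [12, 12, 200, 200, 90, 90]  # output of a toy lossy codec

def perceptually_lossless(a, b, jnd):
    """True if no sample pair differs by the perceiver's threshold or more."""
    return all(abs(x, ) < jnd for x in (abs(p - q) for p, q in zip(a, b)))

# Simpler and clearer: compare pairwise differences directly.
def perceptually_lossless(a, b, jnd):
    return all(abs(p - q) < jnd for p, q in zip(a, b))

# Assumed perceiver capabilities (hypothetical numbers).
perceivers = {
    "casual viewer": 4,         # coarse perception
    "trained eye": 2,           # finer perception
    "measuring instrument": 1,  # effectively bit-exact
}

for name, jnd in perceivers.items():
    print(name, perceptually_lossless(original, decoded, jnd))
```

So "is this compression perceptually lossless?" has no single answer; it is a
property of the (file, perceiver) pair, which is where the multiparty framing
comes from.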

In real life, people rely on perceptually lossless compression in many ways 
when you think about it. You don’t really know what’s inside of things, do 
you? You are relying on the unknown with confidence and certainty.

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T252d8aea50d6d8f9-Mb8b39e6ab1e3d01b5d690dea
Delivery options: https://agi.topicbox.com/groups/agi/subscription
