* Anthony Lalande [EMAIL PROTECTED] on Thu, 11 Jan 2001
| Does any lossless compression algorithm require the entire set of data for
| read access before it begins compression?
No. In fact, none do. Conventional compression algorithms operate on
fixed-size blocks of data. Real-time compression of an audio stream is
easily possible with a bit of buffering. The real issue is compressing
fast enough that the buffer is not overrun.
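To make that concrete, here is a minimal sketch in Python of block-at-a-time
compression of a stream, using zlib from the standard library. The read_chunk
and write_out callables are stand-ins for whatever feeds PCM from the capture
buffer and stores the output:

    import zlib

    def compress_stream(read_chunk, write_out, chunk_size=64 * 1024):
        # Incremental (streaming) compression: only one chunk of audio is
        # ever held in memory, so recording and compressing can overlap.
        comp = zlib.compressobj(6)
        while True:
            pcm = read_chunk(chunk_size)      # raw PCM bytes from the buffer
            if not pcm:
                break
            write_out(comp.compress(pcm))
        write_out(comp.flush())               # emit anything still buffered

As long as compressing one chunk takes less time than recording the next one,
the buffer never overruns.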
* Anthony Lalande [EMAIL PROTECTED] on Fri, 12 Jan 2001
| I'm wondering if you would get better compression by treating the whole
| stream as one block and then compressing that, or compressing it in many
| smaller blocks. I guess it all depends on the compression used.
Blocks in the tens-to-hundreds-of-kilobytes range are the norm for high-ratio
compression these days. bzip2, for example, works on blocks of at most 900K,
and boy is it slow even on a fast Pentium-III. One minute of CD-quality linear
PCM (44.1 kHz, 16-bit stereo) is a bit over 10MB. You would need a
supercomputer the size of a refrigerator to use a block size that large.
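Here is the arithmetic, plus a quick experiment comparing one big block
against independent 64K blocks. This is only a rough sketch using Python's
bz2 module; a buffer of zeros stands in for real audio, which would compress
far less well:

    import bz2

    # One minute of CD-quality linear PCM:
    # 44,100 samples/s * 2 channels * 2 bytes * 60 s
    minute = 44100 * 2 * 2 * 60          # 10,584,000 bytes, a bit over 10MB
    pcm = bytes(minute)                  # placeholder data, not real audio

    # Whole minute as one block...
    whole = len(bz2.compress(pcm))

    # ...versus independent 64K blocks (no history shared across blocks).
    size = 64 * 1024
    parts = sum(len(bz2.compress(pcm[i:i + size]))
                for i in range(0, len(pcm), size))

    print(whole, parts)

Bigger blocks give the compressor more history to exploit, but the memory and
CPU cost grows along with them.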
Lossless compression is what programs like WinZip do. When you compress a
file with WinZip, it takes up less space, and when you decompress it you get
back exactly the data you compressed. In other words, no data is lost in the
compression and decompression process.
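A one-screen way to see what "lossless" means in practice, again a sketch in
Python (the filename is just an example):

    import zlib

    original = open("recording.wav", "rb").read()   # any file will do
    packed = zlib.compress(original)
    restored = zlib.decompress(packed)

    assert restored == original   # lossless: the round trip is bit-exact
    print(len(original), "->", len(packed), "bytes")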
Does any lossless compression algorithm require the entire set of data for
read access before it begins compression? If you wanted to encode audio with
a lossless compression, could you do it in real-time, or would you need to
wait until the entire recording is complete and then compress it?