apurtell edited a comment on pull request #3244:
URL: https://github.com/apache/hbase/pull/3244#issuecomment-836903862


   > So we will only compress value?
   
   This is an enhancement to the existing WAL compression. As you know, existing WAL compression already compresses the other components of WAL entries _except_ the value. This patch adds support for compressing values too. 
   
   > As we will do batching when writing WAL entries out, is it possible to 
compress when flushing? The data will be larger and compress may perform 
better. The structure of a WAL file will be multiple compressed blocks.
   
   This is not possible for two reasons:
   
   1. WALCellCodec does not compress the WAL file in blocks; the design is edit by edit. I want to introduce value compression without re-engineering the whole WAL format. Perhaps our WAL file format is due for a redesign, but I would like to see that tracked as a separate issue. 
   
   2. We flush the compressor at the end of every value to ensure each WALEdit record persists all of its value data in the expected place; otherwise the compressor would carry some unflushed output from the previous value into the next one. But we are not resetting the compressor: that would be FULL_FLUSH, and we are using SYNC_FLUSH. By using the same Deflater instance for the whole WAL we already get the benefit you are thinking of. The re-used Deflater builds its dictionary across the contents of all of the values in the file, not just each value considered in isolation (the original patch compressed each value in isolation, but I later pushed an improvement that aligns with this suggestion), achieving better compression. See the sketch below. 
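   To make the SYNC_FLUSH vs. FULL_FLUSH distinction concrete, here is a minimal sketch of the pattern (assumed names, not the actual WALCellCodec code): one long-lived Deflater compresses every value, and each value is drained with SYNC_FLUSH so its compressed bytes land entirely within its own record while the dictionary accumulated from earlier values is retained:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

// Hypothetical illustration, not HBase code: reuse one Deflater across all
// values written to a WAL. SYNC_FLUSH forces each value's compressed bytes
// out completely (so the record is self-contained on disk) without
// discarding the accumulated dictionary the way FULL_FLUSH or reset() would.
public class ReusedDeflaterSketch {
  private final Deflater deflater = new Deflater();
  private final byte[] buf = new byte[4096];

  public byte[] compressValue(byte[] value) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    deflater.setInput(value);
    int n;
    do {
      // Per the Deflater javadoc, when deflate() with SYNC_FLUSH fills the
      // entire buffer we must call it again for any remaining output.
      n = deflater.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH);
      out.write(buf, 0, n);
    } while (n == buf.length);
    return out.toByteArray();
  }
}
```

   The reader side has to mirror this: feed the records, in order, through a single re-used Inflater, so the decompressor's dictionary stays in step with the one built here.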
   
   Way back in the distant past our WAL format was based on Hadoop's SequenceFile, which supported both record-by-record and block-based compression, where a block would contain multiple records. I don't remember why we moved away from it, but I imagine it was for corruption handling: if part of the WAL is corrupt, a record-by-record codec is able to skip over the corrupt record and we lose only that record (or however many records are actually corrupt), but with a block format we would lose the whole block and all of the edits contained within it, especially if compression or encryption is enabled. 
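   As a toy illustration of that failure-domain argument (this is not the actual HBase or SequenceFile reader; the framing is invented for the example), a record-by-record format with a per-record checksum lets the reader drop one bad record and continue, whereas a corrupt compressed block takes every record inside it:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;

// Toy illustration of why record-by-record framing limits the blast radius
// of corruption: each record here is [int length][long crc][payload], so a
// bad checksum costs us only that one record and we can keep reading.
public class RecordSkippingReader {
  public static void readAll(InputStream in) throws IOException {
    DataInputStream dis = new DataInputStream(in);
    while (true) {
      int len;
      try {
        len = dis.readInt();
      } catch (EOFException e) {
        return; // clean end of file
      }
      long expectedCrc = dis.readLong();
      byte[] payload = new byte[len];
      dis.readFully(payload);
      CRC32 crc = new CRC32();
      crc.update(payload);
      if (crc.getValue() != expectedCrc) {
        // Skip just this record. With block compression, the whole
        // block's worth of records would be unrecoverable instead.
        continue;
      }
      process(payload);
    }
  }

  private static void process(byte[] record) {
    // handle one intact record
  }
}
```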

