[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Mikhail Bautin updated HBASE-4218:
----------------------------------

    Attachment: Delta-encoding.patch-2012-01-05_16_31_44_copy.patch

Attaching a patch that applies. (A new unit test is coming for the HFile v1 to encoded HFile v2 upgrade, so the patch is not final yet.)

> Data Block Encoding of KeyValues (aka delta encoding / prefix compression)
> ---------------------------------------------------------------------------
>
>                 Key: HBASE-4218
>                 URL: https://issues.apache.org/jira/browse/HBASE-4218
>             Project: HBase
>          Issue Type: Improvement
>          Components: io
>    Affects Versions: 0.94.0
>            Reporter: Jacek Migdal
>            Assignee: Mikhail Bautin
>              Labels: compression
>             Fix For: 0.94.0
>
>         Attachments: 0001-Delta-encoding-fixed-encoded-scanners.patch,
> 0001-Delta-encoding.patch, 4218-v16.txt, 4218.txt, D447.1.patch,
> D447.10.patch, D447.11.patch, D447.12.patch, D447.13.patch, D447.14.patch,
> D447.15.patch, D447.16.patch, D447.17.patch, D447.18.patch, D447.19.patch,
> D447.2.patch, D447.3.patch, D447.4.patch, D447.5.patch, D447.6.patch,
> D447.7.patch, D447.8.patch, D447.9.patch,
> Data-block-encoding-2011-12-23.patch,
> Delta-encoding.patch-2011-12-22_11_52_07.patch,
> Delta-encoding.patch-2012-01-05_15_16_43.patch,
> Delta-encoding.patch-2012-01-05_16_31_44.patch,
> Delta-encoding.patch-2012-01-05_16_31_44_copy.patch,
> Delta_encoding_with_memstore_TS.patch, open-source.diff
>
>
> A compression scheme for keys. Keys are sorted in an HFile and are usually
> very similar. Because of that, it is possible to design better compression
> than general-purpose algorithms provide.
> It is an additional encoding step designed to be used in memory. It aims to
> save memory in the cache as well as to speed up seeks within HFileBlocks. It
> should improve performance significantly when key lengths are larger than
> value lengths; for example, it makes a lot of sense to use it when the value
> is a counter.
> Initial tests on real data (key length ~ 90 bytes, value length = 8 bytes)
> show that I could achieve a decent level of compression:
> key compression ratio: 92%
> total compression ratio: 85%
> LZO on the same data: 85%
> LZO after delta encoding: 91%
> All while offering much better performance (decompression is 20-80% faster
> than LZO). Moreover, it should allow far more efficient seeking, which should
> improve performance a bit.
> It seems that simple compression algorithms are good enough. Most of the
> savings come from prefix compression, int128 encoding, timestamp diffs, and
> bitfields that avoid duplication. That way, comparisons of compressed data
> can be much faster than a plain byte comparator (thanks to prefix compression
> and the bitfields).
> In order to implement it in HBase, two important design changes will be
> needed:
> - solidify the interface to HFileBlock / the HFileReader scanner to provide
> seeking and iterating; going through the uncompressed buffer in HFileBlock
> would perform badly
> - extend comparators to support comparison assuming that the first N bytes
> are equal (or that some fields are equal)
> Link to a discussion about something similar:
> http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
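To make the quoted description a bit more concrete, here is a minimal, self-contained Java sketch of the two ideas it relies on: storing each sorted key as a (shared-prefix length, suffix) pair relative to the previous key, and comparing keys while skipping a prefix that is already known to be equal. Everything in the sketch (class name, record layout, plain 4-byte length fields) is a hypothetical illustration of the technique, not the encoder interface or on-disk format introduced by the attached patches.

{code:java}
// Minimal sketch of prefix ("delta") encoding of sorted keys and of a comparator
// that skips a known-equal prefix. All names here are hypothetical illustrations;
// this is NOT the encoder interface added by the attached patches, and plain
// 4-byte ints stand in for the compact integer / timestamp-diff / bitfield
// encodings that the real format uses.
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class PrefixEncodingSketch {

  /** Length of the common prefix shared by two byte arrays. */
  static int commonPrefixLength(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    int i = 0;
    while (i < n && a[i] == b[i]) {
      i++;
    }
    return i;
  }

  /** Encode sorted keys as (prefixLength, suffixLength, suffixBytes) records. */
  static byte[] encode(List<byte[]> sortedKeys) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bos);
    byte[] previous = new byte[0];
    for (byte[] key : sortedKeys) {
      int prefix = commonPrefixLength(previous, key);
      out.writeInt(prefix);                        // bytes shared with the previous key
      out.writeInt(key.length - prefix);           // length of the unshared suffix
      out.write(key, prefix, key.length - prefix); // unshared suffix bytes only
      previous = key;
    }
    return bos.toByteArray();
  }

  /** Decode the block by stitching each suffix onto the previous key's prefix. */
  static List<byte[]> decode(byte[] block, int keyCount) throws IOException {
    DataInputStream in = new DataInputStream(new ByteArrayInputStream(block));
    byte[][] keys = new byte[keyCount][];
    byte[] previous = new byte[0];
    for (int k = 0; k < keyCount; k++) {
      int prefix = in.readInt();
      int suffixLen = in.readInt();
      byte[] key = new byte[prefix + suffixLen];
      System.arraycopy(previous, 0, key, 0, prefix);
      in.readFully(key, prefix, suffixLen);
      keys[k] = key;
      previous = key;
    }
    return Arrays.asList(keys);
  }

  /**
   * Compare two keys lexicographically, assuming the first assumedEqual bytes
   * are already known to match (e.g. from the stored prefix length), so only
   * the suffixes need to be scanned.
   */
  static int compareAssumingCommonPrefix(byte[] a, byte[] b, int assumedEqual) {
    int n = Math.min(a.length, b.length);
    for (int i = assumedEqual; i < n; i++) {
      int diff = (a[i] & 0xff) - (b[i] & 0xff);
      if (diff != 0) {
        return diff;
      }
    }
    return a.length - b.length;
  }

  public static void main(String[] args) throws IOException {
    // Counter-style keys: long, similar keys and short values, the case where
    // the description above expects the biggest win.
    List<byte[]> keys = Arrays.asList(
        "row0001/cf:counter/1325800000000".getBytes(StandardCharsets.UTF_8),
        "row0001/cf:counter/1325800000001".getBytes(StandardCharsets.UTF_8),
        "row0002/cf:counter/1325800000000".getBytes(StandardCharsets.UTF_8));

    byte[] encoded = encode(keys);
    int rawSize = keys.stream().mapToInt(k -> k.length).sum();
    System.out.println("raw=" + rawSize + " bytes, encoded=" + encoded.length + " bytes");

    List<byte[]> roundTrip = decode(encoded, keys.size());
    System.out.println("round trip ok: " + Arrays.equals(keys.get(2), roundTrip.get(2)));

    // The comparator only looks at bytes after the shared prefix.
    int shared = commonPrefixLength(keys.get(0), keys.get(1));
    System.out.println("compare result: "
        + compareAssumingCommonPrefix(keys.get(0), keys.get(1), shared));
  }
}
{code}

The real encoders additionally exploit timestamp diffs and bitfields, encode lengths compactly, and are consumed through seeking scanners over HFileBlocks rather than by materializing every key, which is exactly why the interface and comparator changes listed in the quoted description are needed.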