[jira] [Created] (HBASE-4218) Delta Encoding of KeyValues (aka prefix compression)

2011-08-17 Thread Jacek Migdal (JIRA)
Delta Encoding of KeyValues  (aka prefix compression)
------------------------------------------------------

                 Key: HBASE-4218
                 URL: https://issues.apache.org/jira/browse/HBASE-4218
             Project: HBase
          Issue Type: Improvement
          Components: io
            Reporter: Jacek Migdal
              Labels: compression


A compression for keys. Keys are sorted in an HFile and are usually very
similar to one another. Because of that, it is possible to design a better
compression scheme than general-purpose algorithms provide.

It is an additional step designed to be used in memory. It aims to save memory
in the cache as well as to speed up seeks within HFileBlocks. It should improve
performance significantly when keys are longer than values. For example,
it makes a lot of sense to use it when the value is a counter.

Initial tests on real data (key length ~90 bytes, value length = 8 bytes)
show that a decent level of compression is achievable:
 key compression ratio: 92%
 total compression ratio: 85%
 LZO on the same data: 85%
 LZO after delta encoding: 91%
At the same time, decompression is 20-80% faster than with LZO. Moreover, it
should allow far more efficient seeking, which should improve performance a bit.

It seems that simple compression algorithms are good enough. Most of the
savings come from prefix compression, int128 encoding, timestamp diffs, and
bitfields that avoid duplication. As a side benefit, comparisons of compressed
data can be much faster than a byte comparator (thanks to the prefix
compression and the bitfields).
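To make the prefix-compression part concrete, here is a minimal sketch (my
illustration under simplified assumptions, not the actual patch): each key is
stored as the length of the prefix it shares with the previous key, followed
by the remaining suffix.
{noformat}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of prefix compression over a sorted key stream.
public final class PrefixCompressionSketch {
  /** Writes each key as (sharedPrefixLength, suffixLength, suffixBytes). */
  public static byte[] compress(byte[][] sortedKeys) throws IOException {
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    byte[] previous = new byte[0];
    for (byte[] key : sortedKeys) {
      int common = 0;
      int max = Math.min(previous.length, key.length);
      while (common < max && previous[common] == key[common]) {
        common++;
      }
      out.writeShort(common);               // bytes shared with the previous key
      out.writeShort(key.length - common);  // bytes that follow the shared prefix
      out.write(key, common, key.length - common);
      previous = key;
    }
    return bytes.toByteArray();
  }
}
{noformat}
A decoder rebuilds each key by copying the shared prefix from the previously
decoded key, which is also what makes comparisons on the compressed form cheap.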

In order to implement it in HBase, two important design changes will be
needed:
-solidify the interface to the HFileBlock / HFileReader scanner so that it
provides seeking and iterating; accessing the uncompressed buffer in HFileBlock
would perform badly
-extend comparators to support comparison under the assumption that the first
N bytes (or some fields) are equal; see the comparator sketch below
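To illustrate the second point, a comparator along these lines could skip the
bytes already known to be equal (a hypothetical sketch; PrefixAwareComparator
is not a real HBase class):
{noformat}
import java.util.Comparator;

// Hypothetical sketch: compare two keys assuming their first
// commonPrefix bytes are already known to be equal.
public final class PrefixAwareComparator implements Comparator<byte[]> {
  private final int commonPrefix;

  public PrefixAwareComparator(int commonPrefix) {
    this.commonPrefix = commonPrefix;
  }

  @Override
  public int compare(byte[] left, byte[] right) {
    int max = Math.min(left.length, right.length);
    for (int i = commonPrefix; i < max; i++) {  // skip the shared prefix
      int diff = (left[i] & 0xff) - (right[i] & 0xff);  // unsigned byte order
      if (diff != 0) {
        return diff;
      }
    }
    return left.length - right.length;
  }
}
{noformat}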

Link to a discussion about something similar:
http://search-hadoop.com/m/5aqGXJEnaD1/hbase+windows&subj=Re+prefix+compression

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HBASE-4218) Delta Encoding of KeyValues (aka prefix compression)

2011-08-17 Thread Jacek Migdal (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13086556#comment-13086556 ]

Jacek Migdal commented on HBASE-4218:
-------------------------------------

Yes, I plan to measure seek performance within one block.

I haven't implemented it yet, but I expect that it will make seeking and
decompressing KeyValues about as fast as operating on uncompressed bytes.

The primary goal is to save memory in buffers.





[jira] [Commented] (HBASE-4218) Delta Encoding of KeyValues (aka prefix compression)

2011-08-17 Thread Jacek Migdal (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13086615#comment-13086615 ]

Jacek Migdal commented on HBASE-4218:
-------------------------------------

Matt, I have already implemented a few algorithms which share a common
interface. I think we can add your method as another one. For the data I
tested on, stream compression seemed to be the best solution. However, the
algorithm should be configurable, so supporting a few algorithms should not be
a problem.

Basically, I need four methods:
-compress a list of KeyValues (I operate on bytes)
-uncompress to a list of KeyValues
-find a certain key in your structure and return its "position"
-materialize the KeyValue at a certain "position" and move to the next position

The only thing that could be challenging for you: I store all the data in a
ByteBuffer and need only a tiny decompression state. That makes things like
direct buffers trivial to implement (see the sketch below). However, as long
as you use a bunch of Java objects, you will be unable to move the data off
the heap.
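As a rough illustration of why the tiny state matters (a hypothetical sketch;
OffHeapFriendlyReader is not part of the patch): when the only decoder state
is a couple of primitives, the encoded bytes can live in a direct, off-heap
ByteBuffer just as easily as in a heap one.
{noformat}
import java.nio.ByteBuffer;

// Hypothetical sketch: the whole "decompression state" is one int,
// so the encoded bytes may sit in a heap or a direct (off-heap) buffer.
public final class OffHeapFriendlyReader {
  private final ByteBuffer encoded;  // heap or direct, it makes no difference
  private int position = 0;          // the entire decoder state

  public OffHeapFriendlyReader(ByteBuffer encoded) {
    this.encoded = encoded;
  }

  /** Returns the next length-prefixed record as a zero-copy slice. */
  public ByteBuffer nextRecord() {
    int length = encoded.getInt(position);
    ByteBuffer slice = encoded.duplicate();
    slice.position(position + 4).limit(position + 4 + length);
    position += 4 + length;
    return slice.slice();
  }

  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocateDirect(16);  // off-heap storage
    buf.putInt(3).put(new byte[] {1, 2, 3});
    OffHeapFriendlyReader reader = new OffHeapFriendlyReader(buf);
    System.out.println(reader.nextRecord().remaining());  // prints 3
  }
}
{noformat}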

Once we have a common interface, you will be able to reuse some of my tests
and benchmarks.

Since I work on this almost full time, I could integrate it with HBase; sooner
or later you could add your algorithm. Does that sound good to you?





[jira] [Commented] (HBASE-4218) Delta Encoding of KeyValues (aka prefix compression)

2011-08-17 Thread Jacek Migdal (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13086650#comment-13086650 ]

Jacek Migdal commented on HBASE-4218:
-------------------------------------

So far, the implemented interface looks like this:
{noformat} 
/**
 * Fast compression of KeyValues. It aims to be fast and efficient
 * by relying on the following assumptions:
 * - the KeyValues are stored sorted by key
 * - we know the structure of a KeyValue
 * - the values are always iterated forward from the beginning of the block
 * - application-specific knowledge
 *
 * It is designed to work fast enough to be feasible as in-memory compression.
 */
public interface DeltaEncoder {
  /**
   * Compress KeyValues and write them to the output buffer.
   * @param writeHere Where to write the compressed data.
   * @param rawKeyValues Source of KeyValues for compression.
   * @throws IOException If there is an error in writeHere.
   */
  public void compressKeyValue(OutputStream writeHere, ByteBuffer rawKeyValues)
      throws IOException;

  /**
   * Uncompress, assuming that the original size is known.
   * @param source Compressed stream of KeyValues.
   * @param decompressedSize Size in bytes of the uncompressed KeyValues.
   * @return Uncompressed block of KeyValues.
   * @throws IOException If there is an error in source.
   * @throws DeltaEncoderToSmallBufferException If the specified uncompressed
   *     size is too small.
   */
  public ByteBuffer uncompressKeyValue(DataInputStream source,
      int decompressedSize)
      throws IOException, DeltaEncoderToSmallBufferException;
}
{noformat}
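For context, a round trip through this interface would look roughly like the
following. This is a hypothetical usage sketch: SomeDeltaEncoderImpl and
loadSerializedKeyValues() are stand-ins, not code from the patch.
{noformat}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.nio.ByteBuffer;

// Hypothetical round trip through the DeltaEncoder interface above.
public class DeltaEncoderRoundTrip {
  public static void main(String[] args) throws Exception {
    DeltaEncoder encoder = new SomeDeltaEncoderImpl();  // stand-in implementation
    ByteBuffer rawKeyValues = ByteBuffer.wrap(loadSerializedKeyValues());

    // Compress the sorted KeyValues into an in-memory stream.
    ByteArrayOutputStream compressed = new ByteArrayOutputStream();
    encoder.compressKeyValue(compressed, rawKeyValues.duplicate());

    // Uncompress, passing the known original size up front.
    DataInputStream source = new DataInputStream(
        new ByteArrayInputStream(compressed.toByteArray()));
    ByteBuffer restored = encoder.uncompressKeyValue(source, rawKeyValues.limit());
  }

  private static byte[] loadSerializedKeyValues() {
    return new byte[0];  // placeholder for real serialized KeyValues
  }
}
{noformat}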

I also need some kind of interface for iterating and seeking. I haven't
designed it yet, but I would like to have something like:
{noformat}
  public Iterator<KeyValue> getIterator(ByteBuffer encodedKeyValues);
  public Iterator<KeyValue> getIteratorStartingFrom(ByteBuffer encodedKeyValues,
      byte[] keyBuffer, int offset, int length);
{noformat}
For me this would work, but for you I might have to change it to something like:
{noformat}
  public EncodingIterator getState(ByteBuffer encodedKeyValues);

class EncodingIterator implements Iterator<KeyValue> {
  ...
  public void seekToBeginning();
  public void seekTo(byte[] keyBuffer, int offset, int length);
}
{noformat}
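Put together, a caller would seek and then iterate, roughly like this
(hypothetical usage; encoder, encodedKeyValues, searchKey, and process() are
stand-ins):
{noformat}
// Hypothetical usage of the seek-and-iterate interface sketched above.
EncodingIterator it = encoder.getState(encodedKeyValues);
it.seekTo(searchKey, 0, searchKey.length);  // position at the requested key
while (it.hasNext()) {
  KeyValue kv = it.next();  // materialize the KeyValue at the current position
  process(kv);              // application-specific consumer
}
{noformat}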

I will figure out how we could share the code.





[jira] [Commented] (HBASE-4218) Delta Encoding of KeyValues (aka prefix compression)

2011-08-22 Thread Jacek Migdal (JIRA)

[ https://issues.apache.org/jira/browse/HBASE-4218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13088827#comment-13088827 ]

Jacek Migdal commented on HBASE-4218:
-------------------------------------

Regarding variable-byte encoding: there is also a third option besides VInt
and FInt: keep the same integer width within a block, but let it differ across
blocks. This approach:
* exploits the similarity of the data within a given block
* usually takes the same space as VInt
* needs few branches
* means the KeyValue format is not uniform across all of the data

Having said that, many KeyValues use only a few distinct sizes, which allows
even more efficient encoding. On the other hand, as values get longer their
lengths vary a lot; but in that case the keys are a tiny percentage of the
whole file, so any savings from variable-byte encoding will be insignificant.
Your mileage may vary. A sketch of the per-block approach follows.
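To spell out the per-block idea (my sketch, not the actual patch): pick the
smallest byte width that fits every integer in the block, record it once in
the block header, and write every integer at that fixed width.
{noformat}
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch of per-block fixed-width integer encoding:
// one width byte per block header, then every value at that width.
public final class PerBlockIntEncoder {
  public static byte[] encode(int[] blockValues) throws IOException {
    int width = 1;
    for (int v : blockValues) {
      width = Math.max(width, bytesNeeded(v));
    }
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    DataOutputStream out = new DataOutputStream(bytes);
    out.writeByte(width);  // shared width, stored once per block
    for (int v : blockValues) {
      for (int i = width - 1; i >= 0; i--) {
        out.writeByte((v >>> (8 * i)) & 0xff);  // big-endian, fixed width
      }
    }
    return bytes.toByteArray();
  }

  private static int bytesNeeded(int v) {
    if (v < 0) return 4;  // negative values need the full width
    int n = 1;
    while (n < 4 && (v >>> (8 * n)) != 0) n++;
    return n;
  }
}
{noformat}
The decoder reads the width once per block and then decodes every value with
the same loop, which is where the "few branches" property comes from.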
