On 03/03/2016 03:29 PM, Slavomir Skopalik wrote:

>> Compression over compression is usually
>> inefficient and something I think should be avoided. But my main concern
> My test shows a different result.
> And the encoding proposed by Jim is not compression, it is a different
> representation of values.

It does not matter what we call the effect of a record occupying less
space; the important thing is the result - the size, and certainly the
time taken for compression and especially decompression.
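
For the record, both numbers are easy to get from the public liblz4 API.
A minimal sketch (the 8000-byte blank buffer is just a stand-in for one
record image, and a real measurement should of course loop over many
records instead of timing a single call):

#include <lz4.h>

#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    // Stand-in for one record image; real tests should use real records.
    std::vector<char> rec(8000, ' ');
    std::vector<char> comp(LZ4_compressBound((int) rec.size()));
    std::vector<char> back(rec.size());

    auto t0 = std::chrono::steady_clock::now();
    int csize = LZ4_compress_default(rec.data(), comp.data(),
                                     (int) rec.size(), (int) comp.size());
    auto t1 = std::chrono::steady_clock::now();
    int dsize = LZ4_decompress_safe(comp.data(), back.data(),
                                    csize, (int) back.size());
    auto t2 = std::chrono::steady_clock::now();

    using us = std::chrono::microseconds;
    std::cout << "size " << rec.size() << " -> " << csize
              << ", compress "
              << std::chrono::duration_cast<us>(t1 - t0).count()
              << " us, decompress "
              << std::chrono::duration_cast<us>(t2 - t1).count()
              << " us, round-trip ok: " << (dsize == (int) rec.size())
              << "\n";
}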

>
> But back to the original question.
> Is it the right time to change record/fragment compression/encoding?

We are soon going to have a TTG decision regarding the list of v4
features. If we decide not to change the ODS major version in v4, a
commit of LZ4 would look questionable, i.e. I think you should wait a
bit.

What is also interesting: how does your compression perform on records
other than ones containing a single varchar(8000)? Some mix of
integers/floats/strings?
Personally (after my own experiments with LZ4 for wire compression) I
believe it should fit record compression well, but it is always good to
provide more facts. What looks strange to me is that LZ4 works better
when used over RLE: as far as I understand, its algorithm should do its
job fine without RLE. The results of such a comparison on a data mix
would be very interesting; a sketch of one way to run it follows below.
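
Something like this minimal sketch (this is NOT Firebird's actual
on-disk RLE or record format - the record layout and the run-length
scheme here are made up purely for illustration) builds a toy record
mixing integers/doubles/strings with a blank-padded tail, and prints
the LZ4 sizes with and without a classic byte-level RLE pre-pass:

#include <lz4.h>

#include <cstdint>
#include <cstring>
#include <iostream>
#include <vector>

// Toy RLE: a run of 4+ equal bytes becomes {0, count, byte},
// anything else becomes {len, literal bytes...}. Illustration only.
static std::vector<char> rleEncode(const std::vector<char>& in)
{
    std::vector<char> out;
    size_t i = 0;
    while (i < in.size())
    {
        size_t run = 1;
        while (i + run < in.size() && in[i + run] == in[i] && run < 255)
            run++;

        if (run >= 4)
        {
            out.push_back(0);              // run marker
            out.push_back((char) run);     // run length
            out.push_back(in[i]);          // repeated byte
            i += run;
        }
        else
        {
            size_t start = i, lit = 0;
            while (i < in.size() && lit < 255)   // collect literals
            {
                size_t r = 1;
                while (i + r < in.size() && in[i + r] == in[i] && r < 4)
                    r++;
                if (r >= 4)
                    break;                       // a long run starts here
                i++;
                lit++;
            }
            out.push_back((char) lit);
            out.insert(out.end(), in.begin() + start, in.begin() + start + lit);
        }
    }
    return out;
}

static int lz4Size(const std::vector<char>& in)
{
    std::vector<char> dst(LZ4_compressBound((int) in.size()));
    return LZ4_compress_default(in.data(), dst.data(),
                                (int) in.size(), (int) dst.size());
}

int main()
{
    // Toy record: a few integers and doubles, a short string, and a
    // blank-padded varchar(8000) tail - a typical source of long runs.
    std::vector<char> rec;
    auto append = [&rec](const void* p, size_t n) {
        const char* c = (const char*) p;
        rec.insert(rec.end(), c, c + n);
    };
    for (int32_t v : { 1, 42, 100000, -7 })
        append(&v, sizeof v);
    for (double d : { 3.14, 0.0, 2.5e10 })
        append(&d, sizeof d);
    append("some text", 9);
    rec.resize(rec.size() + 8000, ' ');

    std::vector<char> rle = rleEncode(rec);
    std::cout << "raw " << rec.size() << ", rle " << rle.size()
              << ", lz4(raw) " << lz4Size(rec)
              << ", lz4(rle) " << lz4Size(rle) << "\n";
}

Swapping in your actual test records instead of the toy one should show
whether the RLE pre-pass really helps LZ4 on mixed data, or only on
long blank-padded tails.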

Last but not least - did you run our test suites with your patch?


