Compression will be a gain for large objects, probably those
containing many String or byte array attributes.
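To illustrate the point, here is a minimal sketch of a pluggable compression codec with a GZIP implementation; the interface and class names are hypothetical, not any project's actual API, and it only shows the kind of shrinkage a repetitive String-heavy payload can see:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.GZIPOutputStream;

// Hypothetical SPI: a pluggable codec so GZIP could be swapped
// for LZF/Snappy/LZ4 without touching the cache code.
interface CompressionCodec {
    byte[] compress(byte[] raw);
}

class GzipCodec implements CompressionCodec {
    @Override
    public byte[] compress(byte[] raw) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(raw);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }
}

public class CodecDemo {
    public static void main(String[] args) {
        // A String-heavy payload: repetitive text compresses very well;
        // a payload of already-packed binary attributes would not.
        byte[] raw = "some repeated attribute value ".repeat(200).getBytes();
        byte[] packed = new GzipCodec().compress(raw);
        System.out.println(raw.length + " -> " + packed.length);
    }
}
```

The ratio obviously depends entirely on the payload, which is why figures from real cached objects would be needed before deciding.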

Jeff


On Fri, Jun 15, 2012 at 10:58 PM, Tatu Saloranta <[email protected]> wrote:

> On Fri, Jun 15, 2012 at 1:21 PM, Olivier Lamy <[email protected]> wrote:
> > 2012/6/15 Tatu Saloranta <[email protected]>:
> >> On Thu, Jun 14, 2012 at 11:25 PM, Simone Tripodi
> >> <[email protected]> wrote:
> >>> Hi all guys,
> >> ..
> >>>
> >>>  * We didn't think to apply a GZip compression - it is true that we
> >>> are working off-heap, but hopefully the allocated space can handle
> >>> more objects by compressing them
> >>
> >> Gzip is VERY CPU intensive, so maybe just say "support compression".
> >> LZF/Snappy/LZ4 are 4-6x faster to compress, 2-3x to uncompress, so
> >> they could be better choices here.
> >> So at least it would make sense to allow pluggable compression codecs.
> > +1
> > Good idea about having a pluggable mechanism for this feature.
> >
> > I just wonder whether compressing the serialized form of an object
> > will be a huge gain.
> > For the server side using the plain-text (i.e. String) transfer mode
> > the compression ratio can be high, but for the serialized form of an
> > Object I'm skeptical (though I admit I don't have any figures :-) )
>
> Hard to know, depends on what gets compressed (single entry, multiple,
> page/block) and so forth.
> The biggest gain would come if entries are actually written to slow disk
> (no SSD), as fast codecs can compress as fast as or faster than disks can
> write, and uncompress faster still.
> But it can also help by allowing bigger data sets to be kept in the working set.
>
> Anyway, it all depends, and very hard to say without trying things out. :)
>
> -+ Tatu +-
>



-- 
Jeff MAURY


"Legacy code" often differs from its suggested alternative by actually
working and scaling.
 - Bjarne Stroustrup

http://www.jeffmaury.com
http://riadiscuss.jeffmaury.com
http://www.twitter.com/jeffmaury
