Compression was just not something that we really thought about all that much.  
The fastest route is probably to replace the tuple serializer with one that can 
handle compression.  We did something similar for encryption.
https://github.com/apache/storm/blob/master/storm-core/src/jvm/backtype/storm/security/serialization/BlowfishTupleSerializer.java
But compression is generic enough that it might be nice to make it part of the 
real TupleSerializer.
https://github.com/EsotericSoftware/kryo#compression-and-encryption
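The wrapping approach can be sketched with the JDK's built-in zlib classes. Everything below (class name, method names) is illustrative rather than an actual Storm or Kryo API; it is just a stand-in for what a compressing serializer, like the Blowfish one above but for compression, would do to the serialized tuple bytes.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical sketch: compress/decompress the byte payload that a tuple
// serializer would otherwise send over the wire as-is.
public class CompressedPayload {

    // Compress serialized tuple bytes with zlib at its fastest setting.
    public static byte[] compress(byte[] raw) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(raw);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream(raw.length);
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Inverse operation, run on the receiving side before deserialization.
    public static byte[] decompress(byte[] compressed) {
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        try {
            while (!inflater.finished()) {
                out.write(buf, 0, inflater.inflate(buf));
            }
        } catch (DataFormatException e) {
            throw new RuntimeException("corrupt compressed payload", e);
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Repetitive payloads (typical of serialized tuples) compress well.
        byte[] raw = new String(new char[200]).replace('\0', 'a').getBytes("UTF-8");
        byte[] packed = compress(raw);
        System.out.println(java.util.Arrays.equals(raw, decompress(packed))); // true
        System.out.println(packed.length < raw.length);                       // true
    }
}
```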

I would also suggest that you look at Snappy, LZO, or LZ4 for your 
compression, as they tend to be much faster and still get good compression 
ratios. - Bobby 
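Snappy, LZO, and LZ4 live in third-party jars, so as a rough stand-in this sketch uses the JDK's zlib `Deflater` to show the same dial those codecs are tuned for: a faster setting spends less CPU per byte at some cost in ratio. The class name and sample data are made up for illustration.

```java
import java.util.zip.Deflater;

public class LevelDemo {

    // Compress raw at the given zlib level and return only the compressed size.
    static int compressedSize(byte[] raw, int level) {
        Deflater d = new Deflater(level);
        d.setInput(raw);
        d.finish();
        byte[] buf = new byte[4096];
        int total = 0;
        while (!d.finished()) {
            total += d.deflate(buf);
        }
        d.end();
        return total;
    }

    public static void main(String[] args) {
        // Fake tuple-like payload: repetitive, like real serialized streams.
        byte[] tupleLike =
            "{\"user\":\"onur\",\"event\":\"click\"}".repeat(100).getBytes();
        System.out.println("BEST_SPEED:       "
            + compressedSize(tupleLike, Deflater.BEST_SPEED));
        System.out.println("BEST_COMPRESSION: "
            + compressedSize(tupleLike, Deflater.BEST_COMPRESSION));
    }
}
```

The faster codecs Bobby names make the same trade deliberately: they give up some ratio for throughput high enough that compression stops being the bottleneck on a hot messaging path.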


     On Friday, September 18, 2015 11:51 AM, Onur Yalazı 
<[email protected]> wrote:
 Hello,

I'm very new to the Storm world and the list, so hello from Turkey.

Because of a recent incident we had to increase our OpenStack network 
bandwidth soft limits from 1 Gb/s to 2 Gb/s.

And of course, even though the problem lies in our tuples' size and 
topology size, I wondered whether Storm's Netty layer was using zlib encoding. 
Looking into backtype.storm.messaging.netty.StormServerPipelineFactory 
and its client counterpart, the pipeline does not seem to have any compression handlers.

Is it an intentional decision not to include compression in the 
pipeline? I know it would need more processing power and reduce topology 
performance, but I would like to know if it was considered before, and 
if not, to raise the issue.

Thank you.

