lamber-ken commented on issue #1253: [HUDI-558] Introduce ability to compress bloom filters while storing in parquet
URL: https://github.com/apache/incubator-hudi/pull/1253#issuecomment-592855749

Hi @bvaradar, I tested the PR, and it seems that the compressed size is larger than the original one. Please correct me if I am wrong, thanks.

```
test random keys
original size: 4792548
compress size: 4967672

test sequential keys
original size: 4792548
compress size: 4967746
```

```java
SimpleBloomFilter filter = new SimpleBloomFilter(1000000, 0.000001, Hash.MURMUR_HASH);

// Case 1: random keys
System.out.println("test random keys");
for (int i = 0; i < 1000000; i++) {
  String key = UUID.randomUUID().toString();
  filter.add(key);
}
System.out.println("original size: " + filter.serializeToString().length());
System.out.println("compress size: " + GzipCompressionUtils.compress(filter.serializeToString()).length());

// Case 2: sequential keys
System.out.println("\ntest sequential keys");
filter = new SimpleBloomFilter(1000000, 0.000001, Hash.MURMUR_HASH);
for (int i = 0; i < 1000000; i++) {
  String key = "key-" + i;
  filter.add(key);
}
System.out.println("original size: " + filter.serializeToString().length());
System.out.println("compress size: " + GzipCompressionUtils.compress(filter.serializeToString()).length());
```
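One plausible explanation for these numbers (a standalone sketch, not based on Hudi's `GzipCompressionUtils` internals): the bit array of a well-sized bloom filter is close to maximum entropy, and gzip cannot shrink near-random bytes, so its framing overhead makes the output slightly larger than the input. The `RandomDataGzip` class below is a hypothetical helper using only the JDK's `GZIPOutputStream` to demonstrate this on plain random bytes:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Random;
import java.util.zip.GZIPOutputStream;

public class RandomDataGzip {

    // Gzip-compress a byte array in memory and return the compressed bytes.
    static byte[] gzip(byte[] input) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(input);
        }
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // 1 MiB of pseudo-random bytes: a stand-in for a near-full
        // bloom filter's bit array, which is effectively incompressible.
        byte[] random = new byte[1 << 20];
        new Random(42).nextBytes(random);
        System.out.println("random original:   " + random.length);
        System.out.println("random compressed: " + gzip(random).length);

        // 1 MiB of zeros compresses dramatically, for comparison.
        byte[] zeros = new byte[1 << 20];
        System.out.println("zeros original:    " + zeros.length);
        System.out.println("zeros compressed:  " + gzip(zeros).length);
    }
}
```

If this is the cause, compression would only pay off for sparsely populated filters (far fewer entries than the filter was sized for), where the bit array still contains long runs of zeros.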