On (08/25/17 01:35), Nick Terrell wrote:
> On 8/24/17, 5:49 PM, "Joonsoo Kim" <iamjoonsoo....@lge.com> wrote:
> > On Thu, Aug 24, 2017 at 09:33:54PM +0000, Nick Terrell wrote:
> > > On Thu, Aug 24, 2017 at 10:49:36AM +0900, Sergey Senozhatsky wrote:
> > > > Add ZSTD to the list of supported compression algorithms.
> > > >
> > > > Official benchmarks [1]:
> > >
> > > Awesome! Let me know if you need anything from me.
> >
> > Hello, Nick.
> >
> > Awesome work!!!
> >
> > Let me ask a question.
> > Zram compresses and decompresses small data (a page), and your github
> > site says that using a predefined dictionary would be helpful in this
> > situation. However, it seems that the compression crypto API for zstd
> > doesn't use ZSTD_compress_usingDict(). Is there any plan to support
> > it?
excellent question, Joonsoo.

> I think using dictionaries in zram could be very interesting. We could,
> for example, take a random sample of the RAM and use that as the
> dictionary for compression. E.g. take 32 512B samples from RAM and build
> a 16 KB dictionary (sizes may vary).
>
> I'm not sure how you would pass a dictionary into the crypto compression
> API, but I'm sure we can make something work if dictionary compression
> proves to be beneficial enough.

a dictionary pointer can be in `struct zstd_ctx'.

> What data have you, or anyone, used for benchmarking compression ratio
> and speed for RAM? Since it is such a specialized application, the
> standard compression benchmarks aren't very applicable.

yeah, I thought that zstd uses dicts unconditionally.

I used my own simple-minded test script:
https://github.com/sergey-senozhatsky/zram-perf-test

it basically invokes fio with a 'static compression buffer', because we
want exactly the same data to be compressed when I compare algorithms...
I guess I need to improve it, somehow.

	-ss
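
a rough userspace sketch of the "dictionary pointer next to the context"
idea, written against the upstream libzstd API (not the in-kernel
wrapper, whose signatures may differ); the struct and field names below
are only illustrative:

	/*
	 * Sketch: keep a prebuilt dictionary (e.g. built from RAM samples)
	 * next to a reusable compression context and pass it to
	 * ZSTD_compress_usingDict() for every page-sized buffer.
	 */
	#include <stddef.h>
	#include <zstd.h>

	struct zstd_dict_ctx {
		ZSTD_CCtx *cctx;	/* reusable compression context */
		const void *dict;	/* e.g. 32 x 512B samples from RAM */
		size_t dict_size;	/* e.g. 16 KB */
		int level;		/* compression level */
	};

	static int zstd_dict_ctx_init(struct zstd_dict_ctx *ctx,
				      const void *dict, size_t dict_size,
				      int level)
	{
		ctx->cctx = ZSTD_createCCtx();
		if (!ctx->cctx)
			return -1;
		ctx->dict = dict;
		ctx->dict_size = dict_size;
		ctx->level = level;
		return 0;
	}

	/* Compress one page-sized buffer reusing the shared dictionary. */
	static size_t zstd_dict_compress(struct zstd_dict_ctx *ctx,
					 void *dst, size_t dst_capacity,
					 const void *src, size_t src_size)
	{
		return ZSTD_compress_usingDict(ctx->cctx, dst, dst_capacity,
					       src, src_size,
					       ctx->dict, ctx->dict_size,
					       ctx->level);
	}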