On Sun, Jul 28, 2013 at 9:53 PM, Radio młodych bandytów <[email protected]> wrote:
> On 28/07/2013 21:08, Daniel Flores wrote:
> > we don't reference previous blocks at this moment
> Do you intend to add it? How?

At this moment I don't have plans regarding previous blocks, but maybe some ideas about it will come up in the course of optimizing the write path. We'll see.

> No, they are not the same, but in a way that doesn't matter here - I set
> 'sector size' to 32 KB, so even if a block compresses to less than 16,
> it takes 32 KB.
> I checked, the tail block compressed from 47875 to 32038.

Oh, I understand now why you got that block compressed and I didn't. It's just that my prototype application, which I used to test this algorithm and generate my test results, always tries to compress a whole 64 KB block. So, in this case, instead of compressing just the 47875 bytes of data, it tried to compress a block that contained that data plus some garbage, which, logically, didn't lead to successful compression.

> If you're open to researching other algorithm options, I suggest looking
> at fsbench; it's going to be easier to adapt it to the way HAMMER2 works
> than to add other algorithms to the kernel:
> http://encode.ru/threads/1371-Filesystem-benchmark?highlight=fsbench
> Note that the latest version crashes on your book2 with settings as
> above, you made me find an integer underflow. Will upload an update
> later today.

OK, this looks really interesting. This application can make the process of testing an algorithm a lot easier. I'll take a closer look at it soon. Thank you.

Daniel
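[Editor's sketch, not part of the original thread.] The effect discussed above can be illustrated with a small Python script: compression only pays off if the result, rounded up to whole 32 KB sectors, occupies fewer sectors than the input. The 47875-byte tail size and the 32 KB sector / 64 KB block granularity come from the thread; the contents of the test buffer and the use of zlib (as a stand-in for whatever compressor the prototype used) are assumptions for illustration.

```python
import os
import zlib

SECTOR = 32 * 1024   # 'sector size' from the thread
BLOCK = 64 * 1024    # logical block size the prototype always compressed

def sectors(n):
    """Round a byte count up to whole 32 KB sectors."""
    return -(-n // SECTOR)

def try_compress(buf):
    """Return the compressed size if it saves at least one sector, else None."""
    out = zlib.compress(buf)
    return len(out) if sectors(len(out)) < sectors(len(buf)) else None

# A hypothetical tail block of 47875 valid bytes (the size from the thread):
# part incompressible random data, part zeros, so it compresses moderately.
valid = os.urandom(24000) + bytes(47875 - 24000)

# Compressing only the valid bytes: the result fits in one sector
# instead of the two the raw data would need, so compression succeeds.
print(try_compress(valid) is not None)

# The prototype instead compressed the whole 64 KB block, i.e. the valid
# bytes plus trailing garbage; the incompressible garbage pushes the result
# past one sector, so no sectors are saved and the attempt fails.
padded = valid + os.urandom(BLOCK - len(valid))
print(try_compress(padded) is None)
```

This matches the numbers in the thread: 47875 bytes compressing to 32038 still saves a sector (one 32 KB sector instead of two), whereas padding the same data with garbage up to 64 KB sinks the attempt.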
