Re: Possible bug with FIBMAP
zam, please review this unless vs is back. What is the file size? Is there anything special about the file (holes, etc.)? Thanks for finding what I assume is a bug. (I wonder if this has been sporadically affecting use of reiser4 with bootloaders.)

Hans

Brice Arnould wrote:

Hi,

Two users of a hack I wrote told me that http://vleu.net/shake/fb_r4.c (also attached with the mail) returned FIBMAP=-22, FIGETBSZ=4096 on some of their files on reiser4 filesystems. Does this value of -22 have a special meaning (that would be strange), or is it a bug in Reiser4? I can ask them for more details, if you want.

Thanks,
Brice

/*
 * Non-released test software, distributed under the GPL-2 licence by
 * Brice Arnould (c) 2006
 * You shouldn't use it.
 */
#include <stdio.h>
#include <assert.h>    // assert()
#include <errno.h>     // errno
#include <error.h>     // error()
#include <sys/ioctl.h> // ioctl()
#include <linux/fs.h>  // FIBMAP, FIGETBSZ
#include <sys/types.h> // open()
#include <sys/stat.h>  // open()
#include <fcntl.h>     // open()

int main (int argc, char **argv)
{
  int fd, blocksize, block = 0;
  if (2 != argc)
    error (1, 0, "usage : %s FILE", argv[0]);
  fd = open (argv[1], O_RDONLY);
  assert (0 <= fd);
  if (-1 == ioctl (fd, FIGETBSZ, &blocksize)
      || -1 == ioctl (fd, FIBMAP, &block))
    error (1, 0, "ioctl() failed, are you root ?\n");
  printf ("FIBMAP=%i, FIGETBSZ=%i\n", block, blocksize);
  close (fd);
}
Re: Possible bug with FIBMAP
On 27 August 2006 12:57, Brice Arnould wrote:

Hi,

Two users of a hack I wrote told me that http://vleu.net/shake/fb_r4.c (also attached with the mail) returned FIBMAP=-22, FIGETBSZ=4096 on some of their files on reiser4 filesystems. Does this value of -22 have a special meaning (that would be strange), or is it a bug in Reiser4? I can ask them for more details, if you want.

Reiser4: restore FIBMAP ioctl support for packed files

Restore FIBMAP ioctl support for packed files; don't report block numbers for nodes not yet mapped to disk.

Signed-off-by: Alexander Zarochentsev [EMAIL PROTECTED]
---
 fs/reiser4/plugin/item/item.c | 2 +-
 fs/reiser4/plugin/item/tail.c | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

--- linux-2.6-git.orig/fs/reiser4/plugin/item/item.c
+++ linux-2.6-git/fs/reiser4/plugin/item/item.c
@@ -614,7 +614,7 @@ item_plugin item_plugins[LAST_ITEM_ID] =
 	.write = reiser4_write_tail,
 	.read = reiser4_read_tail,
 	.readpage = readpage_tail,
-	.get_block = NULL,
+	.get_block = get_block_address_tail,
 	.append_key = append_key_tail,
 	.init_coord_extension = init_coord_extension_tail

--- linux-2.6-git.orig/fs/reiser4/plugin/item/tail.c
+++ linux-2.6-git/fs/reiser4/plugin/item/tail.c
@@ -791,6 +791,8 @@ get_block_address_tail(const coord_t * c
 	assert("nikita-3252", znode_get_level(coord->node) == LEAF_LEVEL);
 	*block = *znode_get_block(coord->node);
+	if (reiser4_blocknr_is_fake(block))
+		*block = 0;
 	return 0;
 }
Re: Reiser4 und LZO compression
On Sun, 27 August 2006 01:04:28 -0700, Andrew Morton wrote:

Like lib/inflate.c (and this new code should arguably be in lib/).

The problem is that if we clean this up, we've diverged very much from the upstream implementation. So taking in fixes and features from upstream becomes harder and more error-prone.

I've had an identical argument with Linus about lib/zlib_*. He decided that he didn't care about diverging, and I went ahead and changed the code. In the process, I merged a couple of outstanding bugfixes and reduced memory consumption by 25%. Looks like Linus was right on that one.

I'd suspect that the maturity of these utilities is such that we could afford to turn them into kernel code in the expectation that any future changes will be small. But it's not a completely simple call. (IIRC the inflate code had a buffer overrun a while back, which was found and fixed in the upstream version.)

Ditto in lib/zlib_*. lib/inflate.c is only used by the various in-kernel bootloaders to uncompress a kernel image. Anyone tampering with the image to cause a buffer overrun already owns the machine anyway. Whether any of our experiences with zlib apply to LZO remains a question, though.

Jörn

--
I've never met a human being who would want to read 17,000 pages of documentation, and if there was, I'd kill him to get him out of the gene pool.
-- Joseph Costello
Re: Reiser4 und LZO compression
Alexey Dobriyan wrote:

Reiser4 developers, Andrew,

The patch below is so-called reiser4 LZO compression plugin as extracted from 2.6.18-rc4-mm3. I think it is an unauditable piece of shit and thus should not enter mainline.

Hmm. LZO is the best compression algorithm for the task as measured by the objectives of good compression effectiveness while still having very low CPU usage (the best of those written and GPL'd; there is a slightly better one which is proprietary and uses more CPU, LZRW if I remember right. The gzip code base uses too much CPU, though I think Edward made an option of it). If you could be kind enough to send me a plugin which is better by those two measures, I'd be quite grateful.

By the way, could you tell me about this auditing stuff? Last I remember, when I mentioned that the US Defense community had coding practices worth adopting by the Kernel Community, I was pretty much disregarded. So, while I understand that the FSB has serious security issues what with all these Americans seeking to crack their Linux boxen, complaining to me about auditability seems a bit graceless. ;-) Especially if there is no offer of replacement compression code.

Oh, and this LZO code is not written by Namesys. You can tell by the utter lack of comments, assertions, etc. We are just seeking to reuse well known, widely used code. I have in the past been capable of demanding that my programmers comment code not written by them before we use it, but this time I did not. I have mixed feelings about us adding our comments to code written by a compression specialist.

If Andrew wants us to write our own compression code, or comment this code and fill it with asserts, we will grumble a bit and do it. It is not a task I am eager for, as compression code is a highly competitive field, which gives me the surface impression that if you are not gripped by what you are sure is an inspiration, you should stay out of it.
Jorn wrote:

I've had an identical argument with Linus about lib/zlib_*. He decided that he didn't care about diverging, I went ahead and changed the code. In the process, I merged a couple of outstanding bugfixes and reduced memory consumption by 25%. Looks like Linus was right on that one.

If anyone sends me or Edward a patch, that's great. Jorn, it sounds like you did a good job on that one.

Hans
Re: Reiser4 und LZO compression
On Sun, 27 Aug 2006 04:42:59 -0500 David Masover [EMAIL PROTECTED] wrote:

Andrew Morton wrote:

On Sun, 27 Aug 2006 04:34:26 +0400 Alexey Dobriyan [EMAIL PROTECTED] wrote:

The patch below is so-called reiser4 LZO compression plugin as extracted from 2.6.18-rc4-mm3. I think it is an unauditable piece of shit and thus should not enter mainline.

Like lib/inflate.c (and this new code should arguably be in lib/).

The problem is that if we clean this up, we've diverged very much from the upstream implementation. So taking in fixes and features from upstream becomes harder and more error-prone.

Well, what kinds of changes have to happen? I doubt upstream would care about moving some of it to lib/ -- and anyway, reiserfs-list is on the CC. We are speaking of upstream in the third person in the presence of upstream, so...

The ifdef jungle is ugly, and especially the WIN / 16-bit DOS stuff is completely useless here. Maybe just ask upstream? I am not sure if Mr. Oberhumer still cares about LZO 1.x; AFAIK he now develops a new compressor under a commercial license.

Regards,
--
Jindrich Makovicka
Re: Reiser4 und LZO compression
On Mon, Aug 28, 2006 at 10:06:46AM -0700, Hans Reiser wrote:

Hmm. LZO is the best compression algorithm for the task as measured by the objectives of good compression effectiveness while still having very low CPU usage (the best of those written and GPL'd; there is a slightly better one which is proprietary and uses more CPU, LZRW if I remember right. The gzip code base uses too much CPU, though I think Edward made

I don't think that LZO beats LZF in both speed and compression ratio. LZF is also available under the GPL (dual-licensed BSD) and was chosen in favor of LZO for the next-generation suspend-to-disk code of the Linux kernel.

see: http://www.goof.com/pcg/marc/liblzf.html

--
ciao - Stefan
Re: Reiser4 und LZO compression
Jindrich Makovicka wrote:

On Sun, 27 Aug 2006 04:42:59 -0500 David Masover [EMAIL PROTECTED] wrote:

Andrew Morton wrote:

On Sun, 27 Aug 2006 04:34:26 +0400 Alexey Dobriyan [EMAIL PROTECTED] wrote:

The patch below is so-called reiser4 LZO compression plugin as extracted from 2.6.18-rc4-mm3. I think it is an unauditable piece of shit and thus should not enter mainline.

Like lib/inflate.c (and this new code should arguably be in lib/).

The problem is that if we clean this up, we've diverged very much from the upstream implementation. So taking in fixes and features from upstream becomes harder and more error-prone.

Well, what kinds of changes have to happen? I doubt upstream would care about moving some of it to lib/ -- and anyway, reiserfs-list is on the CC. We are speaking of upstream in the third person in the presence of upstream, so...

The ifdef jungle is ugly, and especially the WIN / 16-bit DOS stuff is completely useless here.

I agree that it needs some brushing up; putting it in the todo.

Maybe just ask upstream? I am not sure if Mr. Oberhumer still cares about LZO 1.x; AFAIK he now develops a new compressor under a commercial license.

Regards,
Re: Reiser4 und LZO compression
Stefan Traby wrote:

On Mon, Aug 28, 2006 at 10:06:46AM -0700, Hans Reiser wrote:

Hmm. LZO is the best compression algorithm for the task as measured by the objectives of good compression effectiveness while still having very low CPU usage (the best of those written and GPL'd; there is a slightly better one which is proprietary and uses more CPU, LZRW if I remember right. The gzip code base uses too much CPU, though I think Edward made

I don't think that LZO beats LZF in both speed and compression ratio. LZF is also available under the GPL (dual-licensed BSD) and was chosen in favor of LZO for the next-generation suspend-to-disk code of the Linux kernel.

see: http://www.goof.com/pcg/marc/liblzf.html

Thanks for the info; we will compare them.
Re: Reiser4 und LZO compression
Hi.

On Mon, 2006-08-28 at 22:15 +0400, Edward Shishkin wrote:

Stefan Traby wrote:

On Mon, Aug 28, 2006 at 10:06:46AM -0700, Hans Reiser wrote:

Hmm. LZO is the best compression algorithm for the task as measured by the objectives of good compression effectiveness while still having very low CPU usage (the best of those written and GPL'd; there is a slightly better one which is proprietary and uses more CPU, LZRW if I remember right. The gzip code base uses too much CPU, though I think Edward made

I don't think that LZO beats LZF in both speed and compression ratio. LZF is also available under the GPL (dual-licensed BSD) and was chosen in favor of LZO for the next-generation suspend-to-disk code of the Linux kernel.

see: http://www.goof.com/pcg/marc/liblzf.html

thanks for the info, we will compare them

For Suspend2, we ended up converting the LZF support to a cryptoapi plugin. Is there any chance that you could use cryptoapi modules? We could then have a hope of sharing the support.

Regards,

Nigel
Re: Reiser4 und LZO compression
Nigel Cunningham wrote:

For Suspend2, we ended up converting the LZF support to a cryptoapi plugin. Is there any chance that you could use cryptoapi modules? We could then have a hope of sharing the support.

It is in principle a good idea, and I hope we will be able to say yes. However, I have to see the numbers, as we are more performance-sensitive than you folks probably are, and every 10% is a big deal for us.
Re: Reiser4 und LZO compression
Hmm. LZO is the best compression algorithm for the task as measured by the objectives of good compression effectiveness while still having very low CPU usage (the best of those written and GPL'd; there is a slightly better one which is proprietary and uses more CPU, LZRW if I remember right. The gzip code base uses too much CPU, though I think Edward made

I don't think that LZO beats LZF in both speed and compression ratio. LZF is also available under the GPL (dual-licensed BSD) and was chosen in favor of LZO for the next-generation suspend-to-disk code of the Linux kernel.

see: http://www.goof.com/pcg/marc/liblzf.html

thanks for the info, we will compare them

For Suspend2, we ended up converting the LZF support to a cryptoapi plugin. Is there any chance that you could use cryptoapi modules? We could then have a hope of sharing the support.

I am throwing in gzip: would it be meaningful to use that instead? The decoder (inflate.c) is already there.

06:04 shanghai:~/liblzf-1.6 l configure*
-rwxr-xr-x 1 jengelh users 154894 Mar  3  2005 configure
-rwxr-xr-x 1 jengelh users  26810 Mar  3  2005 configure.bz2
-rw-r--r-- 1 jengelh users  30611 Aug 28 20:32 configure.gz-z9
-rw-r--r-- 1 jengelh users  30693 Aug 28 20:32 configure.gz-z6
-rw-r--r-- 1 jengelh users  53077 Aug 28 20:32 configure.lzf

Jan Engelhardt
--
Re: Reiser4 und LZO compression
On Tue, Aug 29, 2006 at 07:48:25AM +1000, Nigel Cunningham wrote:

For Suspend2, we ended up converting the LZF support to a cryptoapi plugin. Is there any chance that you could use cryptoapi modules? We could then have a hope of sharing the support.

Using cryptoapi plugins for the compression methods is an interesting approach; there are a few other places in the kernel that could probably benefit from this as well, such as jffs2 (which at the moment rolls its own compression subsystem), and the out-of-tree page and swap cache compression work.

Assuming you were wrapping LZF directly prior to the cryptoapi integration, do you happen to have before and after numbers to determine how heavyweight the rest of the cryptoapi overhead is? It would be interesting to profile this and consider migrating the in-tree users, rather than duplicating the compress/decompress routines all over the place.
Re: Reiser4 und LZO compression
Hi.

On Tue, 2006-08-29 at 06:05 +0200, Jan Engelhardt wrote:

Hmm. LZO is the best compression algorithm for the task as measured by the objectives of good compression effectiveness while still having very low CPU usage (the best of those written and GPL'd; there is a slightly better one which is proprietary and uses more CPU, LZRW if I remember right. The gzip code base uses too much CPU, though I think Edward made

I don't think that LZO beats LZF in both speed and compression ratio. LZF is also available under the GPL (dual-licensed BSD) and was chosen in favor of LZO for the next-generation suspend-to-disk code of the Linux kernel.

see: http://www.goof.com/pcg/marc/liblzf.html

thanks for the info, we will compare them

For Suspend2, we ended up converting the LZF support to a cryptoapi plugin. Is there any chance that you could use cryptoapi modules? We could then have a hope of sharing the support.

I am throwing in gzip: would it be meaningful to use that instead? The decoder (inflate.c) is already there.

06:04 shanghai:~/liblzf-1.6 l configure*
-rwxr-xr-x 1 jengelh users 154894 Mar  3  2005 configure
-rwxr-xr-x 1 jengelh users  26810 Mar  3  2005 configure.bz2
-rw-r--r-- 1 jengelh users  30611 Aug 28 20:32 configure.gz-z9
-rw-r--r-- 1 jengelh users  30693 Aug 28 20:32 configure.gz-z6
-rw-r--r-- 1 jengelh users  53077 Aug 28 20:32 configure.lzf

We used gzip when we first implemented compression support, and found it to be far too slow. Even with the fastest compression options, we were only getting a few megabytes per second. Perhaps I did something wrong in configuring it, but there's not that many things to get wrong! In contrast, with LZF, we get very high throughput. My current laptop is a 1.8GHz Turion with a 7200 RPM (PATA) drive. Without LZF compression, my throughput in writing an image is the maximum the drive interface can manage - 38MB/s.
With LZF, I get roughly that divided by the compression ratio achieved, so if the compression ratio is ~50%, as it generally is, I'm reading and writing the image at 75-80MB/s. During this time, all the computer is doing is compressing pages using LZF and submitting bios, with the odd message being sent to the userspace interface app via netlink. I realise this is very different to the workload you'll be doing, but hopefully the numbers are somewhat useful:

[EMAIL PROTECTED]:~$ cat /sys/power/suspend2/debug_info
Suspend2 debugging info:
- SUSPEND core   : 2.2.7.4
- Kernel Version : 2.6.18-rc4
- Compiler vers. : 4.1
- Attempt number : 1
- Parameters     : 0 32785 0 0 0 0
- Overall expected compression percentage: 0.
- Compressor is 'lzf'. Compressed 820006912 bytes into 430426371 (47 percent compression).
- Swapwriter active. Swap available for image: 487964 pages.
- Filewriter inactive.
- I/O speed: Write 74 MB/s, Read 70 MB/s.
- Extra pages: 1913 used/2100.
[EMAIL PROTECTED]:~$

(Modify hibernate.conf to disable compression, suspend again...)

[EMAIL PROTECTED]:~$ cat /sys/power/suspend2/debug_info
Suspend2 debugging info:
- SUSPEND core   : 2.2.7.4
- Kernel Version : 2.6.18-rc4
- Compiler vers. : 4.1
- Attempt number : 2
- Parameters     : 0 32785 0 0 0 0
- Overall expected compression percentage: 0.
- Swapwriter active. Swap available for image: 487964 pages.
- Filewriter inactive.
- I/O speed: Write 38 MB/s, Read 39 MB/s.
- Extra pages: 1907 used/2100.
[EMAIL PROTECTED]:~$

Oh, I also have a debugging mode where I can get Suspend2 to just compress the pages but not actually write anything. If I do that, it says it can do 80MB/s on my kernel image, so the disk is still the bottleneck, it seems.

Hope this all helps (and isn't information overload!)

Nigel
Re: Reiser4 und LZO compression
Hi. On Tue, 2006-08-29 at 13:59 +0900, Paul Mundt wrote: On Tue, Aug 29, 2006 at 07:48:25AM +1000, Nigel Cunningham wrote: For Suspend2, we ended up converting the LZF support to a cryptoapi plugin. Is there any chance that you could use cryptoapi modules? We could then have a hope of sharing the support. Using cryptoapi plugins for the compression methods is an interesting approach, there's a few other places in the kernel that could probably benefit from this as well, such as jffs2 (which at the moment rolls its own compression subsystem), and the out-of-tree page and swap cache compression work. Assuming you were wrapping in to LZF directly prior to the cryptoapi integration, do you happen to have before and after numbers to determine how heavyweight the rest of the cryptoapi overhead is? It would be interesting to profile this and consider migrating the in-tree users, rather than duplicating the compress/decompress routines all over the place. I was, but I don't have numbers right now. I'm about to go out, but will see if I can find them when I get back later. From memory, it wasn't a huge change in terms of lines of code. Regards, Nigel