Re: Low file-system performance for 2.6.11 compared to 2.4.26
On Thu, 31 Mar 2005, Nick Piggin wrote:
>> linux-os wrote:
>>> For those interested, some file-system tests and test-tools are attached.
>> I'll give it a run when I get a chance. Thanks. In the meantime, can you try with different I/O schedulers?

I was trying to emulate some old servers that had new kernels installed. These servers are used to send medical images around. One used to get a 512x512x16-bit image to a workstation in a few hundred milliseconds. It takes seconds with the newer kernels. I found out that the SCSI disk(s) were running continuously, and sampled their I/O patterns. The C code comes very close to emulating that.

The installation involved an "upgrade" to linux-2.6.5-1.358 that came with a Red Hat Fedora distribution. The provided code will even HALT that distribution: everything goes into the 'D' state and waits forever (at least overnight). Later versions like linux-2.6.8 will run to completion, but with very slow throughput. Newer versions, like linux-2.6.11, will run, but with a strange slowness; everything ends up in 'D', and the file-system can end up with missing files. The SCSI controller is AIC7XXX, and the fs is ext3 with jbd, just as it comes from Red Hat.

>> Also, my .signature disappeared during the file-system tests. There were no errors and e2fsck thinks everything is fine!
> You seem to be constantly plagued by gremlins. I don't know whether to laugh or cry.

I test many, many (too many) systems as part of my job. When somebody writes a hardware device driver and I get to check it, sometimes it blows up or otherwise doesn't work. I then test the bare kernel(s), and I often find some really strange things going on. For instance, there was a recent change that makes the BKL be held during an ioctl(). This has devastating performance consequences for a lot of drivers: the code that writes CD-ROMs, for instance, does much of its work using ioctl(), and the FireWire drivers also use ioctl() for I/O.
Cheers,
Dick Johnson
Penguin : Linux version 2.6.11 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by Dictator Bush.
98.36% of all statistics are fiction.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: Low file-system performance for 2.6.11 compared to 2.4.26
linux-os wrote:
> For those interested, some file-system tests and test-tools are attached.

I'll give it a run when I get a chance. Thanks. In the meantime, can you try with different I/O schedulers?

> Also, my .signature disappeared during the file-system tests. There were no errors and e2fsck thinks everything is fine!

You seem to be constantly plagued by gremlins. I don't know whether to laugh or cry.
Re: Low file-system performance for 2.6.11 compared to 2.4.26
At 02:34 AM 1/04/2005, linux-os wrote:
> For those interested, some file-system tests and test-tools are attached.

In the high-performance I/O testing I perform regularly, I notice no slowdown in 2.6 compared to 2.4. Looking at your test tools, I would hardly call your workload anywhere near "realistic" in terms of its I/O patterns. A few suggestions / constructive comments:

(1) 100 processes each performing I/O in the pattern "write 8MB, fsync(), wait, read 8MB, wait, delete" probably isn't realistic.
(2) You don't mention whether you're performing testing on ext2 or ext3.
(3) You also don't mention what I/O scheduler is being used.
(4) Your benchmark doesn't measure "fairness" between processes.
(5) Your benchmark sleeps for a random amount of time at various points.

It is well known that in 2.4 kernels, processes can "hog" the I/O channel, which may result in higher overall throughput but to the detriment of being "fair" to the rest of the system. You should address point (4) above: can you modify your program to present the time taken per process? If I were a betting man, I'd say that 2.6 will be a lot more "fair" compared to 2.4. The default settings for 2.6 likely also mean that there is a lot less data outstanding in the buffer cache. 2.6's fsync() behavior is also quite different from that of 2.4, and note that if you're using a journalled filesystem, fsync() likely does different things...

You don't seed rand(), so the numbers out of rand() aren't actually random. It probably doesn't matter so much, since we're only talking microseconds here (up to 0.511 msec) - but given that 2.4 kernels will have HZ of 100 and 2.6 will have HZ of 1000, you're clearly going to get a different end result, perhaps with 2.6 resulting in a busy-wait from usleep().

cheers,
lincoln.