huge improvement with per-device dirty throttling
I tested 2.6.23-rc2-mm + Peter's per-BDI v9 patches versus 2.6.20 as shipped in Ubuntu 7.04. I realize there is a large delta between these two kernels. I load the system with du -hs /bigtree, where bigtree has millions of files, and with dd if=/dev/zero of=bigfile bs=1048576. I then measure how long it takes to ls /*, to launch gnome-terminal, and to launch firefox. 2.6.23-rc2-mm+bdi is better than 2.6.20 by a factor of 50x to 100x.

1. sleep 60 && time ls -l /var /home /usr /lib /etc /boot /root /tmp
   2.6.20: 53s, 57s
   2.6.23: .652s, .870s, .819s
   improvement: ~70x

2. sleep 60 && time gnome-terminal
   2.6.20: 1m50s, 1m50s
   2.6.23: 3s, 2s, 2s
   improvement: ~40x

3. sleep 60 && time firefox
   2.6.20: >30m
   2.6.23: 30s, 32s, 37s
   improvement: +inf

Yes, you read that correctly. In the presence of a sustained writer and a competing reader, it takes more than 30 minutes to start firefox.

4. du -hs /bigtree

Under 2.6.20, lstat64 has a mean latency of 75ms in the presence of a sustained writer. Under 2.6.23-rc2-mm+bdi, the mean latency of lstat64 is only 5ms (a 15x improvement). The worst-case latency I observed was more than 2.9 seconds for a single lstat64 call.

Here's the stem plot of lstat64 latency under 2.6.20:

  The decimal point is 1 digit(s) to the left of the |

   0 | +1737
   2 | 17789122334455678889
   4 | 022334477888+69
   6 | 0111222334557778999344677788899
   8 | 0123484589
  10 | 020045
  12 | 1448239
  14 | 1
  16 | 5
  18 | 399
  20 | 32
  22 | 80
  24 |
  26 | 2
  28 | 1

Here's the same plot for 2.6.23-rc2-mm+bdi. Note the scale:

  The decimal point is 1 digit(s) to the left of the |

   0 | +2243
   1 | 15567778899
   2 | 0011122257
   3 | 237
   4 | 3
   5 |
   6 |
   7 | 3
   8 | 45
   9 |
  10 |
  11 |
  12 |
  13 | 9

In other words, under 2.6.20, only writing processes make progress. Readers never make progress.

5. dd writeout speed
   2.6.20: 36.3MB/s, 35.3MB/s, 33.9MB/s
   2.6.23: 20.9MB/s, 22.2MB/s

2.6.23 is slower when writing out because other processes make progress.

My system is a Core 2 Duo, 2GB, single SATA disk.

-jwb
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
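The per-call lstat64 latency measurement in test 4 above can be sketched in C. This is a hypothetical reconstruction, not the poster's tool (which was not posted); the path and sample count are placeholders.

```c
/* Hypothetical sketch of the lstat-latency probe from test 4 above.
 * Times individual lstat() calls with gettimeofday(), the same
 * per-syscall latency the stem plots summarize. */
#include <sys/stat.h>
#include <sys/time.h>

/* Wall-clock microseconds consumed by a single lstat() of `path`. */
static long lstat_usec(const char *path)
{
    struct timeval a, b;
    struct stat st;
    gettimeofday(&a, 0);
    lstat(path, &st);
    gettimeofday(&b, 0);
    return (b.tv_sec - a.tv_sec) * 1000000L + (b.tv_usec - a.tv_usec);
}

/* Mean lstat latency in microseconds over `n` repeated calls. */
static double mean_lstat_usec(const char *path, int n)
{
    long total = 0;
    for (int i = 0; i < n; i++)
        total += lstat_usec(path);
    return (double)total / n;
}
```

Run against a large tree while a dd writer is dirtying pages, the mean and worst-case values of this probe reproduce the kind of numbers reported above.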
RE: Trouble Booting Linux PPC 2000 On Mac G4
On Fri, 6 Jul 2001, Tim McDaniel wrote:

> I think what we are seeing is XBoot rather than yaboot and we tried just
> about all conceivable "kernel options", as exposed by Xboot. When Xboot
> comes up it shows a ramdisk_size=8192 as the only default parameter.
> Rapidly growing to hate the non-intuitive nature of the MAC OS we are
> not experts on the Mac OS. P.S. We are running Mac OS 9.1.
>
> Oops, we just discovered that Xboot does not work with MacOS 9.1 (geez)
> you MUST use yaboot.
>
> We will try the Q4 release.

I endorse Debian PPC. LinuxPPC is a quadruple hack that has never worked properly on the three Macs I tried to inflict it upon.

-jwb
RE: Trouble Booting Linux PPC On Mac G4 2000
On Fri, 6 Jul 2001, Tim McDaniel wrote:

> We are having a great degree of difficulty getting Linux PPC2
> running on a Mac G4 466 tower with 128MB of memory, one 30MB HD and one
> CD-RW. This is not a NuBus-based system. To the best of our knowledge we
> have followed the user manual to a tee, and even tried forcing video
> settings at the Xboot screen.
>
> But still, when we encounter the screen where you must choose between
> MacOS and Linux, and we choose Linux, the screen goes black and for all
> practical purposes the box appears to be locked. We've also tried
> editing yaboot.conf but can't seem to save the new file.
>
> Any help would be greatly appreciated.

Add "video=ofonly" to your boot command line. That is, at the yaboot "boot: " prompt, type "linux video=ofonly".

If that doesn't work, try something else :)

-jwb
Re: >128 MB RAM stability problems (again)
On 4 Jul 2001, Ronald Bultje wrote:

> Hi,
>
> you might remember an e-mail from me (two weeks ago) with my problems
> where linux would not boot up or be highly instable on a machine with
> 256 MB RAM, while it was 100% stable with 128 MB RAM. Basically, I still
> have this problem, so I am running with 128 MB RAM again.

I suggest you look into the memory settings in your BIOS, and change them to the most conservative available. Or throw out your memory and buy some from a reputable manufacturer. Your problem is definitely hardware. There are racks full of Linux machines with more than 128 MB RAM running kernel 2.4 all over the world. I personally installed a dozen. It always works fine.

-jwb
Re: How to change DVD-ROM speed?
On Wed, 27 Jun 2001, Jens Axboe wrote:

> On Wed, Jun 27 2001, Jeffrey W. Baker wrote:
> >
> > I will be happy to :) Should I hang conditional code off the existing
> > ioctl (CDROM_SELECT_SPEED, ide_cdrom_select_speed) or use a new one?
>
> Excellent. I'd say use the same ioctl if you can, but default to using
> SET_STREAMING for DVD drives.

Hrmm, ah, hrmm. Perhaps I need a little help with this one :)

Just for testing, I modified cdrom_select_speed in ide-cd.c to use SET STREAMING. Working from the Fuji spec, I created a 28-byte buffer, set the starting lba to 0, the ending lba to 0x, the read speed to 0x00ff, and the read time to 0x00ff, expecting a resulting speed of 1KB/ms or 1000KB/s[1]. I assign the buffer to pc.buffer and send it on its way to cdrom_queue_packet_command(). The result is:

  CDROM_SELECT_SPEED failed: Input/output error
  hdc: status timeout: status 0x80 { Busy }
  hdc: DMA disabled
  hdc: ATAPI reset complete

Am I barking up the wrong tree? Do I need to use a different function, or a generic command instead of a packet command?

Regards,
Jeffrey

[1] Interesting that there appears to be enough room in the spec for a drive transferring 2^32 * 1000 KB/s.
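For reference, the 28-byte buffer described above corresponds to the SET STREAMING performance descriptor as laid out in the Mt. Fuji / MMC drafts of the era: bytes 4-7 start LBA, 8-11 end LBA, 12-15 read size (KB), 16-19 read time (ms), 20-23 write size, 24-27 write time, all big-endian. A sketch of filling it in user space follows; note the end-LBA value was garbled in the archived post, so the 0xffffffff used in the usage example below is a placeholder, not the poster's value.

```c
/* Sketch of building the Mt. Fuji SET STREAMING performance descriptor.
 * Field offsets are from the MMC drafts contemporaneous with this thread;
 * treat them as an assumption, not a quote from the post. */
#include <stdint.h>
#include <string.h>

/* Store a 32-bit value big-endian, as ATAPI packet data requires. */
static void put_be32(unsigned char *p, uint32_t v)
{
    p[0] = v >> 24;
    p[1] = v >> 16;
    p[2] = v >> 8;
    p[3] = v;
}

/* Fill the 28-byte descriptor.  read_kb KB per read_ms ms sets the
 * requested read rate; 0x00ff/0x00ff yields 1 KB/ms = 1000 KB/s,
 * matching the expectation stated above. */
static void build_streaming_descriptor(unsigned char buf[28],
                                       uint32_t start_lba, uint32_t end_lba,
                                       uint32_t read_kb, uint32_t read_ms)
{
    memset(buf, 0, 28);           /* byte 0 flags (WRC/RDD/Exact/RA) left 0 */
    put_be32(buf + 4,  start_lba);
    put_be32(buf + 8,  end_lba);
    put_be32(buf + 12, read_kb);  /* read size, KB */
    put_be32(buf + 16, read_ms);  /* read time, ms */
    /* bytes 20-27: write size/time, left zero here */
}
```

Usage matching the post's parameters (end LBA hypothetical): build_streaming_descriptor(buf, 0, 0xffffffff, 0x00ff, 0x00ff).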
Re: How to change DVD-ROM speed?
On Wed, 27 Jun 2001, Jesse Pollard wrote:

> As long as it still works for the combo drives - CD/CD-RW/DVD
> Sony VIAO high end laptops, Toshiba has one, maybe others by now.

OK, when I send the patch I'll assume you will test it :)
Re: How to change DVD-ROM speed?
On Wed, 27 Jun 2001, Jens Axboe wrote:

> On Wed, Jun 27 2001, Jeffrey W. Baker wrote:
> > I am trying to change the spin rate of my IDE DVD-ROM drive. My system is
> > an Apple PowerBook G4, and I am using kernel 2.4. I want the drive to
> > spin at 1X when I watch movies. Currently, it spins at its highest speed,
> > which is very loud and a large power load.
> >
> > /proc/sys/dev/cdrom/info indicates that the speed of the drive can be
> > changed. I use hdparm -E 1 /dev/dvd to attempt to set the speed, and it
> > reports success. However, the drive continues to spin at its highest
> > speed.
>
> Linux still uses the old-style SET_SPEED command, which is probably not
> supported correctly by your newer drive. Just checking, I see latest Mt
> Fuji only lists it for CD-RW. For DVD, we're supposed to do
> SET_STREAMING to specify such requirements.
>
> Feel free to implement it :-)

I will be happy to :) Should I hang conditional code off the existing ioctl (CDROM_SELECT_SPEED, ide_cdrom_select_speed) or use a new one?

Best,
Jeffrey
How to change DVD-ROM speed?
I am trying to change the spin rate of my IDE DVD-ROM drive. My system is an Apple PowerBook G4, and I am using kernel 2.4. I want the drive to spin at 1X when I watch movies. Currently, it spins at its highest speed, which is very loud and a large power load.

/proc/sys/dev/cdrom/info indicates that the speed of the drive can be changed. I use hdparm -E 1 /dev/dvd to attempt to set the speed, and it reports success. However, the drive continues to spin at its highest speed.

Is this ioctl supposed to work on DVD drives? PPC? In MacOS, the DVD spins quietly when watching movies.

Regards,
Jeffrey
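What hdparm -E does here reduces to a single CDROM_SELECT_SPEED ioctl on the drive's device node; a minimal sketch, assuming a Linux build environment with linux/cdrom.h (the device path is caller-supplied):

```c
/* Minimal sketch of the CDROM_SELECT_SPEED path that `hdparm -E <n>`
 * exercises.  Error handling is reduced to 0/-1 for brevity. */
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/cdrom.h>

/* Ask the drive at `dev` (e.g. /dev/dvd) to run at `speed` X.
 * Returns 0 on success, -1 if the device can't be opened or the
 * drive rejects or ignores the command at the ioctl level. */
static int set_cdrom_speed(const char *dev, int speed)
{
    int fd = open(dev, O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return -1;
    int ret = ioctl(fd, CDROM_SELECT_SPEED, speed);
    close(fd);
    return ret < 0 ? -1 : 0;
}
```

As the thread goes on to establish, the ioctl can succeed at this level while the drive, which expects SET STREAMING rather than the old SET SPEED packet, simply ignores the request.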
Re: VM Requirement Document - v0.0
On Wed, 27 Jun 2001, Stefan Hoffmeister wrote:

> : On Tue, 26 Jun 2001 18:42:56 -0300 (BRST), Rik van Riel wrote:
>
> > On Tue, 26 Jun 2001, John Stoffel wrote:
> >
> > > Or that we're doing big sequential reads of file(s) which are
> > > larger than memory, in which case expanding the cache size buys
> > > us nothing, and can actually hurt us a lot.
> >
> > That's a big "OR". I think we should have an algorithm to
> > see which of these two is the case, otherwise we're just
> > making the wrong decision half of the time.
>
> Windows NT/2000 has flags that can be set for each CreateFile operation
> ("open" in Unix terms), for instance:
>
>   FILE_ATTRIBUTE_TEMPORARY
>   FILE_FLAG_WRITE_THROUGH
>   FILE_FLAG_NO_BUFFERING
>   FILE_FLAG_RANDOM_ACCESS
>   FILE_FLAG_SEQUENTIAL_SCAN
>
> If Linux does not have a mechanism that would allow the signalling of a
> specific use case, it might be helpful to implement such a hinting system?

These flags would be really handy. We already have the raw device for sequential reading of e.g. CDROM and DVD devices.

-jwb
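The per-open hinting discussed above does have a POSIX analogue on the read side: posix_fadvise() (standardized in POSIX.1-2001 and implemented by Linux well after this thread) lets a process declare sequential, random, or won't-need access patterns on an open file descriptor. A minimal sketch:

```c
/* Sketch of POSIX per-descriptor access-pattern hinting, the closest
 * standard counterpart to the CreateFile flags quoted above. */
#define _POSIX_C_SOURCE 200112L
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hint that `fd` will be read front to back, so the kernel may read
 * ahead aggressively and evict already-consumed pages sooner.
 * Returns 0 on success, an errno value on failure. */
static int hint_sequential(int fd)
{
    return posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
}
```

FILE_FLAG_RANDOM_ACCESS and FILE_ATTRIBUTE_TEMPORARY map similarly onto POSIX_FADV_RANDOM and POSIX_FADV_DONTNEED; the write-through and no-buffering flags correspond more closely to O_SYNC and O_DIRECT at open() time.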
Re: Break 2.4 VM in five easy steps
On 6 Jun 2001, Eric W. Biederman wrote:

> "Jeffrey W. Baker" <[EMAIL PROTECTED]> writes:
>
> > On Tue, 5 Jun 2001, Derek Glidden wrote:
> > >
> > > After reading the messages to this list for the last couple of weeks and
> > > playing around on my machine, I'm convinced that the VM system in 2.4 is
> > > still severely broken.
> > >
> > > This isn't trying to test extreme low-memory pressure, just how the
> > > system handles recovering from going somewhat into swap, which is a real
> > > day-to-day problem for me, because I often run a couple of apps that
> > > most of the time live in RAM, but during heavy computation runs, can go
> > > a couple hundred megs into swap for a few minutes at a time. Whenever
> > > that happens, my machine always starts acting up afterwards, so I
> > > started investigating and found some really strange stuff going on.
> >
> > I reboot each of my machines every week, to take them offline for
> > intrusion detection. I use 2.4 because I need advanced features of
> > iptables that ipchains lacks. Because the 2.4 VM is so broken, and
> > because my machines are frequently deeply swapped, they can sometimes take
> > over 30 minutes to shutdown. They hang of course when the shutdown rc
> > script turns off the swap. The first few times this happened I assumed
> > they were dead.
>
> Interesting. Is it constant disk I/O? Or constant CPU utilization.
> In any case you should be able to comment that line out of your shutdown
> rc script and be in perfectly good shape.

Well, I can't exactly run top(1) at shutdown time, but the disks aren't running at all. Either the system is using the CPUs, or it is blocked waiting for something to happen. You're right about swapoff; we removed it from our shutdown script.
Re: Break 2.4 VM in five easy steps
On Wed, 6 Jun 2001, Andrew Morton wrote:

> "Jeffrey W. Baker" wrote:
> >
> > Because the 2.4 VM is so broken, and
> > because my machines are frequently deeply swapped,
>
> The swapoff algorithms in 2.2 and 2.4 are basically identical.
> The problem *appears* worse in 2.4 because it uses lots
> more swap.
>
> > they can sometimes take over 30 minutes to shutdown.
>
> Yes. The sys_swapoff() system call can take many minutes
> of CPU time. It basically does:
>
>   for (each page in swap device) {
>     for (each process) {
>       for (each page used by this process)
>         stuff

Sure, and at shutdown time when swapoff is called, there is only one process, init, which isn't swapped out anymore. So this should run like lightning. Repeat: something is horribly wrong with the VM's management of pages, lists, swap, cache, etc.

-jwb
Re: Break 2.4 VM in five easy steps
On Tue, 5 Jun 2001, Derek Glidden wrote:

> After reading the messages to this list for the last couple of weeks and
> playing around on my machine, I'm convinced that the VM system in 2.4 is
> still severely broken.
>
> This isn't trying to test extreme low-memory pressure, just how the
> system handles recovering from going somewhat into swap, which is a real
> day-to-day problem for me, because I often run a couple of apps that
> most of the time live in RAM, but during heavy computation runs, can go
> a couple hundred megs into swap for a few minutes at a time. Whenever
> that happens, my machine always starts acting up afterwards, so I
> started investigating and found some really strange stuff going on.

I reboot each of my machines every week, to take them offline for intrusion detection. I use 2.4 because I need advanced features of iptables that ipchains lacks. Because the 2.4 VM is so broken, and because my machines are frequently deeply swapped, they can sometimes take over 30 minutes to shut down. They hang, of course, when the shutdown rc script turns off the swap. The first few times this happened I assumed they were dead.

So, unlike what certain people like to repeatedly claim, the 2.4 VM problems are causing havoc in the real world.

-jwb
Re: bdflush/mm performance drop-out defect (more info)
On Tue, 22 May 2001, null wrote:

> Here is some additional info about the 2.4 performance defect.
>
> Only one person offered a suggestion about the use of HIGHMEM. I tried
> with and without HIGHMEM enabled, with the same results. However, it does
> appear to take a bit longer to reach the performance drop-out condition
> when HIGHMEM is disabled.
>
> The same system degradation also appears when using partitions on a single
> internal SCSI drive, but seems to happen only when performing the I/O in
> parallel processes. It appears that the load must be sustained long
> enough to affect some buffer cache behavior. Parallel dd commands
> (if=/dev/zero) also reveal the problem. I still need to do some
> benchmarks, but it looks like 2.4 kernels achieve roughly 25% (or less?)
> of the throughput of the 2.2 kernels under heavy parallel loads (on
> identical hardware). I've also confirmed the defect on a dual-processor
> Xeon system with 2.4. The defect exists whether drivers are built-in or
> compiled as modules, altho the parallel mkfs test duration improves by as
> much as 50% in some cases when using a kernel with built-in SCSI drivers.

That's a very interesting observation. May I suggest that the problem may be the driver for your SCSI device?

I just ran some tests of parallel I/O on a 2-CPU Intel Pentium III 800 MHz with 2GB main memory, on a single Seagate Barracuda ST336704LWV attached to an AIC7896. The system controller is Intel 440GX. The kernel is 2.4.3-ac7:

  jwb@windmill:~$ for i in 1 2 3 4 5 6 7 8 9 10; do time dd if=/dev/zero of=/tmp/$i bs=4096 count=25600 & done;

This spawns 10 writers of 100MB files on the same filesystem. While all this went on, the system was responsive, and vmstat showed a steady block write of at least 2 blocks/second. Meanwhile this machine also has constantly used mysql and postgresql database systems and a few interactive users. The test completed in 19 seconds and 24 seconds on separate runs.

I also performed this test on a machine with 2 Intel Pentium III 933 MHz CPUs, 512MB main memory, an Intel 840 system controller, and a Quantum 10K II 9GB drive attached to an Adaptec 7899P controller, using kernel 2.4.4-ac8. I had no problems there either, and the test completed in 30 seconds (with a nearly full disk). I also didn't see this problem on an Apple PowerBook G4, nor on another Intel machine with a DAC960 RAID.

In short, I'm not seeing this problem.

Regards,
Jeffrey Baker
Seems to be a lot of confusion about aic7xxx in linux 2.4.3
I've been seeing a lot of complaints about aic7xxx in the 2.4.3 kernel. I think that people are missing the crucial point: aic7xxx won't compile if you patch up from 2.4.2, but if you download the complete 2.4.3 tarball, it compiles fine. So, I conclude that the patch was created incorrectly, or that something changed between cutting the patch and the tarball.

-jwb
[PATCH] 2.4.3-pre3 add PBG4 native LCD mode to modedb.c
The attached patch adds a new mode to the modedb, used by the ATI, 3Dfx, and Amiga frame buffer devices. The new mode is the native, slightly wide resolution of the new Apple laptops. It isn't obvious how popular a mode has to be before it goes into modedb.c.

-jwb

--- drivers/video/modedb.c.orig	Tue Mar 13 04:20:43 2001
+++ drivers/video/modedb.c	Tue Mar 13 04:16:20 2001
@@ -91,6 +91,10 @@
 	NULL, 60, 1024, 768, 15384, 168, 8, 29, 3, 144, 6,
 	0, FB_VMODE_NONINTERLACED
     }, {
+	/* 1152x768 @ 55 Hz, 44.154 kHz hsync, PowerBook G4 LCD */
+	NULL, 55, 1152, 768, 15386, 126, 58, 29, 3, 136, 6,
+	0, FB_VMODE_NONINTERLACED
+    }, {
 	/* 640x480 @ 100 Hz, 53.01 kHz hsync */
 	NULL, 100, 640, 480, 21834, 96, 32, 36, 8, 96, 6,
 	0, FB_VMODE_NONINTERLACED
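As a sanity check on the numbers in the new entry (not part of the patch), the hsync and refresh rates can be recomputed from the mode fields. This assumes the standard modedb interpretation: pixclock is picoseconds per pixel, htotal is xres plus the right margin, left margin, and hsync length, and vtotal is yres plus the lower margin, upper margin, and vsync length.

```shell
#!/bin/sh
# Recompute sync rates for the 1152x768 PowerBook G4 mode above.
pixclock_ps=15386
htotal=$((1152 + 126 + 58 + 136))   # xres + right + left + hsync_len = 1472
vtotal=$((768 + 29 + 3 + 6))        # yres + lower + upper + vsync_len = 806

# hsync (kHz) = pixel clock in Hz / htotal / 1000
hsync_khz=$(awk -v pc=$pixclock_ps -v ht=$htotal \
    'BEGIN { printf "%.3f", 1.0e12 / pc / ht / 1000 }')
# refresh (Hz) = hsync / vtotal
refresh_hz=$(awk -v pc=$pixclock_ps -v ht=$htotal -v vt=$vtotal \
    'BEGIN { printf "%.1f", 1.0e12 / pc / ht / vt }')

echo "hsync ${hsync_khz} kHz, refresh ${refresh_hz} Hz"
```

This prints an hsync of 44.154 kHz, matching the comment in the entry, and a refresh of 54.8 Hz, which rounds to the advertised 55 Hz.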
Re: 2.4.0-pre9: mga drm still not working
On Thu, 5 Oct 2000, Michael Meding wrote:

> Hi there,
>
> just a side note. It is recommended that you use the mga.o from the dri
> tree anyway... not the one from the kernel tree. That won't help much
> about the underlying problem with the loading order, but since there is
> no way to compile the mga.o in from the dri tree the problem itself
> vanishes ;-)

I downloaded the DRI CVS tree this morning and tried to build it, but unfortunately it failed in numerous ways. Starting here:

as -o 3dnow_xform_masked4.o 3dnow_xform_masked4.i
Assembler messages:
Error: Can't open 3dnow_xform_masked4.i for reading.
3dnow_xform_masked4.i: No such file or directory
make[6]: *** [3dnow_xform_masked4.o] Error 1

Maybe I'll try building without the 3DNow business. Do you think that will help?

-jwb