Re: Linux disk performance.
On 12/22/06, Manish Regmi <[EMAIL PROTECTED]> wrote:
> On 12/22/06, Bhanu Kalyan Chetlapalli <[EMAIL PROTECTED]> wrote:
> > I am assuming that your program is not seeking in between writes.
> >
> > Try disabling the disk cache. Nowadays some disks can have as much
> > as 8 MB of write cache, so the disk might be buffering as much as it
> > can and writing only when it can no longer buffer. Since you have an
> > app which continuously writes copious amounts of data, in order,
> > disabling the write cache might make some sense.
>
> Thanks for the suggestion, but the performance was terrible when the
> write cache was disabled.
>
> --
> regards
> Manish Regmi
> ---
> UNIX without a C Compiler is like eating Spaghetti with your mouth
> sewn shut. It just doesn't make sense.

Performance degradation is expected. But the point is: did the anomaly
that you pointed out go away? If it did, then it is the disk cache that
is causing the issue, and you will have to live with it. Otherwise you
will have to look elsewhere.

--
Bhanu
---
There is only one success - to be able to spend your life in your own way.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: Linux disk performance.
On 12/20/06, Manish Regmi <[EMAIL PROTECTED]> wrote:
> On 12/19/06, Nick Piggin <[EMAIL PROTECTED]> wrote:
> > When you submit a request to an empty block device queue, it can get
> > "plugged" for a number of timer ticks before any IO is actually
> > started. This is done for efficiency reasons and is independent of
> > the IO scheduler used.
>
> Thanks for the information.
>
> > Use the noop IO scheduler, as well as the attached patch, and let's
> > see what your numbers look like.
>
> Unfortunately I got the same results even after applying your patch.
> I also tried putting q->unplug_delay = 1; but it did not work. The
> result was similar.

I am assuming that your program is not seeking in between writes.

Try disabling the disk cache. Nowadays some disks can have as much as
8 MB of write cache, so the disk might be buffering as much as it can
and writing only when it can no longer buffer. Since you have an app
which continuously writes copious amounts of data, in order, disabling
the write cache might make some sense.

--
Bhanu
--
Kernelnewbies: Help each other learn about the Linux kernel.
Archive: http://mail.nl.linux.org/kernelnewbies/
FAQ: http://kernelnewbies.org/faq/
Re: opening linux char device file in user thread.
On 8/4/05, P.Manohar <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I have written a daemon running in user space which sends some data
> periodically to kernel space. This I have done with the help of a
> device file.
>
> It is working, but I want to apply a threading mechanism in that
> daemon. But when I split the daemon functionality into a thread and
> the original process, I am unable to open the device file. This
> happens in both places (either in the thread or the original process).

Try opening the device, get the FD, and THEN spawn the thread. This
will help, as the device is opened only once as far as the driver is
concerned. The presence of usage from the thread is felt only in the
reference count of the fd (which should be transparent to user space
and the device driver). Race conditions are assumed to be taken care of
in the kernel module, though.

The other way is to open the device, write the data, and close the
device every time you write something. This is beneficial if the writes
are separated by more than a minute. There will be no races etc. to
take care of.

> The device opens when threading is not there.
>
> Can anybody suggest something?
>
> Regards,
> P.Manohar.

Bhanu.

--
The difference between Theory and Practice is more so in Practice than in Theory.
Re: Whats in this vaddr segment 0xffffe000-0xfffff000 ---p ?
To the best of my knowledge, it is the vsyscall page. The way system
calls are implemented on the x86 arch has changed from using the 0x80
interrupt directly to using the vsyscall page. This gives better
throughput for frequently used system calls which do not affect kernel
state but merely retrieve information. A very good example is the
system call to retrieve the current time, which is used extensively,
especially during logging. Google for "vsyscall page" and you will get
more information.

> On a 64-bit machine (uname --all == 'Linux host 2.6.5-7.97.smp #1
> x86_64 x86_64 x86_64 GNU/Linux') running the same kernel, I try to
> write the contents of the virtual address to a file with
> (r = write(fd, 0xffffe000, 4096)). The write on this machine is
> successful. But if I try to write the same segment on a 32-bit
> machine (uname --all == Linux host 2.6.5-7.97-smp #1 i686 i686 i386
> GNU/Linux) ...

The location of the vsyscall page is different on 32-bit and 64-bit
machines, so 0xffffe000 is NOT the address you are looking for while
dealing with the 64-bit machine. Rather, 0xffffffffff600000 is the
correct location (on x86-64).

Regards,
Bhanu.

On 7/22/05, vamsi krishna <[EMAIL PROTECTED]> wrote:
> Hello All,
>
> Sorry to interrupt you.
>
> I have been facing a weird problem on the same kernel version
> (2.6.5-7.97.smp) running on different machines, 32-bit and 64-bit
> (which can run 32-bit also).
>
> I found that every process running on this kernel version has a
> virtual address mapping in /proc/<pid>/maps as follows:
>
> ffffe000-fffff000 ---p 00000000 00:00 0
>
> You can find this vaddr mapping at the end of the maps file.
>
> On the 64-bit machine running the same kernel, I try to write the
> contents of the virtual address to a file with
> (r = write(fd, 0xffffe000, 4096)). The write on this machine is
> successful. But if I try to write the same segment on the 32-bit
> machine, the write fails with EFAULT (14). Yet a memcpy from this
> virtual address seems to work fine, i.e. if I do
> memcpy(buf1, 0xffffe000, 4096) it copies the contents of this
> virtual address segment into buf1 perfectly.
>
> I had a hard time googling about this; I couldn't find any
> information on why this happens. Maybe some mm hackers may share
> some of their thoughts.
>
> Really appreciate your inputs on this.
>
> Sincerely,
> Vamsi kundeti
>
> PS: BTW I'm running a SUSE distribution; will glibc have any effect
> on the write behaviour? (I thought that since write is a syscall the
> issue must be with the kernel, thus skipping the glibc details.)