Re: Linux Disk Performance/File IO per process
It depends on what the performance hit is after coding. If the code adds, say, less than 5% overhead, I honestly don't see a problem with compiling it into the kernel and keeping it active all the time. Only people who need it would compile it in, and from experience, 5% or less is negligible for the systems that would be keeping this data, considering the functionality and statistics gained.

Steve

----- Original Message -----
From: "James Sutherland" <[EMAIL PROTECTED]>
To: "List User" <[EMAIL PROTECTED]>
Cc: "Chris Evans" <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, January 29, 2001 20:18
Subject: Re: Linux Disk Performance/File IO per process

> On Mon, 29 Jan 2001, List User wrote:
>
> > Just wanted to chime in here. Yes, this would be noisy and will have
> > an effect on system performance; however, these statistics are what
> > are used, in conjunction with several others, to size systems and to
> > plan for growth. If Linux is to be put into an enterprise
> > environment, these types of statistics will be needed.
> >
> > When you start hooking up hundreds of 'physical volumes' (be they
> > real disks or RAIDed logical drives), this data helps you pinpoint
> > problems. I think the ability to turn such accounting on/off via a
> > /proc entry is a very nice way of doing things.
>
> Question: how will the extra overhead of checking this configuration
> compare with just doing it anyway?
>
> If the code ends up as:
>
>     if (stats_enabled)
>             counter++;
>
> then you'd be better off keeping stats enabled all the time...
>
> Obviously it'll be a bit more complex, but will the stats code be able
> to remove itself completely when disabled, even at runtime?
>
> Might be possible with IBM's dprobes, perhaps...?
>
> > That way you can leave it off during normal run time, but when users
> > or DBAs complain, you can turn it on, gather stats for a couple of
> > hours or days, then turn it back off and plan an upgrade or re-create
> > a logical volume or striping set.
>
> NT allows boot-time (en|dis)abling of stats; they quote a percentage
> for the performance hit caused - 4%, or something like that? Of course,
> they don't say whether that's a 486 on a RAID array or a quad Xeon on
> IDE, so the accuracy of that figure is a bit questionable...
>
> James.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
Please read the FAQ at http://www.tux.org/lkml/
Re: Linux Disk Performance/File IO per process
On Mon, 29 Jan 2001, List User wrote:

> Just wanted to chime in here. Yes, this would be noisy and will have an
> effect on system performance; however, these statistics are what are
> used, in conjunction with several others, to size systems and to plan
> for growth. If Linux is to be put into an enterprise environment, these
> types of statistics will be needed.
>
> When you start hooking up hundreds of 'physical volumes' (be they real
> disks or RAIDed logical drives), this data helps you pinpoint problems.
> I think the ability to turn such accounting on/off via a /proc entry is
> a very nice way of doing things.

Question: how will the extra overhead of checking this configuration compare with just doing it anyway?

If the code ends up as:

    if (stats_enabled)
            counter++;

then you'd be better off keeping stats enabled all the time...

Obviously it'll be a bit more complex, but will the stats code be able to remove itself completely when disabled, even at runtime?

Might be possible with IBM's dprobes, perhaps...?

> That way you can leave it off during normal run time, but when users or
> DBAs complain, you can turn it on, gather stats for a couple of hours
> or days, then turn it back off and plan an upgrade or re-create a
> logical volume or striping set.

NT allows boot-time (en|dis)abling of stats; they quote a percentage for the performance hit caused - 4%, or something like that? Of course, they don't say whether that's a 486 on a RAID array or a quad Xeon on IDE, so the accuracy of that figure is a bit questionable...

James.
Re: Linux Disk Performance/File IO per process
Just wanted to chime in here. Yes, this would be noisy and will have an effect on system performance; however, these statistics are what are used, in conjunction with several others, to size systems and to plan for growth. If Linux is to be put into an enterprise environment, these types of statistics will be needed.

When you start hooking up hundreds of 'physical volumes' (be they real disks or RAIDed logical drives), this data helps you pinpoint problems. I think the ability to turn such accounting on/off via a /proc entry is a very nice way of doing things.

That way you can leave it off during normal run time, but when users or DBAs complain, you can turn it on, gather stats for a couple of hours or days, then turn it back off and plan an upgrade or re-create a logical volume or striping set.

Steve

----- Original Message -----
From: "Chris Evans" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>; <[EMAIL PROTECTED]>
Sent: Monday, January 29, 2001 07:26
Subject: RE: Linux Disk Performance/File IO per process

> On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
>
> > Thanks to both Jens and Chris - this provides the information I need
> > to obtain our busy rate. It's unfortunate that the kernel needs to be
> > patched to provide this information - hopefully it will become part
> > of the kernel soon.
> >
> > I had a response saying that this shouldn't become part of the kernel
> > due to the performance cost that obtaining such data will involve. I
> > agree that a cost is involved here, however I think it's up to the
> > user to decide which cost is more expensive to them - getting the
> > data, or not being able to see how busy their disks are. My feeling
> > here is that this support could be user-configurable at run time -
> > e.g. 'echo 1 > /proc/getdiskperf'.
>
> Hi,
>
> I disagree with this runtime variable. It is unnecessary complexity.
> Maintaining a few counters is total noise compared with the time I/O
> takes.
>
> Cheers
> Chris
[SLUG] Re: Linux Disk Performance/File IO per process
On Mon, 29 Jan 2001, Szabolcs Szakacsits wrote:

> On Mon, 29 Jan 2001, Chris Evans wrote:
>
> > Stephen Tweedie has a rather funky I/O stats enhancement patch which
> > should provide what you need. It comes with Red Hat 7.0 and gives
> > decent disk statistics in /proc/partitions.
>
> Monitoring via /proc [not just I/O, but close to anything] has these
> features:
> - slow, not atomic, not scalable
> - if the kernel decides, explicitly or due to a "bug", to refuse doing
>   I/O, you get something like this [even using an mlocked, RT monitor]:
>
>     procs            memory              swap          io     system       cpu
>   r b w  swpd  free buff  cache   si   so   bi    bo   in    cs  us sy id
>   0 1 1 27116  1048  736 152832  128 1972 2544   869   44  1812   2 43 55
>   5 0 2 27768  1048  744 153372   52 1308 2668   777   43  1772   2 61 37
>   0 2 1 28360  1048  752 153900  332  564 2311   955   49  2081   1 68 31
>   [frozen]
>   1 7 2 28356  1048  752 153708 3936    0 2175 29091  494 27348   0  1 99
>   1 0 2 28356  1048  792 153656  172    0 7166     0  144   838   4 17 80
>
> In short, monitoring via /proc is unreliable.

Not really unreliable, but definitely with _serious_ latency issues :) due to taking the mmap_sem. Acquiring the mmap_sem semaphore can take a really long time under load... and sys_brk downs this semaphore first thing, as do task_mem() and proc_pid_stat()... If someone has the mmap_sem you want, and is pushing disk I/O when that disk is saturated, you are in for a long wait. This, I think, is what you see with your mlocked RT monitor (pretty similar to my mlocked RT monitor, I suspect).

In fact, that darn monitor can have a decidedly negative impact on system performance, because it can take an arbitrary task's mmap_sem and then fault while throttling it... I think ;-)

-Mike

--
SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/
More Info: http://slug.org.au/lists/listinfo/slug
Re: Linux Disk Performance/File IO per process
On Mon, 29 Jan 2001, Chris Evans wrote:

> Stephen Tweedie has a rather funky I/O stats enhancement patch which
> should provide what you need. It comes with Red Hat 7.0 and gives
> decent disk statistics in /proc/partitions.

Monitoring via /proc [not just I/O, but close to anything] has these features:

- slow, not atomic, not scalable
- if the kernel decides, explicitly or due to a "bug", to refuse doing
  I/O, you get something like this [even using an mlocked, RT monitor]:

    procs            memory              swap          io     system       cpu
  r b w  swpd  free buff  cache   si   so   bi    bo   in    cs  us sy id
  0 1 1 27116  1048  736 152832  128 1972 2544   869   44  1812   2 43 55
  5 0 2 27768  1048  744 153372   52 1308 2668   777   43  1772   2 61 37
  0 2 1 28360  1048  752 153900  332  564 2311   955   49  2081   1 68 31
  [frozen]
  1 7 2 28356  1048  752 153708 3936    0 2175 29091  494 27348   0  1 99
  1 0 2 28356  1048  792 153656  172    0 7166     0  144   838   4 17 80

In short, monitoring via /proc is unreliable.

Szaka
[SLUG] RE: Linux Disk Performance/File IO per process
On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:

> Thanks to both Jens and Chris - this provides the information I need to
> obtain our busy rate. It's unfortunate that the kernel needs to be
> patched to provide this information - hopefully it will become part of
> the kernel soon.
>
> I had a response saying that this shouldn't become part of the kernel
> due to the performance cost that obtaining such data will involve. I
> agree that a cost is involved here, however I think it's up to the user
> to decide which cost is more expensive to them - getting the data, or
> not being able to see how busy their disks are. My feeling here is that
> this support could be user-configurable at run time - e.g.
> 'echo 1 > /proc/getdiskperf'.

Hi,

I disagree with this runtime variable. It is unnecessary complexity. Maintaining a few counters is total noise compared with the time I/O takes.

Cheers
Chris
[SLUG] RE: Linux Disk Performance/File IO per process
> -----Original Message-----
> From: Chris Evans [mailto:[EMAIL PROTECTED]]
> Sent: Monday, 29 January 2001 13:04
> To: Tony Young
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Linux Disk Performance/File IO per process
>
> On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
>
> > All,
> >
> > I work for a company that develops a systems and performance
> > management product for Unix (as well as PC and Tandem) called
> > PROGNOSIS. Currently we support AIX, HP-UX, Solaris, UnixWare, IRIX,
> > and Linux.
> >
> > I've hit a bit of a wall trying to expand the data provided by our
> > Linux solution - I can't seem to find anywhere that provides the
> > metrics needed to calculate disk busy in the kernel! This is a major
> > piece of information that any mission-critical system administrator
> > needs to successfully monitor their systems.
>
> Stephen Tweedie has a rather funky I/O stats enhancement patch which
> should provide what you need. It comes with Red Hat 7.0 and gives
> decent disk statistics in /proc/partitions.
>
> Unfortunately this patch is not yet in the 2.2 or 2.4 kernel. I'd like
> to see it make the kernel as a 2.4.x item. Failing that, it'll probably
> make the 2.5 kernel.
>
> Cheers
> Chris

Thanks to both Jens and Chris - this provides the information I need to obtain our busy rate. It's unfortunate that the kernel needs to be patched to provide this information - hopefully it will become part of the kernel soon.

I had a response saying that this shouldn't become part of the kernel due to the performance cost that obtaining such data will involve. I agree that a cost is involved here, however I think it's up to the user to decide which cost is more expensive to them - getting the data, or not being able to see how busy their disks are. My feeling here is that this support could be user-configurable at run time - e.g. 'echo 1 > /proc/getdiskperf'.

Thanks for your quick responses.

Tony...
Re: Linux Disk Performance/File IO per process
On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:

> All,
>
> I work for a company that develops a systems and performance management
> product for Unix (as well as PC and Tandem) called PROGNOSIS. Currently
> we support AIX, HP-UX, Solaris, UnixWare, IRIX, and Linux.
>
> I've hit a bit of a wall trying to expand the data provided by our
> Linux solution - I can't seem to find anywhere that provides the
> metrics needed to calculate disk busy in the kernel! This is a major
> piece of information that any mission-critical system administrator
> needs to successfully monitor their systems.

Stephen Tweedie has a rather funky I/O stats enhancement patch which should provide what you need. It comes with Red Hat 7.0 and gives decent disk statistics in /proc/partitions.

Unfortunately this patch is not yet in the 2.2 or 2.4 kernel. I'd like to see it make the kernel as a 2.4.x item. Failing that, it'll probably make the 2.5 kernel.

Cheers
Chris
[SLUG] Re: Linux Disk Performance/File IO per process
On Mon, Jan 29 2001, [EMAIL PROTECTED] wrote:

> All,
>
> I work for a company that develops a systems and performance management
> product for Unix (as well as PC and Tandem) called PROGNOSIS. Currently
> we support AIX, HP-UX, Solaris, UnixWare, IRIX, and Linux.
>
> I've hit a bit of a wall trying to expand the data provided by our
> Linux solution - I can't seem to find anywhere that provides the
> metrics needed to calculate disk busy in the kernel! This is a major
> piece of information that any mission-critical system administrator
> needs to successfully monitor their systems.

The stock kernel doesn't provide this either, but at least with Stephen's sard patches you can get system-wide I/O metrics:

ftp.linux.org.uk/pub/linux/sct/fs/profiling

--
Jens Axboe
[SLUG] Linux Disk Performance/File IO per process
All,

I work for a company that develops a systems and performance management product for Unix (as well as PC and Tandem) called PROGNOSIS. Currently we support AIX, HP-UX, Solaris, UnixWare, IRIX, and Linux.

I've hit a bit of a wall trying to expand the data provided by our Linux solution - I can't seem to find anywhere that provides the metrics needed to calculate disk busy in the kernel! This is a major piece of information that any mission-critical system administrator needs to successfully monitor their systems.

I've looked in /proc - it provides I/O rates, but no time-related information (which is required to calculate busy%). I've looked in the 2.4 kernel source (drivers/block/ll_rw_blk.c, include/linux/kernel_stat.h - the dk_drive* arrays) - but can only see those /proc I/O rates being calculated. Is this data provided somewhere that I haven't looked? Or does the kernel really not provide the data necessary to calculate a busy rate?

I'm also interested in finding out file I/O metrics on a per-process basis. The CSA project run by SGI (http://oss.sgi.com/projects/csa) seems to provide summarised I/O metrics per process using a loadable kernel module. That is, it provides I/O rates for a process, but not for each file open by that process. Are there any existing methods to obtain this data? If so, can someone point me in the right direction? If not, what is the possibility of 'people-in-the-know' working towards making these sorts of metrics available from the kernel? Could some of these metrics be added to the CSA project? (Directed at the CSA people, of course.)

I'm more than willing to put in time to get these metrics into the kernel. However, I'm new to kernel development, so it would take longer for me than for someone who knows the code. But if none of the above questions can really be answered, I'd appreciate some direction as to where in the kernel would be a good place to calculate/extract these metrics.

I believe that the lack of these metrics will make it difficult for Linux to move into the mission-critical server market. For this reason I'm keen to see this information made available.

Thank you all for any help you may be able to provide. I'm not actually subscribed to either the CSA or the linux-kernel mailing lists, so I'd appreciate being CC'ed on any replies. Thanks.

Tony...

--
Tony Young
Senior Software Engineer
Integrated Research Limited
Level 10, 168 Walker St
North Sydney NSW 2060, Australia
Ph: +61 2 9966 1066  Fax: +61 2 9966 1042  Mob: 0414 649942
[EMAIL PROTECTED]
www.ir.com