RE: /proc/stat disk_io entries
The structure table's size is a concern for me too, hence the suggestion that perhaps a hash table implementation is better suited to the job. The larger sizes that you need for your array seem to bear this out. Until a fix is implemented I'm just going to modify DK_MAX_MAJOR to fit my own requirements.

> -----Original Message-----
> From: Dupuis, Don [mailto:[EMAIL PROTECTED]]
> Sent: Saturday, 24 March 2001 02:02
> To: Tony Young; [EMAIL PROTECTED]
> Subject: RE: /proc/stat disk_io entries
>
> I have sent a patch to Alan and Linus about this also. We have cpqarray
> and cciss controllers that use majors 72-79 and 104-111. Alan said he
> doesn't have time to look at it till mid April and Linus hasn't
> responded to me at all about it. The best way is to actually rewrite
> the kstat architecture, but the patch I sent will do the job. There is
> concern about structure table size, I believe. My patch increased
> DK_MAX_MAJOR to 112 and added about 4 lines to genhd.h to support the
> cpqarray and cciss drivers. This works on Compaq servers and I get the
> data that is needed. Any thoughts?
>
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: Thursday, March 22, 2001 9:55 PM
> To: [EMAIL PROTECTED]
> Subject: /proc/stat disk_io entries
>
> All,
>
> Firstly, my relevant system stats:
> kernel: linux 2.4.3-pre6
> hda: IDE drive
> hdb: CD drive
> hdc: IDE drive
> hdd: IDE drive
> sda: SCSI drive
>
> The problem I'm seeing is that I/O stats (disk_io) aren't being shown
> in /proc/stat for the 2 hard drives on the second IDE controller (hdc
> and hdd).
>
> I checked the kernel code and found the function kstat_read_proc in
> fs/proc/proc_misc.c, which loops from 0 up to DK_MAX_MAJOR and prints
> the stats to /proc/stat for each drive. However, DK_MAX_MAJOR is set to
> 16 in include/linux/kernel_stat.h, which means that the drives on my
> second IDE controller, with a major number of 22, aren't included in
> the loop.
> I modified the value of DK_MAX_MAJOR to 23 and rebuilt, and /proc/stat
> now shows the 2 missing hard drives. I'm uncomfortable sending in a
> patch for this as I'm not familiar enough with the code to understand
> the full ramifications of changing this value. Considering also that
> the value 23 still doesn't include any tertiary or quaternary IDE
> controllers (majors 33 and 34) makes me wonder what the correct value
> should really be.
>
> I'm also curious, after considering the above, about whether or not a
> hash table (or tables) would be better suited than the current
> implementation of 2-dimensional arrays for disk stats (dk_drive,
> dk_drive_rio, dk_drive_wio, etc).
>
> I've brought this to the list because I'm not sure of the correct
> solution and I couldn't work out if there was a specific maintainer of
> this code.
>
> It also seems strange to me that the identifiers for the disk_io values
> in /proc/stat are (major_number,disk_number) tuples rather than
> (major,minor). The current implementation with my change now shows my
> first ide drive identified as (8,0), while my second and third ide
> drives (hdc and hdd) are identified as (22,2) and (22,3) respectively
> rather than (22,0) and (22,1) - I presume because they are in the 3rd
> and 4th IDE positions. Using disk_number instead of minor number also
> makes it more difficult for any user program reading /proc/stat to
> trace an entry back to a physical device. A program must assume that
> major numbers 8 and 22 refer to /dev/hd* entries, and that disk number
> 0 translates to 'a', 1 to 'b', 2 to 'c', etc., and can then work out
> that (22,2) means /dev/hdc. These assumptions, of course, break with
> the use of devfs when devfsd isn't providing the necessary links.
>
> I welcome any comments, but please CC me directly as I'm not subscribed
> to the list.
>
> Tony...
> --
> Tony Young
> Senior Software Engineer
> Integrated Research Limited
> Level 10, 168 Walker St
> North Sydney, NSW 2060, Australia
> Ph: +61 2 9966 1066
> Fax: +61 2 9966 1042
> Mob: 0414 649942
>
> -
> To unsubscribe from this list: send the line "unsubscribe linux-kernel"
> in the body of a message to [EMAIL PROTECTED]
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
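For anyone consuming these disk_io entries, the (major,disk_number) format and the fragile name-guessing Tony describes can be illustrated with a short parser. This is a hedged sketch: the line format and the IDE majors (3, 22, 33, 34) are assumed from the discussion above, and `guess_ide_name` deliberately encodes exactly the assumption ("disk number 0 translates to 'a', 1 to 'b', ...") that breaks under devfs.

```python
import re

# Sample disk_io line in the shape described in this thread (format assumed):
#   disk_io: (major,disk):(total,rio,rblk,wio,wblk) ...
SAMPLE = "disk_io: (3,0):(4786,3509,92386,1277,23432) (22,2):(211,198,3102,13,104)"

IDE_MAJORS = (3, 22, 33, 34)  # primary..quaternary IDE controllers

def parse_disk_io(line):
    """Return {(major, disk_number): tuple_of_counters} for one disk_io line."""
    stats = {}
    for m in re.finditer(r"\((\d+),(\d+)\):\(([\d,]+)\)", line):
        major, disk = int(m.group(1)), int(m.group(2))
        stats[(major, disk)] = tuple(int(x) for x in m.group(3).split(","))
    return stats

def guess_ide_name(major, disk):
    """Best-effort /dev name using the letter-mapping assumption from the
    message above; returns None for non-IDE majors, and breaks under devfs."""
    if major not in IDE_MAJORS:
        return None
    return "/dev/hd" + "abcdefghijklmnop"[disk]
```

For example, `guess_ide_name(22, 2)` yields `/dev/hdc`, matching the (22,2) entry discussed above; a (major,minor) scheme would make this guesswork unnecessary.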
[SLUG] RE: Linux Disk Performance/File IO per process
> -----Original Message-----
> From: Chris Evans [mailto:[EMAIL PROTECTED]]
> Sent: Monday, 29 January 2001 13:04
> To: Tony Young
> Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
> Subject: Re: Linux Disk Performance/File IO per process
>
> On Mon, 29 Jan 2001 [EMAIL PROTECTED] wrote:
>
> > All,
> >
> > I work for a company that develops a systems and performance
> > management product for Unix (as well as PC and TANDEM) called
> > PROGNOSIS. Currently we support AIX, HP, Solaris, UnixWare, IRIX,
> > and Linux.
> >
> > I've hit a bit of a wall trying to expand the data provided by our
> > Linux solution - I can't seem to find anywhere that provides the
> > metrics needed to calculate disk busy in the kernel! This is a major
> > piece of information that any mission critical system administrator
> > needs to successfully monitor their systems.
>
> Stephen Tweedie has a rather funky I/O stats enhancement patch which
> should provide what you need. It comes with Red Hat 7.0 and gives
> decent disk statistics in /proc/partitions.
>
> Unfortunately this patch is not yet in the 2.2 or 2.4 kernel. I'd like
> to see it make the kernel as a 2.4.x item. Failing that, it'll probably
> make the 2.5 kernel.
>
> Cheers
> Chris

Thanks to both Jens and Chris - this provides the information I need to
obtain our busy rate. It's unfortunate that the kernel needs to be patched
to provide this information - hopefully it will become part of the kernel
soon.

I had a response saying that this shouldn't become part of the kernel due
to the performance cost of obtaining such data. I agree that a cost is
involved, but I think it's up to the user to decide which cost is more
expensive to them - getting the data, or not being able to see how busy
their disks are. My feeling is that this support could be made user
configurable at run time - e.g. 'echo 1 > /proc/getdiskperf'.

Thanks for your quick responses.

Tony...
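To illustrate why a cumulative busy-time counter is all that's needed to derive busy%: sample the counter twice and divide the delta by the elapsed jiffies. This is a sketch, not code from the patch; the HZ value and the counter semantics (jiffies the device spent with I/O in flight, in the style of the /proc/partitions stats mentioned above) are assumptions.

```python
HZ = 100  # jiffies per second on a 2.4-era x86 kernel (assumed)

def busy_percent(ticks_then, ticks_now, interval_s, hz=HZ):
    """Disk busy% over an interval, from two samples of a cumulative
    busy-time counter measured in jiffies."""
    busy_jiffies = ticks_now - ticks_then
    return 100.0 * busy_jiffies / (hz * interval_s)

# If the counter advanced 250 jiffies over a 5-second window, the disk
# was busy for 2.5s of the 5s sampled:
print(busy_percent(1000, 1250, 5.0))  # -> 50.0
```

This is exactly the time-related information missing from the stock /proc I/O rates: counts alone cannot tell you what fraction of wall time the device was occupied.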
-- SLUG - Sydney Linux User Group Mailing List - http://slug.org.au/ More Info: http://slug.org.au/lists/listinfo/slug
[SLUG] Linux Disk Performance/File IO per process
All,

I work for a company that develops a systems and performance management
product for Unix (as well as PC and TANDEM) called PROGNOSIS. Currently we
support AIX, HP, Solaris, UnixWare, IRIX, and Linux.

I've hit a bit of a wall trying to expand the data provided by our Linux
solution - I can't seem to find anywhere that provides the metrics needed
to calculate disk busy in the kernel! This is a major piece of information
that any mission critical system administrator needs to successfully
monitor their systems.

I've looked in /proc - it provides I/O rates, but no time-related
information (which is required to calculate busy%).

I've looked in the 2.4 kernel source (drivers/block/ll_rw_blk.c,
include/linux/kernel_stat.h - the dk_drive* arrays) - but can only see
those /proc I/O rates being calculated.

Is this data provided somewhere that I haven't looked? Or does the kernel
really not provide the data necessary to calculate a busy rate?

I'm also interested in finding out file I/O metrics on a per-process
basis. The CSA project run by SGI (http://oss.sgi.com/projects/csa) seems
to provide summarised I/O metrics per process using a loadable kernel
module. That is, it provides I/O rates for a process, but not for each
file open by that process.

Are there any existing methods to obtain this data? If so, can someone
point me in the right direction? If not, what is the possibility of
'people-in-the-know' working towards making these sorts of metrics
available from the kernel? Could some of these metrics be added to the CSA
project? (Directed at the CSA people, of course.)

I'm more than willing to put in time to get these metrics into the kernel.
However, I'm new to kernel development, so it would take longer for me
than for someone who knows the code. But if none of the above questions
can really be answered, I'd appreciate some direction as to where in the
kernel would be a good place to calculate/extract these metrics.
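For clarity on what the stock /proc counters do allow: the dk_drive* values are cumulative, so a monitoring tool derives per-second I/O rates by sampling twice and dividing the deltas by the interval. A minimal sketch (field names and tuple layout are assumptions for illustration):

```python
def io_rates(prev, curr, interval_s):
    """Per-second read/write completion rates from two samples of
    cumulative counters (in the style of dk_drive_rio / dk_drive_wio).
    prev and curr are (rio, wio) tuples; the counters only increase."""
    rio_rate = (curr[0] - prev[0]) / interval_s
    wio_rate = (curr[1] - prev[1]) / interval_s
    return rio_rate, wio_rate

# 300 reads and 100 writes completed over a 10-second window:
print(io_rates((5000, 2000), (5300, 2100), 10.0))  # -> (30.0, 10.0)
```

Rates like these fall out of the existing counters; busy% does not, because no counter records how long the device spent servicing those requests.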
I believe that the lack of these metrics will make it difficult for Linux
to move into the mission critical server market. For this reason I'm keen
to see this information made available.

Thank you all for any help you may be able to provide. I'm not actually
subscribed to either the CSA or the linux-kernel mailing lists, so I'd
appreciate being CC'ed on any replies. Thanks.

Tony...

--
Tony Young
Senior Software Engineer
Integrated Research Limited
Level 10, 168 Walker St
North Sydney NSW 2060, Australia
Ph: +61 2 9966 1066
Fax: +61 2 9966 1042
Mob: 0414 649942
[EMAIL PROTECTED]
www.ir.com