AIX 5.1, actually. Though eventually Linux, Windows, and possibly other OSes will be in the mix.
I'm writing this with the idea of it being very "modular", in that each server will do its own "check" every 15 minutes or so, and the webserver will only "connect" and grab that data when someone goes to the page (using a CGI to parse it and display it in a hierarchical fashion). The webserver will access the data via NFS (the NFS exports are already in place due to another project). Writing to a file gives me a history of data should any individual box go down. (Especially the "webserver" in this case, since it is periodically taken offline during the course of any given week due to its role in the overall project these machines run.)

I also plan on having the actual programs that are called be set up in a config file. Something along the lines of:

    APP1_HANDLE  = "sar"
    APP1_EXEC    = "sar 5 5"
    APP1_LOG     = "/log/monitor/delta/sar.out"
    APP1_PARSE   = "/log/monitor/parse_sar.pl"
    APP1_SUMMARY = "/log/monitor/delta/sar.summary"

I'm planning on figuring out how to put all of those into a hash, then using one foreach loop to exec each command, another foreach loop to wait for each to complete, and a final foreach loop that runs the "parse" script for each one and generates a summary. The summary files will all be in a "standard" format that the webserver will use to generate its display.

I had not thought of stderr from the commands, so you're right that catching it is something I need to think on.

The other bit I'm working on is an "options" file for each server (generated before the summaries are parsed) that contains info like the number of processors and tuning options (set with schedtune and vmtune, etc.) that can be used by the "APP_PARSE" scripts in calculating results (thrashing detection, etc.).

I know, it's all a little complex, but the individual pieces are fairly simple in design.
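(Editorial sketch, not part of the original thread.) The plan above — config lines into a hash of hashes, then three foreach loops for exec, wait, and parse — could look roughly like this. The config is inlined as a heredoc, and `echo` plus a temp dir stand in for `sar` and `/log/monitor` so the sketch runs anywhere; the real commands, paths, and `APPn_PARSE` scripts would come from the actual config file.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempdir);

# Inlined stand-in for the config file described above; echo and a
# temp dir replace sar and /log/monitor so this sketch is portable.
my $dir    = tempdir(CLEANUP => 1);
my $config = <<CFG;
APP1_HANDLE  = "sar"
APP1_EXEC    = "echo pretend-sar-output"
APP1_LOG     = "$dir/sar.out"
CFG

# Parse APPn_KEY = "value" lines into a hash of hashes keyed by app number.
my %app;
for my $line (split /\n/, $config) {
    next unless $line =~ /^APP(\d+)_(\w+)\s*=\s*"([^"]*)"/;
    $app{$1}{ lc $2 } = $3;
}

# Loop 1: fork and exec each command, catching stderr in the log as well.
my %pid;
for my $n (keys %app) {
    defined($pid{$n} = fork) or die "Cannot fork: $!";
    unless ($pid{$n}) {
        # 2>&1 folds stderr into the log so error text is not lost
        exec "$app{$n}{exec} > $app{$n}{log} 2>&1";
        die "cannot exec $app{$n}{handle}: $!";   # reached only if exec fails
    }
}

# Loop 2: wait for every child to finish.
waitpid($pid{$_}, 0) for keys %pid;

# Loop 3: here each APPn_PARSE script would read the log and write the
# standard-format summary; shown as a simple readback of the log instead.
for my $n (sort keys %app) {
    open my $fh, '<', $app{$n}{log} or die "$app{$n}{log}: $!";
    print "$app{$n}{handle}: ", <$fh>;
}
```

The hash-of-hashes keyed by app number means adding a monitored command is purely a config-file change, which fits the "modular" goal.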
And in the end it will meet the goal that management put forth, which is what keeps me paid :)

-Tony

-----Original Message-----
From: drieux [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 03, 2003 7:24 PM
To: Perl Perl
Subject: Re: Timing several processes

On Dec 3, 2003, at 10:49 AM, Akens, Anthony wrote:
[..]
> print "Running vmstat\n";
> defined(my $vmstat_pid = fork) or die "Cannot fork: $!";
> unless ($vmstat_pid) {
>     exec "vmstat 5 5 > /log/monitor/delta/vmstat.out";
>     die "cannot exec vmstat: $!";
> }
> print "Running sar\n";
> defined(my $sar_pid = fork) or die "Cannot fork: $!";
> unless ($sar_pid) {
>     exec "sar 5 5 > /log/monitor/delta/sar.out";
>     die "cannot exec sar: $!";
> }
> print "Waiting...\n";
> waitpid($vmstat_pid, 0);
> waitpid($sar_pid, 0);
> print "done!\n";
[..]

I presume you are working on a Solaris box? Have you thought about

    timex sar 5 5
    timex vmstat 5 5

You will notice that the sar command will take about 25 seconds and the vmstat about 20. But then there is that minor nit about

    exec "vmstat 5 5 > /log/monitor/delta/vmstat.out"
        or die "cannot exec vmstat: $!";

since in theory exec WILL not return, so if it failed, why not keep the die in the proper context... Then there is that minor nit about not controlling stderr, which can lead to things like:

    vladimir: 60:] ./st*.plx
    Running vmstat
    Running sar
    sh: /log/monitor/delta/vmstat.out: cannot create
    sh: /log/monitor/delta/sar.out: cannot create
    Waiting...
    done!
    vladimir: 61:]

So while you are in the process of learning fork() and exec(), why not think a bit more aggressively and go with, say, a pipe to pass back the information, so as not to buy the I/O overhead of writing to files?
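(Editorial sketch, not part of the original thread.) The pipe approach drieux suggests can be done with Perl's `open '-|'` mode, which forks for you and hands the parent a read handle on the child's output — no intermediate file at all. Here `echo` stands in for `sar 5 5`, and the `2>&1` addresses the uncontrolled-stderr nit by folding error text into the same stream:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Placeholder standing in for "sar 5 5" so the sketch runs anywhere;
# one line goes to stdout and one to stderr to show both are captured.
my $cmd = 'echo one; echo two 1>&2';

# '-|' forks and gives the parent a read handle on the child's output;
# the 2>&1 inside the subshell merges stderr into that same pipe.
open(my $fh, '-|', "( $cmd ) 2>&1") or die "cannot start command: $!";
my @lines = <$fh>;
close $fh or warn "command exited nonzero: $?\n";

print scalar(@lines), " lines captured\n";
```

Note that `close` on a pipe handle also reaps the child and sets `$?`, so the exit status check comes free; with multiple commands in flight you would keep one handle per command and use `IO::Select` (or sequential reads) to drain them.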
While the following was written for a command line 'let us get interactive' type of solution, it might be a framework you could rip off and use:

<http://www.wetware.com/drieux/pbl/Sys/gen_sym_big_dog.txt>

ciao
drieux

---

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]