Jason Frisvold wrote:

> Greetings,
> 
> I'm in the process of writing a large network monitoring system in
> perl.  I want to be sure I'm headed in the right direction, however.
> 
> I have a large MySQL database comprised of all the items that need
> monitoring.  Each individual row contains exactly one monitoring type
> (although I would love to be able to combine this efficiently)
> 
> One of the tables will contain the individual monitoring types and the
> name of the program that processes them.  I'd like to have a centralized
> system that deals with spawning off these processes and monitoring those
> to ensure they are running correctly.  I'm looking to spawn each process
> with the information it needs to process instead of it having to contact
> the database and retrieve it on its own.  This is where I'm stuck.  The
> data it needs to process can be fairly large and I'd rather not drop
> back to creating an external text file with all the data.  Is there a
> way to push a block of memory from one process to another?  Or some
> other efficient way to give the new process the data it needs?  Part of
> the main program will be a throttling system that breaks the data down
> into bite size chunks based on processor usage, running time, and memory
> usage.  So, in order to properly throttle the processes, I need to be
> able to pass some command line parameters in addition to the data
> chunk...
> 
> Has anyone attempted anything like this?  Do I have a snowball's
> chance?  :)
> 

I don't know how you manage your monitoring scripts. Is it:

1. one big script that knows how to monitor everything
2. every time it forks, it invokes itself again and again to monitor more
processes

or is it:

1. one script that handles the forking and calls the right scripts to do the
monitoring job
2. many smaller scripts, each of which knows how to monitor just one process

I think the advantage of the first approach is that you don't end up with
many different smaller scripts scattered around, but the disadvantage is
that every time it forks, it dups itself, so if the script is big, each fork
could consume a large chunk of memory.

The advantage of the second approach is that you cleanly separate the forking
and managing of concurrent processes from the task of monitoring. The
disadvantage, of course, is that you will end up with many smaller scripts
all over the place. As you said, if you have a large database with many
processes to monitor, you will have to manage your monitoring scripts
carefully. Despite that, I would still go with the second approach and
perhaps do something like:

#!/usr/bin/perl -w
use strict;

#-- process table: process name => [ monitoring script, its parameters ]
#-- you will fill this from the db of course
my %process_table = ( process1 => ['monitor1.pl','param1','param2'],
                      process2 => ['monitor2.pl','param3','param4'],
                      process3 => ['monitor3.pl','param5','param6']);

while(my($p,$m) = each %process_table){
        my $s = fork;
        if($s){
                #-- parent: note what the child is doing and move on
                print "$p will be monitored by @{$m}\n";
        }elsif(defined $s){
                #-- child: replace ourselves with the monitoring script;
                #-- exec only returns if it fails, so die instead of falling through
                exec(@{$m}) or die "Unable to exec @{$m}: $!\n";
        }else{
                #-- fork failed!
                warn "Unable to monitor $p with @{$m}\n";
                next;
        }
}

#-- reap the children with wait/waitpid here so they don't become zombies

__END__

Now every time you have a new process to monitor, you can simply add it to
%process_table (well, add it to the db first, of course) with all the
parameters it needs, and the script will pick it up automatically and create
a separate process for you.
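
Since the list comes from MySQL anyway, here is a minimal sketch of filling
%process_table with DBI before the fork loop. The connect string, the table
name (monitor_types) and the column names (process_name, monitor_script,
params) are only placeholders for whatever your schema really looks like:

#!/usr/bin/perl -w
use strict;
use DBI;

#-- placeholder connect string, table and column names -- adjust to your schema
my $dbh = DBI->connect('DBI:mysql:database=monitoring;host=localhost',
                       'user', 'password', { RaiseError => 1 });

my %process_table;
my $sth = $dbh->prepare(
        'SELECT process_name, monitor_script, params FROM monitor_types');
$sth->execute;
while(my($name, $script, $params) = $sth->fetchrow_array){
        #-- assumes params is stored as a space separated list in one column
        $process_table{$name} = [ $script, split(' ', $params) ];
}
$dbh->disconnect;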

If you put everything in one big file, then every time you fork you are just
making a copy of yourself (including code, data, stack, etc.), so if the file
is big, it could consume a lot of memory, and if the number of processes is
large, it is likely to make your machine unstable.
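
As for pushing a block of data from one process to another without dropping
back to a temp file, a pipe will do it: the parent writes the chunk into the
pipe, the child dups the read end onto STDIN, and the exec'ed monitoring
script just reads its data from STDIN. A minimal sketch (monitor1.pl and the
contents of $chunk are only placeholders):

#!/usr/bin/perl -w
use strict;

my $chunk = "host1\nhost2\nhost3\n";   #-- whatever data the child needs

pipe(READER, WRITER) or die "pipe failed: $!\n";

my $pid = fork;
die "fork failed: $!\n" unless defined $pid;

if($pid){
        #-- parent: close the unused read end, send the data, close to signal EOF
        close READER;
        print WRITER $chunk;
        close WRITER;
        waitpid($pid, 0);
}else{
        #-- child: the pipe's read end becomes STDIN, then exec the monitor
        close WRITER;
        open(STDIN, "<&READER") or die "can't dup pipe to STDIN: $!\n";
        exec('monitor1.pl','param1','param2') or die "exec failed: $!\n";
}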

Make sure you read:

perldoc -f fork
perldoc -f wait
perldoc -f waitpid
perldoc perlipc    (signals, pipes, etc.)
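
For the reaping part, something like this in the forking script keeps the
finished children from turning into zombies without blocking the parent
(just a sketch, using a CHLD handler and waitpid with WNOHANG):

#!/usr/bin/perl -w
use strict;
use POSIX ':sys_wait_h';

#-- collect every child that has already exited, without blocking
$SIG{CHLD} = sub {
        my $pid;
        while(($pid = waitpid(-1, WNOHANG)) > 0){
                print "child $pid exited with status ", $? >> 8, "\n";
        }
};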

david
