>I guess you could use 'top' in a unix window.
>
>Then kick off 1 drone only and look at the memory usage, then multiply it
>by the number of processes you expect.
>55 is a lot of processes on any unix system!
>
>Perhaps you could stagger the processes: maybe spawn 30 drones and have
>each drone process 2 sites in serial, one after the other?
>
>I've seen some pretty large Perl processes on machines with very similar
>specs to yours.
>
>Marty

>> -----Original Message-----
>>
>> I am designing a system to process almost 4000 remote sites in a nightly
>> sweep.  This process is controlled from a database which maintains site
>> status in realtime (or at least that's the goal).   I am
>> attempting to fork
>> off around 100 "drones" to process 100 concurrent sites.  Each drone will
>> need a connection to the database.  In doing some impromptu testing I've
>> had the following results...
>>
>> Writing a queen that does nothing and sets nothing (no variables or
>> filehandles are open) except fork off drones, and writing a drone that
>> only connects to the database and nothing else, had the following results
>> on this machine config:
>>
>> RedHat Linux 6.2, PIII 600, 256MB RAM, 256MB swap - anything more than 55
>> drones and the system is entirely out of memory.
>>
>> Is Perl really using that much memory?  There is approx 190MB of RAM free,
>> and nearly ALL of the swap space free when I kick off the queen process.
>>
>> Do these results seem typical?  Is there any way to optimize this?
>>
>> Thanks
>>
>> Chuck

Marty,

top is in fact what I have been using to watch it, but I have heard that you
cannot fully trust its memory reporting because of shared memory and other
things - though of course that may have been fixed since.

The production box this system will live on runs the same OS, but is a dual
PIII 733 with 1GB RAM and 2GB swap.  BUT! - there's always a but - it's not
as though THIS system has full access to that box; another nightly system
runs around the same time, so there will surely be overlap.  I am looking to
optimize this system as much as possible to avoid bringing the box down
because it's out of memory.  The reason for wanting so many concurrent drones
is that the process currently runs for several hours (using 100 plain FTPs
launched from shell scripts) and we are adding a new site approximately every
17 hours - no joke.  So we don't want it to take any longer than it already
does; in fact we'd like to be able to scale up if need be as the number of
sites increases.
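
Roughly what I'm picturing for the staggering Marty suggests - a fixed pool
of drones, each working through its slice of the site list in serial - is
something like this sketch (the drone count, the site list, and
process_site() are all placeholders):

    #!/usr/bin/perl -w
    use strict;

    # Staggered sweep: cap the number of drones and hand each one a batch
    # of sites to process serially.  Everything here is a stand-in for the
    # real DB-driven site list and per-site work.
    my @sites     = map { "site$_" } 1 .. 4000;
    my $drones    = 30;                          # concurrency cap
    my $per_drone = int(@sites / $drones) + 1;   # batch size per drone

    while (my @batch = splice(@sites, 0, $per_drone)) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        next if $pid;                    # queen keeps dealing out batches
        process_site($_) for @batch;     # drone works its sites one by one
        exit 0;
    }
    1 while wait != -1;                  # queen reaps every drone

    sub process_site {
        my ($site) = @_;
        # placeholder for the real per-site transfer and status update
    }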

I guess my real question here is: is this much memory usage just a fact of
life when using Perl and fork()ing?  If so, I can try to ask for more RAM and
size the production box accordingly.  Obviously my testing is doing NOTHING
compared to what will really be getting processed in the real system, so RAM
usage is likely to be much larger than it is in these tests - which is my
fear.  I intend to use good practices - close any handles, free as many
variables as possible before forking, etc. - but I have fears of bringing
this box to its knees, so if anyone can offer some tips to optimize the
system I am all ears!
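
To make that concrete, the kind of pre-fork cleanup I have in mind is below:
the queen drops its own handle before forking so nothing is shared across the
fork, and each drone opens and closes its own connection.  The DSN,
credentials, and fetch_site_list() are made up, and the throttling is left
out to keep the fork/connection handling clear:

    #!/usr/bin/perl -w
    use strict;
    use DBI;

    # Pre-fork hygiene sketch: no shared handles across the fork.
    # DSN, credentials, and fetch_site_list() are placeholders.
    my $dsn = 'dbi:mysql:sites';
    my $dbh = DBI->connect($dsn, 'queen', 'secret', { RaiseError => 1 });
    my @work = fetch_site_list($dbh);   # read tonight's site list
    $dbh->disconnect;                   # nothing left open when we fork

    for my $site (@work) {
        my $pid = fork;
        die "fork failed: $!" unless defined $pid;
        next if $pid;

        # drone: private connection, released as soon as the site is done
        my $drone_dbh = DBI->connect($dsn, 'drone', 'secret',
                                     { RaiseError => 1 });
        # ... process $site and update its status row here ...
        $drone_dbh->disconnect;
        exit 0;
    }
    1 while wait != -1;                 # reap the drones

    sub fetch_site_list { return () }   # stub for the real status query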

Thanks

Chuck




