Jeff Westman wrote:
> I'm posed with a problem, looking for suggestions for possible resolution.  I
> have a script that has many steps in it, including telnet & ftp sessions,
> database unloads, and other routines.  This script will run on a server,
> accessing a remote server.  This works fine.  I will likely have several
> dozen (maybe as many as 100) iterations of this script running
> simultaneously.  The problem is that there is a "bottleneck" towards the end
> of my script -- I have to call a 3rd party process that is single-threaded.
> This means that if I have ~100 versions of my script running, I can only have
> one at a time execute the 3rd party software.  It is very likely that
> multiple versions will arrive at this bottle-neck junction at the same time.
> If I had more than one call the third party program, one will run, one will
> lose, and die.
>
> So I am looking for suggestions on how I might attack this problem.  I've
> thought about building some sort of external queue (like a simple hash file).
> The servers have numbers like server_01, server_02, etc.  When an iteration
> of the script completes, it writes out its server name to the file, pauses,
> then checks if any other iteration is running the third party software.  If
> one is running, it waits, with its server name at the top of the file queue,
> waiting.  A problem might be if, again, two or more versions want to update
> this queue file, so I thought maybe a random-wait period before writing to
> the file-queue.
>
> I'm open to other ideas.  (please don't suggest we rename or copy the third
> party software, it just isn't possible).  I'm not looking for code, per se,
> but ideas I can implement that will guarantee I will always only have one
> copy of the external third party software running (including pre-checks,
> queues, etc.).

I don't think you need to get this complex, Jeff. If your bottleneck were /at/
the end of the processing, I would suggest a queue file as you describe, but
not as a means of synchronising the individual scripts. As its final stage, each
script would simply append the details of its final operation to a serial file
and then exit. It would then be the job of a separate process to look at this
file periodically and execute any requests that have been written.
That will effectively serialise your operations.
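Something along these lines, for instance. This is only a sketch: the file name
'request.queue', the one-request-per-line format and the run_third_party()
wrapper are placeholders for whatever your real final operation looks like. The
producer half would sit at the end of each of your scripts, and the consumer
half would be its own long-running job:

    use strict;
    use warnings;
    use Fcntl ':flock';

    # Producer: the very last step of each script. It appends one line
    # describing the work to be done and then the script can exit.
    sub queue_request {
        my ($details) = @_;
        open my $fh, '>>', 'request.queue'
            or die "Couldn't open request.queue: $!";
        flock $fh, LOCK_EX or die "Couldn't lock request.queue: $!";
        print {$fh} "$details\n";
        close $fh;                      # closing releases the lock
    }

    # Consumer: a single separate process. It drains the file and runs
    # the requests one at a time, so the third-party program is never
    # started more than once concurrently.
    sub drain_queue {
        open my $fh, '+<', 'request.queue' or return;
        flock $fh, LOCK_EX or die "Couldn't lock request.queue: $!";
        my @requests = <$fh>;
        truncate $fh, 0;
        close $fh;                      # release the lock before the slow part
        for my $request (@requests) {
            chomp $request;
            run_third_party($request);  # stand-in for the real call
        }
    }

    sub run_third_party {
        my ($request) = @_;
        # e.g. system('/path/to/third_party', split ' ', $request);
    }

    while (1) {
        drain_queue();
        sleep 30;
    }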

However, since your process may not be able to exit straight away, what you
need, as Stefan says, is a simple dummy file lock. The following will do the
trick:

    use strict;
    use warnings;
    use Fcntl ':flock';

    # Open (or create) the shared lock file; append mode leaves any
    # existing contents untouched.
    open my $que, '>>', 'queue'
            or die "Couldn't open lock file: $!";

    # Block here until we hold the exclusive lock, i.e. until we are
    # at the head of the queue.
    flock $que, LOCK_EX or die "Failed to lock queue: $!";
    do_single_thread_op();          # the call to the third-party program
    flock $que, LOCK_UN;

    close $que;

Fcntl is there solely to add the LOCK_EX and LOCK_UN identifiers. I've opened
the file for append so that it will be created if it isn't already there, but
left untouched if it is. The flock call requesting an exclusive lock will wait
indefinitely until it succeeds, which means that the process has come to the
head of the queue. It then has sole access to your third-party process and can
use it as it needs to before unlocking the file, at which point the next process
it may have been holding up is granted the lock and can continue.
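If you would rather each script announce that it is waiting instead of blocking
silently inside flock, Fcntl also exports a LOCK_NB flag that lets you poll for
the lock. Untested, but something like this in place of the blocking flock line
above should do it:

    # Try the lock without blocking; if another script already holds
    # it, say so and try again in a few seconds.
    until (flock $que, LOCK_EX | LOCK_NB) {
        warn "still waiting for the third-party program...\n";
        sleep 5;
    }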

I hope this helps,

Rob



