On 13.01.2012 at 19:44, Michael Coffman wrote:
>> <snip>
>
> This does look interesting. We have never made use of contexts. My
> only concern would be the continual qstats
> that the jobs would be running. It seems like if I had hundreds or
> thousands of jobs using this methodology, it would put quite a bit of
> strain on the server.
Correct. It's more suitable at a small scale, or when each job checks only once
every 10 minutes or so. For example: a user has a job running, looks at the
output and discovers that it will take days to finish, so he decides to lower
PRECISION=10E-6 to PRECISION=10E-3 in the job context, and the job can honor
this even if it checks the context only every 10 minutes.
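As a rough sketch of how such a job could pick this up (PRECISION, the control
file and the solver call are just made-up names for this example; qstat -j and
qalter -ac are the real commands):

#!/bin/sh
# inside the job script: poll the job's own context every 10 minutes in the
# background and write the current PRECISION to a small control file which
# the solver re-reads -- the control file is not an SGE mechanism, only an
# example of how the job could honor the change
echo "PRECISION=10E-6" > precision.ctl      # value used at submission time
(
  while true; do
      sleep 600                             # check only every 10 minutes
      # "qstat -j $JOB_ID" prints e.g. "context:  PRECISION=10E-3" once set
      NEW=$(qstat -j "$JOB_ID" | sed -n 's/^context: *//p' \
              | tr ',' '\n' | sed -n 's/^PRECISION=//p')
      [ -n "$NEW" ] && echo "PRECISION=$NEW" > precision.ctl
  done
) &
POLLER=$!
./solver input.dat                          # hypothetical long-running step
kill $POLLER

The user would then change the value from the outside with

qalter -ac PRECISION=10E-3 <jobid>

and the job sees the new value at the next check.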
> What kind of scale have you used this in?
We use it only to include the submission command in the final email, so that
the user has a reference to the parameters he used for the actual run. So it's
a one-time lookup per job.
subturbo is a job wrapper => it stores $0 as COMMAND in the job context
prolog for each queue => copies the COMMAND found in the job context to a file in
/var/spool/sge/context/${SGE_JOB_SPOOL_DIR##*/}
mailwrapper after the job => reads the created file (it finds the job number in
the subject), appends it to the sent email and removes the file
(in addition the local messages file is grep'ed for an exceeded wallclock limit,
which is included too if found)
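As a rough illustration of the prolog step (reading the context with qstat -j
is just one way to get at it; subturbo itself would only have to add something
like -ac "COMMAND=$0 $*" to its qsub call):

#!/bin/sh
# prolog sketch: save the COMMAND entry of the job context to a file that
# outlives the job's spool directory. JOB_ID and SGE_JOB_SPOOL_DIR are
# provided by Grid Engine in the prolog environment.
DEST=/var/spool/sge/context/${SGE_JOB_SPOOL_DIR##*/}
# the context line of "qstat -j" looks like: "context:  COMMAND=subturbo foo.inp"
qstat -j "$JOB_ID" | sed -n 's/^context: *//p' \
    | tr ',' '\n' | sed -n 's/^COMMAND=//p' > "$DEST"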
The spool directory of the job is already gone at the time the email is sent.
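And the mailwrapper side in similarly rough form (the mailx-style calling
convention and the "Job <number> ..." subject format are assumptions about the
local setup; the grep of the messages file for the wallclock limit is left out
here):

#!/bin/sh
# mailwrapper sketch, configured as "mailer" in the SGE configuration.
# Assumes it is called like:  mailwrapper -s "Job 12345 (name) Complete" user
# with the mail body on stdin.
SUBJECT=$2                                    # the argument following -s
JOBID=$(expr "$SUBJECT" : 'Job \([0-9]*\)')   # job number from the subject
# the file name follows the spool directory name from the prolog, e.g. 12345.1
CMD_FILE=""
[ -n "$JOBID" ] && CMD_FILE=$(ls /var/spool/sge/context/$JOBID.* 2>/dev/null | head -1)
{
    cat -                                     # original body from Grid Engine
    if [ -n "$CMD_FILE" ]; then
        echo
        echo "Submission command:"
        cat "$CMD_FILE"
        rm -f "$CMD_FILE"
    fi
} | mailx "$@"                                # hand over to the real mailer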
-- Reuti