2014-12-17 12:23 GMT+01:00 Ole Tange <[email protected]>:
>
> On Wed, Dec 17, 2014 at 12:01 PM, xmoon 2000 <[email protected]>
> wrote:
> > On 15 December 2014 at 09:06, xmoon 2000 <[email protected]>
> wrote:
> >> On 14 December 2014 at 14:08, Hans Schou <[email protected]> wrote:
>
> >> Yes I had read that section before. However, I assume this is a common
> >> request
>
> I think it has been discussed 2-3 times on the mailing list for the
> whole lifespan of GNU Parallel. So either very few are using it, or
> they are happy with the current functionality (or they are so
> disgruntled that they changed to a real queueing system).
>

It should be really simple with RabbitMQ:
https://www.rabbitmq.com/tutorials/tutorial-two-python.html

#1 If you still want to use GNU Parallel

#2 To use only a queuing system

/hans

>
> >> and wondered if any other functionality had been created that
> >> would allow a jobQueue to be consumed? i.e. the jobs in the jobQueue
> >> are deleted as they are processed?
>
> GNU Parallel was never intended as a full-blown queue system, so if
> you "abuse" it for that, then I think it is not too much to ask that,
> if you feel the queue file is too big, you shut down GNU Parallel
> and delete the processed jobs manually.
>
> A tip is to use -E to make GNU Parallel stop at a magic value. Then
> remove all lines up to the magic value and go again:
>
> tail -f jobqueue | parallel -E StOpHeRe
> perl -e 'while(<>){/StOpHeRe/ and last};print <>' jobqueue > j2
> mv j2 jobqueue
>
> >> Also a "run up to x jobs starting now" option would be very useful
>
> You have to explain how this is not what GNU Parallel already does:
>
> # Run 1 job per core (during the night)
> echo -j100% > jobsfile
> true >jobqueue; tail -f jobqueue | parallel -j jobsfile -S .. &
> sleep 36000
> # Run 0.5 jobs per core (during the day) - starting from next complete job
> echo -j50% > jobsfile
>
>
> /Ole
>
