On Tue, Jun 7, 2011 at 10:16 AM, Vijish Vijayan <[email protected]> wrote:
> Hi,
>
> I have been using parallel for some time. I have two queries regarding
> parallel:
>
> Memory
> Does Parallel optimize memory usage or is there any switch to control
> memory usage.

Parallel does not optimize memory usage. But I have just tested running
500 jobs in parallel: it takes up less than 10 MB of RAM in total. To me
that will never be a problem. Please give an example of the memory usage
being a problem.
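If you want to check that on your own system, here is a rough sketch
(this assumes GNU ps, and that GNU Parallel is the only perl program
running; sleep just stands in for a long-running job):

  # Start 500 concurrent dummy jobs (-N0 inserts no arguments):
  seq 500 | parallel -j0 -N0 sleep 300 &
  sleep 5  # give the jobs time to start
  # Sum the resident memory of all perl processes (GNU Parallel is
  # written in Perl). RSS double-counts pages shared between the
  # processes, so this is an overestimate:
  ps -C perl -o rss= | awk '{ sum += $1 } END { print sum/1024 " MB" }'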
> -j0 and tail -f
> Running parallel as:
> tail -f jobsqueue | parallel -j0 &
>
> I give a number of input commands to the jobsqueue file as required.
> However, I find no activity going on.

First: the jobqueue is a hack. GNU Parallel was never designed for this;
it just happened to work.

With -j0 you tell parallel: please figure out how many jobs can be run
in parallel before even starting the first job. So you need to put more
jobs in your job queue than the maximal number that can be run in
parallel before GNU Parallel will run the first one.

> But when the tail process is killed ( pkill tail ), Parallel starts
> running all the jobs so far queued to jobsqueue at the same time.

By killing tail you give GNU Parallel an end-of-file. GNU Parallel then
knows that you have no more jobs that you want to run in parallel, and
thus it starts the ones it has already read.

> I would like to know if there is a way by which parallel will
> determine (and execute) the maximum jobs it can execute at a time from
> a given queue of jobs.

Since the jobqueue is a hack you could do:

  >jobqueue
  tail -f jobqueue | parallel -vuj0 &
  seq 1 1000 | parallel echo true >> jobqueue

After this you can submit jobs normally:

  echo echo foo >> jobqueue

(This assumes that your system can only run fewer than 1000 jobs in
parallel - otherwise raise the number 1000.)

/Ole
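P.S. When you want to shut the queue down again, just kill the tail
process. As described above GNU Parallel then sees end-of-file, runs the
jobs it has already read, and exits when they are done:

  pkill -f 'tail -f jobqueue'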
