Hi there GNU Parallel Users!

I'm a happy user of *parallel* 20140122, but I'm stuck on a problem with
the semaphore option.
In the following bash code, my intent is to run an R script on several
cores (specified by $numcore).

for file in `ls $directory`
do
  sem -j"$numcore" R < rscript.R --slave --args $file $other_input $directory > "$file".gw.log
done
sem --wait

wait  # added by me after some testing, but useless

other
commands
here

This task has to be done 32 times on 10 cores.
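
To make the intended flow explicit, here is the same structure in sketch
form, with an explicit --id added so it is clear which jobs the final
--wait should cover (I have not tried the --id variant; variable names are
the ones from the loop above):

# sketch only: a named semaphore, so --wait waits for exactly these jobs
for file in `ls "$directory"`
do
  # the whole command is quoted so the redirections belong to the R job itself
  sem --id gwjobs -j"$numcore" \
    "R --slave --args $file $other_input $directory < rscript.R > $file.gw.log"
done
sem --id gwjobs --wait   # should block here until all 32 R jobs have finished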

I have noticed that parallel spreads the jobs correctly over the desired
cores, but it seems that when the for loop exhausts the files (the 32
files), it does not wait until every job is done: the following lines of
code are executed, making you think the analysis is finished while some
cores are still running.

This is not convenient, because I need the output of the 32 processes to be
parsed after this step, and I miss two of them every time.
The results are indeed correct, but I cannot pipe this step.
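
In case it helps to see what I am after: as far as I understand, the same
step written as a single parallel invocation should only return once every
job is done, so the parsing commands below it could not start early
(untested sketch, same variable names as above):

# sketch: one parallel call replaces the for/sem loop; parallel itself
# does not exit until all jobs have finished, so anything after this
# line only runs once every .gw.log file is complete
ls "$directory" | parallel -j"$numcore" \
  "R --slave --args {} $other_input $directory < rscript.R > {}.gw.log"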

I hope the problem has been explained clearly... Do you have any advice or
a similar experience to share?

Thank you for your cooperation,

Stefano Capomaccio


-- 
.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.

*He tried to be a scientist*

Stefano Capomaccio, PhD
Università Cattolica del Sacro Cuore
Via Emilia Parmense, 84
29122 - Piacenza (PC), Italy
Phone +39 0523 599203 (office)
Phone +39 0523 599482 (lab)
email: [email protected]
email: [email protected]
skype: capemaster

.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.-.
