Thank you,

That won't be necessary. If you used the file lock method, it means I have
not overlooked some well-known, better approach, and I can use it without
getting fired.
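
For the archives, this is roughly the shape I have in mind. It is only a
minimal, untested sketch of the file-lock idea: the "--start-helper" option
name, the lock file path and the helper binary path are placeholders of
mine, not anything discussed in this thread.

#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>
#include <slurm/spank.h>

SPANK_PLUGIN(priv_helper, 1);

static int opt_enabled = 0;

/* Called when the (hypothetical) --start-helper flag is given to srun. */
static int opt_cb(int val, const char *optarg, int remote)
{
        opt_enabled = 1;
        return ESPANK_SUCCESS;
}

/* Registers "srun --start-helper"; the option name is an example only. */
struct spank_option spank_options[] = {
        { "start-helper", NULL, "Start the per-node privileged helper",
          0, 0, opt_cb },
        SPANK_OPTIONS_TABLE_END
};

int slurm_spank_task_init_privileged(spank_t sp, int ac, char **av)
{
        if (!spank_remote(sp) || !opt_enabled)
                return ESPANK_SUCCESS;

        /* Every task on the node runs this hook; only the task that wins
         * the flock below starts the helper. The path is a placeholder. */
        int fd = open("/var/run/priv_helper.lock", O_CREAT | O_WRONLY, 0600);
        if (fd < 0)
                return ESPANK_SUCCESS;

        if (flock(fd, LOCK_EX | LOCK_NB) < 0) {
                close(fd);      /* someone else holds it: helper running */
                return ESPANK_SUCCESS;
        }

        if (fork() == 0) {
                /* The child keeps fd open across exec, so the lock stays
                 * held for the helper's lifetime. Helper path is made up. */
                execl("/usr/local/sbin/priv_helper", "priv_helper",
                      (char *) NULL);
                _exit(127);
        }
        close(fd);              /* the child's copy keeps the lock alive */
        return ESPANK_SUCCESS;
}

The key point is that flock(LOCK_EX | LOCK_NB) is atomic: only one task per
node acquires it, and because the forked helper inherits the descriptor, the
lock remains held while the helper is alive, so later tasks simply skip the
start.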

Jordi.

2017-09-04 9:43 GMT+02:00 Manuel Rodríguez Pascual <
manuel.rodriguez.pasc...@gmail.com>:

>
> When I've been in that situation I have solved the problem with a lock
> on a temporary file.
>
> If you need any more help, please let me know. I probably still have
> some examples around.
>
> best,
>
>
> Manuel
>
>
>
> 2017-09-03 15:02 GMT+02:00 Jordi A. Gómez <jordi.an...@bsc.es>:
> > Hello,
> >
> > I am developing a SPANK plugin which starts a privileged process once
> > per node. This process will perform some work that requires privileges,
> > like writing frequencies, reading devices, etc.
> >
> > I know I have the option of running it when the Slurm daemon starts on
> > a node and loads the SPANK plugin, but I would prefer to run it only
> > when necessary, using some flag (srun --flag).
> >
> > Because I want to start it once per node, I am wondering about the best
> > way to achieve that. In remote context I have the function
> > slurm_spank_task_init_privileged(), but it is executed by every task on
> > the node, and I want to start just one process instance. These are my
> > possibilities:
> >
> > 1) Using a little arithmetic to get the number of tasks per node and
> > compute the minimum task number, which would be my selected task. For
> > example, with 4 tasks and 2 nodes, tasks 0 and 2 would run my privileged
> > process. But I am not sure that task numbers will always follow this
> > pattern, and of course if someone runs 3 tasks on the first node and 1
> > on the second, this breaks down.
> >
> > 2) Some kind of inter-process synchronization, or a file lock... to
> > ensure that no other task on the same machine has started my process
> > yet.
> >
> > Any better ideas?
> >
> > Thank you,
> > Jordi.
>

