You can still get the consumable to work if you don't make it a default request. Only people who need to run this type of job would request it. When it's requested, the host's consumable would be decremented, making the host ineligible to run another instance of the job until the current one is done. Jobs not requesting the consumable could still run on the node.
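
As a rough sketch of what that might look like (the complex name "exclusive_io", the host "node01", the task range, and the script "job.sh" are just placeholders for your setup):

    # Add a consumable to the complex configuration (qconf -mc opens an editor);
    # the new entry would be a line like this - name, shortcut, type, relop,
    # requestable, consumable, default, urgency:
    #   exclusive_io  eio  INT  <=  YES  YES  0  0
    qconf -mc

    # Give each execution host a single unit of the consumable:
    qconf -mattr exechost complex_values exclusive_io=1 node01

    # Submit the array so each task requests (and so consumes) that unit,
    # limiting the host to one running task of this array at a time:
    qsub -t 1-100 -l exclusive_io=1 job.sh

Because the default is 0, jobs that never request exclusive_io aren't affected by it. If the tasks are parallel-environment jobs, the consumable column could be JOB instead of YES so the unit is consumed once per job rather than per slot.
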
On Wed, Apr 02, 2014 at 04:37:53PM +0100, Tina Friedrich wrote:
> If I call them with '-l exclusive' they will be the only task on
> that node, though. I don't want that - I simply want only that for
> my job array, no more than one slot on a node should be used. (No
> need to block the rest of the node for other jobs).
>
> But I'm thinking some sort of consumable 'exclusive IO' complex
> might be the way forward. (Again, the lack of per-node consumables is a bit
> of a gotcha).
>
> I've figured out in the meantime that it can be achieved by
> submitting an AR for a one-slot-per-node PE, and then submitting the
> array into that. Not sure which option I favour, still.
>
> Tina
>
> On 02/04/14 16:04, Skylar Thompson wrote:
> >An exclusive host consumable is the right way to approach the problem. If
> >the task elements might be part of a parallel environment, then you'll want
> >to set the scaling to JOB as well.
> >
> >On Wed, Apr 02, 2014 at 03:39:03PM +0100, Tina Friedrich wrote:
> >>Hello,
> >>
> >>I'm sure this has been asked time and time before, only I can't find
> >>it (search foo failing, somehow).
> >>
> >>What's the best way to run an array job so that each task ends up on
> >>a different node (but they run concurrently)? I don't mind other
> >>jobs running on the nodes at the time, but only want one of mine
> >>(network IO intensive tasks, best use of the file system would be lots
> >>of them but spread as far and wide as they can).
> >>
> >>I've thought about introducing a consumable - apart from there being no
> >>node-level consumables at the moment - but am unsure whether that's
> >>the best way to handle this?
>
> --
> Tina Friedrich, Computer Systems Administrator, Diamond Light Source Ltd
> Diamond House, Harwell Science and Innovation Campus - 01235 77 8442

--
-- Skylar Thompson ([email protected])
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine
