Indeed. All our workstations are submit hosts and in the queue, so people can run jobs on their local host if they want.
We have a GUI tightly integrated with our environment for our staff to submit and monitor their jobs from (they don't have to touch a single job script).

On Thu, Nov 22, 2018 at 6:28 PM Tina Friedrich <tina.friedr...@it.ox.ac.uk> wrote:
> I really don't want to start a flaming discussion on this - but I don't
> think it's an unusual situation. In likewise roughly 15 years of doing
> this, I have never worked anywhere where people didn't have a GUI to
> submit from. It's always been a case of 'Want to use the cluster?
> We'll make your workstation a submit host.'
>
> I think it's a pretty standard way of handling things if you are an
> institute that runs its own (maybe small) cluster, especially if the
> workstations are also managed machines.
>
> Tina
>
> On 21/11/2018 23:26, Christopher Samuel wrote:
> > On 22/11/18 5:04 am, Mahmood Naderan wrote:
> >
> >> The idea is to have a job manager that finds the best node for a newly
> >> submitted job. If the user has to manually ssh to a node, why should
> >> one use Slurm or anything else?
> >
> > You are in a really, really unusual situation - in 15 years I've not
> > come across a situation before this where a user would have GUI access
> > to a system that can submit jobs directly to a cluster like you can.
> >
> > I'm not sure why Slurm has this restriction, but you could try starting
> > an xterm, changing your $DISPLAY to localhost:0, and seeing if you can
> > start an X11 application from there. You may need to add an xauth
> > cookie for localhost to get that going.
> >
> > If that works then (hopefully) you can use the same trick to fire up
> > jobs with X11 display forwarding.
> >
> > All the best,
> > Chris

--
Dr Stuart Midgley
sdm...@gmail.com
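For reference, Chris's $DISPLAY/xauth workaround from the quoted thread might be sketched roughly like this inside an interactive shell on the node. This is a hypothetical sketch, not a tested recipe: the display number (`:0`) and whether an xauth cookie is needed depend entirely on how X11 is set up at your site.

```shell
# Point X clients at a display on localhost (Chris's suggestion; the
# actual display number depends on your X server / forwarding setup):
export DISPLAY=localhost:0
echo "DISPLAY set to $DISPLAY"

# If the X server then refuses connections, add an xauth cookie for
# localhost; mcookie(1) generates a random MIT-MAGIC-COOKIE-1 value:
#   xauth add localhost:0 . "$(mcookie)"

# Finally, test with a simple X11 client to see if forwarding works:
#   xterm &
```

The xauth and xterm steps are left commented out above because they only make sense with a live X server; the point is just the order of operations Chris describes.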