BTW, currently I cannot run salloc from a compute node:

[mahmood@rocks7 ~]$ salloc
salloc: Granted job allocation 272
[mahmood@rocks7 ~]$ exit
exit
salloc: Relinquishing job allocation 272
[mahmood@rocks7 ~]$ ssh compute-0-2
Warning: untrusted X11 forwarding setup failed: xauth key data not generated
Last login: Wed Jan  2 10:24:35 2019 from rocks7.local
Rocks Compute Node
Rocks 7.0 (Manzanita)
Profile built 17:52 24-Dec-2018

Kickstarted 09:37 24-Dec-2018
[mahmood@compute-0-2 ~]$ salloc
salloc: error: Job submit/allocate failed: Access/permission denied
[mahmood@compute-0-2 ~]$
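
(Maybe the partition restricts which hosts may request allocations? I
guess something like "scontrol show partition RUBY | grep AllocNodes"
would show that, assuming RUBY is the right partition name, but that is
just a guess on my side.)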


Regards,
Mahmood




On Wed, Jan 2, 2019 at 6:54 PM Mahmood Naderan <mahmood...@gmail.com> wrote:

> I want to know whether there is any way to push the node selection onto
> Slurm, rather than it being a manual step done by the user.
> Currently, I have to manually ssh to a node and try to "allocate
> resources" using salloc.
>
>
> Regards,
> Mahmood
>
>
>
>
> On Wed, Jan 2, 2019 at 5:54 PM Henkel, Andreas <hen...@uni-mainz.de>
> wrote:
>
>> Hi,
>> As far as I understand, salloc makes the allocation but initiates the
>> shell (whatever SallocDefaultCommand specifies) on the node you called
>> salloc from. If you're looking for an interactive session you'll
>> probably have to use srun --pty xterm. This will allocate the resources
>> AND initiate a shell on one of the allocated nodes.
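>> For example, something like this (untested on my side, reusing the
>> options from your earlier salloc; adjust partition and account as
>> needed) should land the xterm on compute-0-2:
>>
>> srun --nodelist=compute-0-2 -n 1 -c 1 --mem=4G -p RUBY -A y4 --pty xterm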
>> Best
>> Andreas
>>
>> Am 02.01.2019 um 14:43 schrieb Mahmood Naderan <mahmood...@gmail.com>:
>>
>> Chris,
>> Can you explain why I cannot get a prompt on a specific node when I
>> have passed the node name to salloc?
>>
>> [mahmood@rocks7 ~]$ salloc
>> salloc: Granted job allocation 268
>> [mahmood@rocks7 ~]$ exit
>> exit
>> salloc: Relinquishing job allocation 268
>> [mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2
>> salloc: Granted job allocation 269
>> [mahmood@rocks7 ~]$ exit
>> exit
>> salloc: Relinquishing job allocation 269
>> [mahmood@rocks7 ~]$ grep SallocDefaultCommand /etc/slurm/slurm.conf
>> #SallocDefaultCommand = "xterm"
>> [mahmood@rocks7 ~]$
>>
>>
>>
>> As you can see, SallocDefaultCommand is commented out, so I expected to
>> be able to override the default command.
>>
>>
>> Regards,
>> Mahmood
>>
>>
>>
>>
>> On Sun, Dec 30, 2018 at 9:11 PM Mahmood Naderan <mahmood...@gmail.com>
>> wrote:
>>
>>> So, isn't it possible to override that "default"? I mean the target
>>> node. The FAQ page says it is possible to change the default command
>>> for salloc, but I didn't see your confirmation.
>>>
>>>
>>> I really have difficulties with interactive jobs that use X11, binary
>>> files, or bash scripts. For some of them srun doesn't work while salloc
>>> does. On the other hand, with srun I can choose a target node, while I
>>> can't do that with salloc.
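>>>
>>> (For the X11 case I would have expected something like "srun --x11
>>> --nodelist=compute-0-2 --pty bash" to work, assuming our Slurm is new
>>> enough (17.11+) to have the built-in --x11 option, but no luck so far.)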
>>>
>>> Has anybody faced such issues?
>>>
>>> On Sun, Dec 30, 2018, 20:15 Chris Samuel <ch...@csamuel.org> wrote:
>>>
>>>> On 30/12/18 7:16 am, Mahmood Naderan wrote:
>>>>
>>>> > Right...
>>>> > I also tried
>>>> >
>>>> > [mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2 -n 1 -c 1 --mem=4G -p RUBY -A y4
>>>> > salloc: Granted job allocation 199
>>>> > [mahmood@rocks7 ~]$
>>>> >
>>>> > I expected to see the compute-0-2 prompt. Is that normal?
>>>>
>>>> By default salloc gives you a shell on the same node as you ran it on,
>>>> with a job allocation that you can access via srun.
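>>>>
>>>> For instance (a sketch; the job ID is illustrative):
>>>>
>>>> [mahmood@rocks7 ~]$ salloc --nodelist=compute-0-2
>>>> salloc: Granted job allocation 300
>>>> [mahmood@rocks7 ~]$ srun hostname
>>>> compute-0-2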
>>>>
>>>> You can read more about interactive shells here:
>>>>
>>>> https://slurm.schedmd.com/faq.html#prompt
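>>>>
>>>> If I remember right, the FAQ's suggestion amounts to setting something
>>>> like this in slurm.conf, which makes salloc run its shell on the first
>>>> allocated node instead of the submit host:
>>>>
>>>> SallocDefaultCommand="srun -n1 -N1 --mem-per-cpu=0 --pty --preserve-env --mpi=none $SHELL"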
>>>>
>>>> All the best,
>>>> Chris
>>>> --
>>>>   Chris Samuel  :  http://www.csamuel.org/  :  Melbourne, VIC
>>>>
>>>>
