Hi,

On 29.06.2016 at 22:56, Jerome wrote:

> Dear all
> 
> Here we run a Rocks 6.2 cluster with SGE GE2011. I have configured a special 
> queue "express" on our cluster to run interactive jobs only. This queue is 
> limited to a runtime of 2 hours.
> When I run qrsh, the session lands on this queue, so that part works fine.
> But I see some strange behavior: I submit a batch job, and it runs on the 
> "express" queue, even though that queue is meant for interactive jobs only.
> Could someone help me with this issue?

Do the submitted batch jobs request "-now yes", and are they serial?

"Interactive" here is more like "immediate": qsub submits with "-now no" by 
default, while qrsh defaults to "-now yes". You can run an interactive job in a 
plain batch queue with "-now no", and likewise a batch job in an 
INTERACTIVE-only queue if you request "-now yes".
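
For example (the job commands below are just placeholders to illustrate the 
flags, not anything taken from your setup):

   qsub -now yes -b y sleep 600   # serial batch job, now eligible for an INTERACTIVE-only queue like express.q
   qrsh -now no sleep 600         # interactive session that may be dispatched to a BATCH-only queue like all.q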

Note that parallel jobs are unrelated to this setting: even with qtype NONE 
they will run in a queue as long as a PE is attached to it.
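
In your case express.q has "pe_list make mpi mpich orte thread", so a parallel 
job requesting one of those PEs could still be scheduled there. If you want to 
keep parallel jobs out of the express queue as well, one way (just a sketch, 
adjust to your policy) would be to detach the PEs from it:

   qconf -mattr queue pe_list NONE express.q

or edit the queue with "qconf -mq express.q" and set "pe_list NONE" there.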

-- Reuti


> The parameters of the two queues:
> 
> # qconf -sq express.q
> qname                 express.q
> hostlist              @allhosts
> seq_no                0
> load_thresholds       np_load_avg=1.75
> suspend_thresholds    NONE
> nsuspend              1
> suspend_interval      00:05:00
> priority              0
> min_cpu_interval      00:05:00
> processors            UNDEFINED
> qtype                 INTERACTIVE
> ckpt_list             NONE
> pe_list               make mpi mpich orte thread
> rerun                 FALSE
> slots                 1,[compute-0-0.local=0],[compute-0-1.local=0], \
>                      [compute-0-2.local=0],[compute-0-3.local=0], \
>                      [compute-0-4.local=8]
> tmpdir                /tmp
> shell                 /bin/csh
> prolog                NONE
> epilog                NONE
> shell_start_mode      unix_behavior
> starter_method        NONE
> suspend_method        NONE
> resume_method         NONE
> terminate_method      NONE
> notify                00:00:60
> owner_list            NONE
> user_lists            NONE
> xuser_lists           NONE
> subordinate_list      NONE
> complex_values        NONE
> projects              NONE
> xprojects             NONE
> calendar              NONE
> initial_state         default
> s_rt                  INFINITY
> h_rt                  02:00:00
> s_cpu                 INFINITY
> h_cpu                 INFINITY
> s_fsize               INFINITY
> h_fsize               INFINITY
> s_data                INFINITY
> h_data                INFINITY
> s_stack               INFINITY
> h_stack               INFINITY
> s_core                INFINITY
> h_core                INFINITY
> s_rss                 INFINITY
> h_rss                 INFINITY
> s_vmem                INFINITY
> h_vmem                INFINITY
> 
> # qconf -sq all.q
> qname                 all.q
> hostlist              @allhosts
> seq_no                0
> load_thresholds       np_load_avg=1.75
> suspend_thresholds    NONE
> nsuspend              1
> suspend_interval      00:05:00
> priority              0
> min_cpu_interval      00:05:00
> processors            UNDEFINED
> qtype                 BATCH
> ckpt_list             NONE
> pe_list               make mpi mpich orte thread
> rerun                 FALSE
> slots                 1,[compute-0-0.local=64],[compute-0-1.local=64], \
>                      [compute-0-2.local=64],[compute-0-3.local=64], \
>                      [compute-0-4.local=64],[compute-2-0.local=64]
> tmpdir                /tmp
> shell                 /bin/csh
> prolog                NONE
> epilog                NONE
> shell_start_mode      unix_behavior
> starter_method        NONE
> suspend_method        NONE
> resume_method         NONE
> terminate_method      NONE
> notify                00:00:60
> owner_list            NONE
> user_lists            NONE
> xuser_lists           NONE
> subordinate_list      NONE
> complex_values        NONE
> projects              NONE
> xprojects             NONE
> calendar              NONE
> initial_state         default
> s_rt                  INFINITY
> h_rt                  24:00:00
> s_cpu                 INFINITY
> h_cpu                 INFINITY
> s_fsize               INFINITY
> h_fsize               INFINITY
> s_data                INFINITY
> h_data                INFINITY
> s_stack               INFINITY
> h_stack               INFINITY
> s_core                INFINITY
> h_core                INFINITY
> s_rss                 INFINITY
> h_rss                 INFINITY
> s_vmem                INFINITY
> h_vmem                INFINITY
> 
> 
> Regards
> -- 
> -- Jérôme
> Predictions are difficult, especially when they concern the future.
>       (Pierre Dac)


_______________________________________________
users mailing list
users@gridengine.org
https://gridengine.org/mailman/listinfo/users
