The error is gone! qsub is now working for small scripts!

The following job has now been pending for a couple of minutes. Is it stuck?
Can I do something about it?
-------------------------------------------------------------------------------------------------------------------------
abhinav@abhnav:~$ segway --num-labels=4 train test.genomedata traindir
 traindir/observations/chr21.0000.float32 (9411193, 9595548)
____ PROGRAM ENDED SUCCESSFULLY WITH STATUS 0 AT Thursday March 20 2014, 06:46:28 IST ____
queued 44: emt0.0.0.traindir.430cdb2cafcd11e38f341803736f5e43 (mem_requested=2048M h_vmem=2048M h_stack=8M)
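
Is checking the scheduler's reason the right next step? For example (job ID 44 is taken from the queued line above, and schedd_job_info is set to true in the scheduler configuration quoted below, so the scheduler should report why the job is still waiting):

$ qstat -j 44    # the "scheduling info:" lines at the bottom give the scheduler's reason
$ qstat -f       # check the "states" column of New@abhnav for flags such as E, d or u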

On Thu, Mar 20, 2014 at 5:48 AM, Abhinav Mittal
<[email protected]> wrote:
> hostname              global
> load_scaling          NONE
> complex_values        NONE
> load_values           NONE
> processors            0
> user_lists            NONE
> xuser_lists           NONE
> projects              NONE
> xprojects             NONE
> usage_scaling         NONE
> report_variables      NONE
>
> On Thu, Mar 20, 2014 at 5:46 AM, Reuti <[email protected]> wrote:
>> It's becoming a riddle:
>>
>> $ qconf -se global
>>
>>
>> On 20.03.2014 at 00:48, Abhinav Mittal wrote:
>>
>>> abhinav@abhnav:~$ qconf -se abhnav
>>> hostname              abhnav
>>> load_scaling          NONE
>>> complex_values        mem_requested=1.801G
>>> load_values           arch=lx26-amd64,num_proc=4,mem_total=1844.480469M, \
>>>                      swap_total=1954.996094M,virtual_total=3799.476562M, \
>>>                      load_avg=0.240000,load_short=0.070000, \
>>>                      load_medium=0.240000,load_long=0.200000, \
>>>                      mem_free=753.285156M,swap_free=1646.085938M, \
>>>                      virtual_free=2399.371094M,mem_used=1091.195312M, \
>>>                      swap_used=308.910156M,virtual_used=1400.105469M, \
>>>                      cpu=1.300000,m_topology=NONE,m_topology_inuse=NONE, \
>>>                      m_socket=0,m_core=0,np_load_avg=0.060000, \
>>>                      np_load_short=0.017500,np_load_medium=0.060000, \
>>>                      np_load_long=0.050000
>>> processors            4
>>> user_lists            NONE
>>> xuser_lists           NONE
>>> projects              NONE
>>> xprojects             NONE
>>> usage_scaling         NONE
>>> report_variables      NONE
>>>
>>> On Thu, Mar 20, 2014 at 12:56 AM, Reuti <[email protected]> wrote:
>>>> Maybe the machine is blocked by an ACL too:
>>>>
>>>> $ qconf -se abhnav
>>>>
>>>> -- Reuti
>>>>
>>>> On 19.03.2014 at 18:10, Abhinav Mittal wrote:
>>>>
>>>>> algorithm                         default
>>>>> schedule_interval                 0:0:15
>>>>> maxujobs                          0
>>>>> queue_sort_method                 load
>>>>> job_load_adjustments              np_load_avg=0.50
>>>>> load_adjustment_decay_time        0:7:30
>>>>> load_formula                      np_load_avg
>>>>> schedd_job_info                   true
>>>>> flush_submit_sec                  0
>>>>> flush_finish_sec                  0
>>>>> params                            none
>>>>> reprioritize_interval             0:0:0
>>>>> halftime                          168
>>>>> usage_weight_list                 cpu=1.000000,mem=0.000000,io=0.000000
>>>>> compensation_factor               5.000000
>>>>> weight_user                       0.250000
>>>>> weight_project                    0.250000
>>>>> weight_department                 0.250000
>>>>> weight_job                        0.250000
>>>>> weight_tickets_functional         0
>>>>> weight_tickets_share              0
>>>>> share_override_tickets            TRUE
>>>>> share_functional_shares           TRUE
>>>>> max_functional_jobs_to_schedule   200
>>>>> report_pjob_tickets               TRUE
>>>>> max_pending_tasks_per_job         50
>>>>> halflife_decay_list               none
>>>>> policy_hierarchy                  OFS
>>>>> weight_ticket                     0.500000
>>>>> weight_waiting_time               0.278000
>>>>> weight_deadline                   3600000.000000
>>>>> weight_urgency                    0.500000
>>>>> weight_priority                   0.000000
>>>>> max_reservation                   0
>>>>> default_duration                  INFINITY
>>>>>
>>>>> On Wed, Mar 19, 2014 at 10:27 PM, Reuti <[email protected]> 
>>>>> wrote:
>>>>>> Did you change the queue setting?
>>>>>>
>>>>>> $ qconf -sq New
>>>>>>
>>>>>> Another place to look could be:
>>>>>>
>>>>>> $ qconf -ssconf
>>>>>>
>>>>>> -- Reuti
>>>>>>
>>>>>>
>>>>>> On 19.03.2014 at 17:54, Abhinav Mittal wrote:
>>>>>>
>>>>>>> Still not working:
>>>>>>>
>>>>>>> abhinav@abhnav:~$ qconf -sql
>>>>>>> New
>>>>>>> abhinav@abhnav:~$ qconf -sel
>>>>>>> abhnav
>>>>>>> localhost
>>>>>>> abhinav@abhnav:~$ qconf -sconf
>>>>>>> #global:
>>>>>>> execd_spool_dir              /var/spool/gridengine/execd
>>>>>>> mailer                       /usr/bin/mail
>>>>>>> xterm                        /usr/bin/xterm
>>>>>>> load_sensor                  none
>>>>>>> prolog                       none
>>>>>>> epilog                       none
>>>>>>> shell_start_mode             posix_compliant
>>>>>>> login_shells                 bash,sh,ksh,csh,tcsh
>>>>>>> min_uid                      0
>>>>>>> min_gid                      0
>>>>>>> user_lists                   none
>>>>>>> xuser_lists                  none
>>>>>>> projects                     none
>>>>>>> xprojects                    none
>>>>>>> enforce_project              false
>>>>>>> enforce_user                 auto
>>>>>>> load_report_time             00:00:40
>>>>>>> max_unheard                  00:05:00
>>>>>>> reschedule_unknown           00:00:00
>>>>>>> loglevel                     log_warning
>>>>>>> administrator_mail           root
>>>>>>> set_token_cmd                none
>>>>>>> pag_cmd                      none
>>>>>>> token_extend_time            none
>>>>>>> shepherd_cmd                 none
>>>>>>> qmaster_params               none
>>>>>>> execd_params                 none
>>>>>>> reporting_params             accounting=true reporting=false \
>>>>>>>                              flush_time=00:00:15 joblog=false sharelog=00:00:00
>>>>>>> finished_jobs                100
>>>>>>> gid_range                    65400-65500
>>>>>>> max_aj_instances             2000
>>>>>>> max_aj_tasks                 75000
>>>>>>> max_u_jobs                   0
>>>>>>> max_jobs                     0
>>>>>>> auto_user_oticket            0
>>>>>>> auto_user_fshare             0
>>>>>>> auto_user_default_project    none
>>>>>>> auto_user_delete_time        86400
>>>>>>> delegated_file_staging       false
>>>>>>> reprioritize                 0
>>>>>>> rlogin_daemon                /usr/sbin/sshd -i
>>>>>>> rlogin_command               /usr/bin/ssh
>>>>>>> qlogin_daemon                /usr/sbin/sshd -i
>>>>>>> qlogin_command               /usr/share/gridengine/qlogin-wrapper
>>>>>>> rsh_daemon                   /usr/sbin/sshd -i
>>>>>>> rsh_command                  /usr/bin/ssh
>>>>>>> jsv_url                      none
>>>>>>> jsv_allowed_mod              ac,h,i,e,o,j,M,N,p,w
>>>>>>>
>>>>>>> abhinav@abhnav:~$ qsub script.sh
>>>>>>> Unable to run job: warning: abhinav your job is not allowed to run in any queue
>>>>>>> Your job 29 ("script.sh") has been submitted.
>>>>>>> Exiting.
>>>>>>>
>>>>>>> On Wed, Mar 19, 2014 at 10:11 PM, Reuti <[email protected]> 
>>>>>>> wrote:
>>>>>>>> On 19.03.2014 at 16:34, Abhinav Mittal wrote:
>>>>>>>>
>>>>>>>>> YES abhnav and abhinav are different!
>>>>>>>>
>>>>>>>> Ok.
>>>>>>>>
>>>>>>>>> On Wed, Mar 19, 2014 at 9:00 PM, Reuti <[email protected]> 
>>>>>>>>> wrote:
>>>>>>>>>>>>>>>> <snip>
>>>>>>>>>>>>>>>> user_lists            arusers
>>>>>>>>
>>>>>>>> Why is "arusers" set here? By default this is empty.
>>>>>>>>
>>>>>>>> $ qconf -su arusers
>>>>>>>>
>>>>>>>> Either set it to NONE or attach yourself to the list.
>>>>>>>>
>>>>>>>> -- Reuti
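
For example, either of the following should do what Reuti describes (a sketch; "New" is the queue name shown earlier and "abhinav" is the submitting user):

$ qconf -mattr queue user_lists NONE New   # clear the ACL on the queue, or
$ qconf -au abhinav arusers                # add the user to the arusers access list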
>>>>>>>>
>>>>>>>>
>>>>>>>>>>>>>>>> xuser_lists           NONE
>>>>>>>>>>>>>>>> subordinate_list      NONE
>>>>>>>>>>>>>>>> complex_values        NONE
>>>>>>>>>>>>>>>> projects              NONE
>>>>>>>>>>>>>>>> xprojects             NONE
>>>>>>>>>>>>>>>> calendar              NONE
>>>>>>>>>>>>>>>> initial_state         default
>>>>>>>>>>>>>>>> s_rt                  INFINITY
>>>>>>>>>>>>>>>> h_rt                  INFINITY
>>>>>>>>>>>>>>>> s_cpu                 INFINITY
>>>>>>>>>>>>>>>> h_cpu                 INFINITY
>>>>>>>>>>>>>>>> s_fsize               INFINITY
>>>>>>>>>>>>>>>> h_fsize               INFINITY
>>>>>>>>>>>>>>>> s_data                INFINITY
>>>>>>>>>>>>>>>> h_data                INFINITY
>>>>>>>>>>>>>>>> s_stack               INFINITY
>>>>>>>>>>>>>>>> h_stack               INFINITY
>>>>>>>>>>>>>>>> s_core                INFINITY
>>>>>>>>>>>>>>>> h_core                INFINITY
>>>>>>>>>>>>>>>> s_rss                 INFINITY
>>>>>>>>>>>>>>>> h_rss                 INFINITY
>>>>>>>>>>>>>>>> s_vmem                INFINITY
>>>>>>>>>>>>>>>> h_vmem                INFINITY
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Wed, Mar 19, 2014 at 6:54 PM, Reuti 
>>>>>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>>>>>> On 19.03.2014 at 14:02, Abhinav Mittal wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> --------------------------------------------------------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qconf -sel
>>>>>>>>>>>>>>>>>> abhnav
>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qconf -sql
>>>>>>>>>>>>>>>>>> New
>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qconf -sconf
>>>>>>>>>>>>>>>>>> #global:
>>>>>>>>>>>>>>>>>> execd_spool_dir              /var/spool/gridengine/execd
>>>>>>>>>>>>>>>>>> mailer                       /usr/bin/mail
>>>>>>>>>>>>>>>>>> xterm                        /usr/bin/xterm
>>>>>>>>>>>>>>>>>> load_sensor                  none
>>>>>>>>>>>>>>>>>> prolog                       none
>>>>>>>>>>>>>>>>>> epilog                       none
>>>>>>>>>>>>>>>>>> shell_start_mode             posix_compliant
>>>>>>>>>>>>>>>>>> login_shells                 bash,sh,ksh,csh,tcsh
>>>>>>>>>>>>>>>>>> min_uid                      0
>>>>>>>>>>>>>>>>>> min_gid                      0
>>>>>>>>>>>>>>>>>> user_lists                   none
>>>>>>>>>>>>>>>>>> xuser_lists                  none
>>>>>>>>>>>>>>>>>> projects                     none
>>>>>>>>>>>>>>>>>> xprojects                    none
>>>>>>>>>>>>>>>>>> enforce_project              false
>>>>>>>>>>>>>>>>>> enforce_user                 auto
>>>>>>>>>>>>>>>>>> load_report_time             00:00:40
>>>>>>>>>>>>>>>>>> max_unheard                  00:05:00
>>>>>>>>>>>>>>>>>> reschedule_unknown           00:00:00
>>>>>>>>>>>>>>>>>> loglevel                     log_warning
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> I suggest changing this to:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> loglevel log_info
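
For example (a sketch: this opens the global cluster configuration in $EDITOR, where the loglevel line can be edited by hand):

$ qconf -mconf global
  ... change "loglevel  log_warning" to "loglevel  log_info" and save ...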
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> administrator_mail           root
>>>>>>>>>>>>>>>>>> set_token_cmd                none
>>>>>>>>>>>>>>>>>> pag_cmd                      none
>>>>>>>>>>>>>>>>>> token_extend_time            none
>>>>>>>>>>>>>>>>>> shepherd_cmd                 none
>>>>>>>>>>>>>>>>>> qmaster_params               none
>>>>>>>>>>>>>>>>>> execd_params                 none
>>>>>>>>>>>>>>>>>> reporting_params             accounting=true reporting=false \
>>>>>>>>>>>>>>>>>>                              flush_time=00:00:15 joblog=false sharelog=00:00:00
>>>>>>>>>>>>>>>>>> finished_jobs                100
>>>>>>>>>>>>>>>>>> gid_range                    65400-65500
>>>>>>>>>>>>>>>>>> max_aj_instances             2000
>>>>>>>>>>>>>>>>>> max_aj_tasks                 75000
>>>>>>>>>>>>>>>>>> max_u_jobs                   0
>>>>>>>>>>>>>>>>>> max_jobs                     0
>>>>>>>>>>>>>>>>>> auto_user_oticket            0
>>>>>>>>>>>>>>>>>> auto_user_fshare             0
>>>>>>>>>>>>>>>>>> auto_user_default_project    none
>>>>>>>>>>>>>>>>>> auto_user_delete_time        86400
>>>>>>>>>>>>>>>>>> delegated_file_staging       false
>>>>>>>>>>>>>>>>>> reprioritize                 0
>>>>>>>>>>>>>>>>>> rlogin_daemon                /usr/sbin/sshd -i
>>>>>>>>>>>>>>>>>> rlogin_command               /usr/bin/ssh
>>>>>>>>>>>>>>>>>> qlogin_daemon                /usr/sbin/sshd -i
>>>>>>>>>>>>>>>>>> qlogin_command               /usr/share/gridengine/qlogin-wrapper
>>>>>>>>>>>>>>>>>> rsh_daemon                   /usr/sbin/sshd -i
>>>>>>>>>>>>>>>>>> rsh_command                  /usr/bin/ssh
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Did you set this up? The default is to use the builtin tools 
>>>>>>>>>>>>>>>>> for the commands above.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> jsv_url                      none
>>>>>>>>>>>>>>>>>> jsv_allowed_mod              ac,h,i,e,o,j,M,N,p,w
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Fine.
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qstat -f
>>>>>>>>>>>>>>>>>> queuename                      qtype resv/used/tot. load_avg arch          states
>>>>>>>>>>>>>>>>>> ---------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>> New@abhnav                     BIP   0/0/1          0.68     lx26-amd64
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> ############################################################################
>>>>>>>>>>>>>>>>>>  - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS - PENDING JOBS
>>>>>>>>>>>>>>>>>> ############################################################################
>>>>>>>>>>>>>>>>>>       3 0.75000 emt0.0.0.t abhinav      qw    03/16/2014 00:54:30     1
>>>>>>>>>>>>>>>>>>       5 0.75000 emt0.0.0.t abhinav      qw    03/16/2014 07:44:02     1
>>>>>>>>>>>>>>>>>>       6 0.75000 emt0.0.0.t abhinav      qw    03/16/2014 07:46:05     1
>>>>>>>>>>>>>>>>>>       7 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:16:49     1
>>>>>>>>>>>>>>>>>>       9 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:23:41     1
>>>>>>>>>>>>>>>>>>      10 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:25:11     1
>>>>>>>>>>>>>>>>>>      11 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:27:40     1
>>>>>>>>>>>>>>>>>>      12 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 22:47:21     1
>>>>>>>>>>>>>>>>>>      13 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 23:14:14     1
>>>>>>>>>>>>>>>>>>      14 0.75000 emt0.0.0.t abhinav      qw    03/18/2014 23:14:48     1
>>>>>>>>>>>>>>>>>>      15 0.75000 emt0.0.0.t abhinav      qw    03/19/2014 16:06:01     1
>>>>>>>>>>>>>>>>>>      16 0.25000 script.sh  abhinav      qw    03/19/2014 17:03:34     1
>>>>>>>>>>>>>>>>>>      17 0.25000 script.sh  abhinav      qw    03/19/2014 17:04:14     1
>>>>>>>>>>>>>>>>>>      18 0.25000 script.sh  abhinav      qw    03/19/2014 17:04:54     1
>>>>>>>>>>>>>>>>>>      19 0.25000 script.sh  abhinav      qw    03/19/2014 17:07:08     1
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> So, what does the queue look like:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> $ qconf -sq New
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> -- Reuti
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ hostname
>>>>>>>>>>>>>>>>>> abhnav
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Wed, Mar 19, 2014 at 5:24 PM, Reuti 
>>>>>>>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On 19.03.2014 at 12:37, Abhinav Mittal wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Not working
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qconf -ss
>>>>>>>>>>>>>>>>>>>> abhnav
>>>>>>>>>>>>>>>>>>>> localhost
>>>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ hostname
>>>>>>>>>>>>>>>>>>>> abhnav
>>>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qsub script.sh
>>>>>>>>>>>>>>>>>>>> Unable to run job: warning: abhinav your job is not allowed to run in any queue
>>>>>>>>>>>>>>>>>>>> Your job 18 ("script.sh") has been submitted.
>>>>>>>>>>>>>>>>>>>> Exiting.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> The same happens with qsub -b y.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Wed, Mar 19, 2014 at 4:30 PM, Reuti 
>>>>>>>>>>>>>>>>>>>> <[email protected]> wrote:
>>>>>>>>>>>>>>>>>>>>> Hi,
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On 19.03.2014 at 11:45, Abhinav Mittal wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> I am trying to run a piece of software called "Segway"
>>>>>>>>>>>>>>>>>>>>>> (http://noble.gs.washington.edu/proj/segway/doc/1.1.0/segway.html)
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Before looking into any application-specific problems: does a simple script
>>>>>>>>>>>>>>>>>>>>> echoing "Hello World" work? Can you submit a binary with `qsub -b y hostname` too?
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> -- Reuti
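
For example, a minimal test along the lines Reuti suggests (a sketch; the script name is arbitrary, and since shell_start_mode is posix_compliant here, the queue's shell setting rather than the #! line should decide the interpreter):

$ cat > hello.sh <<'EOF'
#!/bin/sh
echo "Hello World from $(hostname)"
EOF
$ qsub -cwd hello.sh
$ qsub -b y /bin/hostname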
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> and I am getting the error "your job is not allowed to run in any queue".
>>>>>>>>>>>>>>>>>>>>>> Submit hosts: localhost, abhnav
>>>>>>>>>>>>>>>>>>>>>> Hostname: abhnav
>>>>>>>>>>>>>>>>>>>>>> I am still getting this error.
>>>>>>>>>>>>>>>>>>>>>> Please help.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> -----------------------------------------------------------------------------------------------------------------
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ segway --num-labels=4 train test.genomedata traindir
>>>>>>>>>>>>>>>>>>>>>> traindir/observations/chr21.0000.float32 (9411193, 9595548)
>>>>>>>>>>>>>>>>>>>>>> ____ PROGRAM ENDED SUCCESSFULLY WITH STATUS 0 AT Wednesday March 19 2014, 16:06:01 IST ____
>>>>>>>>>>>>>>>>>>>>>> Traceback (most recent call last):
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/bin/segway", line 9, in <module>
>>>>>>>>>>>>>>>>>>>>>>     load_entry_point('segway==1.1.0', 'console_scripts', 'segway')()
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 3592, in main
>>>>>>>>>>>>>>>>>>>>>>     return runner()
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 3429, in __call__
>>>>>>>>>>>>>>>>>>>>>>     self.run(*args, **kwargs)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 3407, in run
>>>>>>>>>>>>>>>>>>>>>>     self.run_train()
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 3038, in run_train
>>>>>>>>>>>>>>>>>>>>>>     instance_params = run_train_func(num_segs_range)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 3056, in run_train_singlethread
>>>>>>>>>>>>>>>>>>>>>>     res = [self.run_train_instance()]
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 2937, in run_train_instance
>>>>>>>>>>>>>>>>>>>>>>     self.run_train_round(instance_index, round_index, **kwargs)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 2899, in run_train_round
>>>>>>>>>>>>>>>>>>>>>>     round_index, **kwargs)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/run.py", line 2770, in queue_train_parallel
>>>>>>>>>>>>>>>>>>>>>>     res.queue(restartable_job)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/cluster/__init__.py", line 174, in queue
>>>>>>>>>>>>>>>>>>>>>>     self._queue_unconditional(restartable_job)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/cluster/__init__.py", line 164, in _queue_unconditional
>>>>>>>>>>>>>>>>>>>>>>     jobid = restartable_job.run()
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/segway/cluster/__init__.py", line 116, in run
>>>>>>>>>>>>>>>>>>>>>>     res = self.session.runJob(job_template)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/drmaa-0.7.6-py2.7.egg/drmaa/session.py", line 314, in runJob
>>>>>>>>>>>>>>>>>>>>>>     c(drmaa_run_job, jid, sizeof(jid), jobTemplate)
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/drmaa-0.7.6-py2.7.egg/drmaa/helpers.py", line 299, in c
>>>>>>>>>>>>>>>>>>>>>>     return f(*(args + (error_buffer, sizeof(error_buffer))))
>>>>>>>>>>>>>>>>>>>>>>   File "/home/abhinav/arch/Linux-x86_64/lib/python2.7/drmaa-0.7.6-py2.7.egg/drmaa/errors.py", line 151, in error_check
>>>>>>>>>>>>>>>>>>>>>>     raise _ERRORS[code - 1](error_string)
>>>>>>>>>>>>>>>>>>>>>> drmaa.errors.DeniedByDrmException: code 17: warning: abhinav your job is not allowed to run in any queue
>>>>>>>>>>>>>>>>>>>>>> Your job 15 ("emt0.0.0.traindir.43308e4eaf5211e3a4741803736f5e43") has been submitted
>>>>>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ hostname
>>>>>>>>>>>>>>>>>>>>>> abhnav
>>>>>>>>>>>>>>>>>>>>>> abhinav@abhnav:~$ qconf -ss
>>>>>>>>>>>>>>>>>>>>>> abhnav
>>>>>>>>>>>>>>>>>>>>>> localhost
_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
