Hi,

I am getting the following error with my JSV script; /home/user1 is an NFS
mount point. Is there anything wrong with my settings? It is not allowing
qrsh/qlogin to any interactive-enabled queue:

got no response from JSV script "/home/user1/jsv_beta.sh"

cat /home/user1/.sge_request

-jsv /home/user1/jsv_beta.sh
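A "got no response from JSV script" error usually means the client (or sge_qmaster) could not start the script, or the script never answered on stdout. Since the script lives on an NFS mount, it is worth confirming it is readable and executable from the submit host, and that it speaks the JSV protocol when driven by hand. As a stand-alone illustration (the stub below is a hypothetical minimal responder, not the real jsv_include.sh):

```shell
# A JSV talks a line-based protocol on stdin/stdout: "START" must be
# answered with "STARTED", and after "BEGIN" a RESULT line is expected.
# This stub only mimics that handshake for demonstration purposes.
cat > /tmp/jsv_stub.sh <<'EOF'
#!/bin/sh
while read cmd arg; do
        case "$cmd" in
                START) echo "STARTED" ;;
                BEGIN) echo "RESULT STATE ACCEPT"; exit 0 ;;
        esac
done
EOF
chmod +x /tmp/jsv_stub.sh

# Drive the protocol by hand, the way the real script can also be tested:
printf 'START\nBEGIN\n' | /tmp/jsv_stub.sh
```

Running the real /home/user1/jsv_beta.sh the same way (and checking `ls -l` for the execute bit) should show quickly whether it answers at all.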


The content of the JSV script:

#!/bin/sh

jsv_on_start() {
        return
}

jsv_on_verify() {
        # Check whether a hard resource (e.g. h_vmem, h_rt) was requested.
        l_hvmem_requested=$(jsv_sub_is_param l_hard h_vmem)

        # Check whether regression.q was requested as a hard queue.
        q_batch_requested=$(jsv_sub_is_param q_hard regression.q)

        # Jobs bound for regression.q must request h_vmem explicitly.
        if [ "${q_batch_requested}" = "true" ] && [ "${l_hvmem_requested}" != "true" ]; then
                jsv_reject "For regression queue jobs, please specify -l h_vmem=xxG. Your job has been rejected."
                return
        fi

        # Every code path must answer, otherwise the client gets no result.
        jsv_accept "Job is accepted"
}

. ${SGE_ROOT}/util/resources/jsv/jsv_include.sh

jsv_main
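With this JSV active via ~/.sge_request, the intended behavior can be sketched with submissions like these (the queue name is the one from the script; the memory size and job script are just example values):

```shell
# Should be rejected: hard request for regression.q without an h_vmem limit.
qsub -q regression.q my_job.sh

# Should be accepted: regression.q together with a hard h_vmem request.
qsub -q regression.q -l h_vmem=4G my_job.sh
```

The same applies to qrsh/qlogin, since ~/.sge_request is read by those clients as well.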

Regards
PVK


On Mon, May 27, 2013 at 3:08 PM, Reuti <[email protected]> wrote:

> Am 27.05.2013 um 10:44 schrieb Vamsi Krishna:
>
> > qrsh -l queue3 -l h_vmem=6G does not work, while: qrsh -l queue3 -l h_vmem=1G works?
>
> "-l queue3" or "-q queue3"? The former would request a boolean complex
> AFAICS.
>
>
> > Yes, both are working after some time; the grid took a while to read
> the configuration.
> >
> > How is your h_vmem=64G set, do you get it via a load sensor or is it
> set up for each execution host?
> > Yes, it is set for the execd hosts using qconf -me nodeA (complex_values),
> equal to the physical memory size.
> >
> > As the below configuration is impacting other queues:
> > h_vmem                         h_vmem                     MEMORY      <=      YES         YES        0        0
> >
> > I would like to restrict h_vmem to queue3 only; I don't want to use
> complex_values for other queues like queue1 and queue2, so I modified it to
> the default settings:
> > h_vmem                         h_vmem                     MEMORY      <=      YES         NO         0        0
> >
> > Is there a way to configure this at the queue level, only for queue3?
>
> Usually you specify resource requests and SGE will select an appropriate
> queue for your job. Requesting a queue directly is unusual.
>
> Anyway: to get it with different values on a queue level you will need a
> JSV (job submission verifier) requesting the intended value for each type
> of queue.
>
> -- Reuti
>
>
> > Regards
> > PVK
> >
> >
> >
> >
> > On Mon, May 27, 2013 at 1:54 PM, Marco Donauer <[email protected]>
> wrote:
> > Hi,
> >
> > qrsh -l queue3 -l h_vmem=6G does not work, while: qrsh -l queue3 -l h_vmem=1G works?
> > The message: Your "qrsh" request could not be scheduled, try again
> later. says that one of your requested resources is currently not available.
> > How is your h_vmem=64G set, do you get it via a load sensor or is it
> set up for each execution host?
> >
> > libsepol.so.1 is not a gridengine library. There are many pages
> available regarding this issue. It seems to be RH related.
> >
> >
> > Regards,
> > Marco
> >
> >
> >
> >
> >
> > On 05/27/2013 06:32 AM, Vamsi Krishna wrote:
> >> Yes, it is there, and it is accepting now; it took some time to pick up
> the settings.
> >> qconf -sc | grep h_vmem
> >> h_vmem                         h_vmem                     MEMORY      <=      YES         YES        0        0
> >>
> >> qconf -se nodeA | grep h_vmem
> >> h_vmem=64G
> >>
> >> What is the difference between a default of 0 and a default of 1G?
> >> With the above configuration (requestable YES), I cannot submit a job
> without -l h_vmem; I get the following error when I submit a job using
> qrsh -l queue3:
> >>
> >> "hostname: error while loading shared libraries: libsepol.so.1: failed
> to map segment from shared object: Cannot allocate memory" (I think this
> should only happen if h_vmem is configured as FORCED.)
> >>
> >> The job is successful when I use -l h_vmem, e.g. qrsh -l queue3 -l h_vmem=1G.
> >>
> >> Regards
> >> PVK
> >>
> >>
> >>
> >>
> >>
> >> On Mon, May 27, 2013 at 12:52 AM, Marco Donauer <[email protected]>
> wrote:
> >> Hi Vamsi,
> >> is the resource queue3 available which you are requesting in your qrsh
> command?
> >>
> >> Regards
> >> Marco
> >>
> >>
> >>
> >> Vamsi Krishna <[email protected]> schrieb:
> >> qconf -sq queue3.q | grep h_vmem
> >> h_vmem                INFINITY
> >>
> >>
> >> On Mon, May 27, 2013 at 12:13 AM, Vamsi Krishna <[email protected]>
> wrote:
> >> Hi,
> >>
> >> I have three queues: queue1.q, queue2.q and queue3.q. nodeA is part
> of queue3.q. h_vmem is configured with the following settings to keep users
> from overcommitting memory, but the job is never scheduled, in either
> interactive or batch mode.
> >>
> >> qconf -sc | grep h_vmem
> >> h_vmem                         h_vmem                     MEMORY      <=      YES         YES        0        0
> >>
> >> qconf -se nodeA | grep h_vmem
> >> h_vmem=64G
> >>
> >> qrsh -l queue3 -l h_vmem=6G
> >> Your "qrsh" request could not be scheduled, try again later.
> >>
> >> Regards
> >> PVK
> >>
> >>
> >> users mailing list
> >> [email protected]
> >> https://gridengine.org/mailman/listinfo/users
> >>
> >> --
> >> sent with K9-Mail from my Android mobile.
> >>
> >
> >
> > --
> > Marco Donauer | Senior Software Engineer - Customer Support
> > Univa Corporation | The Data Center Optimization Company
> > E-Mail: [email protected] | Phone: +1.512.782.4453 | Mobile:
> +49.151.466.396.92
> > German landline: +49.846.294.2944
> > Twitter: https://twitter.com/mdonauer
> >
> >
>
>
