qrsh -l queue3 -l h_vmem=6G does not work, while qrsh -l queue3 -l
h_vmem=1G works?
Yes, both are working now after some time; Grid Engine took a while to read
the configuration.

How is your h_vmem=64G set, do you get it via a load sensor or is it set up
for each execution host?
Yes, it is set per execution host, via complex_values in qconf -me nodeA, and
it is equal to the physical memory size.
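
For reference, the relevant part of the nodeA host configuration looks roughly
like this (a sketch; the 64G figure is the value shown by qconf -se nodeA
further down in the thread, and any other complex_values entries on the host
would share the same line):

qconf -se nodeA | grep complex_values
complex_values        h_vmem=64G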

As the configuration below is impacting the other queues:
h_vmem    h_vmem    MEMORY    <=    YES    YES    0    0

I would like to restrict h_vmem only to queue3; I don't want to use
complex_values for the other queues like queue1 and queue2, so I changed it
back to the default settings:
h_vmem    h_vmem    MEMORY    <=    YES    NO    0    0
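
For anyone following the thread, the columns in those qconf -sc lines are
name, shortcut, type, relop, requestable, consumable, default and urgency, so
the only difference between the two settings is the consumable flag; a rough
summary of the two states:

# consumable=YES: h_vmem requests are accounted against the host's complex_values (h_vmem=64G on nodeA)
h_vmem    h_vmem    MEMORY    <=    YES    YES    0    0
# consumable=NO: h_vmem acts as a plain per-job limit, with no per-host accounting
h_vmem    h_vmem    MEMORY    <=    YES    NO     0    0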

Is there a way to configure this at the queue level, for queue3 only?
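
Roughly what I have in mind, as a hypothetical sketch only (assuming the
h_vmem entry in the queue configuration, which qconf -sq queue3.q currently
reports as INFINITY, can act as such a per-queue limit):

# edit only queue3.q, leaving queue1.q and queue2.q untouched
qconf -mq queue3.q
h_vmem                64G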

Regards
PVK




On Mon, May 27, 2013 at 1:54 PM, Marco Donauer <[email protected]> wrote:

>  Hi,
>
> qrsh -l queue3 -l h_vmem=6G does not work, while: qrsh -l queue3 -l
> h_vmem=1G works?
> The message: Your "qrsh" request could not be scheduled, try again later.
> says that one of your requested resources is currently not available.
> How is your h_vmem=64G set, do you get it via a load sensor or is it set up
> for each execution host?
>
> libsepol.so.1 is not a gridengine library. There are many pages available
> regarding this issue. It seems to be RH related.
>
>
> Regards,
> Marco
>
>
>
>
>
> On 05/27/2013 06:32 AM, Vamsi Krishna wrote:
>
> Yes, it is there and it is accepting; it took some time to read the settings.
> qconf -sc | grep h_vmem
>  h_vmem    h_vmem    MEMORY    <=    YES    YES    0    0
>
>  qconf -se nodeA | grep h_vmem
> h_vmem=64G
>
>  What is the difference between a default of 0 and a default of 1G?
> With the above configuration (requestable YES) I cannot submit a job
> without -l h_vmem; I get the following error when I submit a job using
> qrsh -l queue3:
>
>  "hostname: error while loading shared libraries: libsepol.so.1: failed
> to map segment from shared object: Cannot allocate memory" (I think this
> should be in case if h_vmem is configured as FORCED)
>
>  The job is successful when I use -l h_vmem, e.g. qrsh -l queue3 -l h_vmem=1G.
>
>  Regards
>  PVK
>
>
>
>
>
> On Mon, May 27, 2013 at 12:52 AM, Marco Donauer <[email protected]> wrote:
>
>>  Hi Vamsi,
>> is the resource queue3 that you are requesting in your qrsh command
>> available?
>>
>> Regards
>> Marco
>>
>>
>>
>> Vamsi Krishna <[email protected]> wrote:
>>>
>>>  qconf -sq queue3.q | grep h_vmem
>>> h_vmem                INFINITY
>>>
>>>
>>> On Mon, May 27, 2013 at 12:13 AM, Vamsi Krishna <[email protected]> wrote:
>>>
>>>> Hi,
>>>>
>>>>  I have three queues, queue1.q, queue2.q and queue3.q, and nodeA is
>>>> part of queue3.q. h_vmem is configured with the following settings to
>>>> keep users from overcommitting memory, but the job is never scheduled,
>>>> whether submitted interactively or in batch mode.
>>>>
>>>>  qconf -sc | grep h_vmem
>>>> h_vmem    h_vmem    MEMORY    <=    YES    YES    0    0
>>>>
>>>>  qconf -se nodeA | grep h_vmem
>>>> h_vmem=64G
>>>>
>>>>  qrsh -l queue3 -l h_vmem=6G
>>>> Your "qrsh" request could not be scheduled, try again later.
>>>>
>>>>  Regards
>>>> PVK
>>>>
>>>
>>>
>> --
>> sent with K9-Mail from my Android mobile.
>>
>
>
>
> --
>
> Marco Donauer | Senior Software Engineer - Customer Support
> Univa Corporation <http://www.univa.com/> | The Data Center Optimization Company
> E-Mail: [email protected] | Phone: +1.512.782.4453 | Mobile: +49.151.466.396.92
>
> German landline: +49.846.294.2944
> Twitter: https://twitter.com/mdonauer
>
>


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
