Hi Reuti,

> Nevertheless: do you request anything else with the -l option?
Yes, several other complexes are also requested: h_fsize, s_vmem and s_rt
> Then it looks like the issue I posted, although I referred more to limits.


Sorry, I was not able to find it. When did you post it?
>> I can not tell now if the other consumable complexes (h_fsize
> You made h_fsize consumable? It's a limit per process, and so the total amount
> can be bypassed by several processes of the same job anyway.


Yes, I am aware of this limitation. In fact I would be very interested in an alternative solution to control disk usage per directory. Do you know if this is currently possible in GE?
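(To illustrate the per-process nature of h_fsize: it maps to the kernel's RLIMIT_FSIZE, which caps the size of any single file one process may write, so each process of a job gets its own independent cap. A rough sketch with a hypothetical demo function, assuming Linux soft/hard limit semantics; this is not SGE code:)

```python
import os
import resource
import signal

def demo_fsize_limit(path, cap=4096):
    """Fork a child that caps RLIMIT_FSIZE (as h_fsize does per process)
    and then tries to write past the cap. Hypothetical demo, not SGE code."""
    pid = os.fork()
    if pid == 0:
        # In the child: ignore SIGXFSZ so the oversized write fails with
        # EFBIG instead of killing the process outright.
        signal.signal(signal.SIGXFSZ, signal.SIG_IGN)
        resource.setrlimit(resource.RLIMIT_FSIZE, (cap, cap))
        try:
            with open(path, "wb") as f:
                f.write(b"x" * (2 * cap))  # exceeds the per-process cap
                f.flush()
            os._exit(0)    # write went through (unexpected)
        except OSError:
            os._exit(42)   # blocked at the per-process cap
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)
```

Since every process gets its own such limit, N processes of the same job can together write N times the cap, which is exactly the bypass mentioned above.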

I have been thinking several times about implementing something for this purpose, using dynamic disk quotas or periodically running du.

This could probably be a separate thread, as it could be of great relevance for many GE admins.
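One possible building block, sketched here only as an idea: a custom load sensor that periodically reports a directory's disk usage to SGE as a host-level complex. The complex name `scratch_used` and the directory are assumptions, not an existing setup; load sensors speak a simple stdin/stdout protocol (one report per input line, stop on "quit"):

```python
#!/usr/bin/env python
# Hypothetical SGE load sensor: reports the disk usage (in MB) of one
# directory as a host-level complex value named "scratch_used".
import os
import socket
import sys

def dir_usage_mb(path):
    """Walk `path` and sum file sizes, in whole megabytes (a `du` stand-in)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # file vanished or unreadable; skip it
    return total // (1024 * 1024)

def sensor_loop(path, inp=sys.stdin, out=sys.stdout):
    """Emit one begin/value/end report per line read; stop on "quit"."""
    host = socket.gethostname()
    for line in inp:
        if line.strip() == "quit":
            break
        out.write("begin\n")
        out.write("%s:scratch_used:%d\n" % (host, dir_usage_mb(path)))
        out.write("end\n")
        out.flush()
```

This still only measures periodically, of course; actual enforcement would need filesystem quotas.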
>> and s_vmem)
> I think that this doesn't need to be consumable, as you made h_vmem consumable
> already. It tells SGE when to send the SIGXCPU warning.

> -- Reuti


>> had also negative values but I guess no because disk and memory consumption in
>> the node was far below the available resources.

Cheers,
Javier


I don't know if anyone else has come across this same problem with 6.2u5 and
whether there is a workaround for it.

[jlopez@svgd ~]$ qhost -q -j -h c5-11
HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
-------------------------------------------------------------------------------
global                  -               -     -       -       -       -       -
compute-5-11            x86_64        -24 47.92   31.5G    9.0G    8.0G     0.0
   GRID_large           BP    0/4/24
      6667492 1.92242  STDIN      compchem015  r  06/10/2011 06:13:30  MASTER
      6667493 1.92241  STDIN      compchem015  r  06/10/2011 06:13:41  MASTER
      6667494 1.92241  STDIN      compchem015  r  06/10/2011 06:13:47  MASTER
      6667495 1.92241  STDIN      compchem015  r  06/10/2011 06:13:57  MASTER
   GRID_small           BP    0/0/24
   small                BPC   0/10/24
      6652641 11.27961 p1761-7    csebdmfa     r  06/10/2011 06:14:01  MASTER
      6655259 10.43999 p577-16    csebdmfa     r  06/10/2011 06:12:26  MASTER
      6667942 3.93900  AuLJ139    csmyslfs     r  06/10/2011 06:12:46  MASTER
                                                                       SLAVE
                                                                       SLAVE
                                                                       SLAVE
                                                                       SLAVE
                                                                       SLAVE
                                                                       SLAVE
                                                                       SLAVE
                                                                       SLAVE
   g0-mem_small         BPC   0/0/24
   offline              BP    0/0/24
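(For context on the consumable discussion above: whether a complex behaves as consumable is set in its definition via `qconf -mc`. A minimal sketch of such an entry; the values here are assumptions for illustration, not the actual configuration from this cluster:)

```
#name     shortcut  type    relop  requestable  consumable  default  urgency
h_vmem    h_vmem    MEMORY  <=     YES          YES         0        0
```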


Thanks in advance,
Javier

_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
