Am 02.05.2012 um 15:07 schrieb Rayson Ho:

> The qmaster spools some of the host configuration, but the load,
> memory used, etc. are not spooled to disk. Your output should look
> something like this:
> 
> $ qhost
> HOSTNAME                ARCH         NCPU  LOAD  MEMTOT  MEMUSE  SWAPTO  SWAPUS
> -------------------------------------------------------------------------------
> global                  -               -     -       -       -       -       -
> computer                linux-x64       4     -    3.7G       -    5.7G       -

I had this in my inbox for a long time. I remember a discussion some time ago 
about a similar effect, but I can neither find it, nor was there any final 
solution IIRC (maybe it even involved a deleted host).

What was the setting of "max_unheard" in SGE's configuration, if it's still 
of interest?
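
Just as a quick sketch of how to check (the default value and the execd port 
below are assumptions for a standard installation, not taken from your setup):

$ qconf -sconf | grep max_unheard   # global setting; the shipped default is 00:05:00
$ qping <exechost> 6445 execd 1     # does the execd on <exechost> still answer? (6445 is the usual execd port)

If "max_unheard" were set very high, that could at least explain why stale 
load values stick around longer than expected.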

-- Reuti


> Rayson
> 
> 
> 
> On Wed, May 2, 2012 at 7:16 AM, mahbube rustaee <[email protected]> wrote:
>> Hi,
>> 
>> Does qmaster use a cache for the qhost command?
>> I shut down the execution nodes and the master, but after starting qmaster
>> (with the execution nodes still off), the results are the same as before the
>> shutdown, e.g.:
>> 
>> xeon-2-3                lx26-amd64     24  0.03   23.5G  361.4M   46.9G     0.0
>> xeon-2-4                lx26-amd64     24  0.99   23.5G    5.7G   46.9G     0.0
>> xeon-2-5                lx26-amd64     24  2.80   23.5G    4.8G   46.9G     0.0
>> xeon-2-6                lx26-amd64     24  0.00   23.5G  361.4M   46.9G     0.0
>> 
>> How are these results produced? Is there a cache that should be cleared
>> in this situation?
>> 
>> Thanks
>> 
>> 


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users
