Hi everyone,

We recently started using priority-based scheduling, and after solving some
final issues (see this post:
https://groups.google.com/forum/m/#!topic/slurm-users/N8r8MoyjQAU), everything
seems to be running quite smoothly now. However, we noticed that the data
shown by sshare, e.g.

Account      User       RawShares  NormShares   RawUsage  EffectvUsage  FairShare
root                                 0.000000    8484544      1.000000
 root        root               1    0.500000          0      0.000000   1.000000
 iasteam                        1    0.500000    8484544      1.000000
  iasteam    carvalho           1    0.250000    1550368      0.182729   0.400000
  iasteam    hany               1    0.250000          0      0.000000   0.800000
  iasteam    pascal             1    0.250000    6934176      0.817271   0.200000
  iasteam    stark              1    0.250000          0      0.000000   0.800000

is only updated at very long intervals. This means that the current RawUsage of,
e.g., user 'pascal' stays at 6934176 for a very long time, then jumps to the
next value, say 7238923, where it again sits for a long time until the next
update. In contrast, the data shown by sacct is updated every second.
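
For reference, this is roughly how we compare the two views on our cluster (the
sacct format fields below are only an illustration, not necessarily exactly the
ones we use):

    # refresh the fair-share view every second
    watch -n 1 "sshare -a"

    # refresh the accounting view every second
    watch -n 1 "sacct -a -X --format=JobID,User,CPUTimeRAW"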

We already tried to reduce the update interval of sshare by adjusting
JobAcctGatherFrequency, but this did not help in our case. My attempts to find
similar questions were also unsuccessful. Can anybody help us out here and
point us to the option we need to change so that sshare updates at the rate we
expect?
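
For completeness, this is roughly what we did for the JobAcctGatherFrequency
attempt (the 10-second interval below is only an example value, not necessarily
our exact setting):

    # check the currently active value
    scontrol show config | grep -i JobAcctGatherFrequency

    # in slurm.conf we lowered the task sampling interval, e.g.
    #   JobAcctGatherFrequency=task=10
    # and then applied the change with
    scontrol reconfigure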

P.S.: Our config is the same as in the post I linked above (except, of course,
for the fix proposed in that thread).

Best,
Pascal

