Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Matt Weil
[ces1,ces2,ces3]
maxStatCache 8
worker1Threads 2000
maxFilesToCache 50
pagepool 100G
maxStatCache 8
lrocData no
378G system memory. On 1/25/17 3:29 PM, Sven Oehme wrote: Have you tried to just leave lrocInodes and lrocDirectories on and turn data off? Yes, data I just turned
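
For reference, the values Matt lists map to standard Spectrum Scale configuration attributes and can be checked per node with mmlsconfig; a minimal sketch, with the attribute names taken from the stanza above:

    # show the LROC-relevant settings as currently applied
    mmlsconfig pagepool
    mmlsconfig maxFilesToCache
    mmlsconfig maxStatCache
    mmlsconfig lrocData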

Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Sven Oehme
Have you tried to just leave lrocInodes and lrocDirectories on and turn data off? Also, did you increase maxStatCache so LROC actually has some compact objects to use? If you send the values for maxFilesToCache, maxStatCache, workerThreads, and the available memory of the node, I can provide a starting point.
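
A hedged sketch of what Sven's suggestion could look like with mmchconfig; the node class name cesNodes and the maxStatCache value are illustrative, not taken from the thread:

    # keep inode and directory caching in LROC but stop caching file data
    mmchconfig lrocData=no -N cesNodes
    # give LROC more compact (stat cache) objects to work with; value is illustrative
    mmchconfig maxStatCache=100000 -N cesNodes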

Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Matt Weil
On 1/25/17 3:00 PM, Sven Oehme wrote: Matt, the assumption was that the remote devices are slower than LROC. There are some attempts in the code not to schedule more than a maximum number of outstanding I/Os to the LROC device, but this doesn't help in all cases and depends on what

Re: [gpfsug-discuss] LROC Zimon sensors

2017-01-25 Thread Sven Oehme
Start here: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/General%20Parallel%20File%20System%20(GPFS)/page/IBM%20Spectrum%20Scale%20Performance%20Monitoring%20Bridge On Wed, Jan 25, 2017 at 10:01 PM Sobey, Richard A wrote: > Ok Sven thanks,
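
Before wiring up the Grafana bridge, it may be worth confirming that the LROC sensor is actually enabled in the performance monitoring configuration; a hedged sketch, assuming the GPFSLROC sensor name mentioned elsewhere in the thread:

    # list the configured Zimon sensors and their collection periods, filtered for LROC
    mmperfmon config show | grep -i lroc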

Re: [gpfsug-discuss] LROC Zimon sensors

2017-01-25 Thread Sobey, Richard A
Ok Sven, thanks; looks like I'll be checking out Grafana. Richard. From: gpfsug-discuss-boun...@spectrumscale.org on behalf of Sven Oehme Sent: 25 January 2017 20:25 To:

Re: [gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Sven Oehme
Matt, the assumption was that the remote devices are slower than LROC. There are some attempts in the code not to schedule more than a maximum number of outstanding I/Os to the LROC device, but this doesn't help in all cases and depends on what kernel-level parameters are set for the device.
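
As a rough illustration of the kernel-level device parameters Sven mentions, the block-layer queue settings can be inspected and adjusted through sysfs; the device name and value below are illustrative, and the right numbers depend on the actual LROC device:

    # inspect the request queue depth and I/O scheduler for the LROC device (device name illustrative)
    cat /sys/block/nvme0n1/queue/nr_requests
    cat /sys/block/nvme0n1/queue/scheduler
    # allow more outstanding requests to be queued to the device (value illustrative)
    echo 1024 > /sys/block/nvme0n1/queue/nr_requests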

[gpfsug-discuss] LROC 100% utilized in terms of IOs

2017-01-25 Thread Matt Weil
Hello all, we are having an issue where the LROC on a CES node gets overrun and sits at 100% utilization. Processes then start to back up waiting for the LROC to return data. Is there any way to have the GPFS client go direct if the LROC gets too busy? Thanks, Matt
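
One way to confirm that the LROC device itself is the bottleneck is to watch its utilization while the backlog builds; a minimal sketch, with the device name as an assumption:

    # %util close to 100 and a growing queue indicate the LROC device is saturated
    iostat -x nvme0n1 1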

Re: [gpfsug-discuss] LROC Zimon sensors

2017-01-25 Thread Oesterlin, Robert
For the Zimon “GPFSLROC” sensor, what metrics can Grafana query? I don't see them documented or exposed anywhere: http://www.ibm.com/support/knowledgecenter/STXKQY_4.2.2/com.ibm.spectrum.scale.v4r22.doc/bl1adv_listofmetricsPMT.htm Bob Oesterlin, Sr. Principal Storage Engineer, Nuance From:
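
Short of documentation, one way to see what the collector exposes is to check the sensor configuration and then try a metric by name; a hedged sketch, where the metric name is only a placeholder rather than a confirmed identifier:

    # check whether the GPFSLROC sensor is configured and how often it samples
    mmperfmon config show | grep -i lroc
    # query a candidate metric for the last 10 buckets (metric name is a placeholder)
    mmperfmon query <lroc_metric_name> -n 10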

[gpfsug-discuss] snapshots

2017-01-25 Thread Lukas Hejtmanek
Hello, is there a way to get the number of inodes consumed by a particular snapshot? I have a fileset with a separate inode space: Filesets in file system 'vol1': Name Status Path InodeSpace MaxInodes AllocInodes UsedInodes export Linked
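
Two commands give related numbers, although neither appears to be a direct per-snapshot inode count; a hedged sketch, with flags as remembered from the 4.2.x manuals and worth verifying against the man pages:

    # per-fileset inode allocation and usage for the live file system
    mmlsfileset vol1 -L -i
    # space consumed by each snapshot (data and metadata blocks, not an inode count)
    mmlssnapshot vol1 -d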

Re: [gpfsug-discuss] Manager nodes

2017-01-25 Thread Achim Rehor
True, but keep in mind the setting of verbsRdmaMinBytes 4096; this will define the lower border for RDMA packets. All sizes below will take IP. Kind regards, Achim Rehor From: Bryan Banister To: gpfsug main discussion list
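
For completeness, the parameter Achim refers to can be checked and adjusted like any other mmchconfig attribute; a minimal sketch, with the node class name as an assumption:

    # messages smaller than this threshold go over IP rather than RDMA
    mmlsconfig verbsRdmaMinBytes
    mmchconfig verbsRdmaMinBytes=4096 -N managerNodes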