On 1/25/17 3:00 PM, Sven Oehme wrote:
> Matt,
>
> the assumption was that the remote devices are slower than LROC. There are some attempts in the code not to schedule more than a maximum number of outstanding I/Os to the LROC device, but this doesn't help in all cases and depends on which kernel-level parameters are set for the device. The best way is to reduce the maximum size of data to be cached into LROC.

I just turned LROC file caching completely off. Most, if not all, of the I/O is metadata, which is what I wanted to keep fast. It is amazing: once you drop the latency, the I/Os go up way more than they ever were before. I guess we will need another NVMe.

> sven
>
> On Wed, Jan 25, 2017 at 9:50 PM Matt Weil <mw...@wustl.edu> wrote:
>> Hello all,
>>
>> We are having an issue where the LROC on a CES node gets overrun, 100% utilized. Processes then start to back up waiting for the LROC to return data. Is there any way to have the GPFS client go direct if the LROC gets too busy?
>>
>> Thanks
>> Matt

________________________________
The materials in this message are private and may contain Protected Healthcare Information or other information of a sensitive nature. If you are not the intended recipient, be advised that any unauthorized use, disclosure, copying or the taking of any action in reliance on the contents of this information is strictly prohibited. If you have received this email in error, please immediately notify the sender via telephone or return mail.

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
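For reference, both approaches discussed above (capping the size of files cached in LROC, and turning LROC data caching off entirely) map to `mmchconfig` attributes. A minimal sketch, assuming a node class named `cesNodeClass` (the node class name and the size value are illustrative; parameter names are from the Spectrum Scale documentation):

```shell
# Turn off LROC caching of file data entirely, as Matt did, so the
# LROC device serves only metadata (inodes and directory blocks):
mmchconfig lrocData=no -i -N cesNodeClass

# Alternatively, Sven's suggestion: keep data caching but cap the size
# of files eligible for LROC (value in bytes; 32 KiB is illustrative):
mmchconfig lrocData=yes,lrocDataMaxFileSize=32768 -i -N cesNodeClass

# Inspect LROC hit rates and activity on a node afterwards:
mmdiag --lroc
```

`-i` applies the change immediately as well as persistently; without it the new values take effect only after the daemon restarts on the affected nodes.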