Hi All, Thanks for all the responses on this, although I have the sneaking suspicion that the most significant thing that is going to come out of this thread is the knowledge that Sven has left IBM for DDN. ;-) or :-( or :-O depending on your perspective.
Anyway … we have done some testing which has shown that a 4 MB block size is best for those workloads that use "normal" sized files. However, we - like many similar institutions - support a mixed workload, so the 128K fragment size that comes with that is not optimal for the primarily biomedical applications that literally create millions of very small files. That's why we settled on 1 MB as a compromise.

So we're very eager to now test with GPFS 5, a 4 MB block size, and an 8K fragment size. I'm recreating my test cluster filesystem now with that config … so 4 MB block size on the metadata-only system pool, too.

Thanks to all who took the time to respond to this thread. I hope it's been beneficial to others as well…

Kevin

—
Kevin Buterbaugh - Senior System Administrator
Vanderbilt University - Advanced Computing Center for Research and Education
kevin.buterba...@vanderbilt.edu - (615) 875-9633

On Aug 1, 2018, at 7:11 PM, Andrew Beattie <abeat...@au1.ibm.com> wrote:

I too would second the comment about doing testing specific to your environment. We recently deployed a number of ESS building blocks into a customer site that was specifically being used for a mixed HPC workload. We spent more than a week playing with different block sizes for both data and metadata, trying to identify which variation would provide the best mix of metadata and data performance.

One thing we noticed very early on is that MDtest and IOR respond very differently as you play with both block size and subblock size. What works for one use case may be a very poor option for another use case. Interestingly enough, it turned out that the best overall option for our particular use case was an 8 MB block size with 32K subblocks -- as that gave us good metadata performance and good sequential data performance, which is probably why a 32K subblock was the default for so many years ....

Andrew Beattie
Software Defined Storage - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com

----- Original message -----
From: "Marc A Kaplan" <makap...@us.ibm.com>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: Re: [gpfsug-discuss] Sub-block size not quite as expected on GPFS 5 filesystem?
Date: Thu, Aug 2, 2018 10:01 AM

Firstly, I do suggest that you run some tests and see how much difference, if any, the available settings make in performance and/or storage utilization.

Secondly, as I and others have hinted at, there may be additional parameters and settings deeper in the system. Sometimes they are available via commands and/or configuration settings, sometimes not. Sometimes that's just because we didn't want to overwhelm you or ourselves with yet more "tuning knobs". Sometimes it's because we made some component more tunable than we really needed to, but did not make all the interconnected components equally or as widely tunable. Sometimes it's because we want to save you from making ridiculous settings that would lead to problems...

OTOH, as I wrote before, if a burning requirement surfaces, things may change from release to release... Just as subblocks per block seemed forever frozen at the number 32 for so many years. Now it varies... and then the discussion shifts to why can't it be even more flexible?
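To put rough numbers on the small-file tradeoff Kevin describes above, here is a minimal back-of-the-envelope sketch. It simply assumes that each small file consumes at least one whole subblock (fragment); the 5 million files and 4 KiB average size are made-up illustrative figures, and it ignores data-in-inode and any other GPFS-specific optimizations, so treat it as arithmetic rather than a statement of the actual allocator behavior:

    # Back-of-the-envelope sketch (not GPFS-specific code): how much space a pile
    # of tiny files occupies when each one is rounded up to a whole subblock.
    # File count and file size are hypothetical; the subblock sizes are just the
    # 128 KiB (4 MiB / 32) and 8 KiB figures discussed in this thread.
    import math

    KIB = 1024

    def allocated_bytes(file_size, subblock_size):
        """Space consumed when an allocation is rounded up to whole subblocks."""
        if file_size == 0:
            return 0
        return math.ceil(file_size / subblock_size) * subblock_size

    n_files = 5_000_000        # hypothetical "millions of very small files"
    file_size = 4 * KIB        # hypothetical average small-file size

    for subblock in (128 * KIB, 8 * KIB):
        total = n_files * allocated_bytes(file_size, subblock)
        print(f"{subblock // KIB:>4} KiB subblock: "
              f"{total / 1024**4:.2f} TiB allocated for "
              f"{n_files * file_size / 1024**4:.2f} TiB of data")

With those made-up inputs, the 128 KiB fragments allocate roughly 30x the apparent data size, while 8 KiB fragments allocate about 2x, which is the gap that makes the GPFS 5 subblock change interesting for this kind of workload.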
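For the storage-utilization half of Marc's "run some tests" suggestion, something like the sketch below can make fragment overhead visible on a real file tree: it walks a directory and compares apparent file sizes with the space actually allocated to them. This is generic POSIX stat() arithmetic rather than anything GPFS-specific, and it assumes the usual Linux convention that st_blocks is reported in 512-byte units; the path is just an example argument.

    # Hedged utility sketch: compare apparent vs. allocated size under a directory.
    # Allocated space is taken from st_blocks (512-byte units on Linux).
    import os
    import sys

    def usage(root):
        apparent = allocated = nfiles = 0
        for dirpath, dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue                 # file vanished or unreadable; skip it
                apparent += st.st_size
                allocated += st.st_blocks * 512
                nfiles += 1
        return nfiles, apparent, allocated

    if __name__ == "__main__":
        root = sys.argv[1] if len(sys.argv) > 1 else "."
        nfiles, apparent, allocated = usage(root)
        print(f"{nfiles} files: {apparent / 1024**3:.2f} GiB apparent, "
              f"{allocated / 1024**3:.2f} GiB allocated "
              f"({(allocated - apparent) / 1024**3:.2f} GiB overhead)")

Running it over the same small-file data set on a 1 MB / 32-subblock filesystem and on a 4 MB / 8K-subblock GPFS 5 filesystem should show directly whether the new layout recovers the space the 128K fragments were costing.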
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at spectrumscale.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss