On 28/02/2021 09:31, Jan-Frode Myklebust wrote:

I’ve tried benchmarking many vs. few vdisks per RG, and never could see any performance difference.

That's encouraging.


Usually we create 1 vdisk per enclosure per RG, thinking this will allow us to grow with same-size vdisks when adding additional enclosures in the future.

Don’t think mmvdisk can be told to create multiple vdisks per RG directly, so you have to manually create multiple vdisk sets, each with the appropriate size.
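
For anyone following along, a rough sketch of what that looks like with mmvdisk. The set names, RG names, RAID code, block size and percentages below are made up for illustration; check mmvdisk vdiskset --help on your release for the exact options:

    # Define one vdisk set per enclosure's worth of capacity,
    # spanning the same recovery groups (hypothetical names/sizes):
    mmvdisk vdiskset define --vdisk-set enc1 --recovery-group rg1,rg2 \
        --code 8+2p --block-size 4m --set-size 50%
    mmvdisk vdiskset define --vdisk-set enc2 --recovery-group rg1,rg2 \
        --code 8+2p --block-size 4m --set-size 50%

    # Instantiate the vdisk NSDs from the definitions:
    mmvdisk vdiskset create --vdisk-set enc1,enc2

Growing later should then just be a matter of defining another same-sized vdisk set on the new enclosure's capacity and adding it with mmvdisk filesystem add.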


Thing is, back in the day, in the GPFS v2.x/v3.x era, there were strict warnings that you needed a minimum of six NSDs for optimal performance. I have sat in presentations where IBM employees said as much. What we were told back then is that GPFS needs a minimum number of NSDs to be able to spread the I/O out: if one NSD is being pounded with reads and a write comes in, it can be directed to a less busy NSD.

Now I can imagine that on an ESS/DSS-G, with everything being scattered to the winds under the hood, this is no longer relevant. But if that is the case, some notes to that effect would be nice, to put us old timers' minds at rest.


JAB.

--
Jonathan A. Buzzard                         Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
