Hi
There are other reasons to have more than one. One is the management of those NSDs. When you have to add or remove NSDs from a filesystem, having more than one makes it possible to empty some space and move them in and out. Manually, but possible. If you have one big NSD, or even one per enclosure, that might be difficult or even impossible, depending on the number of enclosures and the filesystem utilization.
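For example, with several NSDs something like the following becomes possible (a rough sketch only; the filesystem and NSD names are placeholders, and on an mmvdisk-managed ESS/DSS you would use the mmvdisk equivalents rather than the classic commands):

  mmlsdisk fs1                 # see which NSDs back fs1 and how they are used
  mmdeldisk fs1 nsd_encl1_01   # migrate the data off that NSD and remove it from fs1
  mmrestripefs fs1 -b          # rebalance the remaining NSDs afterwards

With a single big NSD per filesystem there is nowhere to move the data to, so that kind of in-and-out management is not an option.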
Starting with some ESS version (not DSS, can't comment on that) that I do not recall, but within the last 6 months, we have changed the default (for those that use the default) to 4 NSDs per enclosure for ESS 5000. There is no impact on performance either way on ESS, we tested it. But management of those in the long run should be easier.
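To illustrate what that looks like with mmvdisk, roughly along these lines (the vdisk set names, RAID code, block size and set sizes below are only examples, not the ESS defaults; check the mmvdisk documentation at your code level before using it). Each vdisk set becomes one vdisk/NSD per recovery group, so you define several equally sized vdisk sets instead of one big one, matching the number of NSDs per enclosure you want:

  mmvdisk vdiskset define --vdisk-set vs01 --recovery-group rg1 --code 8+2p --block-size 4m --set-size 25%
  (repeat for vs02, vs03 and vs04)
  mmvdisk vdiskset create --vdisk-set vs01,vs02,vs03,vs04
  mmvdisk filesystem create --file-system fs1 --vdisk-set vs01,vs02,vs03,vs04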
--
Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / Salutacions
Luis Bolinches
Consultant IT Specialist
IBM Spectrum Scale development
Mobile Phone: +358503112585
Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland
"If you always give you will always have" -- Anonymous
----- Original message -----
From: "Achim Rehor" <achim.re...@de.ibm.com>
Sent by: gpfsug-discuss-boun...@spectrumscale.org
To: gpfsug main discussion list <gpfsug-discuss@spectrumscale.org>
Cc:
Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's
Date: Mon, Mar 1, 2021 10:16
The reason for having multiple NSDs in legacy NSD (non-GNR) handling is
the increased parallelism, which gives you 'more spindles' and thus more
performance.
In GNR the drives are used in parallel anyway through the GNR striping.
Therefore, you are already using all drives of an ESS/GSS/DSS model under
the hood in the vdisks.
The only reason for having more NSDs is for using them for different
filesystems.
Mit freundlichen Grüßen / Kind regards
Achim Rehor
IBM EMEA ESS/Spectrum Scale Support
gpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 08:58:43:
> From: Jonathan Buzzard <jonathan.buzz...@strath.ac.uk>
> To: gpfsug-discuss@spectrumscale.org
> Date: 01/03/2021 08:58
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
>
> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
> >
> > I've tried benchmarking many vs. few vdisks per RG, and never could see
> > any performance difference.
>
> That's encouraging.
>
> >
> > Usually we create 1 vdisk per enclosure per RG, thinking this will
> > allow us to grow with same size vdisks when adding additional enclosures
> > in the future.
> >
> > Don't think mmvdisk can be told to create multiple vdisks per RG
> > directly, so you have to manually create multiple vdisk sets, each with
> > the appropriate size.
> >
>
> Thing is, back in the day, so GPFS v2.x/v3.x, there were strict warnings
> that you needed a minimum of six NSDs for optimal performance. I have
> sat in presentations where IBM employees have said so. What we were
> told back then is that GPFS needs a minimum number of NSDs in order to
> be able to spread the I/Os out. So if an NSD is being pounded for reads
> and a write comes in, it can direct it to a less busy NSD.
>
> Now I can imagine that in an ESS/DSS-G, as everything is being scattered
> to the winds under the hood, this is no longer relevant. But some notes
> to that effect would be nice for us old timers, if that is the case, to
> put our minds at rest.
>
>
> JAB.
>
> --
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss