Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Laurence Horrocks-Barlow
Like Jan, I did some benchmarking a few years ago, when the default recommendation 
dropped to 1 vdisk per DA to meet rebuild requirements, and I couldn't see any 
discernible difference.

As Achim has also mentioned, I just use vdisks for creating additional 
filesystems. If there is going to be a lot of shuffling of space or future 
filesystem builds, then I divide the RG's into, say, 10 vdisks to give some 
flexibility and granularity.

There is also a flag, IIRC, that changes the GPFS magic to consider multiple 
underlying disks; when I find it again I'll post it. It can provide increased 
performance on traditional RAID builds.

-- Lauz

On 1 March 2021 08:16:43 GMT, Achim Rehor  wrote:
>The reason for having multiple NSDs in legacy NSD (non-GNR) handling is
>the increased parallelism, that gives you 'more spindles' and thus more
>performance.
>In GNR the drives are used in parallel anyway through the GNR striping.
>Therefore, you are using all drives of an ESS/GSS/DSS model under the hood
>in the vdisks anyway.
>
>The only reason for having more NSDs is for using them for different
>filesystems.
>
> 
>Mit freundlichen Grüßen / Kind regards
>
>Achim Rehor
>
>IBM EMEA ESS/Spectrum Scale Support
>
>gpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 08:58:43:
>
>> From: Jonathan Buzzard 
>> To: gpfsug-discuss@spectrumscale.org
>> Date: 01/03/2021 08:58
>> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of 
>NSD's
>> Sent by: gpfsug-discuss-boun...@spectrumscale.org
>> 
>> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
>> > 
>> > I've tried benchmarking many vs. few vdisks per RG, and never could see
>> > any performance difference.
>> 
>> That's encouraging.
>> 
>> > 
>> > Usually we create 1 vdisk per enclosure per RG, thinking this will
>> > allow us to grow with same size vdisks when adding additional enclosures
>> > in the future.
>> > 
>> > Don't think mmvdisk can be told to create multiple vdisks per RG
>> > directly, so you have to manually create multiple vdisk sets each with
>> > the appropriate size.
>> > 
>> 
>> Thing is back in the day, so GPFS v2.x/v3.x, there were strict warnings
>> that you needed a minimum of six NSD's for optimal performance. I have
>> sat in presentations where IBM employees have said so. What we were
>> told back then is that GPFS needs a minimum number of NSD's in order to
>> be able to spread the I/O's out. So if an NSD is being pounded for reads
>> and a write comes in, it can direct it to a less busy NSD.
>> 
>> Now I can imagine that in an ESS/DSS-G, as it's being scattered to
>> the winds under the hood, this is no longer relevant. But some notes to
>> that effect for us old timers would be nice, if that is the case, to put
>> our minds to rest.
>> 
>> 
>> JAB.
>> 
>> -- 
>> Jonathan A. Buzzard Tel: +44141-5483420
>> HPC System Administrator, ARCHIE-WeSt.
>> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
>> ___
>> gpfsug-discuss mailing list
>> gpfsug-discuss at spectrumscale.org
>> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>> 
>
>
>___
>gpfsug-discuss mailing list
>gpfsug-discuss at spectrumscale.org
>http://gpfsug.org/mailman/listinfo/gpfsug-discuss

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Andrew Beattie
Jonathan,

You need to create vdisk sets, which will create multiple vdisks; you can then 
assign the vdisk sets to your filesystem (assigning multiple vdisks at a time).

Things to watch: free space calculations are more complex, as it's building 
multiple vdisks under the covers using multiple RAID parameters.

Also, it's worth assuming a 10% reserve (approximately a drive per disk shelf) 
for rebuild space.



mmvdisk vdiskset ... insert parameters

https://www.ibm.com/support/knowledgecenter/mk/SSYSP8_5.3.2/com.ibm.spectrum.scale.raid.v5r02.adm.doc/bl8adm_mmvdisk.htm
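A hedged sketch of the flow (the vdisk set name, RAID code, block size and set 
size below are placeholders only; check the mmvdisk man page for your release):

  # define a vdisk set across the recovery groups, create it, then build a
  # filesystem from it
  mmvdisk vdiskset define --vdisk-set vs01 --recovery-group rg_1,rg_2 \
          --code 8+2p --block-size 8m --set-size 25%
  mmvdisk vdiskset create --vdisk-set vs01
  mmvdisk filesystem create --file-system fs1 --vdisk-set vs01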

Sent from my iPhone

> On 1 Mar 2021, at 21:45, Jonathan Buzzard  
> wrote:
> 
> On 01/03/2021 09:08, Luis Bolinches wrote:
>> Hi
>> 
>> There other reasons to have more than 1. It is management of those. When 
>> you have to add or remove NSDs of a FS having more than 1 makes it 
>> possible to empty some space and manage those in and out. Manually but 
>> possible. If you have one big NSD or even 1 per enclosure it might 
>> difficult or even not possible depending the number of enclosures and FS 
>> utilization.
>> 
>> Starting some ESS version (not DSS, cant comment on that) that I do not 
>> recall but in the last 6 months, we have change the default (for those 
>> that use the default) to 4 NSDs per enclosure for ESS 5000. There is no 
>> impact on performance either way on ESS, we tested it. But management of 
>> those on the long run should be easier.
> Question: how does one create a non-default number of vdisks per 
> enclosure then?
> 
> I tried creating a stanza file and then doing mmcrvdisk but it was not 
> happy, presumably because of the "new style" recovery group management
> 
> mmcrvdisk: [E] This command is not supported by recovery groups under 
> management of mmvdisk.
> 
> 
> 
> 
> JAB.
> 
> -- 
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> https://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwICAg=jf_iaSHvJObTbx-siA1ZOg=STXkGEO2XATS_s2pRCAAh2wXtuUgwVcx1XjUX7ELNdk=9HlRHByoByQcM0mY0elL-l4DgA6MzHkAGzE70Rl2p2E=eWRfWGpdZB-PZ_InCCjgmdQOCy6rgWj9Oi3TGGA38yY=
>  
> 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Jonathan Buzzard

On 01/03/2021 09:08, Luis Bolinches wrote:

> Hi
> 
> There are other reasons to have more than 1. It is management of those. When 
> you have to add or remove NSDs of a FS, having more than 1 makes it 
> possible to empty some space and manage those in and out. Manually, but 
> possible. If you have one big NSD, or even 1 per enclosure, it might be 
> difficult or even not possible, depending on the number of enclosures and FS 
> utilization.
> 
> Starting at some ESS version (not DSS, can't comment on that) that I do not 
> recall, but in the last 6 months, we have changed the default (for those 
> that use the default) to 4 NSDs per enclosure for ESS 5000. There is no 
> impact on performance either way on ESS, we tested it. But management of 
> those in the long run should be easier.

Question: how does one create a non-default number of vdisks per 
enclosure then?


I tried creating a stanza file and then doing mmcrvdisk, but it was not 
happy, presumably because of the "new style" recovery group management:


mmcrvdisk: [E] This command is not supported by recovery groups under 
management of mmvdisk.





JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Achim Rehor
Correct, there was. 
The OS is dealing with pdisks, while GPFS is striping over Vdisks/NSDs.

For GNR there is a different queuing setup in GPFS than there is for 
traditional NSDs. See "mmfsadm dump nsd" and check for NsdQueueTraditional 
versus NsdQueueGNR.
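For example, something like this on an NSD/GNR server (a sketch only; the 
exact strings in the dump output may differ between releases):

  # compare the NSD queue types configured on the server
  mmfsadm dump nsd | grep -iE 'NsdQueueTraditional|NsdQueueGNR'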

And yes, I was too strict with 
"> The only reason for having more NSDs is for using them for different 
> filesystems."
There are other management reasons to run with a reasonable number of 
vdisks, just not performance reasons. 

Mit freundlichen Gruessen / Kind regards

Achim Rehor

IBM EMEA ESS/Spectrum Scale Support


gpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 10:06:07:

> From: Simon Thompson 
> To: gpfsug main discussion list 
> Date: 01/03/2021 10:06
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of 
NSD's
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> Or for hedging your bets about how you might want to use it in future.
> 
> We are never quite sure if we want to do something different in the 
> future with some of the storage, sure that might mean we want to 
> steal some space from a file-system, but that is perfectly valid. 
> And we have done this, both in temporary transient states (data 
> migration between systems), or permanently (found we needed 
> something on a separate file-system)
> 
> So yes, whilst there might be no performance impact from doing this, we 
> still do it.
> 
> I vaguely recall some of the old reasoning was around IO queues in 
> the OS, i.e. if you had 6 NSDs vs 16 NSDs attached to the NSD 
> server, you have 16 IO queues passing to multipath, which can help 
> keep the data pipes full. I suspect there was some optimal number of
> NSDs for different storage controllers, but I don't know if anyone 
> ever benchmarked that.
> 
> Simon
> 
> On 01/03/2021, 08:16, "gpfsug-discuss-boun...@spectrumscale.org on 
> behalf of achim.re...@de.ibm.com" wrote:
> 
> The reason for having multiple NSDs in legacy NSD (non-GNR) handling is 
> the increased parallelism, that gives you 'more spindles' and thus more 
> performance.
> In GNR the drives are used in parallel anyway through the GNR striping. 
> Therefore, you are using all drives of an ESS/GSS/DSS model under the hood 
> in the vdisks anyway. 
> 
> The only reason for having more NSDs is for using them for different 
> filesystems. 
> 
> 
> Mit freundlichen Grüßen / Kind regards
> 
> Achim Rehor
> 
> IBM EMEA ESS/Spectrum Scale Support
> 
>     gpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 
08:58:43:
> 
> > From: Jonathan Buzzard 
> > To: gpfsug-discuss@spectrumscale.org
> > Date: 01/03/2021 08:58
> > Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number 
of 
> NSD's
> > Sent by: gpfsug-discuss-boun...@spectrumscale.org
> > 
> > On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
> > > 
> > > I've tried benchmarking many vs. few vdisks per RG, and never could 
> > > see any performance difference.
> > 
> > That's encouraging.
> > 
> > > 
> > > Usually we create 1 vdisk per enclosure per RG, thinking this will 
> > > allow us to grow with same size vdisks when adding additional enclosures 
> > > in the future.
> > > 
> > > Don't think mmvdisk can be told to create multiple vdisks per RG 
> > > directly, so you have to manually create multiple vdisk sets each with 
> > > the appropriate size.
> > > 
> > 
> > Thing is back in the day, so GPFS v2.x/v3.x, there were strict warnings 
> > that you needed a minimum of six NSD's for optimal performance. I have 
> > sat in presentations where IBM employees have said so. What we were 
> > told back then is that GPFS needs a minimum number of NSD's in order to 
> > be able to spread the I/O's out. So if an NSD is being pounded for reads 
> > and a write comes in, it can direct it to a less busy NSD.
> > 
> > Now I can imagine that in an ESS/DSS-G, as it's being scattered to 
> > the winds under the hood, this is no longer relevant. But some notes to 
> > that effect for us old timers would be nice, if that is the case, to put 
> > our minds to rest.
> > 
> > 
> > JAB.
> > 
> > -- 
> > Jonathan A. Buzzard

Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Olaf Weiser
@all, please note...
 
As has been said, there is a major difference depending on whether we are talking about GNR or native (non-GNR) GPFS NSDs.
 
one "comon" key is the #queues in the OS to talk to a disk-device,
so if you run "classical" NSD  architecture.. you may check how many IOPS you can fire against your block devices...
GPFS's internal rule is ~ 3 IOs per device , you can adjust it .. .or (proceed below)
 
In GNR/ESS we (IBM) have pre-configured everything on the NSD server side, ready to use.
In non-GNR setups you have to do this job yourself (how many NSD workers, etc.); it's a bit more complex, but that's another topic.
 
The IMPORTANT key in both cases, on the client side, is:
ignorePrefetchLunCount=yes
workerThreads = ...the number of concurrent I/Os you think is OK...
 
Those parameters tell GPFS to use that many workers and do its work, ignoring the number of disks when calculating how much I/O traffic can be in flight.
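For example, a hedged sketch only (the values and the node class name are examples, not recommendations, and workerThreads changes generally need a daemon restart to take effect):

  # on the GPFS client nodes
  mmchconfig ignorePrefetchLunCount=yes,workerThreads=512 -N clientNodeClass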
 
For GNR: as Luis said, it is more about management of the filesystems than about performance when deciding to deviate from the default number of vdisks.
 
 
 
- Original message -From: "Luis Bolinches" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug-discuss@spectrumscale.orgCc: gpfsug-discuss@spectrumscale.orgSubject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD'sDate: Mon, Mar 1, 2021 10:08 AM 
Hi
 
There other reasons to have more than 1. It is management of those. When you have to add or remove NSDs of a FS having more than 1 makes it possible to empty some space and manage those in and out. Manually but possible. If you have one big NSD or even 1 per enclosure it might difficult or even not possible depending the number of enclosures and FS utilization.
 
Starting some ESS version (not DSS, cant comment on that) that I do not recall but in the last 6 months, we have change the default (for those that use the default) to 4 NSDs per enclosure for ESS 5000. There is no impact on performance either way on ESS, we tested it. But management of those on the long run should be easier.
--Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / SalutacionsLuis Bolinches
Consultant IT Specialist
IBM Spectrum Scale development
Mobile Phone: +358503112585
 
https://www.youracclaim.com/user/luis-bolinches
 
Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland"If you always give you will always have" --  Anonymous
 
 
 
- Original message -From: "Achim Rehor" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD'sDate: Mon, Mar 1, 2021 10:16 
The reason for having multiple NSDs in legacy NSD (non-GNR) handling isthe increased parallelism, that gives you 'more spindles' and thus moreperformance.In GNR the drives are used in parallel anyway through the GNRstriping.Therfore, you are using all drives of a ESS/GSS/DSS model under the hoodin the vdisks anyway.The only reason for having more NSDs is for using them for differentfilesystems. Mit freundlichen Grüßen / Kind regardsAchim RehorIBM EMEA ESS/Spectrum Scale Supportgpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 08:58:43:> From: Jonathan Buzzard > To: gpfsug-discuss@spectrumscale.org> Date: 01/03/2021 08:58> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number ofNSD's> Sent by: gpfsug-discuss-boun...@spectrumscale.org>> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:> >> > I?ve tried benchmarking many vs. few vdisks per RG, and never couldsee> > any performance difference.>> That's encouraging.>> >> > Usually we create 1 vdisk per enclosure per RG,   thinking this will> > allow us to grow with same size vdisks when adding additionalenclosures> > in the future.> >> > Don?t think mmvdisk can be told to create multiple vdisks per RG> > directly, so you have to manually create multiple vdisk sets each with> > the apropriate size.> >>> Thing is back in the day so GPFS v2.x/v3.x there where strict warnings> that you needed a minimum of six NSD's for optimal performance. I have> sat in presentations where IBM employees have said so. What we where> told back then is that GPFS needs a minimum number of NSD's in order to> be able to spread the I/O's out. So if an NSD is being pounded for reads> and a write comes in it. can direct it to a less busy NSD.>> Now I can imagine that in a ESS/DSS-G that as it's being scattered to> the winds under the hood this is no longer relevant. But some notes to> the effect for us old timers would be nice if that is the case to put> our minds to rest.>>> JAB.>> --> Jonathan A. Buzzard                         Tel: +44141-5483420> HPC System Administrator, ARCHIE-WeSt.> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG> ___> gpfsug-discuss mailing list> gpfsug-discuss at spectrumscale.org> https://urldefense.proofpoint.com/v2/url?>u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIGaQ=jf_iaSHvJObTbx-> 

Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Luis Bolinches
Hi
 
There are other reasons to have more than 1. It is management of those. When you have to add or remove NSDs of a FS, having more than 1 makes it possible to empty some space and manage those in and out. Manually, but possible. If you have one big NSD, or even 1 per enclosure, it might be difficult or even not possible, depending on the number of enclosures and FS utilization.
 
Starting at some ESS version (not DSS, can't comment on that) that I do not recall, but in the last 6 months, we have changed the default (for those that use the default) to 4 NSDs per enclosure for ESS 5000. There is no impact on performance either way on ESS, we tested it. But management of those in the long run should be easier.
-- 
Ystävällisin terveisin / Kind regards / Saludos cordiales / Salutations / Salutacions
Luis Bolinches
Consultant IT Specialist
IBM Spectrum Scale development
Mobile Phone: +358503112585
 
https://www.youracclaim.com/user/luis-bolinches
 
Ab IBM Finland Oy
Laajalahdentie 23
00330 Helsinki
Uusimaa - Finland
"If you always give you will always have" -- Anonymous
 
 
 
- Original message -From: "Achim Rehor" Sent by: gpfsug-discuss-boun...@spectrumscale.orgTo: gpfsug main discussion list Cc:Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD'sDate: Mon, Mar 1, 2021 10:16 
The reason for having multiple NSDs in legacy NSD (non-GNR) handling isthe increased parallelism, that gives you 'more spindles' and thus moreperformance.In GNR the drives are used in parallel anyway through the GNRstriping.Therfore, you are using all drives of a ESS/GSS/DSS model under the hoodin the vdisks anyway.The only reason for having more NSDs is for using them for differentfilesystems. Mit freundlichen Grüßen / Kind regardsAchim RehorIBM EMEA ESS/Spectrum Scale Supportgpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 08:58:43:> From: Jonathan Buzzard > To: gpfsug-discuss@spectrumscale.org> Date: 01/03/2021 08:58> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number ofNSD's> Sent by: gpfsug-discuss-boun...@spectrumscale.org>> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:> >> > I?ve tried benchmarking many vs. few vdisks per RG, and never couldsee> > any performance difference.>> That's encouraging.>> >> > Usually we create 1 vdisk per enclosure per RG,   thinking this will> > allow us to grow with same size vdisks when adding additionalenclosures> > in the future.> >> > Don?t think mmvdisk can be told to create multiple vdisks per RG> > directly, so you have to manually create multiple vdisk sets each with> > the apropriate size.> >>> Thing is back in the day so GPFS v2.x/v3.x there where strict warnings> that you needed a minimum of six NSD's for optimal performance. I have> sat in presentations where IBM employees have said so. What we where> told back then is that GPFS needs a minimum number of NSD's in order to> be able to spread the I/O's out. So if an NSD is being pounded for reads> and a write comes in it. can direct it to a less busy NSD.>> Now I can imagine that in a ESS/DSS-G that as it's being scattered to> the winds under the hood this is no longer relevant. But some notes to> the effect for us old timers would be nice if that is the case to put> our minds to rest.>>> JAB.>> --> Jonathan A. Buzzard                         Tel: +44141-5483420> HPC System Administrator, ARCHIE-WeSt.> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG> ___> gpfsug-discuss mailing list> gpfsug-discuss at spectrumscale.org> https://urldefense.proofpoint.com/v2/url?>u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIGaQ=jf_iaSHvJObTbx-> siA1ZOg=RGTETs2tk0Kz_VOpznDVDkqChhnfLapOTkxLvgmR2-> M=Mr4A8ROO2t7qFYTfTRM_LoPLllETw72h51FK07dye7Q=z6yRHIKsH-> IaOjtto4ZyUjFFe0vTGhqzYUiM23rEShg=>___gpfsug-discuss mailing listgpfsug-discuss at spectrumscale.orghttps://urldefense.proofpoint.com/v2/url?u=http-3A__gpfsug.org_mailman_listinfo_gpfsug-2Ddiscuss=DwIFAw=jf_iaSHvJObTbx-siA1ZOg=1mZ896psa5caYzBeaugTlc7TtRejJp3uvKYxas3S7Xc=whXeCPYkxzk5dnbnp3D-9cD3VasUhW-VIdocU6J7CuY=2QoNTGiUmBkQ_UhARaVpqet7Uo1lF15oAg6qNBiJPIQ=  
 

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Simon Thompson
Or for hedging your bets about how you might want to use it in future.

We are never quite sure whether we will want to do something different in the 
future with some of the storage; that might mean we want to steal some space 
from a file-system, but that is perfectly valid. And we have done this, both in 
temporary transient states (data migration between systems) and permanently 
(when we found we needed something on a separate file-system).

So yes, whilst there might be no performance impact from doing this, we still do it.

I vaguely recall some of the old reasoning was around IO queues in the OS, i.e. 
if you had 6 NSDs vs 16 NSDs attached to the NSD server, you have 16 IO queues 
passing to multipath, which can help keep the data pipes full. I suspect there 
was some optimal number of NSDs for different storage controllers, but I don't 
know if anyone ever benchmarked that.

Simon

On 01/03/2021, 08:16, "gpfsug-discuss-boun...@spectrumscale.org on behalf of 
achim.re...@de.ibm.com"  wrote:

The reason for having multiple NSDs in legacy NSD (non-GNR) handling is 
the increased parallelism, that gives you 'more spindles' and thus more 
performance.
In GNR the drives are used in parallel anyway through the GNR striping. 
Therefore, you are using all drives of an ESS/GSS/DSS model under the hood 
in the vdisks anyway. 

The only reason for having more NSDs is for using them for different 
filesystems. 


Mit freundlichen Grüßen / Kind regards

Achim Rehor

IBM EMEA ESS/Spectrum Scale Support

gpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 08:58:43:

> From: Jonathan Buzzard 
> To: gpfsug-discuss@spectrumscale.org
> Date: 01/03/2021 08:58
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of 
NSD's
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
> > 
> > I've tried benchmarking many vs. few vdisks per RG, and never could see 
> > any performance difference.
> 
> That's encouraging.
> 
> > 
> > Usually we create 1 vdisk per enclosure per RG, thinking this will 
> > allow us to grow with same size vdisks when adding additional enclosures 
> > in the future.
> > 
> > Don't think mmvdisk can be told to create multiple vdisks per RG 
> > directly, so you have to manually create multiple vdisk sets each with 
> > the appropriate size.
> > 
> 
> Thing is back in the day, so GPFS v2.x/v3.x, there were strict warnings 
> that you needed a minimum of six NSD's for optimal performance. I have 
> sat in presentations where IBM employees have said so. What we were 
> told back then is that GPFS needs a minimum number of NSD's in order to 
> be able to spread the I/O's out. So if an NSD is being pounded for reads 
> and a write comes in, it can direct it to a less busy NSD.
> 
> Now I can imagine that in an ESS/DSS-G, as it's being scattered to 
> the winds under the hood, this is no longer relevant. But some notes to 
> that effect for us old timers would be nice, if that is the case, to put 
> our minds to rest.
> 
> 
> JAB.
> 
> -- 
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-03-01 Thread Achim Rehor
The reason for having multiple NSDs in legacy NSD (non-GNR) handling is 
the increased parallelism, that gives you 'more spindles' and thus more 
performance.
In GNR the drives are used in parallel anyway through the GNR striping. 
Therefore, you are using all drives of an ESS/GSS/DSS model under the hood 
in the vdisks anyway. 

The only reason for having more NSDs is for using them for different 
filesystems. 

 
Mit freundlichen Grüßen / Kind regards

Achim Rehor

IBM EMEA ESS/Spectrum Scale Support

gpfsug-discuss-boun...@spectrumscale.org wrote on 01/03/2021 08:58:43:

> From: Jonathan Buzzard 
> To: gpfsug-discuss@spectrumscale.org
> Date: 01/03/2021 08:58
> Subject: [EXTERNAL] Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of 
NSD's
> Sent by: gpfsug-discuss-boun...@spectrumscale.org
> 
> On 28/02/2021 09:31, Jan-Frode Myklebust wrote:
> > 
> > I've tried benchmarking many vs. few vdisks per RG, and never could see 
> > any performance difference.
> 
> That's encouraging.
> 
> > 
> > Usually we create 1 vdisk per enclosure per RG, thinking this will 
> > allow us to grow with same size vdisks when adding additional enclosures 
> > in the future.
> > 
> > Don't think mmvdisk can be told to create multiple vdisks per RG 
> > directly, so you have to manually create multiple vdisk sets each with 
> > the appropriate size.
> > 
> 
> Thing is back in the day, so GPFS v2.x/v3.x, there were strict warnings 
> that you needed a minimum of six NSD's for optimal performance. I have 
> sat in presentations where IBM employees have said so. What we were 
> told back then is that GPFS needs a minimum number of NSD's in order to 
> be able to spread the I/O's out. So if an NSD is being pounded for reads 
> and a write comes in, it can direct it to a less busy NSD.
> 
> Now I can imagine that in an ESS/DSS-G, as it's being scattered to 
> the winds under the hood, this is no longer relevant. But some notes to 
> that effect for us old timers would be nice, if that is the case, to put 
> our minds to rest.
> 
> 
> JAB.
> 
> -- 
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
> 


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-02-28 Thread Jonathan Buzzard

On 28/02/2021 09:31, Jan-Frode Myklebust wrote:


I’ve tried benchmarking many vs. few vdisks per RG, and never could see 
any performance difference.


That's encouraging.



Usually we create 1 vdisk per enclosure per RG,   thinking this will 
allow us to grow with same size vdisks when adding additional enclosures 
in the future.


Don’t think mmvdisk can be told to create multiple vdisks per RG 
directly, so you have to manually create multiple vdisk sets each with 
the apropriate size.




Thing is, back in the day, so GPFS v2.x/v3.x, there were strict warnings 
that you needed a minimum of six NSD's for optimal performance. I have 
sat in presentations where IBM employees have said so. What we were 
told back then is that GPFS needs a minimum number of NSD's in order to 
be able to spread the I/O's out. So if an NSD is being pounded for reads 
and a write comes in, it can direct it to a less busy NSD.


Now I can imagine that in an ESS/DSS-G, as it's being scattered to 
the winds under the hood, this is no longer relevant. But some notes to 
that effect would be nice for us old timers, if that is the case, to put 
our minds at rest.



JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] dssgmkfs.mmvdisk number of NSD's

2021-02-28 Thread Jan-Frode Myklebust
I’ve tried benchmarking many vs. few vdisks per RG, and never could see any
performance difference.

Usually we create 1 vdisk per enclosure per RG,   thinking this will allow
us to grow with same size vdisks when adding additional enclosures in the
future.

Don’t think mmvdisk can be told to create multiple vdisks per RG directly,
so you have to manually create multiple vdisk sets each with the appropriate
size.
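Roughly along these lines (a sketch only; the set names, RAID code, block size
and the 10% sizing are placeholders):

  # carve the recovery groups into several same-sized vdisk sets
  for i in 1 2 3 4; do
    mmvdisk vdiskset define --vdisk-set data$i --recovery-group rg_1,rg_2 \
            --code 8+2p --block-size 8m --set-size 10%
  done
  mmvdisk vdiskset create --vdisk-set data1,data2,data3,data4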



  -jf

lør. 27. feb. 2021 kl. 19:01 skrev Jonathan Buzzard <
jonathan.buzz...@strath.ac.uk>:

>
> Doing an upgrade on our storage which involved replacing all the 4TB
> disks with 16TB disks. Some hiccups with five of the disks being dead
> when inserted but that is all sorted.
>
> So the system was originally installed with DSS-G 2.0a, so with "legacy"
> commands for vdisks etc. We had 10 metadata NSD's and 10 data NSD's per
> drawer, aka recovery group, of the D3284 enclosures.
>
> The dssgmkfs.mmvdisk has created exactly one data and one metadata NSD
> per drawer of a D3284, leading to a really small number of NSD's in the
> file system.
>
> All my instincts tell me that this is going to lead to horrible
> performance on the file system. Historically you wanted a reasonable
> number of NSD's in a system for decent performance.
>
> Taking what dssgmkfs.mmvdisk has given me, even with a DSS-G260 you
> would get only 12 NSD's of each type, which for a potentially ~5PB file
> system seems on the really low side to me.
>
> Is there any way to tell dssgmkfs.mmvdisk to create more NSD's than the
> one per recovery group, or is this no longer relevant and performance
> with really low numbers of NSD's is fine these days?
>
>
> JAB.
>
> --
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss