Re: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13

2020-12-10 Thread Jonathan Buzzard

On 10/12/2020 21:59, Andrew Beattie wrote:

Thanks Ed,

The UQ team are well aware of the current limits published in the FAQ.

However, the issue is not the number of physical nodes or the number of concurrent 
user sessions, but rather that the number of SMB / NFS exports that 
Spectrum Scale supports from a single cluster, or even from remote-mount 
protocol clusters, is no longer enough for their research environment.


The current total number of exports cannot exceed 1,000, which is an 
issue when they have many thousands of research project IDs, with 
users needing access to each project ID under its relevant security 
permissions.
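
For a rough sense of where a cluster sits against that ceiling, the standard 
CES listing commands can simply be counted; output formats vary by release, so 
treat the numbers as approximate and trim any header lines:

    # approximate counts of configured CES exports
    mmsmb export list | wc -l    # SMB exports, against the ~1,000 guideline
    mmnfs export list | wc -l    # NFS exports are counted separately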


Grouping project IDs under a single export isn't a viable option, as 
there is no simple way to identify in advance which research group / user is 
going to request a new project ID; new project IDs are automatically created 
and allocated when a request for storage allocation is fulfilled.


Project IDs (independent filesets) are published not only as SMB 
exports, but are also mounted via multiple AFM cache clusters onto high-
performance instrument clusters, multiple HPC clusters, and up to 5 
different campus access points, including remote universities.
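
For readers less familiar with that layout, a minimal sketch of how one such 
project ID might be provisioned; the file system name, fileset name and paths 
below are hypothetical and the real UQ provisioning portal will differ:

    # create an independent fileset for a new project, link it, and publish it over SMB
    mmcrfileset gpfs0 projA1234 --inode-space new
    mmlinkfileset gpfs0 projA1234 -J /gpfs/gpfs0/projects/projA1234
    mmsmb export add projA1234 /gpfs/gpfs0/projects/projA1234

Each export created this way counts towards the 1,000-export total discussed above.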


The data workflow is not a simple linear workflow, and the mixture of 
different types of users and requests for storage provisioning has 
resulted in the University creating its own provisioning portal. The 
portal interacts with the Spectrum Scale data fabric (multiple Spectrum 
Scale clusters in a single global namespace, connected via 100 Gb 
Ethernet over AFM) at multiple points to deliver project ID provisioning 
at the relevant locations specified by the user / research group.


One point of data surfacing in this data fabric is the Spectrum Scale 
Protocols cluster that Les manages, which provides the central user 
access point via SMB or NFS. All research users across the university 
who want to access one or more of their storage allocations do so via 
the SMB / NFS mount points on this specific storage cluster.


I am not sure thousands of SMB exports are ever a good idea. I suspect 
Windows Server would keel over and die too in that scenario.


My suggestion would be to look into consolidating the SMB exports and 
then masking it all with DFS.


Though this presumes that they are not handing out "project" security 
credentials that are shared between multiple users. That would be very 
bad.
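
As a rough illustration of that idea (not a tested recipe): a handful of 
consolidated CES exports could sit behind a Windows DFS namespace, with one 
lightweight DFS folder per project pointing into a consolidated share. Server, 
namespace and project names below are hypothetical:

    # PowerShell on a Windows DFS namespace server (DFSN module assumed)
    New-DfsnRoot -Path "\\campus.example\research" -TargetPath "\\ces-cluster\dfsroot" -Type DomainV2
    # one DFS folder per project, all landing inside a single consolidated SMB export
    New-DfsnFolder -Path "\\campus.example\research\projA1234" -TargetPath "\\ces-cluster\projects\projA1234"

Per-project security would then have to be enforced with ACLs on the fileset 
directories rather than with per-export settings.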



JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13

2020-12-10 Thread Andrew Beattie
on list 
> Subject: [gpfsug-discuss] Protocol limits
> Message-ID:
> 
> Content-Type: text/plain; charset="utf-8"
> 
> hi all
> 
> we run a large number of shares from CES servers connected to a single
> scale cluster
> we understand the current supported limit is 1000 SMB shares, we run the
> same number of NFS shares
> 
> we also understand that using external CES cluster to increase that limit
> is not supported based on the documentation, we use the same authentication
> for all shares, we do have additional use cases for sharing where this
> pathway would be attractive going forward
> 
> so the question becomes if we need to run 2 SMB and NFS shares off a
> scale cluster is there any hardware design we can use to do this whilst
> maintaining support
> 
> I have submitted a support request to ask if this can be done but thought I
> would ask the collective good if this has already been solved
> 
> thanks
> 
> leslie
> 
> --
> 
> Message: 2
> Date: Thu, 10 Dec 2020 00:21:03 +0100
> From: Jan-Frode Myklebust 
> To: gpfsug main discussion list 
> Subject: Re: [gpfsug-discuss] Protocol limits
> Message-ID:
> 
> Content-Type: text/plain; charset="utf-8"
> 
> My understanding of these limits is that they exist to keep the
> configuration files from becoming too large, which makes
> changing/processing them somewhat slow.
> 
> For SMB shares, you might be able to limit the number of configured shares
> by using wildcards in the config (%U). These wildcarded entries count as
> one share. Don't know if similar tricks can be done for NFS.
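
For illustration, in plain-Samba terms the wildcard idea looks roughly like the
stanza below; the path is hypothetical, and whether the equivalent can be set
through the CES mmsmb tooling would need to be confirmed:

    [userdirs]
        # a single share definition that resolves per connecting user;
        # %U is Samba's substitution for the requested session user name
        path = /gpfs/gpfs0/home/%U
        valid users = %U
        read only = no

Many per-user shares then collapse into one configured share entry.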
> 
> 
> 
>   -jf
> 
> On Wed, 9 Dec 2020 at 23:45, leslie elliott <
> leslie.james.elli...@gmail.com> wrote:
> 
> >
> > hi all
> >
> > we run a large number of shares from CES servers connected to a single
> > scale cluster
> > we understand the current supported limit is 1000 SMB shares, we run the
> > same number of NFS shares
> >
> > we also understand that using external CES cluster to increase that limit
> > is not supported based on the documentation, we use the same authentication
> > for all shares, we do have additional use cases for sharing where this
> > pathway would be attractive going forward
> >
> > so the question becomes if we need to run 2 SMB and NFS shares off a
> > scale cluster is there any hardware design we can use to do this whilst
> > maintaining support
> >
> > I have submitted a support request to ask if this can be done but thought
> > I would ask the collective good if this has already been solved
> >
> > thanks
> >
> > leslie

___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss


Re: [gpfsug-discuss] Contents of gpfsug-discuss Digest, Vol 107, Issue 13

2020-12-10 Thread Edward Boyd
Please review the CES limits in the FAQ, which states:

Q5.2: What are some scaling considerations for the protocols function?

A5.2: Scaling considerations for the protocols function include:

The number of protocol nodes. If you are using SMB in any combination with
other protocols, you can configure only up to 16 protocol nodes. This is a
hard limit, and SMB cannot be enabled if there are more protocol nodes. If
only NFS and Object are enabled, you can have 32 nodes configured as protocol
nodes.

The number of client connections. A maximum of 3,000 SMB connections is
recommended per protocol node, with a maximum of 20,000 SMB connections per
cluster. A maximum of 4,000 NFS connections per protocol node is recommended.
A maximum of 2,000 Object connections per protocol node is recommended. The
maximum number of connections depends on the amount of memory configured and
sufficient CPU. We recommend a minimum of 64 GB of memory for Object-only or
NFS-only use cases. If you have multiple protocols enabled, or if you have SMB
enabled, we recommend 128 GB of memory on the system.

https://www.ibm.com/support/knowledgecenter/STXKQY/gpfsclustersfaq.html?view=kc#maxproto
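
Two quick checks against the node limits quoted above (a sketch; output
details vary by release):

    mmces node list          # how many CES protocol nodes are configured
    mmces service list -a    # which protocols (SMB, NFS, OBJ) are enabled on which nodes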
Edward L. Boyd (Ed)
IBM Certified Client Technical Specialist, Level 2 Expert
Open Foundation, Master Certified Technical Specialist
IBM Systems, Storage Solutions, US Federal
407-271-9210 Office / Cell / Text
eb...@us.ibm.com