Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Carl Zetie
ST> I believe socket-based licenses are also about to be, or are already, no
ST> longer available for new customers (existing customers can continue to buy).

ST> Carl can probably comment on this?
 
That is correct. Friday Jan 11 is the last chance for *new* customers to buy 
Standard Edition sockets. 
 
And as Simon says, those of you who are currently Sockets customers can remain 
on Sockets, buying additional licenses and renewing existing licenses. (IBM 
Legal requires me to add that any statement about the future is an intention, 
not a commitment -- but, as I've said before, as long as it's my decision to 
make, my intent is to keep Sockets for as long as existing customers want them.) 

And yes, one of the reasons I wanted to get away from Socket pricing is the 
kind of scenario some of you brought up. Implementing the best deployment 
topology for your needs shouldn't be a licensing transaction. (Don't even get 
me started on client licenses.)
 
 
regards,

 
 
Carl Zetie  
Program Director  
Offering Management for Spectrum Scale, IBM  
  
(540) 882 9353 ][ Research Triangle Park
 ca...@us.ibm.com 



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Christof Schmitt
> Whilst we're on protocols, are there any restrictions on using mixed architectures? I don't recall seeing this but...
> E.g. my new shiny boxes are ppc64le systems and my old legacy nodes are x86. It's all CTDB locking, right?
> (OK, maybe mixing BE and LE hosts would be bad)
 
If you are using SMB, all CES nodes have to be the same architecture:
https://www.ibm.com/support/knowledgecenter/en/STXKQY_5.0.2/com.ibm.spectrum.scale.v5r02.doc/bl1ins_smbclients.htm
All CES nodes need to be the same hardware architecture (x86 versus Power®) and the same endianness (little endian versus big endian).
Regards, 
 
Christof Schmitt || IBM || Spectrum Scale Development || Tucson, AZ
christof.schm...@us.ibm.com || +1-520-799-2469 (T/L: 321-2469)
 
 

Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Simon Thompson
Can you use node affinity within CES groups?

For example, I have some shiny new servers I want to use normally. If I plan 
maintenance, I move the IP to another shiny box. But I also have some old, 
off-support legacy hardware that I'm happy to use in a DR situation (e.g. it 
is in another site). So I want a group each for my SMB boxes and my NFS boxes, 
with affinity in normal operation, and the old hardware available in case of 
failure.

Whilst we're on protocols, are there any restrictions on using mixed 
architectures? I don't recall seeing this but... E.g. my new shiny boxes are 
ppc64le systems and my old legacy nodes are x86. It's all CTDB locking, right? 
(OK, maybe mixing BE and LE hosts would be bad.)

(Sure, I'll take a performance hit when I fail over to the old nodes, but that 
is better than no service.)

Simon


Re: [gpfsug-discuss] Spectrum Scale protocol node service separation

2019-01-09 Thread Aaron S Palazzolo
Hey guys - I wanted to reply from the Scale development side. 
 
First off, consider CES as a stack and the implications of that (example commands below):
- all protocols are installed on all nodes
- if a specific protocol is enabled (SMB, NFS, OBJ, Block), it's enabled for all protocol nodes
- if a specific protocol is started (SMB, NFS, OBJ, Block), it's started on all nodes by default, unless manually specified.
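
For instance, enabling/starting a protocol is cluster-wide unless you name nodes explicitly. A minimal sketch (node names are hypothetical, and the -a / -N options are from memory -- verify against the mmces man page for your release):

   mmces service enable SMB                    # enables SMB across the whole CES cluster
   mmces service start SMB -a                  # starts SMB on all protocol nodes
   mmces service start SMB -N prot01,prot02    # ...or only on the named nodes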
 
As was indicated in the e-mail chain, you don't want to remove RPMs to create a subset of nodes serving various protocols, as this will cause overall issues.  You also don't want to manually disable protocols on some nodes but not others in order to get nodes that are 'only serving' SMB, for instance.  Manually stopping/starting protocols like this won't be respected by failover.
 
===
A few possible solutions if you want to segregate protocols to specific nodes are:
===
1) CES groups in combination with specific IPs / DNS hostnames that correspond to each protocol (see the command sketch after this list).
- As mentioned, this can still be bypassed if someone attempts a mount using an IP/DNS name not set for their protocol.  However, you could probably prevent some of this with an external firewall rule.
- Using CES groups confines the IPs/DNS hostnames to very specific nodes.
 
2) Firewall rules
- This is best done external to the cluster, at a level that can restrict specific protocol traffic to specific IPs/hostnames.
- Combine this with #1 for the best results.
- Although it may work, try to stay away from crazy firewall rules on each protocol node itself, as this can get confusing very quickly.  It's easier if you can set this up external to the nodes.
 
3) Similar to the above, but using the node-affinity CES IP policy and no CES groups.
- Upside: node affinity will attempt to keep your CES IPs associated with specific nodes.  So if you restrict specific protocol traffic to specific IPs, they'll stay on the nodes you designate.
- Watch out for failovers.  In error cases (or upgrades) where an IP needs to move to another node, it obviously can't remain on the node that's having issues.  This means you may have protocol traffic crossover when this occurs.
 
4) A separate remote cluster for each CES protocol
- In this example, you could make fairly small remote clusters (although we recommend at least 2-3 nodes for failover purposes).  The local cluster would provide the storage; the remote clusters would mount it.  One remote cluster could have only SMB enabled, another could have only OBJ enabled, etc.
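
As a concrete starting point for #1 and #3, here is a rough command sketch.  Node names, group names, and addresses are made up, so treat it as illustrative and verify the options against the mmchnode and mmces documentation for your release:

   # Put each protocol's nodes into their own CES group (hypothetical names)
   mmchnode --ces-group smb -N prot01,prot02
   mmchnode --ces-group nfs -N prot03,prot04

   # Bind CES IPs to those groups so each address only lands on its group's nodes
   mmces address add --ces-ip 10.10.10.1,10.10.10.2 --ces-group smb
   mmces address add --ces-ip 10.10.20.1,10.10.20.2 --ces-group nfs

   # Optionally keep each address 'sticky' to a preferred node (solution #3)
   mmces address policy node-affinity

   # Verify the node/group and address assignments
   mmces node list
   mmces address list

Clients would then mount SMB only via the 10.10.10.x addresses and NFS only via the 10.10.20.x addresses; the firewall rules in #2 are what actually stop anyone crossing over.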
 
--
I hope this helps a bit
 
Regards,

Aaron Palazzolo
IBM Spectrum Scale Deployment, Infrastructure, Virtualization
9042 S Rita Road, Tucson AZ 85744
Phone: 520-799-5161, T/L: 321-5161
E-mail: aspal...@us.ibm.com
 
 

Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Simon Thompson
I think remote-cluster support was only added recently (though we have been 
doing it since CES was released).

I agree that capacity licenses have freed us to implement a better solution... 
no longer do we run quorum/token managers on NSD nodes just to reduce socket 
costs.

I believe socket-based licenses are also about to be, or are already, no 
longer available for new customers (existing customers can continue to buy).

Carl can probably comment on this?

Simon



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Sanchez, Paul
The docs say: “CES supports the following export protocols: NFS, SMB, object, 
and iSCSI (block). Each protocol can be enabled or disabled in the cluster. If 
a protocol is enabled in the CES cluster, all CES nodes serve that protocol.” 
Which would seem to indicate that the answer is “no”.

This kind of thing is another good reason to license Scale by storage capacity 
rather than by sockets (PVU).  That approach was already a good idea due to the 
flexibility it allows to scale manager, quorum, and NSD server nodes for 
performance and high availability without affecting your software licensing 
costs.  This can result in better designs and the flexibility to respond more 
quickly to new problems by adding server nodes.

So assuming you’re not on the old PVU licensing model, it is trivial to deploy 
as many gateway nodes as needed to separate these into distinct remote 
clusters.  You can create an object gateway cluster and a CES gateway cluster, 
each of which only mounts and exports what is necessary.  You can even 
virtualize these servers and host them on the same hardware, if you’re into 
that.
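
For anyone who hasn't set up multicluster before, the shape of it is roughly as 
below.  Cluster names, node names, key paths, and filesystem names are invented 
for illustration; the authoritative steps are in the Scale multicluster 
documentation, so check each command there first.

   # On the storage-owning cluster: enable authentication, then register and
   # authorize the gateway cluster (after exchanging public key files)
   mmauth genkey new
   mmauth update . -l AUTHONLY
   mmauth add smbgw.example.com -k /tmp/smbgw_id_rsa.pub
   mmauth grant smbgw.example.com -f gpfs0

   # On the SMB gateway cluster: register the storage cluster and mount its
   # filesystem under a local device name
   mmremotecluster add storage.example.com -n stor01,stor02 -k /tmp/storage_id_rsa.pub
   mmremotefs add rgpfs0 -f gpfs0 -C storage.example.com -T /gpfs/rgpfs0
   mmmount rgpfs0 -a

You would then enable only SMB on that gateway cluster, and only object on a 
second gateway cluster built the same way.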

-Paul



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andi Rhod Christiansen
Hi Simon,

It was actually also the only solution I found if I want to keep them within 
the same cluster 😊

Thanks for the reply, I will see what we figure out!

Venlig hilsen / Best Regards

Andi Rhod Christiansen



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andi Rhod Christiansen
Hi Andrew,

Where can I request such a feature? 😊

Venlig hilsen / Best Regards

Andi Rhod Christiansen



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Simon Thompson
You have to run all services on all nodes (☹). Actually, it's technically 
possible to remove the packages once protocols are running on the node, but the 
next time you reboot the node it will get marked unhealthy and you'll spend an 
hour working out why… 

But what we do to split load is assign different IPs to different CES groups, 
and then assign the SMB nodes to the SMB group IPs, etc. …

Technically a user could still connect to the NFS (in our case) IPs with SMB 
protocol, but there’s not a lot we can do about that … though our upstream 
firewall drops said traffic.
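
To make that concrete: the upstream rule is just per-address port filtering. 
A hypothetical iptables sketch for an external firewall, assuming the SMB CES 
addresses live in 10.10.10.0/24 and the NFS addresses in 10.10.20.0/24 
(made-up subnets):

   # Drop SMB (TCP 445) aimed at the NFS-group addresses
   iptables -A FORWARD -d 10.10.20.0/24 -p tcp --dport 445 -j DROP
   # Drop NFS (TCP 2049) aimed at the SMB-group addresses
   iptables -A FORWARD -d 10.10.10.0/24 -p tcp --dport 2049 -j DROP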

Simon



Re: [gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andrew Beattie
Andi,
 
All the CES nodes in the same cluster will share the same protocol exports. If you want to separate them, you need to create remote-mount clusters and export the additional protocols via the remote mount.
 
It would actually be a useful RFE to have the ability to create CES groups attached to the base cluster and, per group, create exports of different protocols, but it's not available today.
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
E-mail: abeat...@au1.ibm.com
 
 


[gpfsug-discuss] Spectrum Scale protocol node service separation.

2019-01-09 Thread Andi Rhod Christiansen
Hi,

I seem to be unable to find any information on separating protocol services 
onto specific CES nodes within a cluster. Does anyone know if it is possible 
to take, let's say, 4 of the CES nodes within a cluster, divide them into two 
pairs, and have two of them running SMB and the other two running OBJ, instead 
of having them all run both services?

If it is possible, it would be great to hear the pros and cons of doing this 😊

Thanks in advance!

Venlig hilsen / Best Regards

Andi Christiansen
IT Solution Specialist


___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss