Re: [gpfsug-discuss] Multi-cluster question (was Re: gpfsug-discuss Digest, Vol 100, Issue 32)

2020-05-31 Thread Jonathan Buzzard

On 29/05/2020 20:55, Stephen Ulmer wrote:
> I have a question about multi-cluster, but it is related to this thread
> (it would be solving the same problem).
>
> Let’s say we have two clusters A and B, both clusters are normally
> shared-everything with no NSD servers defined.

Er, even in a shared-everything configuration with all nodes fibre channel
attached you still have to define NSD servers. That is, a given NSD has a
server (or ideally a list of servers) that arbitrates the disk. Unless that
has changed since the 3.x days; I have never run 4.x or later with all the
disks SAN attached on all the nodes.
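For illustration, the server list for an NSD is what goes in the stanza
file passed to mmcrnsd (or changed later with mmchnsd). The names below are
placeholders, and, as the follow-ups below note, recent releases treat the
servers= line as optional when every node sees the LUN directly over the
SAN:

    %nsd:
      device=/dev/mapper/lun01
      nsd=nsd_lun01
      servers=nsdserver1,nsdserver2
      usage=dataAndMetadata
      failureGroup=1
      pool=system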


> We want cluster B to be able to use a file system in cluster A.  If I
> zone the SAN such that cluster B can see all of cluster A’s disks, can I
> then define a multi-cluster relationship between them and mount a file
> system from A on B?
>
> To state it another way, must B’s I/O for the foreign file system pass
> through NSD servers in A, or can B’s nodes discover that they have
> FibreChannel paths to those disks and use them?

My understanding is that remote cluster mounts have to pass through the
NSD servers.



JAB.

--
Jonathan A. Buzzard Tel: +44141-5483420
HPC System Administrator, ARCHIE-WeSt.
University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
___
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
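For readers who want the mechanics behind Stephen’s question: the
cross-cluster relationship and mount are defined with the usual
mmauth / mmremotecluster / mmremotefs sequence, independently of how the
data path ends up being routed. A rough sketch, where the cluster names,
contact nodes, key file paths and the device name fsA are all placeholders
(exact options vary by Spectrum Scale release):

    # On owning cluster A: generate and enable the cluster key, then
    # authorise cluster B and grant it access to file system fsA
    mmauth genkey new
    mmauth update . -l AUTHONLY
    mmauth add clusterB.example.com -k /tmp/clusterB_id_rsa.pub
    mmauth grant clusterB.example.com -f fsA

    # On accessing cluster B: generate its own key the same way, then
    # define the remote cluster, the remote file system, and mount it
    mmauth genkey new
    mmauth update . -l AUTHONLY
    mmremotecluster add clusterA.example.com -n nodeA1,nodeA2 \
        -k /tmp/clusterA_id_rsa.pub
    mmremotefs add fsA -f fsA -C clusterA.example.com -T /gpfs/fsA
    mmmount fsA -a

The public keys (normally /var/mmfs/ssl/id_rsa.pub on each cluster) have to
be copied between the clusters out of band before the mmauth add and
mmremotecluster add steps.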


Re: [gpfsug-discuss] Multi-cluster question (was Re: gpfsug-discuss Digest, Vol 100, Issue 32)

2020-05-31 Thread Jan-Frode Myklebust
No, this is a common misconception.  You don’t need any NSD servers. NSD
servers are only needed if you have nodes without direct block access.

Remote cluster or not, disk access will be over the local block device
(without involving NSD servers in any way), or via an NSD server if local
access isn’t available. NSD servers are not «arbitrators» of access to a
disk; they’re just stupid proxies for I/O commands.
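Which path a given node is actually using can be checked per node with
something like the following (a sketch; fsA is a placeholder device name
and the exact output format varies by release):

    # For each disk of fsA, show whether I/O from the nodes is done
    # locally or routed through an NSD server
    mmlsdisk fsA -M

    # After re-zoning the SAN, have nodes re-scan for locally visible
    # NSD paths
    mmnsddiscover -a

A remote-cluster node that can see the LUNs over the SAN should show local
access there; one that cannot simply falls back to the NSD server path.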


  -jf

On Sun, 31 May 2020 at 11:31, Jonathan Buzzard <
jonathan.buzz...@strath.ac.uk> wrote:

> My understanding is that remote cluster mounts have to pass through the
> NSD servers.
>
>
> JAB.
>
> --
> Jonathan A. Buzzard Tel: +44141-5483420
> HPC System Administrator, ARCHIE-WeSt.
> University of Strathclyde, John Anderson Building, Glasgow. G4 0NG
> ___
> gpfsug-discuss mailing list
> gpfsug-discuss at spectrumscale.org
> http://gpfsug.org/mailman/listinfo/gpfsug-discuss
>


Re: [gpfsug-discuss] Multi-cluster question (was Re: gpfsug-discuss Digest, Vol 100, Issue 32)

2020-05-31 Thread Avila, Geoffrey
The local-block-device method of I/O is what is usually termed "SAN mode";
right?

On Sun, May 31, 2020 at 12:47 PM Jan-Frode Myklebust wrote:

>
> No, this is a common misconception.  You don’t need any NSD servers. NSD
> servers are only needed if you have nodes without direct block access.
>
> Remote cluster or not, disk access will be over the local block device
> (without involving NSD servers in any way), or via an NSD server if local
> access isn’t available. NSD servers are not «arbitrators» of access to a
> disk; they’re just stupid proxies for I/O commands.


Re: [gpfsug-discuss] gpfsug-discuss Digest, Vol 100, Issue 32

2020-05-31 Thread Valdis Klētnieks
On Fri, 29 May 2020 22:30:08 +0100, Jonathan Buzzard said:
> Ethernet goes *very* fast these days you know :-) In fact *much* faster
> than fibre channel.

Yes, but the justification, purchase, and installation of 40G or 100G Ethernet
interfaces in the machines involved, plus the routers/switches along the way,
can go very slowly indeed.

So finding a way to replace 10G Ether with 16G FC can be a win.
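As a very rough back-of-envelope comparison (nominal line rates, ignoring
protocol overhead):

    10 GbE : 10 Gbit/s ≈ 1250 MB/s per direction
    16G FC : rated at  ≈ 1600 MB/s per direction

so a single 16G FC path is already in the same ballpark as, or slightly
ahead of, a single 10G Ethernet link, and the usual dual-HBA/dual-fabric
setup doubles that.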


