Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-08 Thread Joe Julian



On 03/07/18 14:47, Jim Kinney wrote:
[snip]. 
The gluster-fuse client works but is slower than most people like. I 
use the fuse process in my setup at work. ...


That depends on the use case and configuration. With client-side caching 
and cache invalidation enabled, a good number of the performance complaints 
can be addressed in a similar (arguably better) way to how NFS makes things 
fast.
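
For reference, a minimal sketch of the volume options usually involved in 
client-side caching with upcall-based invalidation (the volume name "gv0" is 
made up; timeouts should be tuned to the workload):

    # enable upcall-based cache invalidation and md-cache on the volume
    gluster volume set gv0 features.cache-invalidation on
    gluster volume set gv0 features.cache-invalidation-timeout 600
    gluster volume set gv0 performance.cache-invalidation on
    gluster volume set gv0 performance.md-cache-timeout 600
    gluster volume set gv0 network.inode-lru-limit 50000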




On Wed, 2018-03-07 at 14:50 -0500, Ben Mason wrote:

Hello,

I'm designing a 2-node, HA NAS that must support NFS. I had planned 
on using GlusterFS native NFS until I saw that it is being 
deprecated. Then, I was going to use GlusterFS + NFS-Ganesha until I 
saw that the Ganesha HA support ended after 3.10 and its replacement 
is still a WIP. So, I landed on GlusterFS + kernel NFS + corosync & 
pacemaker, which seems to work quite well. Are there any performance 
issues or other concerns with using GlusterFS as a replication layer 
and kernel NFS on top of that?


Thanks!
___
Gluster-users mailing list
Gluster-users@gluster.org 
http://lists.gluster.org/mailman/listinfo/gluster-users

--
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-08 Thread Joe Julian
There has been a deadlock problem in the past where the knfs module 
and the fuse module each needed more memory to satisfy a fop and neither 
could acquire that memory because of competing locks. This caused an 
infinite wait. I'm not sure whether anything was ever done in the kernel 
to remedy that.



On 03/07/18 11:50, Ben Mason wrote:

Hello,

I'm designing a 2-node, HA NAS that must support NFS. I had planned on 
using GlusterFS native NFS until I saw that it is being deprecated. 
Then, I was going to use GlusterFS + NFS-Ganesha until I saw that the 
Ganesha HA support ended after 3.10 and its replacement is still a 
WIP. So, I landed on GlusterFS + kernel NFS + corosync & pacemaker, 
which seems to work quite well. Are there any performance issues or 
other concerns with using GlusterFS as a replication layer and kernel 
NFS on top of that?


Thanks!


___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users



Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-07 Thread Ondrej Valousek
Are you saying that accessing Gluster via NFS is actually faster than the 
native (FUSE) client?
Still, I would like to know why we can't use the kernel NFS server on the 
data bricks. I understand we can't use it on the MDS, as it doesn't support 
pNFS.

Ondrej

From: gluster-users-boun...@gluster.org 
[mailto:gluster-users-boun...@gluster.org] On Behalf Of Jim Kinney
Sent: Wednesday, March 07, 2018 11:47 PM
To: gluster-users@gluster.org
Subject: Re: [Gluster-users] Kernel NFS on GlusterFS

Gluster does the sync part better than corosync. It's not an active/passive 
failover system; it's more all-active. Gluster handles the recovery once all 
nodes are back online.

That requires the client tool chain to understand that a write goes to all 
storage devices, not just the active one.

3.10 is a long-term support release. Upgrading to 3.12 or 4 is not a 
significant issue once a replacement for NFS-Ganesha stabilizes.

Kernel NFS doesn't understand "write to two IP addresses". That's what 
NFS-Ganesha does. The gluster-fuse client works but is slower than most people 
like. I use the fuse process in my setup at work. I will be changing to 
NFS-Ganesha as part of the upgrade to 3.10.

On Wed, 2018-03-07 at 14:50 -0500, Ben Mason wrote:
Hello,

I'm designing a 2-node, HA NAS that must support NFS. I had planned on using 
GlusterFS native NFS until I saw that it is being deprecated. Then, I was going 
to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA support ended 
after 3.10 and its replacement is still a WIP. So, I landed on GlusterFS + 
kernel NFS + corosync & pacemaker, which seems to work quite well. Are there 
any performance issues or other concerns with using GlusterFS as a replication 
layer and kernel NFS on top of that?

Thanks!

___

Gluster-users mailing list

Gluster-users@gluster.org

http://lists.gluster.org/mailman/listinfo/gluster-users

--

James P. Kinney III



Every time you stop a school, you will have to build a jail. What you

gain at one end you lose at the other. It's like feeding a dog on his

own tail. It won't fatten the dog.

- Speech 11/23/1900 Mark Twain



http://heretothereideas.blogspot.com/

___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

Re: [Gluster-users] Kernel NFS on GlusterFS

2018-03-07 Thread Jim Kinney
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles the
recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices, not just the active one.
3.10 is a long-term support release. Upgrading to 3.12 or 4 is not a
significant issue once a replacement for NFS-Ganesha stabilizes.
Kernel NFS doesn't understand "write to two IP addresses". That's what
NFS-Ganesha does. The gluster-fuse client works but is slower than most
people like. I use the fuse process in my setup at work. I will be
changing to NFS-Ganesha as part of the upgrade to 3.10.
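
For context, exporting a Gluster FUSE mount through kernel NFS generally looks
something like the sketch below (host name, volume name, paths and fsid are
made up; knfs needs an explicit fsid= here because FUSE filesystems have no
stable device number):

    # mount the gluster volume locally through the FUSE client
    mount -t glusterfs server1:/gv0 /export/gv0

    # /etc/exports entry -- fsid= is required when exporting a FUSE mount
    /export/gv0  *(rw,sync,fsid=1001,no_subtree_check)

    # reload the kernel NFS export table
    exportfs -ra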
On Wed, 2018-03-07 at 14:50 -0500, Ben Mason wrote:
> Hello,
> I'm designing a 2-node, HA NAS that must support NFS. I had planned
> on using GlusterFS native NFS until I saw that it is being
> deprecated. Then, I was going to use GlusterFS + NFS-Ganesha until I
> saw that the Ganesha HA support ended after 3.10 and its replacement
> is still a WIP. So, I landed on GlusterFS + kernel NFS + corosync &
> pacemaker, which seems to work quite well. Are there any performance
> issues or other concerns with using GlusterFS as a replication layer
> and kernel NFS on top of that?
> 
> Thanks!
> 
> ___
> Gluster-users mailing list
> Gluster-users@gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users

[Gluster-users] Kernel NFS on GlusterFS

2018-03-07 Thread Ben Mason
Hello,

I'm designing a 2-node, HA NAS that must support NFS. I had planned on
using GlusterFS native NFS until I saw that it is being deprecated. Then, I
was going to use GlusterFS + NFS-Ganesha until I saw that the Ganesha HA
support ended after 3.10 and its replacement is still a WIP. So, I landed
on GlusterFS + kernel NFS + corosync & pacemaker, which seems to work quite
well. Are there any performance issues or other concerns with using
GlusterFS as a replication layer and kernel NFS on top of that?

Thanks!
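
For what it's worth, a rough sketch of the pacemaker side of such a setup,
assuming the pcs shell and a made-up floating IP (the Gluster volume itself
is mounted and managed outside the cluster):

    # floating IP that NFS clients mount
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24

    # kernel NFS server resource
    pcs resource create nfs_server ocf:heartbeat:nfsserver nfs_shared_infodir=/var/lib/nfs

    # keep them together and bring the IP up only after the NFS server is running
    pcs constraint colocation add nfs_vip with nfs_server INFINITY
    pcs constraint order nfs_server then nfs_vip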
___
Gluster-users mailing list
Gluster-users@gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users