Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-26 Thread Karthik Subrahmanya
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > >
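For reference, the client-quorum behaviour quoted above is controlled per volume through the cluster.quorum-type option. A minimal sketch, assuming a hypothetical volume named myvol:

    # Enforce client quorum; in a replica 2 volume, "auto" means the first
    # brick of each replica pair must be up for the pair to accept writes.
    gluster volume set myvol cluster.quorum-type auto

    # Verify the current value (the volume name is a placeholder).
    gluster volume get myvol cluster.quorum-type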

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-26 Thread TomK
On 2/26/2018 7:14 AM, Kaleb S. KEITHLEY wrote: Hey, Yep. A blog is where I was writing it up to begin with. Anyway, got a lot of demand for it over the last day, so here it is: http://microdevsys.com/wp/glusterfs-configuration-and-setup-w-nfs-ganesha-for-an-ha-nfs-cluster/ Skip to the

Re: [Gluster-users] Problems with write-behind with large files on Gluster 3.8.4

2018-02-26 Thread Raghavendra Gowdappa
+csaba On Tue, Feb 27, 2018 at 2:49 AM, Jim Prewett wrote: > > Hello, > > I'm having problems when write-behind is enabled on Gluster 3.8.4. > > I have 2 Gluster servers each with a single brick that is mirrored between > them. The code causing these issues reads two
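As a quick way to confirm that write-behind is the trigger, the translator can be switched off per volume. A minimal sketch, assuming a hypothetical volume named myvol:

    # Disable the write-behind translator to see whether the problem
    # goes away; "myvol" is a placeholder volume name.
    gluster volume set myvol performance.write-behind off

    # Re-enable it after testing (write-behind is on by default).
    gluster volume set myvol performance.write-behind on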

Re: [Gluster-users] new Gluster cluster: 3.10 vs 3.12

2018-02-26 Thread Vlad Kopylov
Thanks! On Mon, Feb 26, 2018 at 4:26 PM, Ingard Mevåg wrote: > After discussing with Xavi in #gluster-dev we found out that we could > eliminate the slow lstats by disabling disperse.eager-lock. > There is an open issue here: >

Re: [Gluster-users] new Gluster cluster: 3.10 vs 3.12

2018-02-26 Thread Anatoliy Dmytriyev
Hi Vlad and Ingard, Thanks a lot for the replies. Regards, Anatoliy > On 26 Feb 2018, at 22:26, Ingard Mevåg wrote: > > After discussing with Xavi in #gluster-dev we found out that we could > eliminate the slow lstats by disabling disperse.eager-lock. > There is an open

Re: [Gluster-users] new Gluster cluster: 3.10 vs 3.12

2018-02-26 Thread Ingard Mevåg
After discussing with Xavi in #gluster-dev we found out that we could eliminate the slow lstats by disabling disperse.eager-lock. There is an open issue here: https://bugzilla.redhat.com/show_bug.cgi?id=1546732
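For anyone hitting the same slow lstat calls, the workaround above amounts to a single volume option. A minimal sketch, assuming a hypothetical volume named myvol:

    # Disable eager locking on the disperse volume to avoid the slow
    # lstat behaviour tracked in bug 1546732; "myvol" is a placeholder.
    gluster volume set myvol disperse.eager-lock off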

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-26 Thread WK
+1 I also would like to see those instructions. I've been interested in NFS-Ganesha with Gluster, but there weren't obvious references that were up to date (let alone the introduction of StorHaug). -wk On 2/25/2018 10:28 PM, Serkan Çoban wrote: I would like to see the steps for reference, can

[Gluster-users] Release 4.0: RC1 tagged

2018-02-26 Thread Shyam Ranganathan
Hi, RC1 is tagged in the code, and the request for packaging it is on its way. We should have packages as early as today, and we ask the community to test them and return some feedback. We have about 3-4 days (till Thursday) for any pending fixes and the final release to happen, so

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-26 Thread Ryan Wilkinson
Here is info about the RAID controllers. They don't seem to be the culprit. Slow host: PERC H710 Mini (Embedded), Firmware Version 21.3.4-0001, Cache Memory Size 512 MB. Fast host: PERC H310 Mini (Embedded), Firmware Version 20.12.1-0002, Cache Memory Size 0 MB. Slow host: PERC H310 Mini

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-26 Thread Alvin Starr
I would be really surprised if the problem was related to iDRAC. The iDRAC processor is a standalone CPU with its own NIC, and it runs independently of the main CPU. That being said, it does have visibility into the whole system. Try using dmidecode to compare the systems and take a close look at
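A minimal sketch of that comparison, assuming root SSH access and the hypothetical hostnames slowhost and fasthost:

    # Dump DMI/SMBIOS data on both machines and diff the results.
    ssh root@slowhost dmidecode > slowhost.dmi
    ssh root@fasthost dmidecode > fasthost.dmi
    diff slowhost.dmi fasthost.dmi

    # Narrow the dump to specific tables, e.g. BIOS and baseboard.
    dmidecode -t bios -t baseboard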

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-26 Thread Ryan Wilkinson
I've tested about 12 different Dell servers. Only a couple of them have iDRAC Express; all the others have iDRAC Enterprise. All the boxes with Enterprise perform poorly, and the couple that have Express perform well. I use the disks in RAID mode on all of them. I've tried a few non-Dell

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-26 Thread Serkan Çoban
I don't think it is related to iDRAC itself; rather, some configuration is wrong or there is some hardware error. Did you check the battery of the RAID controller? Do you use the disks in JBOD mode or RAID mode? On Mon, Feb 26, 2018 at 6:12 PM, Ryan Wilkinson wrote: > Thanks for the suggestion.
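One way to check the controller battery, assuming Dell's OpenManage (omreport) or MegaCli is installed; the controller index and install path are assumptions:

    # With OpenManage:
    omreport storage battery controller=0

    # Or with MegaCli, if that is what is available:
    /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aALL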

Re: [Gluster-users] Gluster performance / Dell Idrac enterprise conflict

2018-02-26 Thread Ryan Wilkinson
Thanks for the suggestion. I tried both of these with no difference in performance. I have tried several other Dell hosts with iDRAC Enterprise and am getting the same results. I also tried a new Dell T130 with iDRAC Express and was getting over 700 MB/s. Have any other users had this issue with iDRAC
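For comparison, a rough sequential-write test of the kind that produces numbers like 700 MB/s; the mount point and sizes are placeholders, and the exact method used above isn't stated:

    # oflag=direct bypasses the page cache so the result reflects the
    # storage path; drop it if the mount doesn't support direct I/O.
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=1M count=4096 oflag=direct
    rm /mnt/glustervol/ddtest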

[Gluster-users] rpc/glusterd-locks error

2018-02-26 Thread Vineet Khandpur
Good morning. We have a 6-node cluster; 3 nodes are participating in a replica 3 volume. Naming convention: xx01 - the 3 nodes participating in ovirt_vol; xx02 - the 3 nodes NOT participating in ovirt_vol. Last week, we restarted glusterd on each node in the cluster to update (one at a time). The three xx01 nodes
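A sketch of that rolling restart, one node at a time, with sanity checks between nodes (the volume name ovirt_vol is taken from the message; the checks are assumptions about the procedure used):

    # On each node in turn:
    systemctl restart glusterd
    gluster peer status                  # all peers should show Connected
    gluster volume heal ovirt_vol info   # wait for pending heals to drain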

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-26 Thread Dave Sherohman
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes

Re: [Gluster-users] Quorum in distributed-replicate volume

2018-02-26 Thread Karthik Subrahmanya
Hi Dave, On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman wrote: > I've configured 6 bricks as distributed-replicated with replica 2, > expecting that all active bricks would be usable so long as a quorum of > at least 4 live bricks is maintained. > The client quorum is

Re: [Gluster-users] NFS Ganesha HA w/ GlusterFS

2018-02-26 Thread Kaleb S. KEITHLEY
On 02/25/2018 08:29 PM, TomK wrote: > Hey Guys, > > A success story instead of a question. > > With your help, I managed to get the HA component working with HAPROXY and > keepalived to build a fairly resilient NFS v4 VM cluster. (Used > Gluster, NFS Ganesha v2.60, HAPROXY, keepalived w/
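A minimal sketch of the keepalived side of such a setup, assuming a hypothetical interface eth0 and floating VIP 192.168.0.250; the router id and priorities are illustrative only:

    # /etc/keepalived/keepalived.conf -- minimal VRRP block for the VIP
    vrrp_instance NFS_VIP {
        state MASTER            # BACKUP on the other nodes
        interface eth0          # assumption: adjust to your NIC
        virtual_router_id 51
        priority 100            # lower priority on BACKUP nodes
        virtual_ipaddress {
            192.168.0.250/24    # hypothetical floating VIP
        }
    }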

[Gluster-users] Quorum in distributed-replicate volume

2018-02-26 Thread Dave Sherohman
I've configured 6 bricks as distributed-replicated with replica 2, expecting that all active bricks would be usable so long as a quorum of at least 4 live bricks is maintained. However, I have just found
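For concreteness, a layout like the one described can be created with bricks paired in the order they are listed; hostnames and brick paths below are placeholders:

    # 6 bricks, replica 2 -> a 3x2 distributed-replicate volume; adjacent
    # bricks form the replica pairs: (srv1,srv2), (srv3,srv4), (srv5,srv6).
    gluster volume create myvol replica 2 \
        srv1:/data/brick srv2:/data/brick \
        srv3:/data/brick srv4:/data/brick \
        srv5:/data/brick srv6:/data/brick
    gluster volume start myvol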