Re: [Gluster-users] Proposing to bring the previous ganesha HA cluster solution back to gluster code as a gluster-7 feature

2019-05-03 Thread Strahil
Hi Jiffin,

No vendor will support your corosync/pacemaker stack if you do not have proper 
fencing.
As Gluster is already a cluster of its own, it makes sense to control 
everything from there.

Best Regards,
Strahil Nikolov

On May 3, 2019 09:08, Jiffin Tony Thottan  wrote:
>
>
> On 30/04/19 6:59 PM, Strahil Nikolov wrote: 
> > Hi, 
> > 
> > I'm posting this again as it got bounced. 
> > Keep in mind that corosync/pacemaker is hard for new admins/users to set 
> > up properly. 
> > 
> > I'm still trying to remediate the effects of poor configuration at work. 
> > Also, storhaug is nice for hyperconverged setups where the host is not only 
> > hosting bricks, but other workloads as well. 
> > Corosync/pacemaker require proper fencing to be set up, and most of the 
> > stonith resources 'shoot the other node in the head'. 
> > I would be happy to see an easy-to-deploy option (say, 
> > 'cluster.enable-ha-ganesha true') where gluster brings up the 
> > floating IPs and takes care of the NFS locks, so no disruption will be 
> > felt by the clients. 
>
>
> It does take care of those, but certain prerequisites need to be followed. 
> Note that fencing won't be configured for this setup; we may think about it 
> in the future. 
>
> -- 
>
> Jiffin 
>
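For context, the pre-3.10 workflow that this proposal would bring back looked
roughly like the sketch below. The volume name 'myvol' and hostnames are
placeholders, and the exact commands may change in the restored implementation:

    # Enable the shared storage volume used by the HA configuration
    gluster volume set all cluster.enable-shared-storage enable

    # Describe the cluster (name, nodes, VIPs) in /etc/ganesha/ganesha-ha.conf
    # on all participating nodes, then bring up the HA cluster:
    gluster nfs-ganesha enable

    # Export a volume through NFS-Ganesha
    gluster volume set myvol ganesha.enable on
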
> > 
> > Still, this will be a lot of work to achieve. 
> > 
> > Best Regards, 
> > Strahil Nikolov 
> > 
> > On Apr 30, 2019 15:19, Jim Kinney  wrote: 
> >>    
> >> +1! 
> >> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS 
> >> instead of fuse mounts. Having an integrated, designed-in process to 
> >> coordinate multiple nodes into an HA cluster will be very welcome. 
> >> 
> >> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan 
> >>  wrote: 
> >>>    
> >>> Hi all, 
> >>> 
> >>> Some of you folks may be familiar with HA solution provided for 
> >>> nfs-ganesha by gluster using pacemaker and corosync. 
> >>> 
> >>> That feature was removed in glusterfs 3.10 in favour of the common HA 
> >>> project "Storhaug". However, Storhaug has not progressed 
> >>> 
> >>> much over the last two years and its development is currently halted, so we 
> >>> are planning to restore the old HA ganesha solution 
> >>> 
> >>> to the gluster code repository with some improvements, targeting the next 
> >>> gluster release, 7. 
> >>> 
> >>> I have opened an issue [1] with the details and posted an initial set of 
> >>> patches [2]. 
> >>> 
> >>> Please share your thoughts on the same. 
> >>> 
> >>> 
> >>> Regards, 
> >>> 
> >>> Jiffin 
> >>> 
> >>> [1] https://github.com/gluster/glusterfs/issues/663 
> >>> 
> >>> [2] 
> >>> https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
> >>>  
> >>> 
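For reference, the HA configuration in the old solution lived in
/etc/ganesha/ganesha-ha.conf. A minimal illustrative example follows; the key
names are taken from the pre-3.10 documentation, the hostnames and addresses
are placeholders, and the format may be revised in the restored feature:

    # /etc/ganesha/ganesha-ha.conf
    HA_NAME="ganesha-ha-demo"
    HA_CLUSTER_NODES="server1.example.com,server2.example.com"
    # One floating IP per node, moved by the HA stack on failover
    VIP_server1="192.0.2.11"
    VIP_server2="192.0.2.12"
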
> >>> 
> >> -- 
> >> Sent from my Android device with K-9 Mail. All tyopes are thumb related 
> >> and reflect authenticity. 

Re: [Gluster-users] Upgrade 5.3 -> 5.4 on debian: public IP is used instead of LAN IP

2019-05-03 Thread Artem Russakovskii
Just to update everyone on the nasty crash one of our servers continued
having even after 5.5/5.6, I posted a summary of the results here:
https://bugzilla.redhat.com/show_bug.cgi?id=1690769#c4.

Sincerely,
Artem

--
Founder, Android Police , APK Mirror
, Illogical Robot LLC
beerpla.net | +ArtemRussakovskii
 | @ArtemR



On Wed, Mar 20, 2019 at 12:57 PM Artem Russakovskii 
wrote:

> Amar,
>
> I see debuginfo packages now and have installed them. I'm available via
> Skype as before, just ping me there.
>
> Sincerely,
> Artem
>
> --
> Founder, Android Police , APK Mirror
> , Illogical Robot LLC
> beerpla.net | +ArtemRussakovskii
>  | @ArtemR
> 
>
>
> On Tue, Mar 19, 2019 at 10:46 PM Amar Tumballi Suryanarayan <
> atumb...@redhat.com> wrote:
>
>>
>>
>> On Wed, Mar 20, 2019 at 9:52 AM Artem Russakovskii 
>> wrote:
>>
>>> Can I roll back performance.write-behind: off and lru-limit=0 then? I'm
>>> waiting for the debug packages to be available for OpenSUSE, then I can
>>> help Amar with another debug session.
>>>
>>>
>> Yes, the write-behind issue is now fixed. You can enable write-behind. 
>> Also remove lru-limit=0, so you can also utilize the benefit of the garbage 
>> collection introduced in 5.4. 
>>
>> Let's get to fixing the problem once the debuginfo packages are available. 
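As a sketch, assuming a volume named 'myvol' and a fuse mount at /mnt/gluster,
reverting those two workarounds could look like this (verify the option and
mount names against your own setup):

    # Re-enable the write-behind performance translator
    gluster volume set myvol performance.write-behind on

    # Remount without lru-limit=0 so the default inode garbage
    # collection introduced in 5.4 takes effect again
    umount /mnt/gluster
    mount -t glusterfs server1:/myvol /mnt/gluster
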
>>
>>
>>
>>> In the meantime, have you had time to set up 1x4 replicate testing? I was 
>>> told you were only testing 1x3, and it's the 4th brick that may be causing 
>>> the crash, which is consistent with only 1 of the 4 bricks constantly 
>>> crashing this whole time. The other 3 have been rock solid. I'm hoping you 
>>> could find the issue without a debug session this way. 
>>>
>>>
>> That is still my gut feeling. I added a basic test case with 4 bricks, 
>> https://review.gluster.org/#/c/glusterfs/+/22328/. But I think this 
>> particular issue happens only with a certain pattern of access on a 1x4 
>> setup. Let's get to the root of it once we have debuginfo packages for the 
>> SUSE builds. 
>>
>> -Amar
>>
>> Sincerely,
>>> Artem
>>>
>>> --
>>> Founder, Android Police , APK Mirror
>>> , Illogical Robot LLC
>>> beerpla.net | +ArtemRussakovskii
>>>  | @ArtemR
>>> 
>>>
>>>
>>> On Tue, Mar 19, 2019 at 8:27 PM Nithya Balachandran >> >
>>> wrote:
>>>
>>> > Hi Artem,
>>> >
>>> > I think you are running into a different crash. The ones reported which
>>> > were prevented by turning off write-behind are now fixed.
>>> > We will need to look into the one you are seeing to see why it is
>>> > happening.
>>> >
>>> > Regards,
>>> > Nithya
>>> >
>>> >
>>> > On Tue, 19 Mar 2019 at 20:25, Artem Russakovskii 
>>> > wrote:
>>> >
>>> >> The flood is indeed fixed for us on 5.5. However, the crashes are not.
>>> >>
>>> >> Sincerely,
>>> >> Artem
>>> >>
>>> >> --
>>> >> Founder, Android Police , APK Mirror
>>> >> , Illogical Robot LLC
>>> >> beerpla.net | +ArtemRussakovskii
>>> >>  | @ArtemR
>>> >> 
>>> >>
>>> >>
>>> >> On Mon, Mar 18, 2019 at 5:41 AM Hu Bert 
>>> wrote:
>>> >>
>>> >>> Hi Amar,
>>> >>>
>>> >>> if you refer to this bug:
>>> >>> https://bugzilla.redhat.com/show_bug.cgi?id=1674225 : in the test
> >>> >>> setup I haven't seen those entries while copying & deleting a few GBs 
> >>> >>> of data. For a final statement we have to wait until I have updated our 
> >>> >>> live gluster servers - that could take place on Tuesday or Wednesday. 
>>> >>>
>>> >>> Maybe other users can do an update to 5.4 as well and report back
>>> here.
>>> >>>
>>> >>>
>>> >>> Hubert
>>> >>>
>>> >>>
>>> >>>
> >>> >>> On Mon, 18 Mar 2019 at 11:36, Amar Tumballi Suryanarayan 
> >>> >>> wrote: 
>>> >>> >
>>> >>> > Hi Hu Bert,
>>> >>> >
> >>> >>> > Appreciate the feedback. Also, are the other burning issues related to 
> >>> >>> > logs fixed now? 
>>> >>> >
>>> >>> > -Amar
>>> >>> >
>>> >>> > On Mon, Mar 18, 2019 at 3:54 PM Hu Bert 
>>> >>> wrote:
>>> >>> >>
>>> >>> >> update: upgrade from 5.3 -> 5.5 in a replicate 3 test setup with 2
>>> >>> >> volumes done. In 'gluster peer status' the peers stay connected
>>> during
>>> >>> >> the upgrade, no 'peer rejected' messages. No cksum mismatches in
>>> the
>>> >>> >> logs. Looks good :-)
>>> >>> >>
> >>> >>> >> > On Mon, 18 Mar 2019 at 09:54, Hu Bert < 
> >>> revi...@googlemail.com> wrote: 
>>> >>> >> >
>>> >>> >> > Good morning :-)
>>> >>> >> >
>>> >>> >> > for debian the packages are there:
>>> >>> >> >
>>> >>>
>>> https://download.gluster.org/pub/gluster/glusterfs/5/5.5/Debian/stretch/amd64/apt/pool/main/g/glusterfs/
>>> >>> >> >
>>> >>> >> > I'll do 

Re: [Gluster-users] Thin-arbiter questions

2019-05-03 Thread David Cunningham
OK, thank you Ashish.


On Fri, 3 May 2019 at 14:43, Ashish Pandey  wrote:

> David,
>
> I am adding members who are working on glusterd2 (Aravinda) and
> thin-arbiter support in glusterd (Vishal) and who can
> better reply on these questions.
>
> The patch for glusterd has been sent and only requires reviews. I hope it 
> will be completed in the next month or so. 
> https://review.gluster.org/#/c/glusterfs/+/22612/
>
> ---
> Ashish
>
> --
> *From: *"David Cunningham" 
> *To: *"Ashish Pandey" 
> *Cc: *gluster-users@gluster.org
> *Sent: *Friday, May 3, 2019 8:04:04 AM
> *Subject: *Re: [Gluster-users] Thin-arbiter questions
>
> Hi Ashish,
>
> Thanks very much for that reply. How stable is GD2? Is there even a vague
> ETA on when it might be supported in gluster?
>
>
> On Fri, 3 May 2019 at 14:30, Ashish Pandey  wrote:
>
>> Hi David,
>>
>> Creation of a thin-arbiter volume is currently supported by GD2 only. The 
>> "glustercli" command is available when glusterd2 is running. 
>> We are also working on providing thin-arbiter support in glusterd; however, 
>> it is not available right now. 
>> https://review.gluster.org/#/c/glusterfs/+/22612/
>>
>> ---
>> Ashish
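For illustration, volume creation with a thin arbiter might look like the
sketch below. The GD2 form is approximated from the linked admin guide and the
glusterd form from the pending patch, so hostnames, brick paths, and the exact
syntax should be double-checked locally:

    # With GD2 (glusterd2 running):
    glustercli volume create myvol --replica 2 \
        server1:/bricks/brick1 server2:/bricks/brick1 \
        --thin-arbiter server3:/bricks/ta

    # Planned glusterd CLI form (once the patch above is merged):
    gluster volume create myvol replica 2 thin-arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick1 server3:/bricks/ta
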
>>
>> --
>> *From: *"David Cunningham" 
>> *To: *gluster-users@gluster.org
>> *Sent: *Friday, May 3, 2019 7:40:03 AM
>> *Subject: *[Gluster-users] Thin-arbiter questions
>>
>> Hello,
>>
>> We are setting up a thin-arbiter and hope someone can help with some
>> questions. We've been following the documentation from
>> https://docs.gluster.org/en/latest/Administrator%20Guide/Thin-Arbiter-Volumes/
>> .
>>
>> 1. What release of 5.x supports thin-arbiter? We tried a "gluster volume
>> create" with the --thin-arbiter option on 5.5 and got an "unrecognized
>> option --thin-arbiter" error.
>>
>> 2. The instruction to create a new volume with a thin-arbiter is clear.
>> How do you add a thin-arbiter to an already existing volume though?
>>
>> 3. The documentation suggests running glusterfsd manually to start the
>> thin-arbiter. Is there a service that can do this instead? I found a
>> mention of one in https://bugzilla.redhat.com/show_bug.cgi?id=1579786
>> but it's not really documented.
>>
>> Thanks in advance for your help,
>>
>> --
>> David Cunningham, Voisonics Limited
>> http://voisonics.com/
>> USA: +1 213 221 1092
>> New Zealand: +64 (0)28 2558 3782
>>
>> ___
>> Gluster-users mailing list
>> Gluster-users@gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
>>
>>
>
> --
> David Cunningham, Voisonics Limited
> http://voisonics.com/
> USA: +1 213 221 1092
> New Zealand: +64 (0)28 2558 3782
>
>

-- 
David Cunningham, Voisonics Limited
http://voisonics.com/
USA: +1 213 221 1092
New Zealand: +64 (0)28 2558 3782
___
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users