Re: [Gluster-devel] [Gluster-users] Proposing to bring previous ganesha HA cluster solution back to gluster code as gluster-7 feature
Hi,

I'm posting this again as it got bounced.

Keep in mind that corosync/pacemaker is hard for new admins/users to set up properly. I'm still trying to remediate the effects of a poor configuration at work.

Also, storhaug is nice for hyperconverged setups where the host is not only hosting bricks but other workloads as well. Corosync/pacemaker require proper fencing to be set up, and most of the stonith resources 'shoot the other node in the head'.

I would be happy to see something easy to deploy (say, 'cluster.enable-ha-ganesha true'), with gluster bringing up the floating IPs and taking care of the NFS locks, so that no disruption is felt by the clients. Still, this will be a lot of work to achieve.

Best Regards,
Strahil Nikolov

On Apr 30, 2019 15:19, Jim Kinney wrote:
>
> +1!
> I'm using nfs-ganesha in my next upgrade so my client systems can use NFS
> instead of fuse mounts. Having an integrated, designed-in process to
> coordinate multiple nodes into an HA cluster will be very welcome.
>
> On April 30, 2019 3:20:11 AM EDT, Jiffin Tony Thottan wrote:
>>
>> Hi all,
>>
>> Some of you folks may be familiar with the HA solution provided for
>> nfs-ganesha by gluster using pacemaker and corosync.
>>
>> That feature was removed in glusterfs 3.10 in favour of the common HA
>> project "Storhaug". However, Storhaug has not progressed much over the
>> last two years and its development is currently halted, hence the plan
>> to restore the old HA ganesha solution back to the gluster code
>> repository, with some improvements, targeting the next gluster release, 7.
>>
>> I have opened an issue [1] with details and posted an initial set of
>> patches [2].
>>
>> Please share your thoughts on the same.
>>
>> Regards,
>>
>> Jiffin
>>
>> [1] https://github.com/gluster/glusterfs/issues/663
>>
>> [2] https://review.gluster.org/#/q/topic:rfc-663+(status:open+OR+status:merged)
>
> --
> Sent from my Android device with K-9 Mail. All tyopes are thumb related and
> reflect authenticity.
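[Editor's note: for readers unfamiliar with the moving parts Strahil mentions, this is a minimal sketch of what "proper fencing" and a pacemaker-managed floating IP look like with the `pcs` tool. All hostnames, addresses, credentials, and resource names are placeholders, not values from this thread; these are cluster configuration commands, not something to run outside a configured pacemaker cluster.]

```shell
# A stonith resource that power-fences a misbehaving node over IPMI --
# the "shoot the other node in the head" part Strahil refers to.
# (node1, 10.0.0.101, admin/secret are placeholder values.)
pcs stonith create fence-node1 fence_ipmilan \
    pcmk_host_list=node1 ip=10.0.0.101 \
    username=admin password=secret lanplus=1

# A floating (virtual) IP that pacemaker moves between nodes on failover,
# so NFS clients keep talking to the same address.
pcs resource create ganesha-vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24 --group ganesha-ha
```

Without a working stonith resource, pacemaker cannot safely recover from a split-brain, which is the basis of the "no vendor will support an unfenced stack" point made later in this thread.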
___
Community Meeting Calendar:
APAC Schedule - Every 2nd and 4th Tuesday at 11:30 AM IST
Bridge: https://bluejeans.com/836554017
NA/EMEA Schedule - Every 1st and 3rd Tuesday at 01:00 PM EDT
Bridge: https://bluejeans.com/486278655
Gluster-devel mailing list
Gluster-devel@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-devel
Re: [Gluster-devel] [Gluster-users] Proposing to bring previous ganesha HA cluster solution back to gluster code as gluster-7 feature
Hi Jiffin,

No vendor will support your corosync/pacemaker stack if you do not have proper fencing. As Gluster is already a cluster of its own, it makes sense to control everything from there.

Best Regards,
Strahil Nikolov

On May 3, 2019 09:08, Jiffin Tony Thottan wrote:
>
> On 30/04/19 6:59 PM, Strahil Nikolov wrote:
> > I would be happy to see something easy to deploy (say,
> > 'cluster.enable-ha-ganesha true'), with gluster bringing up the
> > floating IPs and taking care of the NFS locks, so that no disruption
> > is felt by the clients.
>
> It does take care of those, but certain prerequisites need to be
> followed. Note, however, that fencing won't be configured for this
> setup; we may think about that in the future.
>
> --
> Jiffin
>
> > Still, this will be a lot of work to achieve.
> >
> > Best Regards,
> > Strahil Nikolov
Re: [Gluster-devel] [Gluster-users] Proposing to bring previous ganesha HA cluster solution back to gluster code as gluster-7 feature
Hi,

On 04/05/19 12:04 PM, Strahil wrote:
> Hi Jiffin,
>
> No vendor will support your corosync/pacemaker stack if you do not have
> proper fencing. As Gluster is already a cluster of its own, it makes
> sense to control everything from there.
>
> Best Regards,
> Strahil Nikolov

Yeah, I agree with your point. What I meant to say is that, by default, this feature won't provide any fencing mechanism; the user needs to configure fencing for the cluster manually. In the future we can try to include a default fencing configuration for the ganesha cluster as part of the Ganesha HA configuration.

Regards,
Jiffin
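[Editor's note: the old gluster-managed ganesha HA solution that this thread proposes restoring was driven by a small config file read by the HA setup scripts. The sketch below shows its general shape as documented for older glusterfs releases; the cluster name, node names, and VIP addresses are placeholder values, and the exact set of recognised keys should be checked against the restored implementation.]

```shell
# /etc/ganesha/ganesha-ha.conf (illustrative values only)

# Name of the pacemaker cluster created for ganesha HA.
HA_NAME="ganesha-ha-demo"

# Comma-separated list of the nodes participating in the HA cluster.
HA_CLUSTER_NODES="server1,server2,server3"

# One floating IP per node; pacemaker fails these over between nodes.
VIP_server1="10.0.2.1"
VIP_server2="10.0.2.2"
VIP_server3="10.0.2.3"
```

This is the piece that made the old solution "easy to deploy" relative to hand-built corosync/pacemaker configs; fencing, as Jiffin notes above, was left to the administrator.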
Re: [Gluster-devel] [Gluster-users] Proposing to bring previous ganesha HA cluster solution back to gluster code as gluster-7 feature
On 30/04/19 6:59 PM, Strahil Nikolov wrote:
> Hi,
>
> I'm posting this again as it got bounced.
> Keep in mind that corosync/pacemaker is hard for new admins/users to set
> up properly. I'm still trying to remediate the effects of a poor
> configuration at work.
> Also, storhaug is nice for hyperconverged setups where the host is not
> only hosting bricks but other workloads as well.
> Corosync/pacemaker require proper fencing to be set up, and most of the
> stonith resources 'shoot the other node in the head'.
> I would be happy to see something easy to deploy (say,
> 'cluster.enable-ha-ganesha true'), with gluster bringing up the floating
> IPs and taking care of the NFS locks, so that no disruption is felt by
> the clients.

It does take care of those, but certain prerequisites need to be followed. Note, however, that fencing won't be configured for this setup; we may think about that in the future.

--
Jiffin

> Still, this will be a lot of work to achieve.
>
> Best Regards,
> Strahil Nikolov