Hi, Supriti.

CMAL is dead. There wasn't enough interest in it, and the majority of its features are provided by whatever clustered filesystem is backing Ganesha.

There are plans underway to extend the FSAL APIs to support more features required by HA. It's unlikely that we'll ever be able to run active/active without a framework like pacemaker/corosync, but it's not impossible.

Note that storhaug for Gluster does do active/active, so it's certainly possible. Work for Ceph is still in progress.

Daniel

On 07/05/2017 06:56 AM, Supriti Singh wrote:
Hi Mark,

I have looked into storhaug. I was trying to use the resource agents
"ganesha" and "ganesha_trigger", but just for the active-passive
configuration I used the systemd resource agent for now.
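
For reference, here is a minimal sketch of what such an active-passive
setup can look like with the systemd resource agent under pacemaker. The
resource names and the virtual IP are placeholders, and pcs is driven from
Python only to keep the example self-contained:

    import subprocess

    def pcs(*args):
        """Run a pcs command and fail loudly if it errors."""
        subprocess.run(["pcs", *args], check=True)

    # Ganesha itself, managed through its systemd unit (nfs-ganesha.service).
    pcs("resource", "create", "ganesha", "systemd:nfs-ganesha",
        "op", "monitor", "interval=30s")

    # Floating IP that clients mount against (address is a placeholder).
    pcs("resource", "create", "ganesha-vip", "ocf:heartbeat:IPaddr2",
        "ip=192.0.2.10", "cidr_netmask=24", "op", "monitor", "interval=10s")

    # Keep the IP on whichever node runs ganesha, and start ganesha first.
    pcs("constraint", "colocation", "add", "ganesha-vip", "with",
        "ganesha", "INFINITY")
    pcs("constraint", "order", "ganesha", "then", "ganesha-vip")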

Thanks,
Supriti


------
Supriti Singh
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
HRB 21284 (AG Nürnberg)

gui mark <guiggl...@gmail.com> 07/05/17 10:58 AM >>>
Hi Supriti,

I've been working on NFS-Ganesha HA for a while. The patch you mentioned
is somewhat "generic" for Ceph-backed exports, using RADOS OMAP (for now
that means CephFS and RGW, of course). GlusterFS uses a shared-filesystem
interface instead, so there we have to mount.glusterfs a volume to set up
the shared fs.

We use these shared stores (sharedfs or rados_kv) to keep the client
tracking info, which is needed for NFSv4 recovery (see the sketch below).
And there is an HA solution based on pacemaker and corosync, much like
what you tried: https://github.com/linux-ha-storage/storhaug.
(It is still under development, so you may run into problems if you try it.)
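
To make the rados_kv idea concrete, here is a minimal python-rados sketch
that writes and reads per-client tracking records as OMAP entries on a
RADOS object; the pool name, object name and record format below are
placeholders, not what the actual recovery backend uses:

    import rados

    # Connect using the local ceph.conf; the pool name is a placeholder.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ioctx = cluster.open_ioctx("nfs-ganesha")

    # Record one client's recovery info as an OMAP key/value pair.
    with rados.WriteOpCtx() as op:
        ioctx.set_omap(op,
                       ("client-203.0.113.5",),
                       (b"nfsv4 recovery record (placeholder)",))
        ioctx.operate_write_op(op, "recovery_db")

    # On takeover, read the records back so reclaims can be honoured
    # during the grace period.
    with rados.ReadOpCtx() as op:
        entries, ret = ioctx.get_omap_vals(op, "", "", 1000)
        ioctx.operate_read_op(op, "recovery_db")
        for key, value in entries:
            print(key, value)

    ioctx.close()
    cluster.shutdown()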

IMHO, even if you use CTDB, you still have to have a shared store for
client tracking info if you are using NFSv4, no?


I'm interested in the CMAL interface too, but there seems to be little
information about it.

Regards,
Mark

On Wed, Jul 5, 2017 at 3:55 PM, Supriti Singh <supriti.si...@suse.com> wrote:

     Hello all,

    I was able to set up an active-passive NFS-Ganesha HA cluster for
    the CephFS and RGW FSALs using pacemaker and corosync.

    For active-active, as per my understanding, we need to share the
    /var/lib/nfs/ganesha state among the nfs-ganesha nodes.
    I saw a patch that has been put on hold for 2.6
    (https://review.gerrithub.io/#/c/355070/) that adds this
    functionality.
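
    As a rough illustration of the state in question, here is a small
    sketch that lists the per-client records a takeover node would need;
    the v4recov/v4old directory names are assumed from the default fs
    recovery backend layout, not taken from the patch:

        import os

        STATE_DIR = "/var/lib/nfs/ganesha"

        # Assumed layout: per-client NFSv4 recovery records live in v4recov
        # (current epoch) and v4old (previous epoch). Whatever shows up here
        # is what another node must see to honour reclaims during grace.
        for sub in ("v4recov", "v4old"):
            path = os.path.join(STATE_DIR, sub)
            if not os.path.isdir(path):
                continue
            for entry in sorted(os.listdir(path)):
                print(sub, entry)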

    I am writing this email to find out whether there is any other
    development effort (for CephFS and RGW) going on in this regard that
    I could contribute to. Also, is there a possibility of using one
    single solution regardless of FSAL, something like Samba does with
    CTDB? I think the CMAL interface was trying to achieve the same:
    https://github.com/nfs-ganesha/nfs-ganesha/wiki/Cluster-DRC

    Thanks,
    Supriti

    ------
    Supriti Singh
    SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton,
    HRB 21284 (AG Nürnberg)






    







