First of all, the fsid parameter (201 in your config) must be different for every resource.
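
For example (resource names taken from your config; the client networks and
options are just placeholders):

  primitive p_exportfs_software_repos_ae1 ocf:heartbeat:exportfs \
        params directory="/srv/nfs/software_repos" clients="10.0.8.0/21" \
              options="ro" fsid="201" \
        op monitor interval="30s"
  primitive p_exportfs_software_repos_ae2 ocf:heartbeat:exportfs \
        params directory="/srv/nfs/software_repos" clients="10.1.8.0/21" \
              options="ro" fsid="202" \
        op monitor interval="30s"

i.e. each exportfs primitive keeps its own fsid (201, 202, ...).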

2012/6/19 Martin Marji Cermak <cerm...@gmail.com>

> Hello guys,
> I have three questions, if I may.
>
> I have a HA NFS cluster - Centos 6.2, pacemaker, corosync, two NFS nodes
> plus 1 quorum node, in semi Active-Active configuration.
> By "semi", I mean that both NFS nodes are active and each of them is under
> normal circumstances exclusively responsible for one (out of two) Volume
> Group - using the ocf:heartbeat:LVM RA.
> Each LVM volume group lives on a dedicated multipath iscsi device, exported
> from a shared SAN.
>
> I'm exporting a single directory (/srv/nfs/software_repos) over NFSv3/v4.
> I need to make it available to two separate /21 networks read-only, and
> to three different servers read-write.
> I'm using the ocf:heartbeat:exportfs RA, and it seems I have to define
> five separate instances of it.
>
>
> The configuration (only IP addresses changed) is here:
> http://pastebin.com/eHkgUv64
>
>
> 1) Is there a way to export this directory five times without defining
> five ocf:heartbeat:exportfs primitives? That's a lot of duplication...
> I searched all the forums, and I fear ocf:heartbeat:exportfs simply
> supports only one host / network range per primitive. But maybe someone
> has been working on a patch?
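>
> For comparison, with plain NFS one directory can be exported to all five
> clients straight from /etc/exports (the addresses below are made up and
> the host names only guessed from my resource names):
>
>   /srv/nfs/software_repos  10.0.8.0/21(ro,fsid=201) 10.1.8.0/21(ro,fsid=201)
>   /srv/nfs/software_repos  buller(rw,fsid=201) youyangs(rw,fsid=201) iap-mgmt(rw,fsid=201)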
>
>
>
> 2) While using ocf:heartbeat:exportfs five times for the same directory,
> do I have to use the _same_ fsid (201 in my config) for all five
> primitives (as I'm exporting the _same_ filesystem / directory)?
> I get this warning when doing so:
>
> WARNING: Resources
>
> p_exportfs_software_repos_ae1,p_exportfs_software_repos_ae2,p_exportfs_software_repos_buller,p_exportfs_software_repos_iap-mgmt,p_exportfs_software_repos_youyangs
> violate uniqueness for parameter "fsid": "201"
> Do you still want to commit?
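>
> I believe the warning comes from the crm shell checking the RA metadata,
> where the fsid parameter is declared unique; the parameter descriptions
> can be inspected with:
>
>   crm ra info ocf:heartbeat:exportfs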
>
>
>
> 3) wait_for_leasetime_on_stop - I believe this must be set to true when
> exporting NFSv4 with ocf:heartbeat:exportfs:
> http://www.linux-ha.org/doc/man-pages/re-ra-exportfs.html
>
> My 5 exportfs primitives reside in the same group:
>
> group g_nas02 p_lvm02 p_exportfs_software_repos_youyangs \
>        p_exportfs_software_repos_buller p_fs_software_repos \
>        p_exportfs_software_repos_ae1 p_exportfs_software_repos_ae2 \
>        p_exportfs_software_repos_iap-mgmt p_ip02 \
>        meta resource-stickiness="101"
>
>
> Even though I have /proc/fs/nfsd/nfsv4gracetime set to 10 seconds, a
> failover of the NFS group from one NFS node to the other takes more than
> 50 seconds, because it waits for each ocf:heartbeat:exportfs resource in
> turn - sleeping 10 seconds, five times.
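>
> (For reference, the knobs I am looking at - if I read the exportfs RA
> correctly, wait_for_leasetime_on_stop sleeps for the NFSv4 lease time,
> which is a separate file from the grace time:)
>
>   cat /proc/fs/nfsd/nfsv4leasetime   # what the RA waits out on stop
>   cat /proc/fs/nfsd/nfsv4gracetime   # grace period after a server restart
>   echo 10 > /proc/fs/nfsd/nfsv4leasetime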
>
> Is there any way to make them fail over / sleep in parallel instead of
> sequentially?
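>
> One idea (untested - the "ordered" group meta attribute is documented for
> Pacemaker 1.1, but I have not tried it with exportfs): pull the five
> exportfs primitives into their own group with ordered="false", so they
> stop in parallel, and tie that group to the rest with constraints:
>
>   group g_exports p_exportfs_software_repos_youyangs \
>         p_exportfs_software_repos_buller p_exportfs_software_repos_ae1 \
>         p_exportfs_software_repos_ae2 p_exportfs_software_repos_iap-mgmt \
>         meta ordered="false"
>   group g_nas02 p_lvm02 p_fs_software_repos p_ip02 \
>         meta resource-stickiness="101"
>   colocation col_exports_on_nas inf: g_exports g_nas02
>   order ord_fs_exports_ip inf: p_fs_software_repos g_exports p_ip02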
>
> I worked around this by setting wait_for_leasetime_on_stop="true" on only
> one of them (which I believe is safe and still does what it is expected
> to do - please correct me if I'm wrong).
>
>
>
> Thank you for your valuable comments.
>
> My Pacemaker configuration: http://pastebin.com/eHkgUv64
>
>
> [root@irvine ~]# facter | egrep 'lsbdistid|lsbdistrelease'
> lsbdistid => CentOS
> lsbdistrelease => 6.2
>
> [root@irvine ~]# rpm -qa | egrep 'pacemaker|corosync|agents'
>
> corosync-1.4.1-4.el6_2.2.x86_64
> pacemaker-cli-1.1.6-3.el6.x86_64
> pacemaker-libs-1.1.6-3.el6.x86_64
> corosynclib-1.4.1-4.el6_2.2.x86_64
> pacemaker-cluster-libs-1.1.6-3.el6.x86_64
> pacemaker-1.1.6-3.el6.x86_64
> fence-agents-3.1.5-10.el6_2.2.x86_64
>
> resource-agents-3.9.2-7.el6.x86_64
>   with /usr/lib/ocf/resource.d/heartbeat/exportfs updated by hand from:
>
> https://github.com/ClusterLabs/resource-agents/commits/master/heartbeat/exportfs
>
> Thank you very much
> Marji Cermak



-- 
this is my life and I live it as long as God wills
_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
