Yes, Randy.

I remember running into this problem in a cluster at one of my jobs.

This is wrong:
================================================
<service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1"
         nfslock="1" recovery="relocate">
    <ip ref="192.168.1.1">
        <fs __independent_subtree="1" ref="volume01">
            <nfsexport name="nfs-volume01">
                <nfsclient name=" " ref="local-subnet"/>
            </nfsexport>
        </fs>
    </ip>
</service>
================================================
It must be:
================================================
<service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1"
         nfslock="1" recovery="relocate">
    <ip ref="192.168.1.1"/>
    <fs __independent_subtree="1" ref="volume01">
        <nfsexport name="nfs-volume01">
            <nfsclient name=" " ref="local-subnet"/>
        </nfsexport>
    </fs>
</service>
================================================

Let me give a little explanation: Red Hat's resource manager (rgmanager) has an
internal ordering and knows in which sequence to start the resources.
For more information, read the script /usr/share/cluster/service.sh, under the
metadata section; a rough sketch of that section follows.
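As an illustration only (the child types and the exact start/stop numbers below
are my own sketch, not copied from service.sh, so check the script on your own
nodes for the real values), the metadata declares the ordering roughly like this:

================================================
<special tag="rgmanager">
    <!-- child resource types start in ascending "start" order and stop in
         ascending "stop" order; the numbers here are only illustrative -->
    <child type="fs" start="2" stop="8"/>
    <child type="nfsexport" start="5" stop="5"/>
    <child type="nfsclient" start="6" stop="4"/>
    <child type="ip" start="7" stop="2"/>
</special>
================================================

In other words, rgmanager mounts the filesystem and sets up the export before
bringing up the IP, and tears them down in the reverse order.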



2012/5/16 Randy Zagar <[email protected]>

>  Also, it looks like the resource manager tries to disable the IP address
> when it's a child of the nfsclient resource.  Is that going to be a problem
> when I have 16 NFS exports hosted on a single IP?
>
>
> -RZ
>
> On 05/16/2012 11:00 AM, [email protected] wrote:
>
>  On 05/15/2012 07:33 PM, Randy Zagar wrote:
>
>
>  > <resources>
>  >     <ip address="192.168.1.1" monitor_link="1"/>
>  >     <ip address="192.168.1.2" monitor_link="1"/>
>  >     <ip address="192.168.1.3" monitor_link="1"/>
>  >     <fs device="/dev/cvg00/volume01" force_fsck="0" force_unmount="1"
>  >         fsid="49388" fstype="ext3" mountpoint="/lvm/volume01"
>  >         name="volume01" self_fence="0"/>
>  >     <fs device="/dev/cvg00/volume02" force_fsck="0" force_unmount="1"
>  >         fsid="58665" fstype="ext3" mountpoint="/lvm/volume01"
>  >         name="volume01" self_fence="0"/>
>  >     <fs device="/dev/cvg00/volume03" force_fsck="0" force_unmount="1"
>  >         fsid="61028" fstype="ext3" mountpoint="/lvm/volume01"
>  >         name="volume01" self_fence="0"/>
>  >     <nfsclient allow_recover="1" name="local-subnet"
>  >         options="rw,insecure" target="192.168.1.0/24"/>
>  > </resources>
>
>  For the <fs resources you want nfslock="1" option too.
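(For example, on the volume01 resource from the block above, that would look
roughly like this; everything except the added nfslock="1" is copied unchanged
from the quoted config:)

    <fs device="/dev/cvg00/volume01" force_fsck="0" force_unmount="1"
        fsid="49388" fstype="ext3" mountpoint="/lvm/volume01"
        name="volume01" nfslock="1" self_fence="0"/>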
>
>
>  > <service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1"
>  >          nfslock="1" recovery="relocate">
>  >     <ip ref="192.168.1.1">
>  >         <fs __independent_subtree="1" ref="volume01">
>  >             <nfsexport name="nfs-volume01">
>  >                 <nfsclient name=" " ref="local-subnet"/>
>  >             </nfsexport>
>  >         </fs>
>  >     </ip>
>
>  For all services you need to change the order.
>
> <fs..
>  <nfsexport..
>   <nfsclient..
>    <ip..
>   </nfsclient..
>  </nfsexport..
> </fs
>
> This solves different issues at startup, relocation and recovery
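(Applied to the nfs1 service quoted above, Fabio's suggested nesting would give
something like the following; this is my own illustrative rewrite, not a snippet
from his mail, so verify it against your cluster.conf before using it:)

================================================
<service autostart="1" domain="nfs1-domain" exclusive="0" name="nfs1"
         nfslock="1" recovery="relocate">
    <fs __independent_subtree="1" ref="volume01">
        <nfsexport name="nfs-volume01">
            <nfsclient name=" " ref="local-subnet">
                <ip ref="192.168.1.1"/>
            </nfsclient>
        </nfsexport>
    </fs>
</service>
================================================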
>
> Also note that there is known limitation in nfsd (both rhel5/6) that
> could cause some problems in some conditions in your current
> configuration. A permanent fix is being worked on atm.
>
> Without extreme details, you might have 2 of those services running on
> the same node and attempting to relocate one of them can fail because
> the fs cannot be unmounted. This is due to nfsd holding a lock (at
> kernel level) to the FS. Changing config to the suggested one, mask the
> problem pretty well, but more testing for a real fix is in progress.
>
> Fabio
>
>
> --
> Randy Zagar                               Sr. Unix Systems Administrator
> E-mail: [email protected]            Applied Research Laboratories
> Phone: 512 835-3131                       Univ. of Texas at Austin
>
>



-- 
this is my life and I live it for as long as God wills
