I am "rolling my own" replication using zfs send|recv through the
cluster agent framework and a custom set of HA shared-local-storage
scripts (similar to http://www.posix.brte.com.br/blog/?p=75 but
without AVS). I am not using ZFS off of shared storage in the
supported way, so this is a bit of a lonely area. =)

As these are two different ZFS filesystems on different zpools with
differing underlying vdev topology, it appears they do not share
the same fsid, and so are presumably presenting different NFS file
handles to clients.
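For what it's worth, on Linux NFS servers (the environment in the wpkg.org post quoted below) the usual fix for mismatched handles is to pin the filesystem identity on both servers with the fsid= option in exports(5); I have not found an equivalent knob for Solaris share_nfs, so treat this as an assumption about the Linux side only. Paths and network below are hypothetical:

```
# /etc/exports on BOTH fileservers
# fsid=10 pins the filesystem id, so servers A and B
# issue identical NFS file handles for this export
/export/data  192.168.1.0/24(rw,sync,fsid=10)
```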

I have the cluster parts out of the way (mostly =)). I now need to
solve the NFS side of things so that clients keep working at the
point of failing over.

I have taken ZFS out of the equation: I get the same stale file
handle errors if I try to share an arbitrary UFS folder to the client
through the cluster interface.

Yeah I am a hack.

Asa

On Nov 20, 2007, at 7:27 PM, Richard Elling wrote:

> asa wrote:
>> Well then this is probably the wrong list to be hounding
>>
>> I am looking for something like 
>> http://blog.wpkg.org/2007/10/26/stale-nfs-file-handle/
>> Where when fileserver A dies, fileserver B can come up, grab the  
>> same  IP address via some mechanism(in this case I am using sun  
>> cluster) and  keep on trucking without the lovely stale file handle  
>> errors I am  encountering.
>>
>
> If you are getting stale file handles, then the Solaris cluster is  
> misconfigured.
> Please double check the NFS installation guide for Solaris Cluster and
> verify that the paths are correct.
> -- richard
>

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss