On Nov 10, 2007, at 3:49 PM, Mattias Pantzare wrote:

> 2007/11/10, asa <[EMAIL PROTECTED]>:
>> Hello all. I am working on an NFS failover scenario between two
>> servers.  I am getting stale file handle errors on my (Linux)
>> client, which point to a mismatch in the fsids of my two
>> filesystems when the failover occurs.
>> I understand that the fsid_guid attribute, which is then used as
>> the fsid of an NFS share, is created at zfs create time, but I
>> would like to see and modify that value on a particular zfs
>> filesystem after creation.
>>
>> More details were discussed at http://www.mail-archive.com/zfs-
>> discuss@opensolaris.org/msg03662.html, but that thread was about
>> the same filesystem sitting on a SAN failing over between two nodes.
>>
>> On a Linux NFS server one can specify "-o fsid=num" in the exports,
>> where num can be an arbitrary number; that would seem to fix this
>> issue for me, but it appears to be unsupported on Solaris.
>
> As the fsid is created when the file system is created it will be the
> same when you mount it on a different NFS server. Why change it?


> Or are you trying to match two different file systems? Then you also
> have to match all inode numbers on your files. That is not possible
> at all.
I am trying to match two different file systems.  The two file
systems are replicated via zfs send | zfs recv for a near-realtime
mirror, so in my head they are the same filesystem.  There may well
be ZFS goodness going on under the hood that makes the fsid differ
even though both originated from the same filesystem via zfs
send/recv.  Perhaps what happens when zfs recv receives a stream is
that it creates a totally new filesystem under the new location.
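
For context, the replication loop looks roughly like this (hostnames
and snapshot names are placeholders, not my exact setup):

    # initial full copy to the standby host
    zfs snapshot tank/test@rep1
    zfs send tank/test@rep1 | ssh standby zfs recv -F tank/test
    # later passes send incrementals against the previous snapshot
    zfs snapshot tank/test@rep2
    zfs send -i tank/test@rep1 tank/test@rep2 | ssh standby zfs recv -F tank/test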

I found an ID parameter on the datasets with:
 > zdb -d tank/test -lv
Dataset tank/test [ZPL], ID 37406, cr_txg 2410348, 593M, 21 objects

It is different on each machine.  Is this the GUID, or something
else?  Is there some hack way to set it?
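
For comparison, on a Linux NFS server I could just pin the value in
/etc/exports using the fsid= option mentioned above.  A hypothetical
entry (fsid=42 is an arbitrary number I picked):

    # hypothetical Linux export; fsid=42 is arbitrary
    /tank/test  *(rw,sync,fsid=42)

But as noted above, there seems to be no equivalent on Solaris.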

I don't know enough about inodes and ZFS to know whether what I am
asking is silly.  Even once I get past this fsid issue, I expect to
hit the next stumbling block: NFS file handles embed inode (object)
numbers as well as the fsid, and differences in those between the two
filesystems will trip up the failover just the same.

I would like all my NFS clients to hang during the failover and then
pick up trucking on the new filesystem, perhaps failing any in-flight
writes back to the apps that issued them.  Naive?
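
If the clients should block and retry rather than error out, I assume
that means hard mounts on the client side (a soft mount would instead
return errors to the apps).  A sketch, untested:

    # Linux client: a hard mount blocks and retries across the outage
    mount -t nfs -o hard,intr nfsserver:/tank/test /mnt/test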

Asa
