On Jun 9, 2008, at 12:28 PM, Andy Lubel wrote:

>
> On Jun 6, 2008, at 11:22 AM, Andy Lubel wrote:
>
>> That was it!
>>
>> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
>> hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
>> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
>> hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
>> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
>> hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R SETATTR3 Update synch mismatch
>> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
>>
>> It is too bad our silly hardware only allows us to go to 11.23.
>> That's OK though; in a couple of months we will be dumping this
>> server for new x4600's.
>>
>> Thanks for the help,
>>
>> -Andy
>>
>>
>> On Jun 5, 2008, at 6:19 PM, Robert Thurlow wrote:
>>
>>> Andy Lubel wrote:
>>>
>>>> I've got a real doozie.  We recently implemented a b89 box as a
>>>> zfs/nfs/cifs server.  The NFS client is HP-UX (11.23).
>>>> What's happening is that when our dba edits a file on the nfs mount
>>>> with vi, it will not save.
>>>> I removed vi from the mix by doing 'touch /nfs/file1' then 'echo
>>>> abc > /nfs/file1', and it just sat there while the nfs server's cpu
>>>> went up to 50% (one full core).
>>>
>>> Hi Andy,
>>>
>>> This sounds familiar: you may be hitting something I diagnosed
>>> last year.  Run snoop and see if it loops like this:
>>>
>>> 10920   0.00013 141.240.193.235 -> 141.240.193.27  NFS C GETATTR3 FH=6614
>>> 10921   0.00007 141.240.193.27  -> 141.240.193.235 NFS R GETATTR3 OK
>>> 10922   0.00017 141.240.193.235 -> 141.240.193.27  NFS C SETATTR3 FH=6614
>>> 10923   0.00007 141.240.193.27  -> 141.240.193.235 NFS R SETATTR3 Update synch mismatch
>>> 10924   0.00017 141.240.193.235 -> 141.240.193.27  NFS C GETATTR3 FH=6614
>>> 10925   0.00023 141.240.193.27  -> 141.240.193.235 NFS R GETATTR3 OK
>>> 10926   0.00026 141.240.193.235 -> 141.240.193.27  NFS C SETATTR3 FH=6614
>>> 10927   0.00009 141.240.193.27  -> 141.240.193.235 NFS R SETATTR3 Update synch mismatch
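(For reference, a capture along these lines is enough to see whether
that loop is happening; the interface and client names below are
placeholders, not taken from this thread:)

   # Watch NFS traffic between the HP-UX client and the ZFS/NFS server.
   # -d picks the network interface; 'rpc nfs' limits the trace to NFS calls.
   snoop -d e1000g0 host hpux-client.example.com and rpc nfs

   # Or save a capture and filter it for the tell-tale GETATTR3/SETATTR3 pairs.
   snoop -d e1000g0 -o /var/tmp/nfs-loop.cap host hpux-client.example.com and rpc nfs
   snoop -i /var/tmp/nfs-loop.cap | egrep 'GETATTR3|SETATTR3'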
>>>
>>> If you see this, you've hit what we filed as Sun bugid 6538387,
>>> "HP-UX automount NFS client hangs for ZFS filesystems".  It's an
>>> HP-UX bug, fixed in HP-UX 11.31.  The synopsis is that HP-UX gets
>>> bitten by the nanosecond resolution on ZFS.  Part of the CREATE
>>> handshake is for the server to send the create time as a 'guard'
>>> against almost-simultaneous creates - the client has to send it
>>> back in the SETATTR to complete the file creation.  HP-UX has only
>>> microsecond resolution in its VFS, so the 'guard' value is not sent
>>> back accurately, the server rejects it, and the client retries:
>>> lather, rinse, repeat.  The spec, RFC 1813, covers this in section 3.3.2.
>>> You can use NFSv2 in the short term until you get that update.
>>>
>>> If you see something different, by all means send us a snoop.
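To make the mismatch concrete: ZFS keeps file times at nanosecond
resolution, which you can see directly on the server, while an HP-UX
11.23 client can only hand back microseconds in its SETATTR guard, so
the comparison described in RFC 1813 section 3.3.2 almost never
succeeds.  A rough illustration (the path is a placeholder; -E is the
Solaris ls option that prints timestamps to the nanosecond):

   # On the Solaris/ZFS server:
   touch /pool/nearline/guardtest
   ls -E /pool/nearline/guardtest
   # ... 2008-06-09 12:28:31.123456789 ...   <- ZFS keeps all nine digits
   #
   # The 11.23 client can only echo back .123456000 (microseconds), so
   # the server's guard check fails with NFS3ERR_NOT_SYNC ("Update synch
   # mismatch") and the client retries indefinitely.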
>
> Update:
>
> We tried NFS v2 and the speed was terrible, but the getattr/setattr
> issue was gone.  So what I'm looking at doing now is to create a raw
> volume, format it with ufs, mount it locally, then share it over
> nfs.  Luckily we will only have to do it this way for a few months.
> I don't like the extra layer, and the block device isn't as fast as
> we hoped (I get about 400MB/s on the zfs filesystem and 180MB/s using
> the ufs-formatted local disk).  I sure hope I'm not breaking any
> rules by implementing this workaround that will come back to haunt me
> later.
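(Concretely, the interim setup described above amounts to something
like the following; the pool, volume, size, and mount point names are
placeholders:)

   # Carve a raw volume (zvol) out of the pool and put UFS on it.
   zfs create -V 500g pool/nearline-vol
   newfs /dev/zvol/rdsk/pool/nearline-vol

   # Mount the UFS filesystem locally and share it over NFS, so the
   # HP-UX client never talks to ZFS attributes directly.
   mkdir -p /export/nearline
   mount -F ufs /dev/zvol/dsk/pool/nearline-vol /export/nearline
   share -F nfs -o rw /export/nearline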
>
> -Andy

I tried this today, and although things appear to function correctly,
the performance seems to be steadily degrading.  Am I getting burnt by
double-caching?  If so, what is the best workaround for my sad
situation?  I tried directio for the ufs volume and it made it even
worse.
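(For reference, UFS direct I/O is normally enabled with the
forcedirectio mount option, along the lines of the following, using
the same placeholder device and mount point as above:)

   # Bypass the UFS page cache for this filesystem.
   mount -F ufs -o forcedirectio /dev/zvol/dsk/pool/nearline-vol /export/nearline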

The only other thing I know to do is destroy one of my zfs pools and
go back to SVM until we can get some newer nfs clients writing to this
nearline.  It pains me deeply!!

TIA,

-Andy

>
>
>>>
>>>
>>> Rob T
>>

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
