Re: NFS: file too large

2011-01-19 Thread Rick Macklem
  :Well, since a server specifies the maximum file size it can
  :handle, it seems good form to check for that in the client.
  :(Although I'd agree that a server shouldn't crash if a read/write
  : goes beyond that limit.)
  :
  :Also, as Matt notes, off_t is signed. As such, it looks to me like
  :the check could mess up if uio_offset is right near 0x7fffffffffffffff,
  :so that uio->uio_offset + uio->uio_resid ends up negative. I think the
  :check a little above that for uio_offset < 0 should also check
  :uio_offset + uio_resid < 0 to avoid this.
  :
  :rick
 
  Yes, though doing an overflow check in C, at least with newer versions
  of GCC, requires a separate comparison. The language has been mangled
  pretty badly over the years.


  if (a + b < a)  - can be optimized-out by the compiler

  if (a + b < 0)  - also can be optimized-out by the compiler

  x = a + b;
  if (x < a)  - this is ok (best method)

  x = a + b;
  if (x < 0)  - this is ok
 
Ok, thanks. I'll admit to being an old K+R type guy.

 
 my question, badly written, was why not let the underlying fs (ufs,
 zfs, etc)
 have the last word, instead of the nfsclient having to guess? Is there
 a problem in sending back the error?
 
Well, the principle I try to apply in the name of interoperability is:
1 - The client should adhere to the RFCs as strictly as possible
2 - The server should assume the loosest interpretation of the RFCs.

For me, #1 applies, i.e. if a server specifies a maximum file size, the
client should not violate that. (Meanwhile the server should assume that
clients will exceed the maximum sooner or later.)

Remember that the server might be a Netapp, EMC, ... and those vendors
mostly test their servers against Linux, Solaris clients. (I've tried to
convince them to fire up FreeBSD systems in-house for testing and even
volunteered to help with the setup, but if they've done so, I've never
heard about it. Their usual response is come to connectathon. See below.)

Here's an NFSv4.0 example:
- RFC3530 describes the dircount argument for Readdir as a hint of
  the maximum number of bytes of directory information (in 4th para of
  pg 191). One vendor ships an NFSv4 client that always sets this value
  to 0. Their argument is that, since it is only a hint, it can be
  anything they feel like putting there. (Several servers crapped out
  because of this in the early days.)

Part of the problem is that I am not in a position to attend the
interoperability testing events like www.connectathon.org, where these
things are usually discovered (and since they are covered under an NDA
that attendees sign, I don't find out the easy way when problems occur).

rick


Re: NFS: file too large

2011-01-18 Thread Matthew Dillon
:Well, since a server specifies the maximum file size it can
:handle, it seems good form to check for that in the client.
:(Although I'd agree that a server shouldn't crash if a read/write
: goes beyond that limit.)
:
:Also, as Matt notes, off_t is signed. As such, it looks to me like
:the check could mess up if uio_offset is right near 0x7fffffffffffffff,
:so that uio->uio_offset + uio->uio_resid ends up negative. I think the
:check a little above that for uio_offset < 0 should also check
:uio_offset + uio_resid < 0 to avoid this.
:
:rick

Yes, though doing an overflow check in C, at least with newer versions
of GCC, requires a separate comparison.  The language has been mangled
pretty badly over the years.


if (a + b < a)  - can be optimized-out by the compiler

if (a + b < 0)  - also can be optimized-out by the compiler

x = a + b;
if (x < a)  - this is ok (best method)

x = a + b;
if (x < 0)  - this is ok


This sort of check may already be made in various places (e.g. by UFS
and/or uio), since negative offsets are used to identify meta-data in
UFS.
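For concreteness, here is a small self-contained sketch (not part of the
original mails) that models off_t as int64_t and tests the same condition
without ever performing the signed addition, since signed overflow is what
lets the compiler delete the "if (a + b < a)" form. The names are made up
for illustration.

#include <stdint.h>
#include <stdio.h>

/*
 * Return 1 if offset + resid would exceed INT64_MAX (i.e. would wrap
 * negative); both arguments are assumed non-negative, as a uio's should be.
 */
static int
would_overflow(int64_t offset, int64_t resid)
{
	return (resid > INT64_MAX - offset);
}

int
main(void)
{
	int64_t offset = INT64_MAX - 10;	/* offset "right near" OFF_MAX */
	int64_t resid = 100;

	printf("overflow? %d\n", would_overflow(offset, resid));
	return (0);
}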

-Matt
Matthew Dillon 
dil...@backplane.com


Re: NFS: file too large

2011-01-18 Thread Daniel Braniss
 :Well, since a server specifies the maximum file size it can
 :handle, it seems good form to check for that in the client.
 :(Although I'd agree that a server shouldn't crash if a read/write
 : goes beyond that limit.)
 :
 :Also, as Matt notes, off_t is signed. As such, it looks to me like
 :the check could mess up if uio_offset is right near 0x7fffffffffffffff,
 :so that uio->uio_offset + uio->uio_resid ends up negative. I think the
 :check a little above that for uio_offset < 0 should also check
 :uio_offset + uio_resid < 0 to avoid this.
 :
 :rick
 
 Yes, though doing an overflow check in C, at least with newer versions
 of GCC, requires a separate comparison.  The language has been mangled
 pretty badly over the years.
 
 
 if (a + b < a) - can be optimized-out by the compiler
 
 if (a + b < 0) - also can be optimized-out by the compiler
 
 x = a + b;
 if (x < a) - this is ok (best method)
 
 x = a + b;
 if (x < 0) - this is ok
 
 
 This sort of check may already be made in various places (e.g. by UFS
 and/or uio), since negative offsets are used to identify meta-data in
 UFS.
 
   -Matt
   Matthew Dillon 
   dil...@backplane.com

my question, badly written, was why not let the underlying fs (ufs, zfs, etc)
have the last word, instead of the nfsclient having to guess? Is there
a problem in sending back the error?

danny




Re: NFS: file too large

2011-01-14 Thread Daniel Braniss
 :Try editing line #1226 of sys/nfsclient/nfs_vfsops.c, where
 :it sets nm_maxfilesize = (u_int64_t)0x80000000 * DEV_BSIZE - 1; and make it
 :something larger.
 :
 :I have no idea why the limit is set that way? (I'm guessing it was the
 :limit for UFS.) Hopefully not some weird buffer cache restriction or
 :similar, but you'll find out when you try increasing it.:-)
 
 This is a throwback to when the buffer cache used 32 bit block numbers,
 hence 0x7FFFFFFF was the maximum 'safe' block number multiplied by
 the lowest supported block size (DEV_BSIZE), that could be handled by
 the buffer cache.
 
 That limit is completely irrelevant now and should probably be set to
 0x7FFFFFFFFFFFFFFFLLU (since seek offsets are signed).

I just did that and it fixes the problem.

BTW, why not do away with the test altogether?

Cheers and thanks,
danny




Re: NFS: file too large

2011-01-14 Thread Rick Macklem
 
 BTW, why not do away with the test altogether?
 
Well, since a server specifies the maximum file size it can
handle, it seems good form to check for that in the client.
(Although I'd agree that a server shouldn't crash if a read/write
goes beyond that limit.)

Also, as Matt notes, off_t is signed. As such, it looks to me like
the check could mess up if uio_offset is right near 0x7fffffffffffffff,
so that uio->uio_offset + uio->uio_resid ends up negative. I think the
check a little above that for uio_offset < 0 should also check
uio_offset + uio_resid < 0 to avoid this.
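As an illustration only (this is not the actual sys/nfsclient code, and the
function name and struct below are stand-ins), the shape of the guard being
suggested could look something like this, following the "do the add first,
then test" style from Matt's note:

#include <stdint.h>
#include <errno.h>
#include <sys/types.h>

struct uio_model {		/* stand-in for the kernel's struct uio */
	off_t	uio_offset;
	ssize_t	uio_resid;
};

static int
check_io_range(struct uio_model *uio, uint64_t maxfilesize)
{
	off_t end;

	if (uio->uio_offset < 0)
		return (EINVAL);
	end = uio->uio_offset + uio->uio_resid;
	/* Catch wrap-around when uio_offset is right near OFF_MAX. */
	/* (A stricter form would compare against OFF_MAX before adding.) */
	if (end < 0)
		return (EFBIG);
	if ((uint64_t)end > maxfilesize)
		return (EFBIG);
	return (0);
}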

rick


Re: NFS: file too large

2011-01-13 Thread Daniel Braniss
  I'm getting 'File too large' when copying via NFS(v3, tcp/udp) a file
  that is larger than 1T. The server is ZFS which has no problem with
  large files.
  
  Is this fixable?
  
 As I understand it, there is no FreeBSD VFSop that returns the maximum
 file size supported. As such, the NFS servers just take a guess.
 
 You can either switch to the experimental NFS server, which guesses the
 largest size expressed in 64bits.
 OR
 You can edit sys/nfsserver/nfs_serv.c and change the assignment of a
 value to
 maxfsize = XXX;
 at around line #3671 to a larger value.
 
 I didn't check to see if there are additional restrictions in the
 clients. (They should believe what the server says it can support.)
 
 rick

well, after some more experimentation, it seems to be a FreeBSD client issue.
if the client is Linux there is no problem.

BTW, I 'think' I'm using the experimental server, but how can I be sure?
I have the -e set for both nfs_server and mountd, I don't have option NFSD,
but the nfsd.ko gets loaded.
cheers,
danny




Re: NFS: file too large

2011-01-13 Thread Rick Macklem
   I'm getting 'File too large' when copying via NFS(v3, tcp/udp) a file
   that is larger than 1T. The server is ZFS which has no problem with
   large files.
  
   Is this fixable?
  
  As I understand it, there is no FreeBSD VFSop that returns the maximum
  file size supported. As such, the NFS servers just take a guess.
 
  You can either switch to the experimental NFS server, which guesses the
  largest size expressed in 64bits.
  OR
  You can edit sys/nfsserver/nfs_serv.c and change the assignment of a
  value to
  maxfsize = XXX;
  at around line #3671 to a larger value.
 
  I didn't check to see if there are additional restrictions in the
  clients. (They should believe what the server says it can support.)
 
  rick
 
 well, after some more experimentation, it seems to be a FreeBSD client
 issue.
 if the client is Linux there is no problem.
 

Try editing line #1226 of sys/nfsclient/nfs_vfsops.c, where
it sets nm_maxfilesize = (u_int64_t)0x80000000 * DEV_BSIZE - 1; and make it
something larger.

I have no idea why the limit is set that way? (I'm guessing it was the
limit for UFS.) Hopefully not some weird buffer cache restriction or
similar, but you'll find out when you try increasing it.:-)

I think I'll ask freebsd-fs@ about increasing this for NFSv3 and 4, since
the server does provide a limit. (The client currently only reduces 
nm_maxfilesize from the above initial value using the server's limit.)
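As a rough sketch of that direction (not the actual nm_maxfilesize code; the
names below are made up), the client-side logic amounts to believing the
server's advertised limit while never exceeding the client's own ceiling:

#include <stdint.h>

/* Combine the client's ceiling with the server's advertised maxfilesize. */
static uint64_t
clamp_maxfilesize(uint64_t client_ceiling, uint64_t server_maxfilesize)
{
	return (server_maxfilesize < client_ceiling ?
	    server_maxfilesize : client_ceiling);
}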

Just grep nm_maxfilesize *.c in sys/nfsclient and you'll see it.

 BTW, I 'think' I'm using the experimental server, but how can I be
 sure?
 I have the -e set for both nfs_server and mountd, I don't have option
 NFSD,
 but the nfsd.ko gets loaded.
You can check by:
# nfsstat -s
# nfsstat -e -s
and see which one reports non-zero RPC counts.

If you happen to be running the regular server (probably not, given the
above), you need to edit the server code as well as the client side.

Good luck with it, rick


Re: NFS: file too large

2011-01-13 Thread Matthew Dillon
:Try editing line #1226 of sys/nfsclient/nfs_vfsops.c, where
:it sets nm_maxfilesize = (u_int64_t)0x80000000 * DEV_BSIZE - 1; and make it
:something larger.
:
:I have no idea why the limit is set that way? (I'm guessing it was the
:limit for UFS.) Hopefully not some weird buffer cache restriction or
:similar, but you'll find out when you try increasing it.:-)

This is a throwback to when the buffer cache used 32 bit block numbers,
hence 0x7FFFFFFF was the maximum 'safe' block number multiplied by
the lowest supported block size (DEV_BSIZE), that could be handled by
the buffer cache.

That limit is completely irrelevant now and should probably be set to
0x7FFFFFFFFFFFFFFFLLU (since seek offsets are signed).
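To make the arithmetic concrete (a small self-contained check, not from the
original mail, with DEV_BSIZE assumed to be 512): the old cap works out to
1 TiB - 1, which matches the ~1T failures reported earlier in the thread,
and the proposed value is simply the largest positive off_t.

#include <stdint.h>
#include <stdio.h>

#define DEV_BSIZE 512	/* lowest supported block size */

int
main(void)
{
	/* Old nm_maxfilesize: max 32-bit signed block number worth of bytes. */
	uint64_t old_cap = (uint64_t)0x80000000 * DEV_BSIZE - 1;
	/* Proposed: largest positive off_t, since seek offsets are signed. */
	uint64_t new_cap = 0x7fffffffffffffffULL;

	printf("old cap: %ju bytes (%ju TiB - 1 byte)\n",
	    (uintmax_t)old_cap, (uintmax_t)((old_cap + 1) >> 40));
	printf("new cap: %ju bytes\n", (uintmax_t)new_cap);
	return (0);
}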

-Matt
Matthew Dillon 
dil...@backplane.com


Re: NFS: file too large

2011-01-12 Thread Rick Macklem
 I'm getting 'File too large' when copying via NFS(v3, tcp/udp) a file
 that is larger than 1T. The server is ZFS which has no problem with
 large files.
 
 Is this fixable?
 
As I understand it, there is no FreeBSD VFSop that returns the maximum
file size supported. As such, the NFS servers just take a guess.

You can either switch to the experimental NFS server, which guesses the
largest size expressed in 64bits.
OR
You can edit sys/nfsserver/nfs_serv.c and change the assignment of a
value to
maxfsize = XXX;
at around line #3671 to a larger value.

I didn't check to see if there are additional restrictions in the
clients. (They should believe what the server says it can support.)
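For reference, a rough C model (paraphrased from RFC 1813, not taken from
the FreeBSD sources) of the NFSv3 FSINFO reply fields involved here;
maxfilesize is the value the server reports, i.e. the number it has to guess:

#include <stdint.h>

struct fsinfo3_reply_model {
	uint32_t rtmax, rtpref, rtmult;	/* READ transfer sizes */
	uint32_t wtmax, wtpref, wtmult;	/* WRITE transfer sizes */
	uint32_t dtpref;		/* preferred READDIR size */
	uint64_t maxfilesize;		/* largest file the server supports */
	uint32_t properties;		/* FSF3_* capability flags */
};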

rick