On Mar 13, 2006, at 2:53 PM, Juha Jäykkä wrote:
kernel) I cannot write a file larger than approximately 2GB in size to my AFS volumes, even from the fileserver itself. The release notes

Build it from source and use --enable-largefile-fileserver

This is odd: I have 1.3.81 and I'm quite able to write >2 GiB files on the AFS volume. I do not seem to be able to read them, though. Any process trying to access the over-2GB parts of the files hangs forever; it cannot even be killed (SIGKILL). Which one is at fault here, server or client? (Everything runs on Linux/XFS, except the client cache, which is on ext2.)

They could both be the cause of your problem. Large file support has to be in your client as well as your fileserver if you want to handle large files. ;-) You should be able to mix clients and servers for files < 2GB.
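
If you do rebuild from source to get it, the flag mentioned above is the only special part of the build; roughly like this (the version and paths are just an example):

    # unpack the OpenAFS source (version is illustrative)
    tar xzf openafs-1.4.0-src.tar.gz
    cd openafs-1.4.0
    # enable 64-bit file offsets in the fileserver at configure time
    ./configure --enable-largefile-fileserver
    make && make install
    # then restart the fileserver processes on that machine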

I also have one 1.4.0 server. What happens if I put the large file on 1.4.0 and try to access it from 1.3.81 clients? What if I replicate the volume to 1.3.81 fileservers? Should I force all fileservers to be of the same version?

I don't remember if 1.4.x has large file support enabled by default, since I don't use packages, but if it doesn't, you only get into trouble when you mix in the 'wrong direction'. That means you'd better not handle large files with a server or client that doesn't support them. (Kinda obvious, isn't it?)
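
If you're not sure which version a given fileserver is actually running, you can ask it over the wire; something like this (the hostname is just a placeholder):

    # query the fileserver's build/version string (7000 is the fileserver port)
    rxdebug fs1.example.com 7000 -version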

For the replication part, you actually shouldn't be able to release volumes with large files to a server which doesn't support them. On my machines the 'vos release' fails, but I'm not sure there aren't any cases where it appears to work. I wouldn't trust it anyway... :-)
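
If you want to check it by hand, you can compare the RW volume and its read-only sites after the release; a rough sketch (the volume name is made up):

    # push the volume out to its replication sites, with verbose output
    vos release bigdata -verbose
    # then compare the RW volume and the read-only copy
    vos examine bigdata
    vos examine bigdata.readonly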

You don't have to 'force' the fileservers to be the same version; you just should think about what you're doing when you move or replicate volumes. A mixed environment requires some extra care, but isn't it always like that? :-)

Horst