[OpenAFS] vos dump - pipe reliability
Is there any reason that a dump to a pipe would be less reliable than a dump to a file? We have OpenAFS 1.2.11 running on Debian woody and I am investigating a strange crash which happened during a vos dump command. I changed the dump command to write to a pipe instead of a file (i.e. no -file option); it dumped most of the volumes with no problem and then crashed the OS. The command run is effectively (volume-name represents a real volume name):

    vos dump volume-name -localauth | gzip -c | split -b 1024m - $DUMP_DIRECTORY/volume-name.dump

There is enough space for the dumps in $DUMP_DIRECTORY.

Also, does a volume's read-only snapshot have to be created on the same server the original volume exists on?

Vladimir
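One pitfall with dumping through a pipe is that the shell reports only the exit status of the last command in the pipeline (split here), so a failure in vos dump itself can go unnoticed. A minimal sketch of a wrapper that surfaces the dump's own exit status — the function name and argument layout are mine, not from the post, and it assumes bash for pipefail:

```shell
#!/bin/bash
# Hypothetical wrapper around the pipeline from the post. With pipefail set,
# the pipeline's exit status is that of the first failing command, so an
# error in `vos dump` is not masked by gzip and split succeeding.
dump_volume() {
    vol=$1
    dir=$2
    set -o pipefail
    vos dump "$vol" -localauth | gzip -c | split -b 1024m - "$dir/$vol.dump."
}
```

Checking the status per volume (`dump_volume somevol /dumps || echo "dump of somevol failed"`) at least tells you whether vos dump completed cleanly, though it cannot by itself explain an OS crash.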
[OpenAFS] AFS installation packages and tools
Hi there, I have an odd problem with a client machine running OpenAFS 1.2.13 - the client seems to run OK (the AFS filespace is accessible from the machine and the afsd daemon is running) but whenever I try to execute a command I get the following:

    [EMAIL PROTECTED] fs sysname
    bash: fs: command not found

I can't understand what's wrong. If someone has ideas on what may be wrong I'll appreciate it. Thanks in advance, Konstantin
Re: [OpenAFS] AFS installation packages and tools
Thus spake Konstantin Boyanov ([EMAIL PROTECTED]):
> [EMAIL PROTECTED] fs sysname
> bash: fs: command not found
> I can't understand what's wrong. If someone has ideas on what may be wrong I'll appreciate it.

The fs binary is not in your $PATH.

--
Fashion is a form of ugliness so intolerable that we have to alter it every six months. -- Oscar Wilde
___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
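A quick way to confirm and fix this from the shell — the directories below are the usual suspects for AFS client utilities, not something stated in the thread, so adjust to wherever your packages actually install the binaries:

```shell
# Find out whether fs is visible to the shell at all
command -v fs || echo "fs is not on PATH"

# Typical locations for the AFS client utilities on various installs
ls /usr/bin/fs /usr/sbin/fs /usr/afs/bin/fs /usr/afsws/bin/fs 2>/dev/null || true

# Add the directory that actually contains fs to PATH, e.g.:
export PATH="$PATH:/usr/afsws/bin"
```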
Re: [OpenAFS] File too large
> kernel) I cannot write a file larger than approximately 2GB in size to my AFS volumes, even from the fileserver itself. The release notes
> Build it from source and use --enable-largefile-fileserver

This is odd, I have 1.3.81 and I'm quite able to write 2 GiB files on the AFS volume. I do not seem to be able to read them, though. Any process trying to access the parts of the files beyond 2 GB hangs forever. It cannot even be killed (SIGKILL). Which one is at fault here, server or client? (Everything runs on linux/XFS, except the client cache, which is on ext2.)

I also have one 1.4.0 server. What happens if I put the large file on 1.4.0 and try to access it from 1.3.81 clients? What if I replicate the volume to 1.3.81 fileservers? Should I force all fileservers to be of the same version?

Cheers, Juha

--
Juha Jäykkä, [EMAIL PROTECTED]
Laboratory of Theoretical Physics
Department of Physics, University of Turku
home: http://www.utu.fi/~juolja/
Re: [OpenAFS] File too large
Juha Jäykkä wrote:
> This is odd, I have 1.3.81 and I'm quite able to write 2 GiB files on the AFS volume. I do not seem to be able to read them, though. Any process trying to access the parts of the files beyond 2 GB hangs forever. It cannot even be killed (SIGKILL). Which one is at fault here, server or client?
> I also have one 1.4.0 server. What happens if I put the large file on 1.4.0 and try to access it from 1.3.81 clients? What if I replicate the volume to 1.3.81 fileservers? Should I force all fileservers to be of the same version?

I think the client must be 1.4 and non-windows. My own preference would be to run the same version of software on all my servers.

--
Steve Devine
Storage Systems
Academic Computing Network Services
Michigan State University
506 Computer Center
East Lansing, MI 48824-1042
1-517-432-7327

Baseball is ninety percent mental; the other half is physical. - Yogi Berra
Re: [OpenAFS] New FC4 Kernel == Can't Find System Call Table
On Sun, Mar 05, 2006 at 09:10:04PM -0500, Derrick J Brashear wrote:
> If this is the bit where the syscall table has been moved into the .rodata section, the resolution is to lose. Maybe someday we will get to use kernel keyrings.
> We could actually do that today if we were willing to break backward compatibility with every userland tool that did pags that was compiled before today. For many sites this would simply be a different form of sadness.

How hard would it be to make this an option, so we can pick our sadness? :)

--
Matthew Miller [EMAIL PROTECTED] http://mattdm.org/
Boston University Linux -- http://linux.bu.edu/
[OpenAFS] 1.2.11 upgrade.
Forgive the paranoia, but I have 2 quick questions, requiring Yes or No answers. (3 servers, Linux 2.4.22, Fedora Core 1)

1) Can I go from 1.2.11 File and DB servers to 1.4.0 by replacing the /usr/afs binaries and restarting?
2) Can I do one system at a time (one a day)?

Thanks, Steve
-
Stephen G. Roseman
Lehigh University
[EMAIL PROTECTED]
Re: [OpenAFS] 1.2.11 upgrade.
On Mon, 13 Mar 2006, Steve Roseman wrote:
> 1) Can I go from 1.2.11 File and DB servers to 1.4.0 by replacing the /usr/afs binaries and restarting?

Yup (do vol/fs/salvager at once).

> 2) Can I do one system at a time (one a day)?

Yup.

Derrick
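A sketch of what "replacing the /usr/afs binaries and restarting" might look like on one server, following Derrick's note that the volserver, fileserver, and salvager should be swapped together — the function name, hostnames, and staging directory here are hypothetical, not from the thread:

```shell
#!/bin/bash
# Hypothetical per-server upgrade: stop the server processes, replace all the
# server binaries at once (so fileserver/volserver/salvager stay in sync),
# then bring everything back up. $2 is wherever the 1.4.0 binaries were unpacked.
upgrade_afs_server() {
    host=$1
    newbin=$2
    bos shutdown "$host" -localauth -wait   # stop fs and db processes cleanly
    cp "$newbin"/* /usr/afs/bin/            # vol/fs/salvager replaced together
    bos startup "$host" -localauth          # start everything from the new binaries
}
```

Doing this one server a day, as asked above, works because the servers interoperate across versions during the transition.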
Re: [OpenAFS] New FC4 Kernel == Can't Find System Call Table
On Mon, Mar 13, 2006 at 11:18:45AM -0500, Matthew Miller wrote:
> On Sun, Mar 05, 2006 at 09:10:04PM -0500, Derrick J Brashear wrote:
> > If this is the bit where the syscall table has been moved into the .rodata section, the resolution is to lose. Maybe someday we will get to use kernel keyrings.
> > We could actually do that today if we were willing to break backward compatibility with every userland tool that did pags that was compiled before today. For many sites this would simply be a different form of sadness.
> How hard would it be to make this an option, so we can pick our sadness? :)

We've all complained and flamed about the sys_call_table hook sadness. I think it's definitely time to look at using the kernel keyrings. It's a much better solution and will lead toward much happiness in the end.

I use several third party tools with OpenAFS that this would break. I'm willing to work with maintainers and fix them to use the kernel keyrings. I've actually been trying to find some time to see what would be involved in porting OpenAFS. Pointers to what would be involved?

Jack Neely
--
Jack Neely [EMAIL PROTECTED]
Campus Linux Services Project Lead
PAMS Computer Operations at NC State University
GPG Fingerprint: 1917 5AC1 E828 9337 7AA4 EA6B 213B 765F 3B6A 5B89
Re: [OpenAFS] File too large
On Mar 13, 2006, at 2:53 PM, Juha Jäykkä wrote:
> This is odd, I have 1.3.81 and I'm quite able to write 2 GiB files on the AFS volume. I do not seem to be able to read them, though. Any process trying to access the parts of the files beyond 2 GB hangs forever. It cannot even be killed (SIGKILL). Which one is at fault here, server or client? (Everything runs on linux/XFS, except the client cache, which is on ext2.)

They could both be the cause of your problem. The large file support has to be in your client as well as your fileserver, if you want to handle large files. ;-) You should be able to mix clients and servers for files under 2GB.

> I also have one 1.4.0 server. What happens if I put the large file on 1.4.0 and try to access it from 1.3.81 clients? What if I replicate the volume to 1.3.81 fileservers? Should I force all fileservers to be of the same version?

I don't remember if 1.4.x has large file support enabled by default, since I don't use packages, but if it doesn't, you only get into trouble when you mix in the 'wrong direction'. Which means, you had better not handle large files with a server or client which doesn't support them. (Kinda obvious, isn't it?)

For the replication part, you actually shouldn't be able to release volumes with large files to a server which doesn't support that. On my machines the 'vos release' fails, but I'm not sure there aren't any cases where it appears to be working. I wouldn't trust it anyway... :-)

You don't have to 'force' the fileservers to be the same version; you just should think about what you're doing when you move or replicate volumes. A mixed environment requires some extra care, but isn't it always like that? :-)

Horst
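If you want to pin down where the 2 GB boundary actually breaks (client write, server store, or read-back), a cheap probe is to write just past the boundary and read it back. A sketch, parameterized on a directory so it can be pointed at an AFS path — the function name is mine, not from the thread:

```shell
#!/bin/bash
# Write a file that ends just past 2 GiB (sparse locally, so it is quick),
# then read back the bytes beyond the boundary -- reading past 2 GiB is
# exactly where the clients in this thread hang.
probe_2gb() {
    dir=$1
    f="$dir/bigfile"
    # seek=2048 blocks of 1 MiB places the single written block at the 2 GiB mark
    # (note: AFS may materialize the full 2 GiB server-side, so check quota first)
    dd if=/dev/zero of="$f" bs=1M seek=2048 count=1 2>/dev/null || return 1
    dd if="$f" of=/dev/null bs=1M skip=2048 count=1 2>/dev/null || return 1
    echo "large-file read/write OK in $dir"
}
```

Running it against a volume on the 1.4.0 server from a 1.3.81 client, and vice versa, would answer the mixing questions directly for your setup.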
Re: [OpenAFS] New FC4 Kernel == Can't Find System Call Table
--On Monday, March 13, 2006 13:45:04 -0500 Jack Neely [EMAIL PROTECTED] wrote:
> It's a much better solution and will lead toward much happiness in the end.

It's a less-hacky solution. I would not call it better. (At least I don't think it's worse anymore. Until about 5 minutes ago I thought there was a problem with the access control model, because any process could join any session keyring. That turns out to be incorrect.)

> I use several third party tools with OpenAFS that this would break. I'm willing to work with maintainers and fix them to use the kernel keyrings.

The only thing that needs to change is the setpag operation. Instead of

    syscall(AFS_SYSCALL, AFSCALL_SETPAG);

(or more likely, after it fails) you would need to do the following keyring operations:

    keyring = keyctl_join_session_keyring(NULL);
    /* This setperm operation revokes all the owner rights.
       If that were not done, any process could join any pag
       belonging to the same uid. */
    keyctl_setperm(keyring, KEY_POS_ALL);

Lastly, you'd need to call a new AFS-specific operation to create the afs pag key in the session keyring; it would probably be a new pioctl. It is conceivable that the final design of keyring-based pags would skip this step and just use the serial number of the session keyring as the pag id. I haven't thought through all the ramifications of that yet.
[OpenAFS] Cache manager does not show (can not get) user token.
All: I've run into a small problem with our OpenAFS installation. Running Debian sarge and following Russ Allbery's instructions as found on http://www.openafs.org/pipermail/openafs-info/2005-August/019061.html, I have managed to get to the following command this far:

    bos status server-name

This results in the error:

    bos: failed to contact host's bosserver (security object was passed a bad ticket)

Below are quite brief details of the initialization:

    #: kdestroy ; unlog
    #: kinit mustafa.hashmi/admin
    Password for mustafa.hashmi/[EMAIL PROTECTED]:
    # klist -e
    Ticket cache: FILE:/tmp/krb5cc_0
    Default principal: mustafa.hashmi/[EMAIL PROTECTED]
    Valid starting     Expires            Service principal
    03/14/06 12:14:02  03/14/06 22:14:01  krbtgt/[EMAIL PROTECTED]
        Etype (skey, tkt): Triple DES cbc mode with HMAC/sha1, Triple DES cbc mode with HMAC/sha1
    Kerberos 4 ticket cache: /tmp/tkt0
    klist: You have no tickets cached
    # aklog -d node30.emergen.biz -k EMERGEN.BIZ
    Authenticating to cell node30.emergen.biz (server node30.emergen.biz).
    We were told to authenticate to realm EMERGEN.BIZ.
    Getting tickets: afs/[EMAIL PROTECTED]
    About to resolve name mustafa.hashmi.admin to id in cell node30.emergen.biz.
    Id 32766
    Set username to mustafa.hashmi.admin
    Setting tokens. mustafa.hashmi.admin / @ EMERGEN.BIZ
    # tokens
    Tokens held by the Cache Manager:
    Tokens for [EMAIL PROTECTED] [Expires Mar 14 22:14]
    --End of list--

The cache manager doesn't seem to be holding any tokens at this point for my user. Just to add, the KDC service is on a different server than the openafs-dbserver, and I have added the REALM as required in /etc/openafs/server/kdc.conf. Initially I was under the impression the problem was a mismatch in the kvno number; however, that was just lack of attention on my part when looking at the output from 'tokens'.

A few additional details of interest:

    kadmin.local: getprinc afs/node30.emergen.biz
    Principal: afs/[EMAIL PROTECTED]
    Expiration date: [never]
    Last password change: Mon Mar 13 21:25:52 GMT-5 2006
    Password expiration date: [none]
    Maximum ticket life: 0 days 10:00:00
    Maximum renewable life: 7 days 00:00:00
    Last modified: Mon Mar 13 21:25:52 GMT-5 2006 (faraz.khan/[EMAIL PROTECTED])
    Last successful authentication: [never]
    Last failed authentication: [never]
    Failed password attempts: 0
    Number of keys: 1
    Key: vno 3, DES cbc mode with CRC-32, no salt
    Attributes:
    Policy: [none]

    node30:# bos listkeys node30.emergen.biz -localauth
    key 3 has cksum 683704053
    Keys last changed on Mon Mar 13 21:27:21 2006.
    All done.

    node30:/usr/share/doc# bos listusers node30.emergen.biz -localauth
    SUsers are: mustafa.hashmi/admin rehan.zafar

If someone could please point me in the correct direction, it would be greatly appreciated. Thank you and regards,

Mustafa A. Hashmi
[EMAIL PROTECTED]
[EMAIL PROTECTED]
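When chasing "security object was passed a bad ticket", a usual culprit is a mismatch between the kvno of the key the KDC issues service tickets with and the key number in the server's KeyFile. In the output above the two agree ("Key: vno 3" from kadmin vs "key 3 has cksum" from bos listkeys); here is a small check that compares the two, assuming the output formats shown above — the function itself is mine, for illustration only:

```shell
#!/bin/bash
# Compare the kvno kadmin reports for the afs/<cell> principal with the key
# number bos listkeys shows for the server's KeyFile. A mismatch would mean
# aklog tokens are encrypted with a key the bosserver does not have.
kvno_match() {
    kadmin_out=$1   # output of: kadmin.local getprinc afs/<cell>
    bos_out=$2      # output of: bos listkeys <server> -localauth
    k1=$(printf '%s\n' "$kadmin_out" | sed -n 's/.*Key: vno \([0-9][0-9]*\),.*/\1/p')
    k2=$(printf '%s\n' "$bos_out" | sed -n 's/^key \([0-9][0-9]*\) has cksum.*/\1/p')
    if [ -n "$k1" ] && [ "$k1" = "$k2" ]; then
        echo "kvno $k1 matches"
    else
        echo "kvno mismatch: KDC has '$k1', KeyFile has '$k2'"
    fi
}
```

Since the kvnos match here, the next things to check are that the key bytes themselves match (re-extract the key into the KeyFile from a fresh keytab, e.g. with asetkey) and that the cell-to-realm mapping in the server configuration is correct.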