Re: [OpenAFS] Vista, OpenAFS 1.5.20, Cisco VPN - AFS dead
Jeffrey Altman wrote:
> I installed Cisco VPN 3.8.2 on Vista Ultimate with current Windows Updates, and OpenAFS 1.5.20 works just fine when connecting to and disconnecting from a VPN using UDP tunneling. This does not mean that you are not having a real problem. It does mean that the problem is not pervasive and requires something specific to your environment to reproduce it.
>
> Jeffrey Altman
> Secure Endpoints Inc.

Just another data point: it's been my experience with the Cisco VPN client on my Mac that I have to reauthenticate to AFS after starting or stopping the VPN client.

Sincerely,
Jason Edgecombe

___
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
Re: [OpenAFS] Full disk woes
Steve Devine wrote:
> I committed the cardinal sin of letting a server partition fill up. I have tried vos remove and vos zap, but I can't get rid of any volumes. Volume management fails on this machine. It's the old-style (non-namei) fileserver. It doesn't seem like I can just rm the V#.vol files, can I? Any help?

Removing the small V#.vol files doesn't help; they are really only 76 bytes long.

What happens when you do a vos remove or a vos zap? Did the volumes go away while the free space stayed as low as before? This can happen if you removed only readonly and backup volumes, which typically frees only the space used by their metadata, while the space used by their files and directories is shared between them and the RW volume. But, of course, you don't want to remove your RW volumes.

Maybe, once you have removed all RO and BK volumes, you will have enough free space for the temporary volume that is created when you try to move your smallest RW volume to another partition/server. There is also a -live option for the vos move command, which should do the move without creating a clone. I suppose it was written for such cases.

Good luck,
Hartmut
-
Hartmut Reuter                 e-mail [EMAIL PROTECTED]   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)                              fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-
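Hartmut's suggestion can be sketched as a shell sequence. This is a hypothetical dry run: the `run` wrapper only echoes each command, and all server and volume names are placeholders, not values from this thread. Check the `-live` flag against your vos version before running anything for real.

```shell
#!/bin/sh
# Dry-run sketch of freeing space on a full /vicep partition.
# `run` only echoes; replace the placeholder names with your own.
run() { echo "would run: $*"; }

# 1. Remove backup and readonly clones first. On a non-namei server this
#    frees mostly metadata, since file data is shared with the RW volume.
run vos remove -server fs1.example.edu -partition /vicepa -id user.alice.backup
run vos remove -server fs1.example.edu -partition /vicepa -id user.alice.readonly

# 2. With a little headroom, move the smallest RW volume elsewhere.
#    -live skips the temporary clone, which matters when the disk is full.
run vos move -id user.alice -fromserver fs1.example.edu -frompartition /vicepa \
    -toserver fs2.example.edu -topartition /vicepb -live
```

Printing instead of executing makes the plan reviewable before any volume is touched.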
[OpenAFS] Full disk woes
I committed the cardinal sin of letting a server partition fill up. I have tried vos remove and vos zap, but I can't get rid of any volumes. Volume management fails on this machine. It's the old-style (non-namei) fileserver. It doesn't seem like I can just rm the V#.vol files, can I? Any help?

--
Steve Devine
Storage Systems
Academic Computing Network Services
Michigan State University
506 Computer Center
East Lansing, MI 48824-1042
1-517-432-7327

Baseball is ninety percent mental; the other half is physical. - Yogi Berra
Re: [OpenAFS] Full disk woes
Steve Devine wrote:
> Hartmut Reuter wrote:
>> Removing the small V#.vol files doesn't help; they are really only 76 bytes long. What happens when you do a vos remove or a vos zap?
>
> Both commands fail, even when I use force.

What does the VolserLog say?

Hartmut
Re: [OpenAFS] Full disk woes
On Friday, 6 July 2007, Steve Devine wrote:
> I committed the cardinal sin of letting a server partition fill up. I have tried vos remove and vos zap, but I can't get rid of any volumes. Volume management fails on this machine.

Did you try vos remsite to remove readonly copies of RW volumes which are located on _another_ partition?

Bye...
Dirk
--
Dirk Heinrichs          | Tel:  +49 (0)162 234 3408
Configuration Manager   | Fax:  +49 (0)211 47068 111
Capgemini Deutschland   | Mail: [EMAIL PROTECTED]
Wanheimerstraße 68      | Web:  http://www.capgemini.com
D-40468 Düsseldorf      | ICQ#: 110037733
GPG Public Key C2E467BB | Keyserver: www.keyserver.net
Re: [OpenAFS] Full disk woes
Hartmut Reuter wrote:
> Removing the small V#.vol files doesn't help; they are really only 76 bytes long. What happens when you do a vos remove or a vos zap?

Both commands fail, even when I use force.

--
Steve Devine
Network Storage and Printing
Academic Computing Network Services
Michigan State University
Re: [OpenAFS] Full disk woes
Hartmut Reuter wrote:
> Steve Devine wrote:
>> Both commands fail, even when I use force.
>
> What does the VolserLog say?

Lots of lines like this:
Fri Jul  6 10:05:18 2007 trans 3811071 on volume 1938590434 is older than 29730 seconds
Fri Jul  6 10:05:48 2007 trans 3811072 on volume 1937192577 is older than 28530 seconds
Fri Jul  6 10:05:48 2007 trans 3811071 on volume 1938590434 is older than 29760 seconds
Fri Jul  6 10:06:18 2007 trans 3811072 on volume 1937192577 is older than 28560 seconds
Fri Jul  6 10:06:18 2007 trans 3811071 on volume 1938590434 is older than 29790 seconds

--
Steve Devine
Michigan State University
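When a VolserLog fills with lines like the ones above, a quick way to see which volumes are stuck is to summarize the stale-transaction messages per volume. This is a small awk sketch over that log format; `summarize` prints each volume ID with the largest age seen.

```shell
#!/bin/sh
# Summarize "is older than N seconds" lines from a VolserLog:
# print each volume ID together with the largest age observed.
summarize() {
  awk '/is older than/ {
         # find the field after the word "volume"; the age is next-to-last
         for (i = 1; i <= NF; i++) if ($i == "volume") vol = $(i + 1)
         age = $(NF - 1)
         if (age > max[vol]) max[vol] = age
       }
       END { for (v in max) print v, max[v] }'
}

summarize <<'EOF'
Fri Jul 6 10:05:18 2007 trans 3811071 on volume 1938590434 is older than 29730 seconds
Fri Jul 6 10:05:48 2007 trans 3811072 on volume 1937192577 is older than 28530 seconds
Fri Jul 6 10:05:48 2007 trans 3811071 on volume 1938590434 is older than 29760 seconds
EOF
```

On a live server you would feed it the real log, e.g. `summarize < /usr/afs/logs/VolserLog`.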
Re: [OpenAFS] Full disk woes
I tried:

    /afs/ipp/backups: vos listvldb 1938590434 -cell msu.edu
    vsu_ClientInit: Could not get afs tokens, running unauthenticated.

    svc.ml.mdsolids.31
        RWrite: 1938590433    ROnly: 1938590434    RClone: 1938590434
        number of sites - 3
           server afsfs7.cl.msu.edu partition /vicepa RW Site
           server afsfs9.cl.msu.edu partition /vicepa RO Site -- Old release
           server afsfs7.cl.msu.edu partition /vicepa RO Site -- New release

and found out it's your machine afsfs9.cl.msu.edu which is causing the trouble. Then I did a vos status against this machine, which did not respond.

rxdebug afsfs9.cl.msu.edu 7005 shows a lot of connections in state precall with source ports != 7005. That means you have a lot of vos commands running somewhere. Those you should stop first! Then perhaps restart your fileserver to get rid of the old transactions, and hopefully everything will be OK again.

Hartmut

Steve Devine wrote:
> Hartmut Reuter wrote:
>> What does the VolserLog say?
>
> Lots of lines like this:
>
> Fri Jul  6 10:05:18 2007 trans 3811071 on volume 1938590434 is older than 29730 seconds
> Fri Jul  6 10:05:48 2007 trans 3811072 on volume 1937192577 is older than 28530 seconds

--
-
Hartmut Reuter                 e-mail [EMAIL PROTECTED]   phone +49-89-3299-1328
RZG (Rechenzentrum Garching)                              fax   +49-89-3299-1301
Computing Center of the Max-Planck-Gesellschaft (MPG) and the
Institut fuer Plasmaphysik (IPP)
-
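Hartmut's rxdebug check can be turned into a quick filter: count connections sitting in "precall" whose peer port is not 7005 (the volserver port). The sample lines below are illustrative only; real `rxdebug <server> 7005` output differs in detail, so adapt the match to what you actually see.

```shell
#!/bin/sh
# Count connections in "precall" state whose peer port is not 7005.
# The input format here is a hypothetical stand-in for rxdebug output.
count_precall() {
  awk -F'[ ,]+' '/precall/ {
        for (i = 1; i <= NF; i++)
          if ($i == "port" && $(i + 1) != 7005) { n++; break }
      }
      END { print n + 0 }'
}

count_precall <<'EOF'
Connection from host 35.9.1.2, port 40123, state precall
Connection from host 35.9.1.2, port 7005, state established
Connection from host 35.9.1.2, port 40999, state precall
EOF
```

In practice you would pipe the real tool in: `rxdebug afsfs9.cl.msu.edu 7005 | count_precall`. A large count points at stuck vos clients that should be killed before restarting the volserver.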
[OpenAFS] Error moving volume
While trying to move some volumes from one server to another, I get the following error:

    Dumping from clone 536871423 on source to volume 536871161 on destination ...
    Failed to move data for the volume 536871161
    VOLSER: Problems encountered in doing the dump !
    vos move: operation interrupted, cleanup in progress...

Looking at the VolserLog, I see:

    Volser: Clone: Cloning volume 536871161 to new volume 536871423
    Volser: DumpVolume: Rx call failed during dump, error 1492325122

and translate_et 1492325122 returns "badly formatted dump". vos examine is OK on the volume, and I can move other volumes, but I've got some others which always raise errors. Any idea?

Cheers
--
Gérald Macinenti
letitwave
8-16, rue Paul-Vaillant-Couturier, 92240 Malakoff, FRANCE
Tel +33 1 4092 5454 - Fax +33 1 4092 5441 - http://www.letitwave.fr
Re: [OpenAFS] Vista, OpenAFS 1.5.20, Cisco VPN - AFS dead
Jason Edgecombe wrote:
> Just another data point: it's been my experience with the Cisco VPN client on my Mac that I have to reauthenticate to AFS after starting or stopping the VPN client.

I don't think this is relevant. A refresher course on how the AFS client on Windows is implemented might be in order:

    Windows applications (Office, NIM, AFS Creds, ...)
                          |
                 Windows CIFS client
                          |
          Loopback Adapter (10.254.254.253)
                          |
    AFS Client Service (SMB server) / AFS Cache Manager
                          |
                  external network
                          |
             AFS servers (File, VLDB)

On versions of Windows prior to Vista, the Loopback Adapter interface was not plug-n-play and was unaware of power management events. Its configuration was static: once it obtained its IP address, the AFS client service could bind its SMB server to it, and it would remain stable until the machine was shut down or the loopback adapter itself was manually disabled or uninstalled.

In Windows Vista, the loopback adapter is a PnP driver, and it is aware of power management events. When the network configuration on the machine is reconfigured, or the machine is being suspended, the loopback adapter will be turned on and off just like any physical adapter. Beginning with the 1.5.12 release, the AFS client service was updated to handle the case in which the bound network adapter may shut down unexpectedly. In this case, the AFS client service will periodically retry to bind to the adapter.
Quoting the OpenAFS for Windows Release Notes:

> Due to a feature change in Windows Vista's Plug-n-Play network stack, during a standby/hibernate operation the MSLA is disabled just as any other piece of hardware would be. This causes the OpenAFS Client's network binding to be lost. As a result, it takes anywhere from 30 to 90 seconds after the operating system is resumed for access to the OpenAFS Client and the AFS file space to become available. Until the network bindings have been re-established, ticket managers and other tools will report that the AFS Client Service may not have been started.

During the period in which the AFS client is unable to receive CIFS requests, it is not possible to access \\AFS for files, to set tokens, or to list tokens. From the perspective of the Windows CIFS client, the AFS server (which appears to be on a remote machine) is not present on the network.

If there is a problem with the Cisco VPN client, my guess is that it would have something to do with temporarily resetting the loopback adapter or modifying the routing table. However, based upon my tests with OpenAFS 1.5.20 and Cisco VPN 3.8.0.2, I have not witnessed any such interference.

Lars has described his problems as being more severe than just the AFS client not working. He says that he can't access any DNS server, not even on the private network. As a result, I think his problems are Cisco VPN configuration issues that have nothing to do with AFS.

Jeffrey Altman
Secure Endpoints Inc.
Re: [OpenAFS] Full disk woes
On Friday, 6 July 2007, Dirk Heinrichs wrote:
>> On Friday, 6 July 2007, Steve Devine wrote:
>>> I committed the cardinal sin of letting a server partition fill up. I have tried vos remove and vos zap, but I can't get rid of any volumes. Volume management fails on this machine.
>
> Did you try vos remsite to remove readonly copies of RW volumes which are located on _another_ partition?

Forget what I wrote; it wouldn't be of any help.

Bye...
Dirk
Re: [OpenAFS] Full disk woes
Hartmut Reuter wrote:
> ... it's your machine afsfs9.cl.msu.edu which is causing the trouble. Then I did a vos status against this machine, which did not respond.
>
> rxdebug afsfs9.cl.msu.edu 7005 shows a lot of connections in state precall with source ports != 7005. That means you have a lot of vos commands running somewhere. Those you should stop first! Then perhaps restart your fileserver to get rid of the old transactions, and hopefully everything will be OK again.

OK, in the end we killed all vos commands to that server and restarted the bosserver and volserver. Then we were able to vos remove the RO volumes I stupidly put on there in the first place. Ran the salvager on some horked volumes, and so far so good. Thanks to all who helped.

/sd
--
Steve Devine
Network Storage and Printing
Academic Computing Network Services
Michigan State University
Re: [OpenAFS] Full disk woes
On Fri, 6 Jul 2007, Steve Devine wrote:
> Ok in the end we killed all vos commands to that server and restarted the bosserver and volserver. Then we were able to vos remove the RO vols I stupidly put on there in the first place. Ran salvager on some horked vols and so far so good.

Just for documentation's sake, it was more like:

  bos stop backupsys
  vos unlock <volume>
      (in case any were still locked at the time)
  bos restart -server <fileserver> -instance fs
      (used strace/truss to see that the volserver was dead; the fileserver
       was still hung, and I may have just killed the volserver and restarted
       the fileserver)
  vos remsite ...
      (remove replication sites to free up space)
  vos syncserv <fileserver> -partition vicepa
      (sync the partition with the VLDB; not sure it is needed)
  bos salvage -server <fileserver> -tmpdir /tmp -oktozap -partition /vicepa
  vos zap -server <fileserver> -partition vicepa <volume>
      (whatever volumes you did remsite on; I think -oktozap with the bos
       salvage command does the same thing)
  bos restart -server <fileserver> -instance fs
  vos listvol -server <fileserver> -part vicepa
      (make sure everything is back online correctly; if not, fix problems)
  bos start backupsys

That is as close as I can document it, as my command history wasn't long enough. :)

Sean
--
Sean O'Malley, Information Technologist
Michigan State University
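Sean's recovery sequence can be collected into a single script for next time. This is a dry-run sketch only: `run` just echoes, and the server and volume names are placeholders, so review each step against your own cell before executing anything.

```shell
#!/bin/sh
# Dry-run of the recovery sequence documented above.
# `run` only prints; substitute real server/volume names to execute.
run() { echo "would run: $*"; }

run bos stop backupsys                                        # pause backups
run vos unlock sample.volume                                  # clear stale locks
run bos restart -server fs.example.edu -instance fs           # bounce fs instance
run vos remsite fs.example.edu vicepa sample.volume.readonly  # drop RO sites
run vos syncserv fs.example.edu -partition vicepa             # resync with VLDB
run bos salvage -server fs.example.edu -tmpdir /tmp -oktozap -partition /vicepa
run vos zap -server fs.example.edu -partition vicepa -id sample.volume.readonly
run bos restart -server fs.example.edu -instance fs
run vos listvol -server fs.example.edu -partition vicepa      # verify volumes
run bos start backupsys                                       # resume backups
```

Echoing the plan first is a cheap safety net: the full sequence can be eyeballed, and individual lines promoted to real commands one at a time.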