On Thu, Sep 13, 2018 at 11:14:20AM +0200, Andreas Ladanyi wrote:
> I want to report my results:
>
> Salvaging the volume multiple times and switching back to 1.6.22 on the
> client and waiting seems to solve the problem for now.
>
> Are there known problems with the cache manager in 1.8.0? Perhaps ones
> that produce broken cache information, which could be synced to the
> server and result in a broken volume?
There are no known issues of this nature with 1.8.x clients.

> In the past we had a lot of question-mark issues with 1.8.0 clients on
> directories that users had changed earlier. After rebooting the
> clients, the question marks were gone. So it looks like a broken cache
> or a sync issue to me.

I am not 100% sure, but I think the 1.8.x salvager has a few more checks
than the 1.6.x one does.

> Is the 1.8 client (cache manager) designed to work with a 1.6 server
> generally?

Yes; any client is supposed to work fine with any server.

-Ben
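For reference, the manual salvage and the follow-up checks mentioned in
this thread look roughly like the following. The server name is a
placeholder; the partition and volume ID are the ones from the logs
below:

    # Schedule a salvage of the single affected volume (on a DAFS
    # fileserver this is carried out by the salvageserver as a
    # demand salvage):
    bos salvage -server fs1.example.com -partition /vicepa \
        -volume 536875101

    # Try to bring the volume back online, then check its status:
    vos online -server fs1.example.com -partition /vicepa -id 536875101
    vos examine 536875101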
> Andreas
>
> > I manually salvaged the volume with bos salvage and the volume came
> > online. The user accessed the cache manager mount point on the client
> > and the volume went offline again.
> >
> > FileLog:
> >
> > CopyOnWrite corruption prevention: detected zero nlink for volume
> > 536875101 inode 5465217102250391 (dest), forcing volume offline
> > VRequestSalvage: volume 536875101 online salvaged too many times;
> > forced offline.
> > VRequestSalvage: volume 536875101 online salvaged too many times;
> > forced offline.
> > FSYNC_backgroundSalvage: unable to request salvage for volume 536875101
> >
> > I could voldump the volume for testing and don't get an error message.
> >
> > For info:
> >
> > I have seen that the client was working with afs 1.8.0. Now it's back
> > on 1.6.22 again. (Ubuntu 16.04, kernel 4.4.0-134)
> >
> > The server is on 1.6.22 (kernel 4.15.0-20, Ubuntu 18.04)
> >
> > Andreas
> >
> > > Hi,
> > >
> > > one volume could not be attached. This is not a newly created volume.
> > >
> > > OpenAFS 1.6.22.2 (dafs) / Ubuntu 18.04
> > >
> > > vos exa user.name:
> > >
> > > **** Volume 536875101 is busy ****
> > >
> > > RWrite: 536875101 Backup: 536875103
> > > number of sites -> 1
> > > server ... partition /vicepa RW Site
> > >
> > > vos online / offline:
> > >
> > > SetVolumeStatus: TransCreate Failed
> > > Failed to set volume. Code = 103
> > >
> > > VolserLog:
> > >
> > > 1 Volser: GetVolInfo: Could not attach volume 536875101
> > > (/vicepa:V0536875101.vol) error=113
> > > SYNC_ask: negative response on circuit 'FSSYNC'
> > > FSYNC_askfs: FSSYNC request denied for reason=0
> > > VAttachVolume: attach of volume 536875101 apparently denied by file server
> > > attach2: forcing vol 536875101 to error state (state 0 flags 0x0 ec 103)
> > > SYNC_ask: negative response on circuit 'FSSYNC'
> > > FSYNC_askfs: FSSYNC request denied for reason=0
> > > VAttachVolume: attach of volume 536875101 apparently denied by file server
> > > attach2: forcing vol 536875101 to error state (state 0 flags 0x0 ec 103)
> > >
> > > FileLog:
> > >
> > > VRequestSalvage: volume 536875101 online salvaged too many times;
> > > forced offline.
> > > FSYNC_backgroundSalvage: unable to request salvage for volume 536875101
> > >
> > > SalsrvLog:
> > >
> > > 09/10/2018 10:12:37 Salvaged user.name (536875101): 160944 files,
> > > 11976937 blocks
> > > 09/10/2018 10:12:39 dispatching child to salvage volume 536875101...
> > > 09/10/2018 10:12:40 2 nVolumesInInodeFile 64
> > > 09/10/2018 10:12:40 CHECKING CLONED VOLUME 536875103.
> > > 09/10/2018 10:12:40 user.name.backup (536875103) updated 09/08/2018 18:32
> > > 09/10/2018 10:12:40 SALVAGING VOLUME 536875101.
> > > 09/10/2018 10:12:40 user.name (536875101) updated 09/10/2018 10:12
> > > 09/10/2018 10:12:40 totalInodes 160966
> > > 09/10/2018 10:12:41 Found 40 orphaned files and directories
> > > (approx. 80 KB)
> > > 09/10/2018 10:12:41 Salvaged user.name (536875101): 160944 files,
> > > 11976937 blocks
> > >
> > > volinfo:
> > >
> > > Inode 2305861000965914623: Good magic 78a1b2c5 and version 1
> > > Inode 2305861001033023487: Good magic 99776655 and version 1
> > > Inode 2305861001100132351: Good magic 88664433 and version 1
> > > Inode 2305861001301458943: Good magic 99877712 and version 1
> > > Volume header for volume 536875101 (user.name)
> > > stamp.magic = 78a1b2c5, stamp.version = 1
> > > inUse = 0, inService = 1, blessed = 1, needsSalvaged = 1, dontSalvage = 0
> > > type = 0 (read/write), uniquifier = 3587963, needsCallback = 0,
> > > destroyMe = 0
> > > id = 536875101, parentId = 536875101, cloneId = 0, backupId = 536875103,
> > > restoredFromId = 0
> > > maxquota = 16777216, minquota = 0, maxfiles = 0, filecount = 160944,
> > > diskused = 11976937
> > > creationDate = 1366961543 (2013/04/26.09:32:23), copyDate = 1533799185
> > > (2018/08/09.09:19:45)
> > > backupDate = 1536512433 (2018/09/09.19:00:33), expirationDate = 0
> > > (1970/01/01.01:00:00)
> > > accessDate = 1536568099 (2018/09/10.10:28:19), updateDate = 1536568099
> > > (2018/09/10.10:28:19)
> > > owner = 29724, accountNumber = 0
> > > dayUse = 9473; week = (20149, 14021, 403285, 46815, 88402, 59592,
> > > 32594), dayUseDate = 1536530400 (2018/09/10.00:00:00)
> > > volUpdateCounter = 161079
> > >
> > > Andreas
>
> --
> Karlsruher Institut für Technologie (KIT)
> Fakultät für Informatik
> ATIS – Abteilung Technische Infrastruktur
>
> Dipl.-Ing. Andreas Ladanyi
> - System Administrator -
>
> Am Fasanengarten 5, Building 50.34, Room 013
> 76131 Karlsruhe
>
> Phone: +49 721 608 - 4 3663
> Fax: +49 721 608 - 4 6699
> E-Mail: andreas.lada...@kit.edu
> www.atis.informatik.kit.edu
>
> www.kit.edu
>
> KIT - University of the State of Baden-Württemberg and National
> Research Center of the Helmholtz Association
>
> KIT has been certified as a family-friendly university since 2010.
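If a stale client cache is suspected (as with the question-mark symptoms
described above), it can be ruled out without rebooting the client by
flushing the cached data for the affected volume. The AFS path below is
a placeholder:

    # Discard cached data and status information for everything in the
    # volume containing this path:
    fs flushvolume -path /afs/example.com/user/name

    # Make the cache manager re-check its volume-related information:
    fs checkvolumes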