My NFS storage was misreporting available inodes, erroneously claiming that we were using >95% of them. Interestingly, BackupPC reported the following:
Yesterday 274 hosts were skipped because the file system containing /mnt/backups/BackupPC was too full. The threshold in the configuration file is currently 95%, while yesterday the file system was up to 68% full. The maximum inode usage yesterday was 99% and the threshold is currently 95%.

I only have 160 hosts configured in BackupPC, and the logs indicate that all of them were backed up, so it's not clear to me where BackupPC thinks it skipped 274 hosts! Here's the log excerpt:

...
2019-03-19 20:27:16 Started incr backup on servr101 (pid=26521, share=/)
2019-03-19 20:28:13 Finished incr backup on servr121
2019-03-19 20:28:13 Started incr backup on aix02 (pid=26651, share=/)
2019-03-19 20:29:48 Finished incr backup on servr501
2019-03-19 20:30:00 Disk too full (usage 67%; inode 99%; thres 95%/95%); skipped 114 hosts
2019-03-19 20:30:13 Finished incr backup on aix01
2019-03-19 20:31:57 Started incr backup on servr002 (pid=26807, share=/)
2019-03-19 20:33:17 Finished incr backup on aix02
2019-03-19 20:33:34 Finished incr backup on servr101
2019-03-19 20:34:34 Finished incr backup on servr002
2019-03-19 21:00:01 Disk too full (usage 67%; inode 99%; thres 95%/95%); skipped 160 hosts
2019-03-19 21:00:01 Next wakeup is 2019-03-19 22:00:00
2019-03-19 21:28:18 Finished incr backup on servr301
2019-03-19 22:00:00 Next wakeup is 2019-03-19 23:00:00
2019-03-19 22:00:01 Started incr backup on orocitym (pid=3142, share=/)
2019-03-19 22:00:01 Started incr backup on servr301 (pid=3143, share=/)
2019-03-19 22:00:01 Started incr backup on servr401 (pid=3144, share=/)
2019-03-19 22:00:01 Started incr backup on servr571 (pid=3145, share=/)
2019-03-19 22:00:01 Started incr backup on servr203 (pid=3146, share=/)
...

What I believe is happening is that at the 20:00 wakeup the hosts were queued up to run, but when the inode count (erroneously) crossed the 95% threshold, all of the remaining queued backups got skipped.
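For anyone who wants to cross-check what BackupPC is seeing, the inode percentage it acts on comes from the same statistics that `df -i` reports for the pool filesystem. A minimal sketch of that check (GNU coreutils `df`; the mount point and the hard-coded 95% threshold here are assumptions matching my configuration, not BackupPC's actual code):

```shell
#!/bin/sh
# Hypothetical pool path -- substitute your own BackupPC top directory.
POOL=/mnt/backups/BackupPC
THRESH=95   # assumed to match the configured 95% threshold

# GNU df can emit just the inode-usage percentage for one filesystem.
ipcent=$(df --output=ipcent "$POOL" | tail -n 1 | tr -dc '0-9')

echo "inode usage on $POOL: ${ipcent}%"
if [ "$ipcent" -ge "$THRESH" ]; then
    echo "above threshold: BackupPC would skip queued hosts"
fi
```

If an NFS export is misreporting, comparing this number against `df -i` run locally on the storage server is a quick sanity check.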
The problem still persisted at 21:00, and all 160 hosts were queued and skipped. By 22:00 the problem had self-corrected, and the system queued and ran backups as expected. Backups that were already in progress ran to completion because we didn't actually run out of inodes on the backend storage.

Reporting as an FYI to let people know how BackupPC responds to some of the new threshold checking.

--
Ray Frush          "Either you are part of the solution
T:970.491.5527      or part of the precipitate."
-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-
Colorado State University | IS | System Administrator
_______________________________________________
BackupPC-users mailing list
BackupPC-users@lists.sourceforge.net
List: https://lists.sourceforge.net/lists/listinfo/backuppc-users
Wiki: http://backuppc.wiki.sourceforge.net
Project: http://backuppc.sourceforge.net/