Bill,
/backuppc/cpool/0/0/0 is just one of the 4096 3.x pool directories (each of
the last three directory levels is a single hex digit, 0-9 or a-f). So to see
the total storage remaining in the 3.x pool you should do this:
du -csh /backuppc/cpool/?/?/?
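To sanity-check that the glob really covers the whole tree, something like
this (untested, plain coreutils) should print 4096 if every 3.x pool
directory still exists:
ls -d /backuppc/cpool/?/?/? | wc -l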
I'm not sure why some of the 3.x files didn't get migrated. You could pick
one that has more than one link (eg: 2564f9012849e45bfa1f4fd47578
above) and find its inode:
ls -li /backuppc/cpool/0/0/0/2564f9012849e45bfa1f4fd47578
then look for other files that have that same inode (replace NNN with the
inode printed by ls -i):
find /backuppc -inum NNN -print
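If it's easier, the two steps can be rolled into one command; this is just an
untested sketch (bourne-style shell assumed, and FILE is whatever pool file
you picked):
FILE=/backuppc/cpool/0/0/0/2564f9012849e45bfa1f4fd47578
find /backuppc -inum "$(ls -i "$FILE" | awk '{print $1}')" -print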
But given your point about 0/0/0 being quite small, it's unlikely this can
explain 3TB of extra usage, and I suspect the du command above won't show
more than a few MB.
So another path is to use du to find which directories are so large. For
example:
du -hs /backuppc
If that number is reasonable, then it must be something outside /backuppc
that is using so much space.
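A quick way to tell whether the space is inside or outside /backuppc
(assuming /backuppc is its own mount point) is to compare what the filesystem
reports with what du sees:
df -h /backuppc
du -sh /backuppc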
Next:
du -hs /backuppc/cpool /backuppc/pool /backuppc/pc
Are any of those close to 3TB? If so, do the du inside those directories
to narrow things down.
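To narrow down without walking the tree by hand, something like this (GNU du
and sort assumed, untested) lists the largest subdirectories first, eg for
the pc tree:
du -h --max-depth=1 /backuppc/pc | sort -rh | head -20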
Is it possible your excludes aren't working after the 4.x transition? For
example, on some Linux systems /var/log/lastlog is a sparse file, and
backing it up will create a huge (regular) file.
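A quick check on the client (GNU ls) is to compare allocated blocks with the
apparent size; on a sparse file the first column will be far smaller than the
size column:
ls -lsh /var/log/lastlog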
You could also use find to look for single huge files, eg:
find /backuppc -size +1G -print
will list all files over 1G.
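If you also want the sizes printed, an untested variant is to hand the
matches to ls:
find /backuppc -size +1G -exec ls -lh {} +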
Craig
On Fri, Apr 7, 2017 at 1:08 AM, Bill Broadley wrote:
>
> On 04/05/2017 03:25 PM, higuita wrote:
> > Hi
> >
> > On Tue, 4 Apr 2017 23:04:44 -0700, Bill Broadley wrote:
> >> -rw-r- 1 backuppc backuppc 145 Feb 11 2015 00012f3df3fef9176f4a08f470d1f5e6
> >         ^
> >         This field is the number of hardlinks.
> > So if you have entries >1, then you still have backups pointing to the v3 pool.
>
> Odd.
>
> I ran the V3 to V4 migration script several times and it wasn't finding
> anything and was running quickly. I was worried that my filesystem was
> corrupt somehow; it had been up for 360-some days. I umounted and fsck'd,
> not a single complaint.
>
> root@node1:/backuppc/cpool/0/0/0# ls -al | awk '{ print $2 }' | grep -v "1" | wc -l
> 39
>
> I didn't have many with more than one link. The entire dir is small:
> root@node1:/backuppc/cpool/0/0/0# du -hs .
> 380K    .
>
> I see similar elsewhere:
> root@node1:/backuppc/cpool/8/8/8# ls -al | wc -l; ls -al | awk '{ print $2 }' | grep -v "1" | wc -l
> 59
> 31
>
> (59 files, 31 with links).
>
> I'm still seeing crazy disk usage: over 3TB used, but only about 650GB
> (the total from the host status "full size" column) is visible to backuppc.
>
> Keep in mind this happened with no changes to the server. 30 hosts backed
> up for a week or so, then suddenly much more disk is used. No host has
> larger backups, just a factor of 6 larger pool one night.
>
> I was using the v3 to v4 migration script from git since it wasn't in the
> release package yet (that's been fixed).
>
> I upgraded to backuppc 4.1.1, and the current versions of backuppc-xs and
> rsync-bpc. I ran the V3 to V4 migration script (now included in the release)
> again and it's doing some serious chewing (unlike before). It used to just
> fly through them all with "refCnt directory; skipping this backup".
>
> So maybe this will fix it.