Re: empty disk reports full
On Mon, May 9, 2016 at 4:56 AM, Alejandro Vargas wrote:
> On Tuesday, April 26, 2016 at 00:08:49, Chris Murphy wrote:
>> On Mon, Apr 25, 2016 at 8:03 AM, Alejandro Vargas wrote:
>>
>> I suggest unmounting and running 'btrfs check' (without repair) and
>> see if that gives any new information.
>
> I tried btrfs check but... see the result:
>
> # btrfs check /dev/sdb1
> Checking filesystem on /dev/sdb1
> UUID: cbfe8735-9f53-46f5-be7e-40f6a61a5506
> checking extents
> Killed
>
> I tried it several times with the same result.

If this is btrfs-progs 4.5.2, it's worth filing a bug. You can trivially use
'strace btrfs check /dev/sdb1' and attach the entire output to the bug report
as a file (pasting it into the bug will be messy).

More advanced would be to use something like valgrind on it, but only a dev
would be able to tell you if it's helpful, I can't:

valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes -v btrfs check /dev/sdb1

If it's not progs v4.5.2, then I suggest upgrading and seeing if the problem
still happens.

--
Chris Murphy
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: empty disk reports full
On Tuesday, April 26, 2016 at 00:08:49, Chris Murphy wrote:
> On Mon, Apr 25, 2016 at 8:03 AM, Alejandro Vargas wrote:
> I suggest unmounting and running 'btrfs check' (without repair) and
> see if that gives any new information.

I tried btrfs check but... see the result:

# btrfs check /dev/sdb1
Checking filesystem on /dev/sdb1
UUID: cbfe8735-9f53-46f5-be7e-40f6a61a5506
checking extents
Killed

I tried it several times with the same result.
Re: empty disk reports full
Alejandro Vargas posted on Wed, 27 Apr 2016 11:29:31 +0200 as excerpted:

>> Also there are two compress mount options that conflict with each
>> other, is this intentional?
>
> I did not think that compress and compress-force were incompatible...
> The intention is to force it to compress the data to use less disk
> space. Should compress-force be enough?

Yes. Compress-force simply forces the compression instead of first
quick-testing whether the file seems easily/effectively compressible
(tho I think it still tests the compressed block size and stores the
block uncompressed if the "compressed" block is actually larger; I
don't believe it forces "compression" in /that/ case). It will result
in better compression when the first 4k of a file (I believe that's
what the quick-test checks) doesn't compress well but much of the rest
will.

The problem with having both compress and compress-force in the mount
options is that I believe it's order-dependent which one ends up being
applied, and unless you're a mount-options guru, it's hard to remember
whether it's the first or the last one that gets applied, or whether
one is a special case that overrules the other. So it's best to use
just the option you want; listing both only confuses anyone else
trying to make sense of things, even if you yourself know which one
gets applied.

--
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman
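[Editorial illustration, not part of the original thread: following Duncan's advice to keep only one of the two options, the fstab line quoted elsewhere in the thread would become something like the sketch below; the label and the other options are carried over from Alejandro's original line, not a recommendation.]

```
LABEL=disco_backup  /mnt/backup  btrfs  noauto,compress-force=zlib,commit=60,noatime  0 0
```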
Re: empty disk reports full
On Tuesday, April 26, 2016 at 00:08:49, Chris Murphy wrote:
>> [root@backups ~]# btrfs fi df /mnt/backup
>> Data, single: total=1.79TiB, used=1.78TiB
>> System, DUP: total=32.00MiB, used=240.00KiB
>> Metadata, DUP: total=17.00GiB, used=15.55GiB
>> GlobalReserve, single: total=512.00MiB, used=37.72MiB
>
> This is an awfully full filesystem. Since ancient times it's been
> considered best to avoid getting a file system even 95% full, let
> alone 100% full.

Hmmm... the problem is that on a big filesystem, 5% is a lot of space...
I will modify my scripts to leave a percentage of space instead of a
fixed size in bytes.

> Surely you've waited a good long while for it to try to start deleting
> things,

My backup script checks the available space and deletes the oldest
snapshot when the free space is less than 100 GB.

What should be the calculation that tells me I am nearly running out of
space and need to remove the oldest snapshot? Is the answer of "df"
enough, or should I do some calculation including data and metadata?

>> [root@backups ~]# cat /etc/fstab |grep backup
>> LABEL=disco_backup /mnt/backup btrfs noauto,compress=zlib,compress-
>> force=zlib,commit=60,noatime 0 0
>
> When I delete subvolumes, I see it takes up to the commit time for the
> delete transaction to be committed, and it can be longer than this by
> up to a minute before the btrfs-cleaner process starts to work on
> freeing up extents. It's probably unrelated to the problem, but what's
> the use case for choosing a commit time of 60?

The intention was to improve the speed. The wiki says the default is 30
seconds and that a warning is printed when the value is above 300
seconds. So I thought 60 should be a good value for speeding up the
writing. Do you think I should use a lower value?

Maybe this?

btrfs fi usage /mnt/backup -b | awk '
  {
    if ($1 " " $2 == "Device size:")      size  = $3;
    if ($1 " " $2 == "Device allocated:") alloc = $3;
  }
  END { print alloc * 100 / size }
'

> Also there are two
> compress mount options that conflict with each other, is this
> intentional?

I did not think that compress and compress-force were incompatible...
The intention is to force it to compress the data to use less disk
space. Should compress-force be enough?
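[Editorial illustration, not part of the original thread: a rough POSIX-shell sketch of the percentage-based check Alejandro describes. The 95% threshold and the example byte counts are assumptions for illustration only; in real use the two numbers would come from parsing `btrfs fi usage -b`, e.g. with the awk one-liner quoted above.]

```shell
#!/bin/sh
# Sketch of a percentage-based "time to rotate?" check, as an alternative
# to a fixed 100 GB free-space floor. Threshold and byte counts below are
# assumed example values, not from the thread.

THRESHOLD=95   # assumed rotation threshold, percent of device allocated

# Integer percent of the device already allocated.
#   $1 = device size in bytes, $2 = allocated bytes
alloc_percent() {
    echo $(( $2 * 100 / $1 ))
}

# In real use these two numbers would come from parsing
# 'btrfs fi usage -b /mnt/backup' (e.g. with the awk one-liner upthread).
size=2000398934016     # example: ~1.82 TiB device
alloc=1981000000000    # example: bytes already allocated

pct=$(alloc_percent "$size" "$alloc")
echo "allocated: ${pct}%"

if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "would delete the oldest snapshot here"
fi
```

Working on allocated (chunk) bytes rather than df's free-space guess sidesteps the data/metadata split that makes df misleading on btrfs.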
Re: empty disk reports full
On Mon, Apr 25, 2016 at 8:03 AM, Alejandro Vargas wrote:
> On Friday, April 1, 2016 at 10:05:07, Hugo Mills wrote:
>> On Fri, Apr 01, 2016 at 11:50:50AM +0200, Alejandro Vargas wrote:
>> > I am using a 2 TB disk for incremental backups.
>> >
>> > I use rsync to back up to a subvolume, and each day I create a
>> > snapshot of the latest snapshot and rsync into it.
>> >
>> > When the disk becomes nearly full (100 GB or less available) I
>> > delete the oldest subvolume (with btrfs subvolume delete).
>> >
>> > My problem is that *even after removing ALL the subvolumes*, the
>> > free space does not change. It keeps reporting the same usage
>> > (disk is nearly full).
>> >
>> > I tried "btrfs balance start /mnt/backup" but it takes hours and
>> > hours.
>> >
>> > I'm using linux 4.1.15
>> > btrfs-progs v4.1.2
>>
>> Can you show us the output of both "sudo btrfs fi show" and "btrfs
>> fi df /mnt/backup", please?
>
> Before deleting subvolumes:
>
> [root@backups ~]# df /mnt/backup
> S.ficheros Tamaño Usados Disp Uso% Montado en
> /dev/sdb1    1,9T   1,9T 5,0M 100% /mnt/backup
>
> [root@backups ~]# ls -l /mnt/backup
> total 0
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160318/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160328/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160330/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160401/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160404/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160406/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160408/
>
> [root@backups ~]# btrfs fi show
> Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
>         Total devices 1 FS bytes used 1.80TiB
>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1
>
> btrfs-progs v4.1.2
>
> [root@backups ~]# btrfs fi df /mnt/backup
> Data, single: total=1.79TiB, used=1.79TiB
> System, DUP: total=32.00MiB, used=240.00KiB
> Metadata, DUP: total=17.00GiB, used=15.83GiB
> GlobalReserve, single: total=512.00MiB, used=0.00B
>
> Now I remove the oldest subvolume:
>
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160318/
> Delete subvolume (no-commit): '/mnt/backup/back20160318'
>
> [root@backups ~]# df /mnt/backup
> S.ficheros Tamaño Usados Disp Uso% Montado en
> /dev/sdb1    1,9T   1,9T  22M 100% /mnt/backup
>
> [root@backups ~]# btrfs fi show
> Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
>         Total devices 1 FS bytes used 1.80TiB
>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1
>
> [root@backups ~]# btrfs fi show
> Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
>         Total devices 1 FS bytes used 1.80TiB
>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1
>
> btrfs-progs v4.1.2
>
> [root@backups ~]# btrfs fi df /mnt/backup
> Data, single: total=1.79TiB, used=1.79TiB
> System, DUP: total=32.00MiB, used=240.00KiB
> Metadata, DUP: total=17.00GiB, used=15.83GiB
> GlobalReserve, single: total=512.00MiB, used=102.53MiB
>
> Now I remove 2 more subvolumes:
>
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160328/
> Delete subvolume (no-commit): '/mnt/backup/back20160328'
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160330/
> Delete subvolume (no-commit): '/mnt/backup/back20160330'
>
> [root@backups ~]# df /mnt/backup/
> S.ficheros Tamaño Usados Disp Uso% Montado en
> /dev/sdb1    1,9T   1,9T 348M 100% /mnt/backup
>
> [root@backups ~]# btrfs fi show
> Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
>         Total devices 1 FS bytes used 1.80TiB
>         devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1
>
> btrfs-progs v4.1.2
>
> Data, single: total=1.79TiB, used=1.79TiB
> System, DUP: total=32.00MiB, used=240.00KiB
> Metadata, DUP: total=17.00GiB, used=15.83GiB
> GlobalReserve, single: total=512.00MiB, used=98.94MiB
>
> [root@backups ~]# ls -l /mnt/backup/
> total 0
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160401/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160404/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160406/
> drwxr-xr-x 1 root root 86 mar 20 16:23 back20160408/
>
> Now I will remove the remaining subvolumes:
>
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160401/
> Delete subvolume (no-commit): '/mnt/backup/back20160401'
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160404/
> Delete subvolume (no-commit): '/mnt/backup/back20160404'
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160406/
> Delete subvolume (no-commit): '/mnt/backup/back20160406'
> [root@backups ~]# btrfs subvolume delete /mnt/backup/back20160408/
> Delete subvolume (no-commit): '/mnt/backup/back20160408'
>
> [root@backups ~]# ls -l /mnt/backup/
> total 0
>
> [root@backups ~]# df /mnt/backup/
> S.ficheros Tamaño Usados Disp Uso% Montado en
> /dev/sdb1    1,9T   1,9T 4,6G 100% /mnt/backup
>
> [root@backups ~]# btrfs fi show
> Label: 'disco_backup'
Re: empty disk reports full
On Friday, April 1, 2016 at 10:05:07, Hugo Mills wrote:
> On Fri, Apr 01, 2016 at 11:50:50AM +0200, Alejandro Vargas wrote:
> > I am using a 2 TB disk for incremental backups.
> >
> > I use rsync to back up to a subvolume, and each day I create a
> > snapshot of the latest snapshot and rsync into it.
> >
> > When the disk becomes nearly full (100 GB or less available) I
> > delete the oldest subvolume (with btrfs subvolume delete).
> >
> > My problem is that *even after removing ALL the subvolumes*, the
> > free space does not change. It keeps reporting the same usage
> > (disk is nearly full).
> >
> > I tried "btrfs balance start /mnt/backup" but it takes hours and
> > hours.
> >
> > I'm using linux 4.1.15
> > btrfs-progs v4.1.2
>
> Can you show us the output of both "sudo btrfs fi show" and "btrfs
> fi df /mnt/backup", please?

Before deleting subvolumes:

[root@backups ~]# df /mnt/backup
S.ficheros Tamaño Usados Disp Uso% Montado en
/dev/sdb1    1,9T   1,9T 5,0M 100% /mnt/backup

[root@backups ~]# ls -l /mnt/backup
total 0
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160318/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160328/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160330/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160401/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160404/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160406/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160408/

[root@backups ~]# btrfs fi show
Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
        Total devices 1 FS bytes used 1.80TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1

btrfs-progs v4.1.2

[root@backups ~]# btrfs fi df /mnt/backup
Data, single: total=1.79TiB, used=1.79TiB
System, DUP: total=32.00MiB, used=240.00KiB
Metadata, DUP: total=17.00GiB, used=15.83GiB
GlobalReserve, single: total=512.00MiB, used=0.00B

Now I remove the oldest subvolume:

[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160318/
Delete subvolume (no-commit): '/mnt/backup/back20160318'

[root@backups ~]# df /mnt/backup
S.ficheros Tamaño Usados Disp Uso% Montado en
/dev/sdb1    1,9T   1,9T  22M 100% /mnt/backup

[root@backups ~]# btrfs fi show
Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
        Total devices 1 FS bytes used 1.80TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1

[root@backups ~]# btrfs fi show
Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
        Total devices 1 FS bytes used 1.80TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1

btrfs-progs v4.1.2

[root@backups ~]# btrfs fi df /mnt/backup
Data, single: total=1.79TiB, used=1.79TiB
System, DUP: total=32.00MiB, used=240.00KiB
Metadata, DUP: total=17.00GiB, used=15.83GiB
GlobalReserve, single: total=512.00MiB, used=102.53MiB

Now I remove 2 more subvolumes:

[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160328/
Delete subvolume (no-commit): '/mnt/backup/back20160328'
[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160330/
Delete subvolume (no-commit): '/mnt/backup/back20160330'

[root@backups ~]# df /mnt/backup/
S.ficheros Tamaño Usados Disp Uso% Montado en
/dev/sdb1    1,9T   1,9T 348M 100% /mnt/backup

[root@backups ~]# btrfs fi show
Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
        Total devices 1 FS bytes used 1.80TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1

btrfs-progs v4.1.2

Data, single: total=1.79TiB, used=1.79TiB
System, DUP: total=32.00MiB, used=240.00KiB
Metadata, DUP: total=17.00GiB, used=15.83GiB
GlobalReserve, single: total=512.00MiB, used=98.94MiB

[root@backups ~]# ls -l /mnt/backup/
total 0
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160401/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160404/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160406/
drwxr-xr-x 1 root root 86 mar 20 16:23 back20160408/

Now I will remove the remaining subvolumes:

[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160401/
Delete subvolume (no-commit): '/mnt/backup/back20160401'
[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160404/
Delete subvolume (no-commit): '/mnt/backup/back20160404'
[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160406/
Delete subvolume (no-commit): '/mnt/backup/back20160406'
[root@backups ~]# btrfs subvolume delete /mnt/backup/back20160408/
Delete subvolume (no-commit): '/mnt/backup/back20160408'

[root@backups ~]# ls -l /mnt/backup/
total 0

[root@backups ~]# df /mnt/backup/
S.ficheros Tamaño Usados Disp Uso% Montado en
/dev/sdb1    1,9T   1,9T 4,6G 100% /mnt/backup

[root@backups ~]# btrfs fi show
Label: 'disco_backup'  uuid: cbfe8735-9f53-46f5-be7e-40f6a61a5506
        Total devices 1 FS bytes used 1.80TiB
        devid    1 size 1.82TiB used 1.82TiB path /dev/sdb1

btrfs-progs v4.1.2

[root@backups ~]# btrfs fi df /mnt/backup
Data, single: total=1.79TiB, used=1.78TiB
System, DUP: total=32.00MiB,
Re: empty disk reports full
On Fri, Apr 01, 2016 at 11:50:50AM +0200, Alejandro Vargas wrote:
> I am using a 2 TB disk for incremental backups.
>
> I use rsync to back up to a subvolume, and each day I create a
> snapshot of the latest snapshot and rsync into it.
>
> When the disk becomes nearly full (100 GB or less available) I delete
> the oldest subvolume (with btrfs subvolume delete).
>
> My problem is that *even after removing ALL the subvolumes*, the free
> space does not change. It keeps reporting the same usage (disk is
> nearly full).
>
> I tried "btrfs balance start /mnt/backup" but it takes hours and hours.
>
> I'm using linux 4.1.15
> btrfs-progs v4.1.2

Can you show us the output of both "sudo btrfs fi show" and "btrfs
fi df /mnt/backup", please?

   Hugo.

--
Hugo Mills             | The Creature from the Black Logon
hugo@... carfax.org.uk | http://carfax.org.uk/ | PGP: E2AB1DE4