Re: Cannot balance FS (No space left on device)

2016-07-04 Thread ojab //
On Sat, Jul 2, 2016 at 7:03 PM, Chris Murphy  wrote:
> On Sat, Jul 2, 2016 at 9:07 AM, Hans van Kranenburg
>  wrote:
>
>>
>> Also, the behaviour of *always* creating a new empty block group before
>> starting to work (which makes it impossible to free up space on a fully
>> allocated filesystem with balance) got reverted in:
>>
>> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=cf25ce518e8ef9d59b292e51193bed2b023a32da
>>
>> This patch is in 4.5 and 4.7-rc, but *not* in 4.6.
>
> Upstream it first appears in 4.5.7.
>
> --
> Chris Murphy
And it looks like this patch also fixed my `balance` issue, yay. Thanks.

//wbr ojab


Re: Cannot balance FS (No space left on device)

2016-07-02 Thread Chris Murphy
On Sat, Jul 2, 2016 at 9:07 AM, Hans van Kranenburg
 wrote:

>
> Also, the behaviour of *always* creating a new empty block group before
> starting to work (which makes it impossible to free up space on a fully
> allocated filesystem with balance) got reverted in:
>
> https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=cf25ce518e8ef9d59b292e51193bed2b023a32da
>
> This patch is in 4.5 and 4.7-rc, but *not* in 4.6.

Upstream it first appears in 4.5.7.

-- 
Chris Murphy


Re: Cannot balance FS (No space left on device)

2016-07-02 Thread Hans van Kranenburg

On 06/13/2016 02:33 PM, Austin S. Hemmelgarn wrote:

On 2016-06-10 18:39, Hans van Kranenburg wrote:

On 06/11/2016 12:10 AM, ojab // wrote:

On Fri, Jun 10, 2016 at 9:56 PM, Hans van Kranenburg
 wrote:

You can work around it by either adding two disks (like Henk said),
or by
temporarily converting some chunks to single. Just enough to get some
free
space on the first two disks to get a balance going that can fill the
third
one. You don't have to convert all of your data or metadata to single!

Something like:

btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/


Unfortunately it fails even if I set limit=1:

$ sudo btrfs balance start -v -dconvert=single,limit=1 /mnt/xxx/
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x120): converting, target=281474976710656, soft is off, limit=1
ERROR: error during balancing '/mnt/xxx/': No space left on device
There may be more info in syslog - try dmesg | tail


Ah, apparently the balance operation *always* wants to allocate some new
empty space before starting to look more closely at the task you give it...

No, that's not exactly true.  It seems to be a rather common fallacy
right now that balance repacks data into existing chunks, which is
absolutely false.  What a balance does is to send everything selected by
the filters through the allocator again, and specifically prevent any
existing chunks from being used to satisfy the allocation.  When you
have 5 data chunks that are 20% used and run 'balance -dlimit=20', it
doesn't pack that all into the first chunk, it allocates a new chunk,
and then packs it all into that, then frees all the other chunks.  This
behavior is actually a pretty important property when adding or removing
devices or converting between profiles, because it's what forces things
into the new configuration of the filesystem.

In an ideal situation, the limit filters should make it repack into
existing chunks when specified alone, but currently that's not how it
works, and I kind of doubt that that will ever be how it works.


I have to disagree with you here, based on what I see happening. Two 
examples will follow, providing some pudding for the proof.


Also, the behaviour of *always* creating a new empty block group before 
starting to work (which makes it impossible to free up space on a fully 
allocated filesystem with balance) got reverted in:


https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git/commit/?id=cf25ce518e8ef9d59b292e51193bed2b023a32da

This patch is in 4.5 and 4.7-rc, but *not* in 4.6.

Script used to provide block group output, using python-btrfs:

-# cat show_block_groups.py
#!/usr/bin/python

from __future__ import print_function
import btrfs
import sys

fs = btrfs.FileSystem(sys.argv[1])
for chunk in fs.chunks():
    print(fs.block_group(chunk.vaddr, chunk.length))

Example 1:

-# uname -a
Linux ichiban 4.5.0-0.bpo.2-amd64 #1 SMP Debian 4.5.4-1~bpo8+1 (2016-05-13) x86_64 GNU/Linux


-# ./show_block_groups.py /
block group vaddr 86211821568 length 1073741824 flags DATA used 83712 used_pct 78
block group vaddr 87285563392 length 33554432 flags SYSTEM used 16384 used_pct 0
block group vaddr 87319117824 length 1073741824 flags DATA used 1070030848 used_pct 100
block group vaddr 88392859648 length 1073741824 flags DATA used 1057267712 used_pct 98
block group vaddr 89466601472 length 1073741824 flags DATA used 1066360832 used_pct 99
block group vaddr 90540343296 length 268435456 flags METADATA used 238256128 used_pct 89
block group vaddr 90808778752 length 268435456 flags METADATA used 226082816 used_pct 84
block group vaddr 91077214208 length 268435456 flags METADATA used 242548736 used_pct 90
block group vaddr 91345649664 length 268435456 flags METADATA used 218415104 used_pct 81
block group vaddr 91614085120 length 268435456 flags METADATA used 223723520 used_pct 83
block group vaddr 91882520576 length 268435456 flags METADATA used 68272128 used_pct 25
block group vaddr 92150956032 length 1073741824 flags DATA used 1048154112 used_pct 98
block group vaddr 93224697856 length 1073741824 flags DATA used 800985088 used_pct 75
block group vaddr 94298439680 length 1073741824 flags DATA used 62197760 used_pct 6
block group vaddr 95372181504 length 1073741824 flags DATA used 49541120 used_pct 5
block group vaddr 96445923328 length 1073741824 flags DATA used 142856192 used_pct 13
block group vaddr 97519665152 length 1073741824 flags DATA used 102051840 used_pct 10


Now do a balance, to remove the least used block group:

1st terminal:
-# watch -d './show_block_groups.py /'

2nd terminal:
-# btrfs balance start -v -dusage=5 /
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x2): balancing, usage=5
Done, had to relocate 1 out of 17 chunks

After:

-# ./show_block_groups.py /
block group vaddr 86211821568 length 1073741824 flags DATA used 83712 used_pct 78
block group vaddr 87285563392 length 33554432 flags SYSTEM used 16384 used_pct 0
block group va

Re: Cannot balance FS (No space left on device)

2016-06-15 Thread ojab //
On Wed, Jun 15, 2016 at 12:41 PM, E V  wrote:
> In my experience phantom ENOSPC messages are frequently due to the
> free space cache being corrupt. Mounting with nospace_cache or
> space_cache=v2 may help.

Unfortunately this is not the case.

//wbr ojab


Re: Cannot balance FS (No space left on device)

2016-06-15 Thread E V
In my experience phantom ENOSPC messages are frequently due to the
free space cache being corrupt. Mounting with nospace_cache or
space_cache=v2 may help.
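
Spelled out, that suggestion amounts to something like this (a hedged
sketch; the device and mountpoint here are examples, not taken from this
thread):

  # remount without any free space cache
  mount -o remount,nospace_cache /mnt/xxx
  # or drop the v1 cache and switch to the v2 free space tree (kernel >= 4.5)
  umount /mnt/xxx
  mount -o clear_cache,space_cache=v2 /dev/sdc1 /mnt/xxx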

On Wed, Jun 15, 2016 at 6:59 AM, ojab //  wrote:
> On Fri, Jun 10, 2016 at 2:58 PM, ojab //  wrote:
>> [Please CC me since I'm not subscribed to the list]
>
> So I'm still playing w/ btrfs, and I'm again hitting 'No space left on
> device' during balance:
>>$ sudo /usr/bin/btrfs balance start --full-balance /mnt/xxx/
>>ERROR: error during balancing '/mnt/xxx/': No space left on device
>>There may be more info in syslog - try dmesg | tail
>>$ sudo dmesg -T  | grep BTRFS | tail
>>[Wed Jun 15 10:28:53 2016] BTRFS info (device sdc1): relocating block group 13043037372416 flags 9
>>[Wed Jun 15 10:28:53 2016] BTRFS info (device sdc1): relocating block group 13041963630592 flags 20
>>[Wed Jun 15 10:29:54 2016] BTRFS info (device sdc1): found 25155 extents
>>[Wed Jun 15 10:29:54 2016] BTRFS info (device sdc1): relocating block group 13040889888768 flags 20
>>[Wed Jun 15 10:30:50 2016] BTRFS info (device sdc1): found 63700 extents
>>[Wed Jun 15 10:30:50 2016] BTRFS info (device sdc1): relocating block group 13040856334336 flags 18
>>[Wed Jun 15 10:30:51 2016] BTRFS info (device sdc1): found 9 extents
>>[Wed Jun 15 10:30:52 2016] BTRFS info (device sdc1): relocating block group 13039782592512 flags 20
>>[Wed Jun 15 10:32:08 2016] BTRFS info (device sdc1): found 61931 extents
>>[Wed Jun 15 10:32:08 2016] BTRFS info (device sdc1): 896 enospc errors during balance
>>$ sudo /usr/bin/btrfs balance start -dusage=75 /mnt/xxx/
>>Done, had to relocate 1 out of 901 chunks
>>$ sudo /usr/bin/btrfs balance start -dusage=76 /mnt/xxx/
>>ERROR: error during balancing '/mnt/xxx/': No space left on device
>>There may be more info in syslog - try dmesg | tail
>>$ sudo /usr/bin/btrfs fi usage /mnt/xxx/
>>Overall:
>>Device size:   1.98TiB
>>Device allocated:  1.85TiB
>>Device unallocated:135.06GiB
>>Device missing:0.00B
>>Used:  1.85TiB
>>Free (estimated):  135.68GiB  (min: 68.15GiB)
>>Data ratio:1.00
>>Metadata ratio:2.00
>>Global reserve:512.00MiB  (used: 0.00B)
>>
>>Data,RAID0: Size:1.84TiB, Used:1.84TiB
>>   /dev/sdb1   895.27GiB
>>   /dev/sdc1   895.27GiB
>>   /dev/sdd1   37.27GiB
>>   /dev/sdd2   37.27GiB
>>   /dev/sde1   11.27GiB
>>   /dev/sde2   11.27GiB
>>
>>Metadata,RAID1: Size:4.00GiB, Used:2.21GiB
>>   /dev/sdb1   2.00GiB
>>   /dev/sdc1   2.00GiB
>>   /dev/sde1   2.00GiB
>>   /dev/sde2   2.00GiB
>>
>>System,RAID1: Size:32.00MiB, Used:160.00KiB
>>   /dev/sde1   32.00MiB
>>   /dev/sde2   32.00MiB
>>
>>Unallocated:
>>   /dev/sdb1  34.25GiB
>>   /dev/sdc1  34.25GiB
>>   /dev/sdd1  1.11MiB
>>   /dev/sdd2  1.05MiB
>>   /dev/sde1  33.28GiB
>>   /dev/sde2  33.28GiB
>>$ sudo /usr/bin/btrfs fi show /mnt/xxx/
>>Label: none  uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
>>   Total devices 6 FS bytes used 1.84TiB
>>   devid 1 size 931.51GiB used 897.27GiB path /dev/sdc1
>>   devid 2 size 931.51GiB used 897.27GiB path /dev/sdb1
>>   devid 3 size 37.27GiB used 37.27GiB path /dev/sdd1
>>   devid 4 size 37.27GiB used 37.27GiB path /dev/sdd2
>>   devid 5 size 46.58GiB used 13.30GiB path /dev/sde1
>>   devid 6 size 46.58GiB used 13.30GiB path /dev/sde2
>
> show_usage.py output can be found here:
> https://gist.github.com/ojab/a24ce373ce5bede001140c572879fce8
>
> Balance always fails with '896 enospc errors during balance' message
> in dmesg. I don't quite understand the logic: there is plenty of
> space on four of the devices, so why is `btrfs` apparently trying to
> use the sdd[1-2] drives? Is it a bug or intended behaviour?
> What is the proper way of fixing such an issue in general, adding more
> devices and rebalancing? How can I determine how many devices should
> be added and their capacity?
>
> I'm still on 4.6.2 vanilla kernel and using btrfs-progs-4.6.
>
> //wbr ojab


Re: Cannot balance FS (No space left on device)

2016-06-15 Thread ojab //
On Fri, Jun 10, 2016 at 2:58 PM, ojab //  wrote:
> [Please CC me since I'm not subscribed to the list]

So I'm still playing w/ btrfs, and I'm again hitting 'No space left on
device' during balance:
>$ sudo /usr/bin/btrfs balance start --full-balance /mnt/xxx/
>ERROR: error during balancing '/mnt/xxx/': No space left on device
>There may be more info in syslog - try dmesg | tail
>$ sudo dmesg -T  | grep BTRFS | tail
>[Wed Jun 15 10:28:53 2016] BTRFS info (device sdc1): relocating block group 13043037372416 flags 9
>[Wed Jun 15 10:28:53 2016] BTRFS info (device sdc1): relocating block group 13041963630592 flags 20
>[Wed Jun 15 10:29:54 2016] BTRFS info (device sdc1): found 25155 extents
>[Wed Jun 15 10:29:54 2016] BTRFS info (device sdc1): relocating block group 13040889888768 flags 20
>[Wed Jun 15 10:30:50 2016] BTRFS info (device sdc1): found 63700 extents
>[Wed Jun 15 10:30:50 2016] BTRFS info (device sdc1): relocating block group 13040856334336 flags 18
>[Wed Jun 15 10:30:51 2016] BTRFS info (device sdc1): found 9 extents
>[Wed Jun 15 10:30:52 2016] BTRFS info (device sdc1): relocating block group 13039782592512 flags 20
>[Wed Jun 15 10:32:08 2016] BTRFS info (device sdc1): found 61931 extents
>[Wed Jun 15 10:32:08 2016] BTRFS info (device sdc1): 896 enospc errors during balance
>$ sudo /usr/bin/btrfs balance start -dusage=75 /mnt/xxx/
>Done, had to relocate 1 out of 901 chunks
>$ sudo /usr/bin/btrfs balance start -dusage=76 /mnt/xxx/
>ERROR: error during balancing '/mnt/xxx/': No space left on device
>There may be more info in syslog - try dmesg | tail
>$ sudo /usr/bin/btrfs fi usage /mnt/xxx/
>Overall:
>Device size:   1.98TiB
>Device allocated:  1.85TiB
>Device unallocated:135.06GiB
>Device missing:0.00B
>Used:  1.85TiB
>Free (estimated):  135.68GiB  (min: 68.15GiB)
>Data ratio:1.00
>Metadata ratio:2.00
>Global reserve:512.00MiB  (used: 0.00B)
>
>Data,RAID0: Size:1.84TiB, Used:1.84TiB
>   /dev/sdb1   895.27GiB
>   /dev/sdc1   895.27GiB
>   /dev/sdd1   37.27GiB
>   /dev/sdd2   37.27GiB
>   /dev/sde1   11.27GiB
>   /dev/sde2   11.27GiB
>
>Metadata,RAID1: Size:4.00GiB, Used:2.21GiB
>   /dev/sdb1   2.00GiB
>   /dev/sdc1   2.00GiB
>   /dev/sde1   2.00GiB
>   /dev/sde2   2.00GiB
>
>System,RAID1: Size:32.00MiB, Used:160.00KiB
>   /dev/sde1   32.00MiB
>   /dev/sde2   32.00MiB
>
>Unallocated:
>   /dev/sdb1  34.25GiB
>   /dev/sdc1  34.25GiB
>   /dev/sdd1  1.11MiB
>   /dev/sdd2  1.05MiB
>   /dev/sde1  33.28GiB
>   /dev/sde2  33.28GiB
>$ sudo /usr/bin/btrfs fi show /mnt/xxx/
>Label: none  uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
>   Total devices 6 FS bytes used 1.84TiB
>   devid 1 size 931.51GiB used 897.27GiB path /dev/sdc1
>   devid 2 size 931.51GiB used 897.27GiB path /dev/sdb1
>   devid 3 size 37.27GiB used 37.27GiB path /dev/sdd1
>   devid 4 size 37.27GiB used 37.27GiB path /dev/sdd2
>   devid 5 size 46.58GiB used 13.30GiB path /dev/sde1
>   devid 6 size 46.58GiB used 13.30GiB path /dev/sde2

show_usage.py output can be found here:
https://gist.github.com/ojab/a24ce373ce5bede001140c572879fce8

Balance always fails with '896 enospc errors during balance' message
in dmesg. I don't quite understand the logic: there is plenty of
space on four of the devices, so why is `btrfs` apparently trying to
use the sdd[1-2] drives? Is it a bug or intended behaviour?
What is the proper way of fixing such an issue in general, adding more
devices and rebalancing? How can I determine how many devices should
be added and their capacity?

I'm still on 4.6.2 vanilla kernel and using btrfs-progs-4.6.

//wbr ojab


Re: Cannot balance FS (No space left on device)

2016-06-13 Thread Austin S. Hemmelgarn

On 2016-06-10 18:39, Hans van Kranenburg wrote:

On 06/11/2016 12:10 AM, ojab // wrote:

On Fri, Jun 10, 2016 at 9:56 PM, Hans van Kranenburg
 wrote:

You can work around it by either adding two disks (like Henk said),
or by
temporarily converting some chunks to single. Just enough to get some
free
space on the first two disks to get a balance going that can fill the
third
one. You don't have to convert all of your data or metadata to single!

Something like:

btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/


Unfortunately it fails even if I set limit=1:

$ sudo btrfs balance start -v -dconvert=single,limit=1 /mnt/xxx/
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x120): converting, target=281474976710656, soft is off, limit=1
ERROR: error during balancing '/mnt/xxx/': No space left on device
There may be more info in syslog - try dmesg | tail


Ah, apparently the balance operation *always* wants to allocate some new
empty space before starting to look more closely at the task you give it...
No, that's not exactly true.  It seems to be a rather common fallacy 
right now that balance repacks data into existing chunks, which is 
absolutely false.  What a balance does is to send everything selected by 
the filters through the allocator again, and specifically prevent any 
existing chunks from being used to satisfy the allocation.  When you 
have 5 data chunks that are 20% used and run 'balance -dlimit=20', it 
doesn't pack that all into the first chunk, it allocates a new chunk, 
and then packs it all into that, then frees all the other chunks.  This 
behavior is actually a pretty important property when adding or removing 
devices or converting between profiles, because it's what forces things 
into the new configuration of the filesystem.


In an ideal situation, the limit filters should make it repack into 
existing chunks when specified alone, but currently that's not how it 
works, and I kind of doubt that that will ever be how it works.
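
This is easy to observe on a throwaway filesystem. A hedged sketch using
loop devices (all paths, sizes and loop device names below are made up
for illustration, untested here):

  truncate -s 3G /tmp/img-a /tmp/img-b
  losetup -f --show /tmp/img-a          # prints e.g. /dev/loop0
  losetup -f --show /tmp/img-b          # prints e.g. /dev/loop1
  mkfs.btrfs -d raid0 -m raid1 /dev/loop0 /dev/loop1
  mkdir -p /mnt/test && mount /dev/loop0 /mnt/test
  # write some files and delete every other one to get partially used chunks
  for i in $(seq 8); do dd if=/dev/zero of=/mnt/test/f$i bs=1M count=256; done
  rm /mnt/test/f2 /mnt/test/f4 /mnt/test/f6 /mnt/test/f8; sync
  btrfs fi usage /mnt/test              # note the allocated totals
  btrfs balance start -dusage=60 /mnt/test
  btrfs fi usage /mnt/test              # fewer chunks allocated afterwards

Watching btrfs fi usage from a second terminal during the balance shows
the allocated space going up first: the selected data is packed into a
newly allocated chunk, not into the existing ones.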



Re: Cannot balance FS (No space left on device)

2016-06-12 Thread ojab //
On Fri, Jun 10, 2016 at 9:00 PM, Henk Slager  wrote:
> I have seldom seen an fs so full, very regular numbers :)
>
> But can you provide the output of this script:
> https://github.com/knorrie/btrfs-heatmap/blob/master/show_usage.py
>
> It gives better info w.r.t. devices and it is then easier to say what
> has to be done.
>
> But you have btrfs raid0 data (2 stripes) and raid1 metadata, and they
> both want 2 devices currently, and there is only one device with room
> for your 2G chunks. So in theory you need 2 empty devices added for a
> balance to succeed. If you can allow reduced redundancy for some time,
> you could shrink the fs used space on hdd1 to half, do the same for the
> partition itself, add an hdd2 partition and add that to the fs. Or
> just add another HDD.
> Then your 50GB of deletions could take effect if you start
> balancing. Also have a look at the balance stripe filters, I would say.

So after adding another [100GB] disk I successfully ran `btrfs
balance` and then removed the new disks again without any issues.
Thanks for your help.

//wbr ojab


Re: Cannot balance FS (No space left on device)

2016-06-10 Thread Hans van Kranenburg

On 06/11/2016 12:10 AM, ojab // wrote:

On Fri, Jun 10, 2016 at 9:56 PM, Hans van Kranenburg
 wrote:

You can work around it by either adding two disks (like Henk said), or by
temporarily converting some chunks to single. Just enough to get some free
space on the first two disks to get a balance going that can fill the third
one. You don't have to convert all of your data or metadata to single!

Something like:

btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/


Unfortunately it fails even if I set limit=1:

$ sudo btrfs balance start -v -dconvert=single,limit=1 /mnt/xxx/
Dumping filters: flags 0x1, state 0x0, force is off
  DATA (flags 0x120): converting, target=281474976710656, soft is off, limit=1
ERROR: error during balancing '/mnt/xxx/': No space left on device
There may be more info in syslog - try dmesg | tail


Ah, apparently the balance operation *always* wants to allocate some new
empty space before starting to look more closely at the task you give it...


This means that it's trying to allocate a new set of RAID0 chunks 
first... and that's exactly the opposite of what we want to accomplish here.


If you really can add only one extra device now, there's always a
dirtier way to get the job done.


What you can do for example is:
- partition the new disk in two partitions
- add them both to the filesystem (btrfs doesn't know both block devices 
are on the same physical disk, ghehe)

- convert a small number of data blocks to single
- then device delete the third disk again so the single chunks move back 
to the two first disks

- add the third disk back as one whole block device
- etc...

:D
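
The same steps as a hedged command sketch (untested; the device name
/dev/sdX and the mountpoint are examples only):

  # split the new disk into two halves
  parted /dev/sdX -- mklabel gpt mkpart half1 0% 50% mkpart half2 50% 100%
  btrfs device add /dev/sdX1 /dev/sdX2 /mnt/xxx/
  # convert a small number of data chunks to single
  btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/
  # remove both halves; the single chunks migrate back to the first two disks
  btrfs device delete /dev/sdX1 /dev/sdX2 /mnt/xxx/
  # re-add the disk as one whole block device
  parted /dev/sdX -- mklabel gpt mkpart whole 0% 100%
  btrfs device add /dev/sdX1 /mnt/xxx/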

Moo,

--
Hans van Kranenburg - System / Network Engineer
T +31 (0)10 2760434 | hans.van.kranenb...@mendix.com | www.mendix.com


Re: Cannot balance FS (No space left on device)

2016-06-10 Thread ojab //
On Fri, Jun 10, 2016 at 9:56 PM, Hans van Kranenburg
 wrote:
> You can work around it by either adding two disks (like Henk said), or by
> temporarily converting some chunks to single. Just enough to get some free
> space on the first two disks to get a balance going that can fill the third
> one. You don't have to convert all of your data or metadata to single!
>
> Something like:
>
> btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/

Unfortunately it fails even if I set limit=1:
>$ sudo btrfs balance start -v -dconvert=single,limit=1 /mnt/xxx/
>Dumping filters: flags 0x1, state 0x0, force is off
>  DATA (flags 0x120): converting, target=281474976710656, soft is off, limit=1
>ERROR: error during balancing '/mnt/xxx/': No space left on device
>There may be more info in syslog - try dmesg | tail

//wbr ojab


Re: Cannot balance FS (No space left on device)

2016-06-10 Thread Hans van Kranenburg

On 06/10/2016 11:33 PM, ojab // wrote:

On Fri, Jun 10, 2016 at 9:00 PM, Henk Slager  wrote:

I have seldom seen an fs so full, very regular numbers :)

But can you provide the output of this script:
https://github.com/knorrie/btrfs-heatmap/blob/master/show_usage.py

It gives better info w.r.t. devices and it is then easier to say what
has to be done.

But you have btrfs raid0 data (2 stripes) and raid1 metadata, and they
both want 2 devices currently, and there is only one device with room
for your 2G chunks. So in theory you need 2 empty devices added for a
balance to succeed. If you can allow reduced redundancy for some time,
you could shrink the fs used space on hdd1 to half, do the same for the
partition itself, add an hdd2 partition and add that to the fs. Or
just add another HDD.
Then your 50GB of deletions could take effect if you start
balancing. Also have a look at the balance stripe filters, I would say.


Output of show_usage.py:
https://gist.githubusercontent.com/ojab/850276af6ff3aa566b8a3ce6ec444521/raw/4d77e02d556ed0edb0f9823259f145f65e80bc66/gistfile1.txt
Looks like I only have smaller spare drives at the moment (the largest
is 100GB); is it OK to use one, or is there some minimal drive size
needed for my setup?


You can work around it by either adding two disks (like Henk said), or 
by temporarily converting some chunks to single. Just enough to get some 
free space on the first two disks to get a balance going that can fill 
the third one. You don't have to convert all of your data or metadata to 
single!


Something like:

btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/

New allocated chunks will go to the third disk, because it has the most 
free space.


After this, you can convert the single data back to raid0:

btrfs balance start -v -dconvert=raid0,soft /mnt/xxx/

soft is important, because it only touches chunks that are not raid0 yet.

And in the end there should be a few GB of free space on the first two 
disks, so you can do the big balance to spread all data over the three 
disks, just btrfs balance start -v -dusage=100 /mnt/xxx/
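
Collected in one place, the whole sequence would look something like
this (an untested sketch, using the same mountpoint as above):

  # 1. free up space on the first two disks; new single chunks land on disk three
  btrfs balance start -v -dconvert=single,limit=10 /mnt/xxx/
  # 2. convert those single chunks back; soft skips chunks that are already raid0
  btrfs balance start -v -dconvert=raid0,soft /mnt/xxx/
  # 3. the big balance, spreading all data over the three disks
  btrfs balance start -v -dusage=100 /mnt/xxx/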


Review the commands before doing anything, as I haven't tested this 
here. The man page for btrfs-balance contains all info :)


Looking at btrfs balance status, btrfs fi show etc. in another terminal
while it's working is always nice, so you can see what's happening, and
you can always stop it with btrfs balance cancel when you think it has
moved around enough data.


Moo,

--
Hans van Kranenburg - System / Network Engineer
T +31 (0)10 2760434 | hans.van.kranenb...@mendix.com | www.mendix.com


Re: Cannot balance FS (No space left on device)

2016-06-10 Thread ojab //
On Fri, Jun 10, 2016 at 9:00 PM, Henk Slager  wrote:
> I have seldom seen an fs so full, very regular numbers :)
>
> But can you provide the output of this script:
> https://github.com/knorrie/btrfs-heatmap/blob/master/show_usage.py
>
> It gives better info w.r.t. devices and it is then easier to say what
> has to be done.
>
> But you have btrfs raid0 data (2 stripes) and raid1 metadata, and they
> both want 2 devices currently, and there is only one device with room
> for your 2G chunks. So in theory you need 2 empty devices added for a
> balance to succeed. If you can allow reduced redundancy for some time,
> you could shrink the fs used space on hdd1 to half, do the same for the
> partition itself, add an hdd2 partition and add that to the fs. Or
> just add another HDD.
> Then your 50GB of deletions could take effect if you start
> balancing. Also have a look at the balance stripe filters, I would say.

Output of show_usage.py:
https://gist.githubusercontent.com/ojab/850276af6ff3aa566b8a3ce6ec444521/raw/4d77e02d556ed0edb0f9823259f145f65e80bc66/gistfile1.txt
Looks like I only have smaller spare drives at the moment (the largest
is 100GB); is it OK to use one, or is there some minimal drive size
needed for my setup?

//wbr ojab


Re: Cannot balance FS (No space left on device)

2016-06-10 Thread Henk Slager
On Fri, Jun 10, 2016 at 8:04 PM, ojab //  wrote:
> [Please CC me since I'm not subscribed to the list]
> Hi,
> I've tried to `/usr/bin/btrfs fi defragment -r` my btrfs partition,
> but it failed w/ "No space left on device", and now I can't get any
> free space on that partition (deleting some files or adding a new
> device doesn't help). During defrag I used the `space_cache=v2` mount
> option, but have remounted the FS w/ the `clear_cache` flag since then.
> I've also deleted about 50GB of files and added a new 250GB disk since then:
>
>>$ df -h /mnt/xxx/
>>Filesystem  Size  Used Avail Use% Mounted on
>>/dev/sdc1   2,1T  1,8T   37G  99% /mnt/xxx
>>$ sudo /usr/bin/btrfs fi show
>>Label: none  uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
>>Total devices 3 FS bytes used 1.78TiB
>>devid 1 size 931.51GiB used 931.51GiB path /dev/sdc1
>>devid 2 size 931.51GiB used 931.51GiB path /dev/sdb1
>>devid 3 size 230.41GiB used 0.00B path /dev/sdd1
>>$ sudo /usr/bin/btrfs fi usage /mnt/xxx/
>>Overall:
>>Device size:   2.04TiB
>>Device allocated:  1.82TiB
>>Device unallocated:230.41GiB
>>Device missing:0.00B
>>Used:  1.78TiB
>>Free (estimated):  267.23GiB  (min: 152.03GiB)
>>Data ratio:1.00
>>Metadata ratio:2.00
>>Global reserve:512.00MiB  (used: 0.00B)
>>
>>Data,RAID0: Size:1.81TiB, Used:1.78TiB
>>   /dev/sdb1   928.48GiB
>>   /dev/sdc1   928.48GiB
>>
>>Metadata,RAID1: Size:3.00GiB, Used:2.30GiB
>>   /dev/sdb1   3.00GiB
>>   /dev/sdc1   3.00GiB
>>
>>System,RAID1: Size:32.00MiB, Used:176.00KiB
>>   /dev/sdb1   32.00MiB
>>   /dev/sdc1   32.00MiB
>>
>>Unallocated:
>>   /dev/sdb1   1.01MiB
>>   /dev/sdc1   1.00MiB
>>   /dev/sdd1   230.41GiB
>>$ sudo /usr/bin/btrfs balance start -dusage=66 /mnt/xxx/
>>Done, had to relocate 0 out of 935 chunks
>>$ sudo /usr/bin/btrfs balance start -dusage=67 /mnt/xxx/
>>ERROR: error during balancing '/mnt/xxx/': No space left on device
>>There may be more info in syslog - try dmesg | tail
>
> I assume that there is something wrong with the metadata, since I can
> still copy files to the FS.
> I'm on 4.6.2 vanilla kernel and using btrfs-progs-4.6, btrfs-debugfs
> output can be found here:
> https://gist.githubusercontent.com/ojab/1a8b1f83341403a169a8e66995c7c3da/raw/61621d22f706d7543a93a3d005415543af9a0db0/gistfile1.txt.
> Any hint what else can I try to fix the issue?

I have seldom seen an fs so full, very regular numbers :)

But can you provide the output of this script:
https://github.com/knorrie/btrfs-heatmap/blob/master/show_usage.py

It gives better info w.r.t. devices and it is then easier to say what
has to be done.

But you have btrfs raid0 data (2 stripes) and raid1 metadata, and they
both want 2 devices currently, and there is only one device with room
for your 2G chunks. So in theory you need 2 empty devices added for a
balance to succeed. If you can allow reduced redundancy for some time,
you could shrink the fs used space on hdd1 to half, do the same for the
partition itself, add an hdd2 partition and add that to the fs. Or
just add another HDD.
Then your 50GB of deletions could take effect if you start
balancing. Also have a look at the balance stripe filters, I would say.
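
As a rough sketch of that shrink route (untested; the devid and the new
partition name are examples, check btrfs fi show for the real ones first):

  # shrink the fs footprint on the hdd1 device (say devid 1) to about half
  btrfs filesystem resize 1:466g /mnt/xxx/
  # shrink the partition itself with a partitioning tool, create an hdd2
  # partition in the freed space, then:
  btrfs device add /dev/sdX2 /mnt/xxx/
  btrfs balance start -dusage=100 /mnt/xxx/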


Cannot balance FS (No space left on device)

2016-06-10 Thread ojab //
[Please CC me since I'm not subscribed to the list]
Hi,
I've tried to `/usr/bin/btrfs fi defragment -r` my btrfs partition,
but it failed w/ "No space left on device", and now I can't get any
free space on that partition (deleting some files or adding a new
device doesn't help). During defrag I used the `space_cache=v2` mount
option, but have remounted the FS w/ the `clear_cache` flag since then.
I've also deleted about 50GB of files and added a new 250GB disk since then:

>$ df -h /mnt/xxx/
>Filesystem  Size  Used Avail Use% Mounted on
>/dev/sdc1   2,1T  1,8T   37G  99% /mnt/xxx
>$ sudo /usr/bin/btrfs fi show
>Label: none  uuid: 8a65465d-1a8c-4f80-abc6-c818c38567c3
>Total devices 3 FS bytes used 1.78TiB
>devid 1 size 931.51GiB used 931.51GiB path /dev/sdc1
>devid 2 size 931.51GiB used 931.51GiB path /dev/sdb1
>devid 3 size 230.41GiB used 0.00B path /dev/sdd1
>$ sudo /usr/bin/btrfs fi usage /mnt/xxx/
>Overall:
>Device size:   2.04TiB
>Device allocated:  1.82TiB
>Device unallocated:230.41GiB
>Device missing:0.00B
>Used:  1.78TiB
>Free (estimated):  267.23GiB  (min: 152.03GiB)
>Data ratio:1.00
>Metadata ratio:2.00
>Global reserve:512.00MiB  (used: 0.00B)
>
>Data,RAID0: Size:1.81TiB, Used:1.78TiB
>   /dev/sdb1   928.48GiB
>   /dev/sdc1   928.48GiB
>
>Metadata,RAID1: Size:3.00GiB, Used:2.30GiB
>   /dev/sdb1   3.00GiB
>   /dev/sdc1   3.00GiB
>
>System,RAID1: Size:32.00MiB, Used:176.00KiB
>   /dev/sdb1   32.00MiB
>   /dev/sdc1   32.00MiB
>
>Unallocated:
>   /dev/sdb1   1.01MiB
>   /dev/sdc1   1.00MiB
>   /dev/sdd1   230.41GiB
>$ sudo /usr/bin/btrfs balance start -dusage=66 /mnt/xxx/
>Done, had to relocate 0 out of 935 chunks
>$ sudo /usr/bin/btrfs balance start -dusage=67 /mnt/xxx/
>ERROR: error during balancing '/mnt/xxx/': No space left on device
>There may be more info in syslog - try dmesg | tail

I assume that there is something wrong with the metadata, since I can
still copy files to the FS.
I'm on 4.6.2 vanilla kernel and using btrfs-progs-4.6, btrfs-debugfs
output can be found here:
https://gist.githubusercontent.com/ojab/1a8b1f83341403a169a8e66995c7c3da/raw/61621d22f706d7543a93a3d005415543af9a0db0/gistfile1.txt.
Any hint what else can I try to fix the issue?

//wbr ojab