Re: Is it possible to reclaim block groups once they are allocated to data or metadata?

2012-05-06 Thread Ilya Dryomov
On Sun, May 06, 2012 at 01:07:06PM +1000, Mike Sampson wrote:
 On Sat, May 5, 2012 at 10:52 PM, Hugo Mills h...@carfax.org.uk wrote:
  On Sat, May 05, 2012 at 10:37:01PM +1000, Mike Sampson wrote:
  Hello list,
 
  recently reformatted my home partition from XFS to RAID1 btrfs. I used
  the default options to mkfs.btrfs except for enabling raid1 for data
  as well as metadata. Filesystem is made up of two 1TB drives.
 
  mike@mercury (0) pts/3 ~ $ sudo btrfs filesystem show
  Label: none  uuid: f08a8896-e03e-4064-9b94-9342fb547e47
        Total devices 2 FS bytes used 888.06GB
        devid    1 size 931.51GB used 931.51GB path /dev/sdb1
        devid    2 size 931.51GB used 931.49GB path /dev/sdc1
 
  Btrfs Btrfs v0.19
 
  mike@mercury (0) pts/3 ~ $ btrfs filesystem df /home
  Data, RAID1: total=893.48GB, used=886.60GB
  Data: total=8.00MB, used=0.00
  System, RAID1: total=8.00MB, used=136.00KB
  System: total=4.00MB, used=0.00
  Metadata, RAID1: total=38.00GB, used=2.54GB
  Metadata: total=8.00MB, used=0.00
 
  As can be seen, I don't have a lot of free space left, and while I am
  planning on adding more storage soon, I would like to gain a little
  breathing room until I can do this. While I don't have a lot of space
  remaining in Data, RAID1, I do have a good chunk in Metadata, RAID1:
  2.5GB used out of 38GB. Does this excess become available
  automatically to the file system when the block groups in Data, RAID1
  are exhausted or, if not, is there a way to manually reallocate them?
 
    Your best bet at the moment is to try a partial balance of
  metadata chunks:
 
  # btrfs balance start -m /home
 
    That will rewrite all of your metadata, putting it through the
  allocator again, and removing the original allocated chunks. This
  should have the effect of reducing the allocation of metadata chunks.
 
    You will need a 3.3 kernel, or later, and an up-to-date userspace
  from cmason's git repository.
 
 Gave this a shot and it did help.
 
 mike@mercury (0) pts/3 ~/.../btrfs-progs git:master $ uname -r
 3.3.4-2-ARCH
 
 mike@mercury (0) pts/3 ~/.../btrfs-progs git:master $ sudo ./btrfs
 balance start -m /home
 
 Done, had to relocate 40 out of 934 chunks
 
 mike@mercury (0) pts/3 ~/.../btrfs-progs git:master $ ./btrfs
 filesystem df /home
 Data, RAID1: total=900.97GB, used=880.06GB
 Data: total=8.00MB, used=0.00
 System, RAID1: total=32.00MB, used=136.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=30.50GB, used=2.54GB
 
 There is now 8GB less in Metadata and I was able to delete some files
 as well to free up space. There is still a lot of wasted space in the
 metadata block groups. It seems that it allocates more metadata block
 groups than required for my filesystem. This will do until I am able
 to add a couple of devices to the system. Is there any way to adjust
 the block group allocation strategy at filesystem creation?

No.  Chunk allocator currently allocates a lot more chunks than actually
needed, and it impacts both balancing and normal operation.  Try this:

# btrfs balance start -musage=10 /home

This is suboptimal, but it should get rid of more chunks.
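
If that still leaves a lot of slack, one approach -- just a sketch of my
own, assuming a filtered balance simply relocates nothing when no chunk
matches the filter -- is to step the usage threshold up gradually, so each
pass only rewrites nearly-empty metadata chunks:

    for pct in 1 5 10 25 50; do
        btrfs balance start -musage="$pct" /home
        btrfs filesystem df /home    # watch the Metadata total shrink
    done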

Thanks,

Ilya


Re: Is it possible to reclaim block groups once they are allocated to data or metadata?

2012-05-06 Thread Hugo Mills
On Sun, May 06, 2012 at 01:26:45PM +0300, Ilya Dryomov wrote:
 On Sun, May 06, 2012 at 01:07:06PM +1000, Mike Sampson wrote:
  There is now 8GB less in Metadata and I was able to delete some files
  as well to free up space. There is still a lot of wasted space in the
  metadata block groups. It seems that it allocates more metadata block
  groups than required for my filesystem. This will do until I am able
  to add a couple of devices to the system. Is there any way to adjust
  the block group allocation strategy at filesystem creation?
 
 No.  Chunk allocator currently allocates a lot more chunks than actually
 needed, and it impacts both balancing and normal operation.  Try this:
 
 # btrfs balance start -musage=10 /home
 
 This is suboptimal, but it should get rid of more chunks.

   While we're talking about it, what is the parameter to the usage
option? I'm assuming it selects chunks which are less than some amount
full -- but is the value a percentage, or a quantity in megabytes
(power-of-10 or power-of-2), or something else?

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Doughnut furs ache me, Omar Dorlin. ---   




Re: balancing metadata fails with no space left on device

2012-05-06 Thread Martin Steigerwald
On Friday, 4 May 2012, Martin Steigerwald wrote:
 On Friday, 4 May 2012, Martin Steigerwald wrote:
  Hi!
  
  merkaba:~ btrfs balance start -m /
  ERROR: error during balancing '/' - No space left on device
  There may be more info in syslog - try dmesg | tail
  merkaba:~#19 dmesg | tail -22
  [   62.918734] CPU0: Package power limit normal
  [  525.229976] btrfs: relocating block group 20422066176 flags 1
  [  526.940452] btrfs: found 3048 extents
  [  528.803778] btrfs: found 3048 extents
[…]
  [  635.906517] btrfs: found 1 extents
  [  636.038096] btrfs: 1 enospc errors during balance
  
  
  merkaba:~ btrfs filesystem show
  failed to read /dev/sr0
  Label: 'debian'  uuid: […]
  
  Total devices 1 FS bytes used 7.89GB
  devid1 size 18.62GB used 17.58GB path /dev/dm-0
  
  Btrfs Btrfs v0.19
  merkaba:~ btrfs filesystem df /
  Data: total=15.52GB, used=7.31GB
  System, DUP: total=32.00MB, used=4.00KB
  System: total=4.00MB, used=0.00
  Metadata, DUP: total=1.00GB, used=587.83MB
 
 I thought the data tree might have been too big, so out of curiosity I
 tried a full balance. It shrunk the data tree but it failed as well:
 
 merkaba:~ btrfs balance start /
 ERROR: error during balancing '/' - No space left on device
 There may be more info in syslog - try dmesg | tail
 merkaba:~#19 dmesg | tail -63
 [   89.306718] postgres (2876): /proc/2876/oom_adj is deprecated,
 please use /proc/2876/oom_score_adj instead.
 [  159.939728] btrfs: relocating block group 21994930176 flags 34
 [  160.010427] btrfs: relocating block group 21860712448 flags 1
 [  161.188104] btrfs: found 6 extents
 [  161.507388] btrfs: found 6 extents
[…]
 [  335.897953] btrfs: relocating block group 1103101952 flags 1
 [  347.888295] btrfs: found 28458 extents
 [  352.736987] btrfs: found 28458 extents
 [  353.099659] btrfs: 1 enospc errors during balance
 
 merkaba:~ btrfs filesystem df /
 Data: total=10.00GB, used=7.31GB
 System, DUP: total=64.00MB, used=4.00KB
 System: total=4.00MB, used=0.00
 Metadata, DUP: total=1.12GB, used=587.20MB
 
 merkaba:~ btrfs filesystem show
 failed to read /dev/sr0
 Label: 'debian'  uuid: […]
 Total devices 1 FS bytes used 7.88GB
 devid1 size 18.62GB used 12.38GB path /dev/dm-0
 
 
 For the sake of it I tried another time. It failed again:
 
 martin@merkaba:~ dmesg | tail -32
 [  353.099659] btrfs: 1 enospc errors during balance
 [  537.057375] btrfs: relocating block group 32833011712 flags 36
[…]
 [  641.479140] btrfs: relocating block group 22062039040 flags 34
 [  641.695614] btrfs: relocating block group 22028484608 flags 34
 [  641.840179] btrfs: found 1 extents
 [  641.965843] btrfs: 1 enospc errors during balance
 
 
 merkaba:~#19 btrfs filesystem df /
 Data: total=10.00GB, used=7.31GB
 System, DUP: total=32.00MB, used=4.00KB
 System: total=4.00MB, used=0.00
 Metadata, DUP: total=1.12GB, used=586.74MB
 merkaba:~ btrfs filesystem show
 failed to read /dev/sr0
 Label: 'debian'  uuid: […]
 Total devices 1 FS bytes used 7.88GB
 devid1 size 18.62GB used 12.32GB path /dev/dm-0
 
 Btrfs Btrfs v0.19
 
 
 Well, in order to be gentle to the SSD again I stop my experiments now
 ;).

I had the subjective impression that the speed of the BTRFS filesystem
decreased after all these balance attempts.

Anyway, after reading the -musage hint by Ilya in the thread

Is it possible to reclaim block groups once they are allocated to data or
metadata?


I tried:

merkaba:~ btrfs filesystem df /
Data: total=10.00GB, used=7.34GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.12GB, used=586.39MB

merkaba:~ btrfs balance start -musage=1 / 
Done, had to relocate 2 out of 13 chunks

merkaba:~ btrfs filesystem df /  
Data: total=10.00GB, used=7.34GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=586.39MB

So this worked.

But I wasn't able to specify less than a Gig:

merkaba:~ btrfs balance start -musage=0.8 /
Invalid usage argument: 0.8
merkaba:~#1 btrfs balance start -musage=700M /
Invalid usage argument: 700M


When I try without usage I get the old behavior back:

merkaba:~#1 btrfs balance start -m /  
ERROR: error during balancing '/' - No space left on device
There may be more info in syslog - try dmesg | tail


merkaba:~ btrfs balance start -musage=1 /   
Done, had to relocate 2 out of 13 chunks
merkaba:~ btrfs balance start -musage=1 /
Done, had to relocate 1 out of 12 chunks
merkaba:~ btrfs balance start -musage=1 /
Done, had to relocate 1 out of 12 chunks
merkaba:~ btrfs balance start -musage=1 /
Done, had to relocate 1 out of 12 chunks
merkaba:~ btrfs filesystem df /
Data: total=10.00GB, used=7.34GB
System, DUP: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=1.00GB, used=586.41MB

Ciao,
-- 
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA  B82F 991B EAAC A599 84C7

[GIT PULL] Btrfs fixes

2012-05-06 Thread Chris Mason
Hi everyone,

The for-linus branch in the btrfs git repo has some fixes:

git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs.git for-linus

The big ones here are a memory leak we introduced in rc1, and a
scheduling-while-atomic bug that triggers if the transid on disk doesn't
match the transid we expected.  This happens for corrupt blocks or
out-of-date disks.

It also fixes up the definition of our ioctl for resolving logical inode
numbers.  The __u32 was a merging error and doesn't match what we ship in
the progs.

Chris Mason (2) commits (+36/-17):
Btrfs: avoid sleeping in verify_parent_transid while atomic (+34/-17)
Btrfs: Add properly locking around add_root_to_dirty_list (+2/-0)

Stefan Behrens (1) commits (+7/-0):
Btrfs: fix crash in scrub repair code when device is missing

Josef Bacik (1) commits (+2/-2):
Btrfs: fix page leak when allocing extent buffers

Alexander Block (1) commits (+2/-2):
btrfs: Fix mismatching struct members in ioctl.h

Total: (5) commits (+47/-21)

 fs/btrfs/ctree.c   |   28 +++-
 fs/btrfs/disk-io.c |   18 +-
 fs/btrfs/disk-io.h |3 ++-
 fs/btrfs/extent-tree.c |2 +-
 fs/btrfs/extent_io.c   |4 ++--
 fs/btrfs/ioctl.h   |4 ++--
 fs/btrfs/scrub.c   |7 +++
 fs/btrfs/tree-log.c|2 +-
 8 files changed, 47 insertions(+), 21 deletions(-)


btrfs-raid10 - btrfs-raid1 confusion

2012-05-06 Thread Alexander Koch
Greetings,

until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.

As space was getting short I wanted to extend the filesystem by two
additional drives lying around, which both are 1.0 TiB in size.

Knowing little about the btrfs RAID implementation, I thought I had to
switch to RAID10 mode, which I was told was currently not possible (though
I later found out that it actually is).
Then I read this [1] mailing list post basically saying that, in the
special case of four disks, btrfs-raid1 behaves exactly like RAID10.

So I added the two new disks to my existing filesystem

$ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive

and as the capacity reported by 'btrfs filesystem df' did not increase,
I started a balancing run:

$ btrfs filesystem balance start /mnt/archive


Waiting for the balancing run to finish (which will take much longer
than I thought; still running) I found out that as of kernel 3.3
changing the RAID level (aka restriping) is now possible: [2].

I got two questions now:

1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
in my case (2 x 2TiB, 2 x 1TiB disks)? Same degree of fault
tolerance?

2.) Summing up the capacities reported by 'btrfs filesystem df' I only
get ~2.25 TiB for my filesystem, is that a realistic net size for
3 TiB gross?

$ btrfs filesystem df /mnt/archive
Data, RAID1: total=2.10TB, used=1.68TB
Data: total=8.00MB, used=0.00
System, RAID1: total=40.00MB, used=324.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=112.50GB, used=3.21GB
Metadata: total=8.00MB, used=0.00


Thanks in advance for any advice!

Regards,

lynix


[1] http://www.spinics.net/lists/linux-btrfs/msg15867.html
[2] https://lkml.org/lkml/2012/1/17/381


Re: Is it possible to reclaim block groups once they are allocated to data or metadata?

2012-05-06 Thread Ilya Dryomov
On Sun, May 06, 2012 at 11:37:27AM +0100, Hugo Mills wrote:
 On Sun, May 06, 2012 at 01:26:45PM +0300, Ilya Dryomov wrote:
  On Sun, May 06, 2012 at 01:07:06PM +1000, Mike Sampson wrote:
   There is now 8GB less in Metadata and I was able to delete some files
   as well to free up space. There is still a lot of wasted space in the
   metadata block groups. It seems that it allocates more metadata block
   groups than required for my filesystem. This will do until I am able
   to add a couple of devices to the system. Is there any way to adjust
   the block group allocation strategy at filesystem creation?
  
  No.  Chunk allocator currently allocates a lot more chunks than actually
  needed, and it impacts both balancing and normal operation.  Try this:
  
  # btrfs balance start -musage=10 /home
  
  This is suboptimal, but it should get rid of more chunks.
 
While we're talking about it, what is the parameter to the usage
 option? I'm assuming it selects chunks which are less than some amount
 full -- but is the value a percentage, or a quantity in megabytes
 (power-of-10 or power-of-2), or something else?

It's a percentage, so the command above will balance out chunks that are
less than 10 percent full.  I'll update the btrfs man page and the wiki page
you started as soon as I can.
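
As a rough rule of thumb (my own arithmetic based on the numbers Mike
posted earlier, not anything built into the tool): aim a little above the
average fill of the chunks you want to compact.

    # Metadata, RAID1: total=38.00GB, used=2.54GB  ->  about 7% full on average
    awk 'BEGIN { printf "%.0f%%\n", 2.54 / 38 * 100 }'
    btrfs balance start -musage=10 /home    # catches anything under 10% full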

Thanks,

Ilya


Re: btrfs-raid10 - btrfs-raid1 confusion

2012-05-06 Thread Hugo Mills
On Sun, May 06, 2012 at 04:48:48PM +0200, Alexander Koch wrote:
 Greetings,
 
 until yesterday I was running a btrfs filesystem across two 2.0 TiB
 disks in RAID1 mode for both metadata and data without any problems.
 
 As space was getting short I wanted to extend the filesystem by two
 additional drives lying around, which both are 1.0 TiB in size.
 
 Knowing little about the btrfs RAID implementation, I thought I had to
 switch to RAID10 mode, which I was told was currently not possible (though
 I later found out that it actually is).
 Then I read this [1] mailing list post basically saying that, in the
 special case of four disks, btrfs-raid1 behaves exactly like RAID10.
 
 So I added the two new disks to my existing filesystem
 
 $ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive
 
 and as the capacity reported by 'btrfs filesystem df' did not increase,

   It won't -- btrfs fi df reports what's been allocated out of the
raw pool. To check that the disks have been added, you need btrfs fi
show (no parameters).

 I started a balancing run:
 
 $ btrfs filesystem balance start /mnt/archive
 
 Waiting for the balancing run to finish (which will take much longer
 than I thought; still running) I found out that as of kernel 3.3
 changing the RAID level (aka restriping) is now possible: [2].

   It is indeed.

 I got two questions now:
 
 1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
 in my case (2 x 2TiB, 2 x 1TiB disks)? Same degree of fault
 tolerance?

   There's the same degree of fault tolerance -- you're guaranteed to
be able to lose one disk from the array and still have all your data.

   The data will be laid out in a different way on the disks, though.
In your case, with four unevenly-sized disks, you will get the best
usage out of the filesystem with RAID-1. With only 4 disks, RAID-10
will run out of space when the smallest disk is full. (So, in your
configuration, you'd still have only 2TB of space usable, rather
defeating the point of having the new disks in the first place).
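
   A back-of-the-envelope sketch of that arithmetic (my own, in GiB; not
btrfs code, just the mirroring/striping rules described above):

    disks=(2048 2048 1024 1024)          # 2 x 2TiB + 2 x 1TiB
    total=0; largest=0; smallest=${disks[0]}
    for d in "${disks[@]}"; do
        total=$((total + d))
        (( d > largest )) && largest=$d
        (( d < smallest )) && smallest=$d
    done
    # RAID-1: every chunk is mirrored once, so half the total is usable as
    # long as the largest disk is no bigger than all the others combined.
    if (( largest <= total - largest )); then
        raid1=$(( total / 2 ))
    else
        raid1=$(( total - largest ))
    fi
    # RAID-10 across all four disks: striping stops once the smallest disk fills.
    raid10=$(( smallest * ${#disks[@]} / 2 ))
    echo "RAID-1  usable: ${raid1} GiB"     # 3072 GiB, roughly 3TB
    echo "RAID-10 usable: ${raid10} GiB"    # 2048 GiB, roughly 2TB

So RAID-1 gets you roughly 3TB of usable space on these four disks, while
RAID-10 gets only about 2TB.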

 2.) Summing up the capacities reported by 'btrfs filesystem df' I only
 get ~2.25 TiB for my filesystem, is that a realistic net size for
 3 TiB gross?

   You're not comparing the right numbers here. btrfs fi show shows
the raw available unallocated space that the filesystem has to play
with. btrfs fi df shows only what it's allocated so far, and how
much of that allocation it has used -- in this case, because you've
added new disks, there's quite a bit of free space unallocated still,
so the numbers below won't add up to anything like 3TB.

 $ btrfs filesystem df /mnt/archive
 Data, RAID1: total=2.10TB, used=1.68TB
 Data: total=8.00MB, used=0.00
 System, RAID1: total=40.00MB, used=324.00KB
 System: total=4.00MB, used=0.00
 Metadata, RAID1: total=112.50GB, used=3.21GB
 Metadata: total=8.00MB, used=0.00

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
  --- My doctor tells me that I have a malformed public-duty gland, ---  
and a natural deficiency in moral fibre. 




Re: balancing metadata fails with no space left on device

2012-05-06 Thread Ilya Dryomov
On Sun, May 06, 2012 at 01:19:38PM +0200, Martin Steigerwald wrote:
 On Friday, 4 May 2012, Martin Steigerwald wrote:
  On Friday, 4 May 2012, Martin Steigerwald wrote:
   Hi!
   
   merkaba:~ btrfs balance start -m /
   ERROR: error during balancing '/' - No space left on device
   There may be more info in syslog - try dmesg | tail
   merkaba:~#19 dmesg | tail -22
   [   62.918734] CPU0: Package power limit normal
   [  525.229976] btrfs: relocating block group 20422066176 flags 1
   [  526.940452] btrfs: found 3048 extents
   [  528.803778] btrfs: found 3048 extents
 […]
   [  635.906517] btrfs: found 1 extents
   [  636.038096] btrfs: 1 enospc errors during balance
   
   
   merkaba:~ btrfs filesystem show
   failed to read /dev/sr0
   Label: 'debian'  uuid: […]
   
   Total devices 1 FS bytes used 7.89GB
   devid1 size 18.62GB used 17.58GB path /dev/dm-0
   
   Btrfs Btrfs v0.19
   merkaba:~ btrfs filesystem df /
   Data: total=15.52GB, used=7.31GB
   System, DUP: total=32.00MB, used=4.00KB
   System: total=4.00MB, used=0.00
   Metadata, DUP: total=1.00GB, used=587.83MB
  
  I thought the data tree might have been too big, so out of curiosity I
  tried a full balance. It shrunk the data tree but it failed as well:
  
  merkaba:~ btrfs balance start /
  ERROR: error during balancing '/' - No space left on device
  There may be more info in syslog - try dmesg | tail
  merkaba:~#19 dmesg | tail -63
  [   89.306718] postgres (2876): /proc/2876/oom_adj is deprecated,
  please use /proc/2876/oom_score_adj instead.
  [  159.939728] btrfs: relocating block group 21994930176 flags 34
  [  160.010427] btrfs: relocating block group 21860712448 flags 1
  [  161.188104] btrfs: found 6 extents
  [  161.507388] btrfs: found 6 extents
 […]
  [  335.897953] btrfs: relocating block group 1103101952 flags 1
  [  347.888295] btrfs: found 28458 extents
  [  352.736987] btrfs: found 28458 extents
  [  353.099659] btrfs: 1 enospc errors during balance
  
  merkaba:~ btrfs filesystem df /
  Data: total=10.00GB, used=7.31GB
  System, DUP: total=64.00MB, used=4.00KB
  System: total=4.00MB, used=0.00
  Metadata, DUP: total=1.12GB, used=587.20MB
  
  merkaba:~ btrfs filesystem show
  failed to read /dev/sr0
  Label: 'debian'  uuid: […]
  Total devices 1 FS bytes used 7.88GB
  devid1 size 18.62GB used 12.38GB path /dev/dm-0
  
  
  For the sake of it I tried another time. It failed again:
  
  martin@merkaba:~ dmesg | tail -32
  [  353.099659] btrfs: 1 enospc errors during balance
  [  537.057375] btrfs: relocating block group 32833011712 flags 36
 […]
  [  641.479140] btrfs: relocating block group 22062039040 flags 34
  [  641.695614] btrfs: relocating block group 22028484608 flags 34
  [  641.840179] btrfs: found 1 extents
  [  641.965843] btrfs: 1 enospc errors during balance
  
  
  merkaba:~#19 btrfs filesystem df /
  Data: total=10.00GB, used=7.31GB
  System, DUP: total=32.00MB, used=4.00KB
  System: total=4.00MB, used=0.00
  Metadata, DUP: total=1.12GB, used=586.74MB
  merkaba:~ btrfs filesystem show
  failed to read /dev/sr0
  Label: 'debian'  uuid: […]
  Total devices 1 FS bytes used 7.88GB
  devid1 size 18.62GB used 12.32GB path /dev/dm-0
  
  Btrfs Btrfs v0.19
  
  
  Well, in order to be gentle to the SSD again I stop my experiments now
  ;).
 
 I had the subjective impression that the speed of the BTRFS filesystem
 decreased after all these balance attempts.
 
 Anyway, after reading the -musage hint by Ilya in the thread
 
 Is it possible to reclaim block groups once they are allocated to data or
 metadata?

Currently there is no way to reclaim block groups other than performing
a balance.  We will add a kernel thread for this in future, but a couple
of things have to be fixed before that can happen.

 
 
 I tried:
 
 merkaba:~ btrfs filesystem df /
 Data: total=10.00GB, used=7.34GB
 System, DUP: total=32.00MB, used=4.00KB
 System: total=4.00MB, used=0.00
 Metadata, DUP: total=1.12GB, used=586.39MB
 
 merkaba:~ btrfs balance start -musage=1 / 
 Done, had to relocate 2 out of 13 chunks
 
 merkaba:~ btrfs filesystem df /  
 Data: total=10.00GB, used=7.34GB
 System, DUP: total=32.00MB, used=4.00KB
 System: total=4.00MB, used=0.00
 Metadata, DUP: total=1.00GB, used=586.39MB
 
 So this worked.
 
 But I wasn't able to specify less than a Gig:

A follow-up to the -musage hint: the argument it takes is a percentage.
That is, -musage=X will balance out block groups that are less than X
percent used.
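
In other words the filter only takes whole-number percentages -- fractions
and absolute sizes are rejected, which is exactly what the errors quoted
below show:

    btrfs balance start -musage=1 /      # accepted: chunks less than 1% full
    btrfs balance start -musage=0.8 /    # rejected: not an integer
    btrfs balance start -musage=700M /   # rejected: it wants a percentage, not a size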

 
 merkaba:~ btrfs balance start -musage=0.8 /
 Invalid usage argument: 0.8
 merkaba:~#1 btrfs balance start -musage=700M /
 Invalid usage argument: 700M
 
 
 When I try without usage I get the old behavior back:
 
 merkaba:~#1 btrfs balance start -m /  
 ERROR: error during balancing '/' - No space left on device
 There may be more info in syslog - try dmesg | tail
 
 
 merkaba:~ btrfs balance start -musage=1 /   
 Done, had to relocate 2 out of 13 chunks
 

Re: balancing metadata fails with no space left on device

2012-05-06 Thread Robin Nehls
On Fri, 4 May 2012 18:35:39 +0200, Martin Steigerwald
mar...@lichtvoll.de wrote:

 Hi!
 
 merkaba:~ btrfs balance start -m /
 ERROR: error during balancing '/' - No space left on device
 There may be more info in syslog - try dmesg | tail
 merkaba:~#19 dmesg | tail -22
 [   62.918734] CPU0: Package power limit normal
 [  525.229976] btrfs: relocating block group 20422066176 flags 1
 [  526.940452] btrfs: found 3048 extents
 [  528.803778] btrfs: found 3048 extents
 [  528.988440] btrfs: relocating block group 17746100224 flags 34
 [  529.116424] btrfs: found 1 extents
 [  529.247866] btrfs: relocating block group 17611882496 flags 36
 [  536.003596] btrfs: found 14716 extents
 [  536.170073] btrfs: relocating block group 17477664768 flags 36
 [  542.230713] btrfs: found 13170 extents
 [  542.353089] btrfs: relocating block group 17343447040 flags 36
 [  547.446369] btrfs: found 9809 extents
 [  547.663141] btrfs: 1 enospc errors during balance
 [  629.238168] btrfs: relocating block group 21894266880 flags 34
 [  629.359284] btrfs: found 1 extents
 [  629.520614] btrfs: 1 enospc errors during balance
 [  630.715766] btrfs: relocating block group 21927821312 flags 34
 [  630.749973] btrfs: found 1 extents
 [  630.899621] btrfs: 1 enospc errors during balance
 [  635.872857] btrfs: relocating block group 21961375744 flags 34
 [  635.906517] btrfs: found 1 extents
 [  636.038096] btrfs: 1 enospc errors during balance
 
 
 merkaba:~ btrfs filesystem show
 failed to read /dev/sr0
 Label: 'debian'  uuid: […]
 Total devices 1 FS bytes used 7.89GB
 devid1 size 18.62GB used 17.58GB path /dev/dm-0
 
 
 Btrfs Btrfs v0.19
 merkaba:~ btrfs filesystem df /
 Data: total=15.52GB, used=7.31GB
 System, DUP: total=32.00MB, used=4.00KB
 System: total=4.00MB, used=0.00
 Metadata, DUP: total=1.00GB, used=587.83MB
 
 
 This is repeatable.
 
 martin@merkaba:~ cat /proc/version
 Linux version 3.3.0-trunk-amd64 (Debian 3.3.4-1~experimental.1)
 (debian- kernel  AT  lists.debian.org) (gcc version 4.6.3 (Debian
 4.6.3-1) ) #1 SMP Wed May 2 06:54:24 UTC 2012
 
 
 Which is Debian's variant of 3.3.4 with
 
 commit bfe050c8857bbc0cd6832c8bf978422573c439f5
 Author: Chris Mason chris.mason  AT  oracle.com
 Date:   Thu Apr 12 13:46:48 2012 -0400
 
 Revert Btrfs: increase the global block reserve estimates
 
 commit 8e62c2de6e23e5c1fee04f59de51b54cc2868ca5 upstream.
 
 This reverts commit 5500cdbe14d7435e04f66ff3cfb8ecd8b8e44ebf.
 
 We've had a number of complaints of early enospc that bisect down
 to this patch.  We'll have to fix the reservations differently.
 
 Signed-off-by: Chris Mason chris.mason  AT  oracle.com
 Signed-off-by: Greg Kroah-Hartman gregkh  AT
 linuxfoundation.org
 
 from 3.3.3.
 
 Do I need to wait for a proper fix to the global block reserve for the
 balance to succeed, or am I seeing a different issue?
 
 
 Since scrubbing still works I take it that balancing was aborted
 gracefully and thus the filesystem is still intact. This is on a
 ThinkPad T520 with an Intel SSD 320. I only wanted to reorder the
 metadata trees; I do not think it makes much sense to relocate data
 blocks on an SSD. Maybe reordering metadata blocks does not make much
 sense either, but I thought I should still report it.
 
 Thanks,

Hi,
I think I have a similar problem, but in my case there is lots of free
space available. So this might also be a bug.

My problem: I wanted to convert the data of my btrfs from RAID0 to
single. No matter whether I use the soft filter or not, the progress always
stops with 3GB of RAID0 remaining. The conversion is never completed, so
new files are always written to the RAID0 part of the data. If I do a
balance without special options, the data is converted back to RAID0.
This enospc error can't be correct because there is about 1 TB of space
available.

What I do:
# ./btrfs balance start -dconvert=single,soft /mnt/btrfs/
ERROR: error during balancing '/mnt/btrfs/' - No space left on device
There may be more info in syslog - try dmesg | tail

Relevant Dmesg:
[418912.485276] btrfs: relocating block group 11165392437248 flags 9
[418914.044328] btrfs: 1 enospc errors during balance

FS Information:
# ./btrfs filesystem show
Label: none  uuid: 0251aa44-4e39-4db5-b18d-ffc8e85042ab
Total devices 3 FS bytes used 2.24TB
devid1 size 1.82TB used 1.59TB path /dev/sdc1
devid3 size 931.51GB used 696.06GB path /dev/sdd1
devid2 size 931.51GB used 696.00GB path /dev/sdb1

Btrfs Btrfs v0.19-dirty

# ./btrfs filesystem df /mnt/btrfs/
Data, RAID0: total=3.00GB, used=3.00GB
Data: total=2.80TB, used=2.24TB
System, RAID1: total=64.00MB, used=328.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=75.00GB, used=2.94GB

# cat /proc/version
Linux version 3.4.0-rc5-amd64 (root@hermes) (gcc version 4.6.3 (Debian
4.6.3-1) ) #1 SMP Tue May 1 23:52:34 CEST 2012

So long,
Robi




Re: btrfs-raid10 - btrfs-raid1 confusion

2012-05-06 Thread cwillu
On Sun, May 6, 2012 at 9:23 AM, Hugo Mills h...@carfax.org.uk wrote:
 On Sun, May 06, 2012 at 04:48:48PM +0200, Alexander Koch wrote:
 So I added the two new disks to my existing filesystem

     $ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive

 and as the capacity reported by 'btrfs filesystem df' did not increase,

   It won't -- btrfs fi df reports what's been allocated out of the
 raw pool. To check that the disks have been added, you need btrfs fi
 show (no parameters).

Worth pointing out that plain old /bin/df will report the added space
(I believe), but without taking into account raid level for space that
isn't allocated to a block group yet.
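
So to get the complete picture it's worth looking at all three views side
by side (these are just the commands already used in this thread; as Hugo
notes, btrfs fi show takes no parameters):

    sudo btrfs filesystem show              # raw size/used per device
    sudo btrfs filesystem df /mnt/archive   # space already carved into chunks, per profile
    df -h /mnt/archive                      # plain df: free space, no RAID accounting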

 I got two questions now:

 1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
     in my case (2 x 2TiB, 2 x 1TiB disks)? Same degree of fault
     tolerance?

   There's the same degree of fault tolerance -- you're guaranteed to
 be able to lose one disk from the array and still have all your data.

   The data will be laid out in a different way on the disks, though.
 In your case, with four unevenly-sized disks, you will get the best
 usage out of the filesystem with RAID-1. With only 4 disks, RAID-10
 will run out of space when the smallest disk is full. (So, in your
 configuration, you'd still have only 2TB of space usable, rather
 defeating the point of having the new disks in the first place).

Well, compared to the 1TB he had before, but yes, still short of the
3TB of capacity he'd have with RAID-1.


Re: btrfs-raid10 - btrfs-raid1 confusion

2012-05-06 Thread Hugo Mills
On Sun, May 06, 2012 at 09:49:36PM +0200, Alexander Koch wrote:
 Thanks for clarifying things, Hugo :)
 
 It won't -- btrfs fi df reports what's been allocated out of the
  raw pool. To check that the disks have been added, you need btrfs fi
  show (no parameters).
 
 Okay, that gives me
 
 Label: 'archive'  uuid: 3818eedb-5379-4c40-9d3d-bd91f60d9094
 Total devices 4 FS bytes used 1.68TB
 devid4 size 931.51GB used 664.03GB path /dev/dm-10
 devid3 size 931.51GB used 664.03GB path /dev/dm-9
 devid2 size 1.82TB used 1.56TB path /dev/dm-8
 devid1 size 1.82TB used 1.56TB path /dev/dm-7
 
 so I conclude all disks are successfully assigned to the raw pool for my
 'archive' volume.

   Yes, that all looks good.

  You're not comparing the right numbers here. btrfs fi show shows
  the raw available unallocated space that the filesystem has to play
  with. btrfs fi df shows only what it's allocated so far, and how
  much of that allocation it has used -- in this case, because you've
  added new disks, there's quite a bit of free space unallocated still,
  so the numbers below won't add up to anything like 3TB.
 
 So how is the available space in the raw pool finally allocated to the
 usable area? Must I manually enlarge the filesystem by issuing a
 'btrfs fi resize max /mountpoint' (like assigning space of a VG to a
 logical volume in LVM) or is the space allocated automatically when the
 filesystem gets filled with data?

   The space is automatically allocated as it's needed.

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- Hey, Virtual Memory! Now I can have a *really big* ramdisk! ---   




[PATCH] Add missing printing newlines

2012-05-06 Thread Daniel J Blueman
Fix BTRFS messages to print a newline where there should be one.

Signed-off-by: Daniel J Blueman dan...@quora.org
---
 fs/btrfs/super.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
index c5f8fca..c99cb72 100644
--- a/fs/btrfs/super.c
+++ b/fs/btrfs/super.c
@@ -216,7 +216,7 @@ void __btrfs_abort_transaction(struct btrfs_trans_handle *trans,
 		       struct btrfs_root *root, const char *function,
 		       unsigned int line, int errno)
 {
-	WARN_ONCE(1, KERN_DEBUG "btrfs: Transaction aborted");
+	WARN_ONCE(1, KERN_DEBUG "btrfs: Transaction aborted\n");
 	trans->aborted = errno;
 	/* Nothing used. The other threads that have joined this
 	 * transaction may be able to continue. */
@@ -511,11 +511,11 @@ int btrfs_parse_options(struct btrfs_root *root, char *options)
 			btrfs_set_opt(info->mount_opt, ENOSPC_DEBUG);
 			break;
 		case Opt_defrag:
-			printk(KERN_INFO "btrfs: enabling auto defrag");
+			printk(KERN_INFO "btrfs: enabling auto defrag\n");
 			btrfs_set_opt(info->mount_opt, AUTO_DEFRAG);
 			break;
 		case Opt_recovery:
-			printk(KERN_INFO "btrfs: enabling auto recovery");
+			printk(KERN_INFO "btrfs: enabling auto recovery\n");
 			btrfs_set_opt(info->mount_opt, RECOVERY);
 			break;
 		case Opt_skip_balance:
@@ -1501,7 +1501,7 @@ static int btrfs_interface_init(void)
 static void btrfs_interface_exit(void)
 {
 	if (misc_deregister(&btrfs_misc) < 0)
-		printk(KERN_INFO "misc_deregister failed for control device");
+		printk(KERN_INFO "misc_deregister failed for control device\n");
 }
 
 static int __init init_btrfs_fs(void)
-- 
1.7.9.5
