Re: btrfs-raid10 <-> btrfs-raid1 confusion

2012-05-06 Thread Alexander Koch
Thanks for clarifying things, Hugo :)

> It won't -- "btrfs fi df" reports what's been allocated out of the
> raw pool. To check that the disks have been added, you need "btrfs fi
> show" (no parameters).

Okay, running 'btrfs fi show' gives me

Label: 'archive'  uuid: 3818eedb-5379-4c40-9d3d-bd91f60d9094
        Total devices 4 FS bytes used 1.68TB
        devid    4 size 931.51GB used 664.03GB path /dev/dm-10
        devid    3 size 931.51GB used 664.03GB path /dev/dm-9
        devid    2 size 1.82TB used 1.56TB path /dev/dm-8
        devid    1 size 1.82TB used 1.56TB path /dev/dm-7

so I conclude that all four disks have been successfully added to the
raw pool of my 'archive' volume.
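
(If I read that output correctly, the 'used' column is the raw space
already allocated into chunks on each device, so e.g. devid 4 should
still have roughly 931.51 - 664.03 ≈ 267 GB of unallocated raw space.)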


> You're not comparing the right numbers here. "btrfs fi show" shows
> the raw available unallocated space that the filesystem has to play
> with. "btrfs fi df" shows only what it's allocated so far, and how
> much of the allocation it has used -- in this case, because you've
> added new disks, there's quite a bit of free space unallocated still,
> so the numbers below won't add up to anything like 3TB.

So how does the available space in the raw pool end up being allocated
to the usable area? Do I have to enlarge the filesystem manually with
'btrfs fi resize max /mountpoint' (like assigning VG space to a logical
volume in LVM), or is the space allocated automatically as the
filesystem fills with data?
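
(For reference, the resize invocation I have in mind would be something
like this for my mountpoint:

$ btrfs filesystem resize max /mnt/archive

-- assuming that step is even necessary here.)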


Regards,

Alex


btrfs-raid10 <-> btrfs-raid1 confusion

2012-05-06 Thread Alexander Koch
Greetings,

until yesterday I was running a btrfs filesystem across two 2.0 TiB
disks in RAID1 mode for both metadata and data without any problems.

As space was getting short I wanted to extend the filesystem with two
additional drives I had lying around, both 1.0 TiB in size.

Knowing little about the btrfs RAID implementation, I thought I had to
switch to RAID10 mode, which I was told was currently not possible (and
later found out that it indeed is).
Then I read this [1] mailing list post basically saying that, in the
special case of four disks, btrfs-raid1 behaves exactly like RAID10.

So I added the two new disks to my existing filesystem

$ btrfs device add /dev/sde1 /dev/sdf1 /mnt/archive

and as the capacity reported by 'btrfs filesystem df' did not increase,
I started a balancing run:

$ btrfs filesystem balance start /mnt/archive
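
In the meantime I am keeping an eye on its progress with something along
the lines of

$ btrfs balance status /mnt/archive

(assuming my kernel and btrfs-progs are recent enough for that to work).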


While waiting for the balancing run to finish (which is taking much
longer than I expected; it is still running) I found out that as of
kernel 3.3 changing the RAID level (a.k.a. restriping) is possible
after all: [2].
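
If I understand [2] correctly, such a restripe to RAID10 would then be a
balance with convert filters, roughly along these lines (assuming a 3.3
kernel and matching btrfs-progs):

$ btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/archive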

I got two questions now:

1.) Is there really no difference between btrfs-raid1 and btrfs-raid10
in my case (2 x 2TiB, 2 x 1TiB disks)? Same degree of fault
tolerance?

2.) Summing up the capacities reported by 'btrfs filesystem df' I only
get ~2.25 TiB for my filesystem; is that a realistic net size for
3 TiB gross? (rough arithmetic below the df output)

$ btrfs filesystem df /mnt/archive
Data, RAID1: total=2.10TB, used=1.68TB
Data: total=8.00MB, used=0.00
System, RAID1: total=40.00MB, used=324.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=112.50GB, used=3.21GB
Metadata: total=8.00MB, used=0.00
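
(My rough arithmetic for question 2: summing the 'total' values above
gives about 2.10 TB data + 0.11 TB metadata ≈ 2.2 TB allocated as RAID1,
whereas the raw disks add up to 2 + 2 + 1 + 1 = 6 TiB, which I would
naively expect to yield about 3 TiB of usable mirrored space.)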


Thanks in advance for any advice!

Regards,

lynix


[1] http://www.spinics.net/lists/linux-btrfs/msg15867.html
[2] https://lkml.org/lkml/2012/1/17/381