RAID-10 arrays built with btrfs & md report 2x difference in available size?

2010-01-23 Thread 0bo0
I created a btrfs RAID-10 array across 4 drives:

 mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd
 btrfs-show
Label: TEST  uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
Total devices 4 FS bytes used 28.00KB
devid 1 size 931.51GB used 2.03GB path /dev/sda
devid 2 size 931.51GB used 2.01GB path /dev/sdb
devid 4 size 931.51GB used 2.01GB path /dev/sdd
devid 3 size 931.51GB used 2.01GB path /dev/sdc

@ mount,

 mount /dev/sda /mnt
 df -H | grep /dev/sda
/dev/sda        4.1T   29k   4.1T   1% /mnt

for RAID-10 across 4 drives, shouldn't the reported/available size be
1/2 x 4TB ~ 2TB?

e.g., using mdadm to build a RAID-10 array across the same drives,

 mdadm -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/sd[abcd]1
 pvcreate /dev/md0
pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/md0        lvm2 --   1.82T  1.82T
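
doing the units math myself (assuming the btrfs/LVM tools report binary
GiB/TiB while df -H reports decimal):

  4 x 931.51 GiB = 3.64 TiB ~ 4.0 TB decimal   <- df's total for btrfs
  3.64 TiB / 2   = 1.82 TiB ~ 2.0 TB decimal   <- pvs for md RAID-10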

is the difference in available array space real, an artifact, or a
misunderstanding on my part?

thanks.


mount after reboot of btrfs RAID-10 fails with "btrfs: failed to read the system array on sda"

2010-01-23 Thread 0bo0
after a simple reboot,

btrfs-show
Label: TEST uuid: 2ac85206-2d88-47d7-a1e7-a93d80b199f8
Total devices 4 FS bytes used 28.00KB
devid 1 size 931.51GB used 2.03GB path /dev/sda
devid 2 size 931.51GB used 2.01GB path /dev/sdb
devid 3 size 931.51GB used 2.01GB path /dev/sdc
devid 4 size 931.51GB used 2.01GB path /dev/sdd

but,

mount /dev/sda /mnt
mount: wrong fs type, bad option, bad superblock on /dev/sda,
   missing codepage or helper program, or other error
   In some cases useful info is found in syslog - try
   dmesg | tail  or so

where,

tail -f /var/log/messages,

Jan 23 21:49:23 test kernel: [   94.949335] device fsid f9452f77524a701a-28bb2c0e9bab5a99 devid 1 transid 17 /dev/sda
Jan 23 21:49:23 test kernel: [   94.951716] btrfs: failed to read the system array on sda
Jan 23 21:49:23 test kernel: [   94.952748] btrfs: open_ctree failed

mkfs.btrfs -m raid10 -d raid10 /dev/sd[abcd]

WARNING! - Btrfs Btrfs v0.19 IS EXPERIMENTAL
WARNING! - see http://btrfs.wiki.kernel.org before using

adding device /dev/sdb id 2
adding device /dev/sdc id 3
adding device /dev/sdd id 4
fs created label (null) on /dev/sda
nodesize 4096 leafsize 4096 sectorsize 4096 size 3.64TB
Btrfs Btrfs v0.19

mount /dev/sda /mnt
df -H | grep -i sda
/dev/sda        4.1T   29k   4.1T   1% /mnt

fyi,

lsb_release -ri
  Distributor ID: SUSE LINUX
  Release:        11.2
uname -a
  Linux test 2.6.31.8-0.1-xen #1 SMP 2009-12-15 23:55:40 +0100 x86_64
x86_64 x86_64 GNU/Linux
rpm -qa | grep btr
  btrfsprogs-0.19-10.1.x86_64


a bug?


Re: when/why to use different raid values for btrfs data & metadata?

2010-01-24 Thread 0bo0
hi

On Sun, Jan 24, 2010 at 3:28 AM, RK  wrote:
> try this article "Linux Don't Need No Stinkin' ZFS: BTRFS Intro &
> Benchmarks"
> http://www.linux-mag.com/id/7308/3/
> , there is a benchmark table and speed analysis (very informative), but
> all the benchmarks are done with same -m and -d mkfs.btrfs option

that's one of the articles i've read.  it does mention that you can
define data/metadata with different RAID levels, but afaict it doesn't
(?) say anything about when/why you would ... which is what i'm unclear
about.

thanks!


Re: mount after reboot of btrfs RAID-10 fails with "btrfs: failed to read the system array on sda"

2010-01-24 Thread 0bo0
hi

On Sun, Jan 24, 2010 at 12:02 AM, Goffredo Baroncelli
 wrote:
> On Sunday 24 January 2010, 0bo0 wrote:
>> after a simple reboot,
>                 ^^
> Have you run
>
>  # btrfsctl -a
>
> before mounting the filesystem? This command scans all the block devices
> searching for btrfs volumes, so when you mount one device of an array the
> system is able to retrieve the others.

that does the trick!  and, i found/understood the reference in the
wiki. thanks.

how would that, then, get handled for automount @ boot via fstab?  i
guess that the scan needs to get done as well ...
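
one way i could imagine wiring it up (a sketch only, assuming an init
script that runs before local filesystems are mounted):

  # e.g. in /etc/init.d/boot.local or equivalent, before mount -a:
  btrfsctl -a    # scan all block devices and register btrfs members
  mount -a       # the array's fstab entry should now mount cleanly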


Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?

2010-01-24 Thread 0bo0
noticing from above

>>  ... size 931.51GB used 2.03GB ...

'used' more than the 'size'?

more confused ...


Re: mount after reboot of btrfs RAID-10 fails with "btrfs: failed to read the system array on sda"

2010-01-24 Thread 0bo0
On Sun, Jan 24, 2010 at 3:35 PM, Leszek Ciesielski  wrote:
>> how would that, then, get handled for automount @ boot via fstab?  i
>> guess that the scan needs to get done as well ...
>
> Please see this discussion:
> http://thread.gmane.org/gmane.comp.file-systems.btrfs/4126/focus=4187

Thanks for the reference.

@ that link,

  "Would this option ["mount -t btrfs -o device=/dev/sdb2 /dev/sda2
  /mnt"] work on boot, bypasing the need for "btrfsctl -a" to mount a
  multi-device filesystem?"


how would that translate, in my case, to an fstab entry?

   /dev/sda  /mnt  btrfs  device=/dev/sdb,device=/dev/sdc,device=/dev/sdd  1 2

?
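
(and presumably the same options can be tested from the command line
first, before touching fstab - a sketch for this four-disk case:

  mount -t btrfs -o device=/dev/sdb,device=/dev/sdc,device=/dev/sdd /dev/sda /mnt
)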

thanks


Re: mount after reboot of btrfs RAID-10 fails with "btrfs: failed to read the system array on sda"

2010-01-25 Thread 0bo0
On Mon, Jan 25, 2010 at 10:19 AM, Goffredo Baroncelli
 wrote:
>>    /dev/sda    /mnt    btrfs  
>> device=/dev/sdb,device=/dev/sdc,device=/dev/sdd     1 2
>
> Yes; it works for me.

thanks for the confirmation.

verifying, with that in /etc/fstab, after boot i see,

  mount | grep sda
/dev/sda on /mnt type btrfs
(rw,device=/dev/sdb,device=/dev/sdc,device=/dev/sdd)

which apparently worked.

still, the RAID size is wrong ...

 df -H | grep sda
  /dev/sda        4.1T   29k   4.1T   1% /mnt

but that's a different thread.

thanks again.


Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?

2010-01-29 Thread 0bo0
> For me, it looks as if 2.03GB is way smaller than 931.51GB (2 << 931), no? 
> Everything seems to be fine here.

gagh!  i "saw" TB, not GB.  8-/

> And regarding your original mail: it seems that df is still lying about the 
> size of the btrfs fs, check 
> http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg00758.html

it is, and reading -> "df is lying.  The total bytes in the FS include
all 4 drives.  I need to fix up the math for the total available
space.", it looks like it's under control.  thx!


Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?

2010-01-29 Thread 0bo0
On Fri, Jan 29, 2010 at 3:46 PM, jim owens  wrote:
> but it is the only method
> that can remain accurate under the mixed raid modes possible
> on a per-file-basis in btrfs.

can you clarify, then, the intention/goal behind cmason's

"df is lying.  The total bytes in the FS include all 4 drives.  I need to
fix up the math for the total available space."

Is the goal NOT to accurately represent the actual available space?
Seems rather odd that users are simply expected to know/accept that
"available space" in btrfs RAID-10 != "available space" in md RAID-10 ...


semantics in btrfs multi-device (RAID) mount-by-disk-label ?

2010-02-05 Thread 0bo0
i created an array:

 mkfs.btrfs -L TEST -m raid10 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd

btrfs-show
Label: TEST  uuid: 85aa9ac8-0089-4dd3-b8b2-3c0cbb96c924
Total devices 4 FS bytes used 28.00KB
devid 3 size 931.51GB used 2.01GB path /dev/sdc
devid 4 size 931.51GB used 2.01GB path /dev/sdd
devid 2 size 931.51GB used 2.01GB path /dev/sdb
devid 1 size 931.51GB used 2.03GB path /dev/sda

this,

 /dev/sda  /mnt/TEST  btrfs  compress,device=/dev/sdb,device=/dev/sdc,device=/dev/sdd  1 2

in /etc/fstab, mounts the array on boot.

is it correct that it does NOT matter which device I actually mount,
specifying the 'other' devices in the array as options?

i.e.,

 /dev/sda ... device=/dev/sdb,device=/dev/sdc,device=/dev/sdd ...
 /dev/sdb ... device=/dev/sda,device=/dev/sdc,device=/dev/sdd ...
 /dev/sdc ... device=/dev/sda,device=/dev/sdb,device=/dev/sdd ...
 /dev/sdd ... device=/dev/sda,device=/dev/sdb,device=/dev/sdc ...

would all be equivalent?

i understand (http://marc.info/?l=btrfs-devel&m=121302854724031&w=2)
that i can also mount by label.

ls -al /dev/disk/by-label/TEST
 lrwxrwxrwx 1 root root 9 2010-02-05 15:56 /dev/disk/by-label/TEST -> ../../sdd


 /dev/disk/by-label/TEST  /mnt/TEST  btrfs  compress,device=/dev/sda,device=/dev/sdb,device=/dev/sdc  1 2

apparently, at filesystem creation, the TEST label gets symlinked
(arbitrarily?) to /dev/sdd, so as device= options i suppose i add the
"other" devices, /dev/sd[abc].

in this instance, given the symlink to /dev/sdd, do I still have the
option to use any combination of "other three" devices, or am I fixed
to the symlinked /dev/sdd?
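
(fwiw, a quick check i'd try - assuming a blkid new enough to recognize
btrfs - since every member device carries the label in its superblock:

  # should list all four members; which one /dev/disk/by-label points at
  # is presumably just whichever member udev happened to process last
  blkid -t LABEL=TEST
)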


Re: when/why to use different raid values for btrfs data & metadata?

2010-02-05 Thread 0bo0
anyone on when/why to use different RAID geometries for data & metadata?

On Sun, Jan 24, 2010 at 8:38 AM, 0bo0 <0.bugs.onl...@gmail.com> wrote:
> hi
>
> On Sun, Jan 24, 2010 at 3:28 AM, RK  wrote:
>> try this article "Linux Don't Need No Stinkin' ZFS: BTRFS Intro &
>> Benchmarks"
>> http://www.linux-mag.com/id/7308/3/
>> , there is a benchmark table and speed analysis (very informative), but
>> all the benchmarks are done with same -m and -d mkfs.btrfs option
>
> that's one of the articles i've read.  it does mention that you can
> define data/metadata with different RAID levels, but afaict it doesn't
> (?) say anything about when/why you would ... which is what i'm unclear
> about.
>
> thanks!
>


does btrfs have RAID I/O throughput (un)limiting sysctls, similar to md?

2010-02-05 Thread 0bo0
i've a 4 drive array connected via a PCIe SATA card.

per OS (opensuse) default, md RAID I/O performance was being limited by,

  cat /proc/sys/dev/raid/speed_limit_min
1000
  cat /proc/sys/dev/raid/speed_limit_max
200000

changing,

  echo "dev.raid.speed_limit_min=10" >> /etc/sysctl.conf
  echo "dev.raid.speed_limit_max=60" >> /etc/sysctl.conf

enabled full/best I/O throughput.
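
(the same values can presumably be applied at runtime too, without a
reboot - mirroring the sysctl.conf lines above:

  sysctl -w dev.raid.speed_limit_min=100000
  sysctl -w dev.raid.speed_limit_max=600000
)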

does btrfs have a similar construct that I need to set/tweak for
maximum I/O throughput?


Re: does btrfs have RAID I/O throughput (un)limiting sysctls, similar to md?

2010-02-06 Thread 0bo0
On Sat, Feb 6, 2010 at 5:10 AM, Daniel J Blueman
 wrote:
> These proc entries affect just array reconstruction, not general I/O
> performance/throughput, so they affect just an edge case of applications
> requiring maximum-latency/minimum-throughput guarantees.

although i'd 1st seen the perf hit at the (re)construction stage, i
didn't recognize that the sysctls were limited to that case.

so, iiuc, btrfs has no such issues?

thanks for clarifying!


Re: when/why to use different raid values for btrfs data & metadata?

2010-02-06 Thread 0bo0
On Sat, Feb 6, 2010 at 5:16 AM, Goffredo Baroncelli  wrote:
>> anyone on when/why to use different RAID geometries for data & metadata?
>>
>
> I expect that the sizes of data and metadata differ by several orders
> of magnitude, so I can choose different trade-offs between
> space/speed/reliability for data and/or metadata.
>
> If I need speed I can put the metadata in a "fast" raid (like raid10)
> and put the data in a slower raid (like raid6).
> Or, if I can tolerate the loss of data, I can put the metadata in raid1
> and the data in raid0. A fault of a disk may lead to loss of data, but
> not to loss of the metadata (the file-system remains fully working).

sounds like there are no further, subtle considerations beyond the
usual "which RAID" ones.  then, i suppose that as long as i find
RAID-10 "good enough" (as it has been in the md case), there's no
compelling reason NOT to place both data and metadata in RAID-10
constructs in btrfs.
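
(for reference, a concrete sketch of the raid1-metadata/raid0-data
trade-off described above, with device names as placeholders:

  # metadata mirrored for safety; data striped for speed, no redundancy
  mkfs.btrfs -m raid1 -d raid0 /dev/sda /dev/sdb /dev/sdc /dev/sdd
)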

thanks.


Re: RAID-10 arrays built with btrfs & md report 2x difference in available size?

2010-02-07 Thread 0bo0
On Sat, Jan 30, 2010 at 7:36 AM, jim owens  wrote:
> So Josef Bacik has sent patches to btrfs and btrfs-progs that
> allow you to see raid-mode data and metadata adjusted values
> with btrfs-ctrl -i instead of using "df".
>
> These patches have not been merged yet so you will have to pull
> them and apply yourself.

Where exactly can these be pulled from?  Is there a separate git tree?
I just built from the btrfs & btrfs-progs heads, and still do not see
these additional features.

Thanks.