Raid1 with 3 drives

2010-03-05 Thread Grady Neely
Hello, I have 3 1TB drives that I wanted to make a RAID1 system on. I issued the following command: mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd. It seems to have created the fs with no issue. When I do a df -h, I see that the available space is 3TB. Seems like with RAID1 I

Re: Raid1 with 3 drives

2010-03-05 Thread Josef Bacik
On Fri, Mar 05, 2010 at 01:28:00PM -0600, Grady Neely wrote: Hello, I have 3 1TB drives that I wanted to make a RAID1 system on. I issued the following command: mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc /dev/sdd. It seems to have created the fs with no issue. When I do a df

[RFC 0/2] removing hard coded 512 byte size from direct I/O.

2010-03-05 Thread jim owens
The following patches add a field that tracks the smallest device block size in the filesystem, and use it instead of the hard-coded 512-byte values in dio.c. I also implemented a simpler test for user misalignment on devices with larger block sizes. It passes fsx, but I have not tested

[PATCH 1/2] Btrfs: add multi-device minimum logical block size for direct I/O.

2010-03-05 Thread jim owens
In a multi-device filesystem, it is possible to have devices with different block sizes, such as 512, 1024, 2048, or 4096. Direct I/O read will check that the user request alignment is valid for at least one device. Signed-off-by: jim owens jim6...@gmail.com --- fs/btrfs/volumes.c | 24

[PATCH 2/2] Btrfs: change dio.c to use dio_min_blocksize instead of 512.

2010-03-05 Thread jim owens
Instead of hard-coding the minimum I/O alignment, use the smallest bdev_logical_block_size in the filesystem. Also change the alignment tests to determine the real minimum alignment of the user request, and make all EOF tail and device checks on that user block size. Signed-off-by: jim owens

Re: Raid1 with 3 drives

2010-03-05 Thread Chris Ball
Hi, df with btrfs is a loaded question. In the RAID1 case you are going to show 3TB of free space, but every time you use some space you are going to show 3 times the amount used (I think that's right). There are some patches forthcoming to make the reporting for RAID stuff

Oops while attempting to mount degraded multi-device raid1 data/metadata btrfs filesystem

2010-03-05 Thread Mike Fedyk
Hi, I get an oops with 2.6.33-0.46.rc8.git1.fc13.x86_64 while trying to mount a degraded raid1 btrfs filesystem. Here are the steps I performed to get to this stage. - Install fedora12 btrfs / on sda2 - mkfs.btrfs -m raid1 -d raid1 /dev/sda7 - cp -a from sda2 to sda7 - reboot into sda7 as / -

Re: Raid1 with 3 drives

2010-03-05 Thread Josef Bacik
On Fri, Mar 05, 2010 at 02:29:56PM -0600, Grady Neely wrote: Thank you! One more question: since I have three devices in a RAID1 pool, can it survive 2 drive failures? Yes, though you won't be able to remove more than 1 at a time (since it wants you to keep at least two disks around).

Re: Raid1 with 3 drives

2010-03-05 Thread Grady Neely
Thank you! One more question: since I have three devices in a RAID1 pool, can it survive 2 drive failures? On Mar 5, 2010, at 1:58 PM, Chris Ball wrote: Hi, df with btrfs is a loaded question. In the RAID1 case you are going to show 3TB of free space, but every time you use some space

Re: Raid1 with 3 drives

2010-03-05 Thread Bart Noordervliet
On Fri, Mar 5, 2010 at 21:31, Josef Bacik jo...@redhat.com wrote: Since I have three devices in a RAID1 pool, can it survive 2 drive failures? Yes, though you won't be able to remove more than 1 at a time (since it wants you to keep at least two disks around). Thanks, Josef Hmm, I would

[PATCH] Btrfs: make df be a little bit more understandable

2010-03-05 Thread Josef Bacik
The way we report df usage is confusing for everybody, including some other utilities (bacula for one). So this patch makes df a little bit more understandable. First we make used actually count the total amount of used space in all space infos. This will give us a real view of how much

Re: Raid1 with 3 drives

2010-03-05 Thread Mike Fedyk
On Fri, Mar 5, 2010 at 1:49 PM, Bart Noordervliet b...@noordervliet.net wrote: On Fri, Mar 5, 2010 at 21:31, Josef Bacik jo...@redhat.com wrote: Since I have three devices in a RAID1 pool, can it survive 2 drive failures? Yes, though you won't be able to remove more than 1 at a time (since it

Re: Raid1 with 3 drives

2010-03-05 Thread Hubert Kario
On Friday 05 March 2010 23:13:54 Mike Fedyk wrote: On Fri, Mar 5, 2010 at 1:49 PM, Bart Noordervliet b...@noordervliet.net wrote: Maybe it's worth to consider leaving the burdened raid* terminology behind and name the btrfs redundancy modes more clearly by what they do. For instance -d