Re: btrfs raid56 Was:

2014-05-03 Thread Jaap Pieroen
Duncan 1i5t5.duncan at cox.net writes:

  - How can I salvage this situation and convert to raid1?
  
  Unfortunately I have few spare drives left. Not enough to contain
  4.7TiB of data... :(
 
 [OK, this goes a bit philosophical, but it's something to think about...]
 
 ... 

 Anyway, at least for now you should still be able to recover most of the 
 data using skip_balance or read-only mounting.  My guess is that if push 
 comes to shove you can either prioritize that data and give up a TiB or 
 two if it comes to that, or scrimp here and there, putting a few gigs on 
 the odd blank DVD you may have lying around or downgrading a few meals to 
 ramen noodles to afford the $100 or so shipped that pricewatch says a new 
 3 TB drive costs these days.  I've been there, and have found that if I 
 think I need it bad enough, that $100 has a way of appearing, like I said 
 even if I'm noodling it for a few meals to do it.
 

Thanks for the philosophical response, both for telling me I can't simply
convert and for reminding me that this was an outcome I was prepared
to face. :) You are right: when push comes to shove, it's
data I'm prepared to lose.

I'm going to hedge my bets and convince the Mrs to let me invest in
some new hardware.
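
For reference, a minimal sketch of the recovery-mount approach Duncan mentions
above (the device node, mount point and destination path are placeholders, not
taken from this thread):

# mount read-only and skip any interrupted balance so the data stays reachable
mount -o ro,skip_balance /dev/sdX1 /mnt/recovery
# then copy the important data off, for example:
rsync -aHAX /mnt/recovery/important/ /some/other/disk/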



Help with csum failed errors

2014-05-03 Thread Paul Jones
Hi all,
I'm getting some strange errors and I need some help diagnosing where the
problem is.
You can see from the log below that the error is csum failed ino 5641.
This is a new SSD that is running in raid1. When I first noticed the error (on
both drives) I copied all the data off the drives, reformatted,
and copied the data back. I was running 3.13.11 and upgraded to 3.14.2 just
in case there was a bugfix.
I still had the error on one drive, so I converted the array back to single,
ran dd if=/dev/zero of=/dev/sdd1,
re-added sdd1, and rebalanced. No error was reported.
I'm also running the root and swap partitions on the same physical drives and
they are OK (root is btrfs as well), which makes me suspect
that the SSD is fine. I did a scrub on both drives and that found no errors.
What do I try next?
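
For reference, a hedged sketch of the convert / wipe / re-add cycle described
above (the /mnt mount point and the exact balance filters are assumptions, not
the poster's original commands):

# convert away from raid1 so the suspect device can be removed
btrfs balance start -dconvert=single -mconvert=single /mnt
btrfs device delete /dev/sdd1 /mnt
# wipe the device
dd if=/dev/zero of=/dev/sdd1 bs=1M
# re-add it and convert back to raid1
btrfs device add /dev/sdd1 /mnt
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt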

[44778.232540] BTRFS info (device sdd1): relocating block group 333996621824 
flags 1
[44780.458408] BTRFS info (device sdd1): found 339 extents
[44783.494674] BTRFS info (device sdd1): found 339 extents
[44783.546293] BTRFS info (device sdd1): relocating block group 331849138176 
flags 1
[44786.143536] BTRFS info (device sdd1): found 164 extents
[44789.256777] BTRFS info (device sdd1): found 164 extents
[49217.915725] kvm: zapping shadow pages for mmio generation wraparound
[141968.885166] BTRFS error (device sdd1): csum failed ino 5641 off 54112157696 
csum 2741395493 expected csum 3151521372
[141968.885216] BTRFS error (device sdd1): csum failed ino 5641 off 54412632064 
csum 3489516372 expected csum 2741395493
[141968.887816] BTRFS error (device sdd1): csum failed ino 5641 off 27601571840 
csum 1878206089 expected csum 3203096954
[141969.887794] BTRFS error (device sdd1): csum failed ino 5641 off 54112157696 
csum 2741395493 expected csum 3151521372
[141970.895408] BTRFS error (device sdd1): csum failed ino 5641 off 7849897984 
csum 2833474655 expected csum 2585631118
[141970.895437] BTRFS error (device sdd1): csum failed ino 5641 off 8398065664 
csum 2001723841 expected csum 2913537154
[141970.895450] BTRFS error (device sdd1): csum failed ino 5641 off 10395713536 
csum 2001723841 expected csum 2833474655
[141971.895529] BTRFS error (device sdd1): csum failed ino 5641 off 10395713536 
csum 2913537154 expected csum 2833474655
[141971.895541] BTRFS error (device sdd1): csum failed ino 5641 off 7849897984 
csum 2913537154 expected csum 2585631118
[141972.894867] BTRFS error (device sdd1): csum failed ino 5641 off 10395713536 
csum 369396853 expected csum 2833474655
[145579.088097] BTRFS error (device sdd1): csum failed ino 5641 off 54317748224 
csum 1538824619 expected csum 2260594561
[145579.088110] BTRFS error (device sdd1): csum failed ino 5641 off 54412632064 
csum 257502146 expected csum 3543777931
[145580.087459] BTRFS error (device sdd1): csum failed ino 5641 off 54317748224 
csum 3543777931 expected csum 2260594561
[171255.071570] 3w-9xxx: scsi0: AEN: INFO (0x04:0x0029): Verify started:unit=2.
[181556.516699] BTRFS error (device sdd1): csum failed ino 5641 off 54317748224 
csum 1227121981 expected csum 4177569466
[181557.517271] BTRFS error (device sdd1): csum failed ino 5641 off 53179822080 
csum 4130109553 expected csum 1722742324
[188752.222042] BTRFS error (device sdd1): csum failed ino 5641 off 54317748224 
csum 50100434 expected csum 4177569466
[200568.511268] 3w-9xxx: scsi0: AEN: INFO (0x04:0x002B): Verify 
completed:unit=2.
[202095.611465] BTRFS error (device sdd1): csum failed ino 5641 off 12205871104 
csum 910010336 expected csum 2948706421
[203143.317917] BTRFS error (device sdd1): csum failed ino 5641 off 55994003456 
csum 648604663 expected csum 2485194978

vm-server ~ # btrfs scrub stat /dev/sdd1
scrub status for 9baf63f7-a9d6-456c-8fdd-1a8fdb21958f
scrub started at Sat May  3 17:59:47 2014 and finished after 659 seconds
total bytes scrubbed: 273.64GiB with 0 errors

vm-server ~ # btrfs scrub stat /
scrub status for 58d27dbd-7c1e-4ef7-8d43-e93df1537b08
scrub started at Sat May  3 19:22:58 2014 and finished after 49 seconds
total bytes scrubbed: 18.49GiB with 0 errors

vm-server ~ # uname -a
Linux vm-server 3.14.2-gentoo #1 SMP PREEMPT Thu May 1 00:02:32 EST 2014 x86_64 
Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz GenuineIntel GNU/Linux

vm-server ~ #   btrfs --version
Btrfs v3.14.1

vm-server ~ #   btrfs fi show
Label: 'Root'  uuid: 58d27dbd-7c1e-4ef7-8d43-e93df1537b08
Total devices 2 FS bytes used 18.49GiB
devid    3 size 40.00GiB used 35.03GiB path /dev/sde3
devid    4 size 40.00GiB used 35.03GiB path /dev/sdd3

Label: 'storage'  uuid: df3d4a9c-ed6c-4867-8991-a018276f6f3c
Total devices 2 FS bytes used 1.13TiB
devid    5 size 2.69TiB used 1.16TiB path /dev/sdb1
devid    6 size 2.69TiB used 1.16TiB path /dev/sda1

Label: 'backup'  uuid: b24d05da-6b0a-4ab0-8f2f-21ea5416e9e9
Total devices 3 FS bytes used 899.20GiB
devid    3 size 901.92GiB used 616.03GiB path /dev/sdf1
devid    4 size 892.25GiB 

Re: csum failed that was not detected by scrub

2014-05-03 Thread Marc MERLIN
On Fri, May 02, 2014 at 10:20:03AM +, Duncan wrote:
 The raid5/6 page (which I didn't otherwise see conveniently linked, I dug 

It's linked off
https://btrfs.wiki.kernel.org/index.php/FAQ#Can_I_use_RAID.5B56.5D_on_my_Btrfs_filesystem.3F

 it out of the recent changes list since I knew it was there from on-list 
 discussion):
 
 https://btrfs.wiki.kernel.org/index.php/RAID56
 
 
 @ Marc or Hugo or someone with a wiki account:  Can this be more visibly 

@ Marc relies a lot on me actually seeing this, never mind at the bottom of a
message when my inbox is over 900 and I'm boarding a plane in a few hours ;)

More seriously, please Cc me (and I'd say generally others) if you're trying
to get their attention. I typically also put a one-liner at the top to tell
the Cc'ed person to look for the bit with their name.

 linked from the user-docs contents, added to the user docs category list, 
 and probably linked from at least the multiple devices and (for now) the 
 gotchas pages?

I added it here:
https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices
Note that it's the first result on Google for raid56.
Also, searching for raid5 btrfs brings you to
https://btrfs.wiki.kernel.org/index.php/FAQ#Case_study:_btrfs-raid_5.2F6_versus_MD-RAID_5.2F6
which also links to the raid56 page.

Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/  


Re: Help with space

2014-05-03 Thread Austin S Hemmelgarn
On 05/02/2014 03:21 PM, Chris Murphy wrote:
 
 On May 2, 2014, at 2:23 AM, Duncan 1i5t5.dun...@cox.net wrote:
 
 Something tells me btrfs replace (not device replace, simply
 replace) should be moved to btrfs device replace…
 
 The syntax for btrfs device is different though; replace is like
 balance: btrfs balance start and btrfs replace start. And you can
 also get a status on it. We don't (yet) have options to stop,
 start, resume, which could maybe come in handy for long rebuilds
 and a reboot is required (?) although maybe that just gets handled
 automatically: set it to pause, then unmount, then reboot, then
 mount and resume.
 
 Well, I'd say two copies if it's only two devices in the raid1...
 would be true raid1.  But if it's say four devices in the raid1,
 as is certainly possible with btrfs raid1, that if it's not
 mirrored 4-way across all devices, it's not true raid1, but
 rather some sort of hybrid raid,  raid10 (or raid01) if the
 devices are so arranged, raid1+linear if arranged that way, or
 some form that doesn't nicely fall into a well defined raid level
 categorization.
 
 Well, md raid1 is always n-way. So if you use -n 3 and specify
 three devices, you'll get 3-way mirroring (3 mirrors). But I don't
 know any hardware raid that works this way. They all seem to be
 raid 1 is strictly two devices. At 4 devices it's raid10, and only
 in pairs.
 
 Btrfs raid1 with 3+ devices is unique as far as I can tell. It is
 something like raid1 (2 copies) + linear/concat. But that
 allocation is round robin. I don't read code but based on how a 3
 disk raid1 volume grows VDI files as it's filled it looks like 1GB
 chunks are copied like this
Actually, MD RAID10 can be configured to work almost the same with an
odd number of disks, except it uses (much) smaller chunks, and it does
more intelligent striping of reads.
 
 Disk1  Disk2  Disk3
 1      1      2
 3      2      3
 4      4      5
 6      5      6
 7      7      8
 9      8      9
 
 So 1 through 9 each represent a 1GB chunk. Disk 1 and 2 each have a
 chunk 1; disk 2 and 3 each have a chunk 2, and so on. Total of 9GB
 of data taking up 18GB of space, 6GB on each drive. You can't do
 this with any other raid1 as far as I know. You do definitely run
 out of space on one disk first though because of uneven metadata to
 data chunk allocation.
 
 Anyway I think we're off the rails with raid1 nomenclature as soon
 as we have 3 devices. It's probably better to call it replication,
 with an assumed default of 2 replicates unless otherwise
 specified.
 
 There's definitely a benefit to a 3 device volume with 2
 replicates, efficiency wise. As soon as we go to four disks 2
 replicates it makes more sense to do raid10, although I haven't
 tested odd device raid10 setups so I'm not sure what happens.
 
 
 Chris Murphy
 
 



Re: Help with space

2014-05-03 Thread Chris Murphy

On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:

 On 05/02/2014 03:21 PM, Chris Murphy wrote:
 
 On May 2, 2014, at 2:23 AM, Duncan 1i5t5.dun...@cox.net wrote:
 
 Something tells me btrfs replace (not device replace, simply
 replace) should be moved to btrfs device replace…
 
 The syntax for btrfs device is different though; replace is like
 balance: btrfs balance start and btrfs replace start. And you can
 also get a status on it. We don't (yet) have options to stop,
 start, resume, which could maybe come in handy for long rebuilds
 and a reboot is required (?) although maybe that just gets handled
 automatically: set it to pause, then unmount, then reboot, then
 mount and resume.
 
 Well, I'd say two copies if it's only two devices in the raid1...
 would be true raid1.  But if it's say four devices in the raid1,
 as is certainly possible with btrfs raid1, that if it's not
 mirrored 4-way across all devices, it's not true raid1, but
 rather some sort of hybrid raid,  raid10 (or raid01) if the
 devices are so arranged, raid1+linear if arranged that way, or
 some form that doesn't nicely fall into a well defined raid level
 categorization.
 
 Well, md raid1 is always n-way. So if you use -n 3 and specify
 three devices, you'll get 3-way mirroring (3 mirrors). But I don't
 know any hardware raid that works this way. They all seem to be
 raid 1 is strictly two devices. At 4 devices it's raid10, and only
 in pairs.
 
 Btrfs raid1 with 3+ devices is unique as far as I can tell. It is
 something like raid1 (2 copies) + linear/concat. But that
 allocation is round robin. I don't read code but based on how a 3
 disk raid1 volume grows VDI files as it's filled it looks like 1GB
 chunks are copied like this
 Actually, MD RAID10 can be configured to work almost the same with an
 odd number of disks, except it uses (much) smaller chunks, and it does
 more intelligent striping of reads.

The efficiency of storage depends on the file system placed on top. Btrfs will 
allocate space exclusively for metadata, and it's possible much of that space 
either won't or can't be used. So ext4 or XFS on md probably is more efficient 
in that regard; but then Btrfs also has compression options so this clouds the 
efficiency analysis.
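
As a hedged aside, the allocated-versus-used split being described can be
inspected per profile; the mount point below is a placeholder, not taken from
this thread:

# 'total' is space allocated to chunks of that type, 'used' is what those
# chunks actually contain; a large gap on the Metadata line is space an
# ext4/XFS-on-md setup would not have set aside
btrfs filesystem df /mnt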

For striping of reads, there is a note in man 4 md about the raid10 layouts:
"The 'far' arrangement can give sequential read performance equal to that of a
RAID0 array, but at the cost of reduced write performance." The default layout
for raid10 is near 2. I think the read performance is a wash with defaults,
while md reads are better and writes are worse with the far layout.
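
For reference, a sketch of how the near/far layouts mentioned above are chosen
when creating an md raid10 array (device names and member count are
illustrative, not from this thread):

# default 'near' layout, 2 copies (equivalent to --layout=n2)
mdadm --create /dev/md0 --level=10 --raid-devices=4 --layout=n2 /dev/sd[b-e]1
# 'far' layout, 2 copies: better sequential reads, slower writes
mdadm --create /dev/md0 --level=10 --raid-devices=4 --layout=f2 /dev/sd[b-e]1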

I'm not sure how Btrfs performs reads with multiple devices.

Chris Murphy



Re: Help with space

2014-05-03 Thread Chris Murphy

On May 3, 2014, at 1:09 PM, Chris Murphy li...@colorremedies.com wrote:

 
 On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn ahferro...@gmail.com wrote:
 
 On 05/02/2014 03:21 PM, Chris Murphy wrote:
 
 Btrfs raid1 with 3+ devices is unique as far as I can tell. It is
 something like raid1 (2 copies) + linear/concat. But that
 allocation is round robin. I don't read code but based on how a 3
 disk raid1 volume grows VDI files as it's filled it looks like 1GB
 chunks are copied like this
 Actually, MD RAID10 can be configured to work almost the same with an
 odd number of disks, except it uses (much) smaller chunks, and it does
 more intelligent striping of reads.
 
 The efficiency of storage depends on the file system placed on top. Btrfs 
 will allocate space exclusively for metadata, and it's possible much of that 
 space either won't or can't be used. So ext4 or XFS on md probably is more 
 efficient in that regard; but then Btrfs also has compression options so this 
 clouds the efficiency analysis.
 
 For striping of reads, there is a note in man 4 md about the raid10 layouts: 
 "The 'far' arrangement can give sequential read performance equal to that of 
 a RAID0 array, but at the cost of reduced write performance." The default 
 layout for raid10 is near 2. I think the read performance is a wash with 
 defaults, while md reads are better and writes are worse with the far layout.
 
 I'm not sure how Btrfs performs reads with multiple devices.


Also, for unequal sized devices, for example 12G,6G,6G, Btrfs raid1 is OK with
this and uses the space efficiently, whereas md raid10 does not. First, it
complains when creating, asking if I want to continue anyway (see the mdadm
output below). Second, it ends up with *less* usable space than if it had
3x 6GB drives.

12G,6G,6G md raid10
# mdadm -C /dev/md0 -n 3 -l raid10 --assume-clean /dev/sd[bcd]
mdadm: largest drive (/dev/sdb) exceeds size (6283264K) by more than 1%.
# mdadm -D /dev/md0 (partial)
 Array Size : 9424896 (8.99 GiB 9.65 GB)
  Used Dev Size : 6283264 (5.99 GiB 6.43 GB)

# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/md09.0G   33M  9.0G   1% /mnt

12G,6G,6G btrfs raid1

# mkfs.btrfs -d raid1 -m raid1 /dev/sd[bcd]
# df -h
Filesystem  Size  Used Avail Use% Mounted on
/dev/sdb 24G  1.3M   12G   1% /mnt


For performance workloads, this is probably a pathological configuration since 
it depends on disproportionate reading almost no matter what. But for those who 
happen to have uneven devices available, and favor space usage efficiency over 
performance, it's a nice capability.


Chris Murphy


metadata vs data errors

2014-05-03 Thread Russell Coker
# btrfs scrub status /mnt/backup/
scrub status for 97972ab2-02f7-42dd-a23b-d92efbf9d9b5
scrub started at Thu May  1 14:29:57 2014 and finished after 97253 
seconds
total bytes scrubbed: 1.11TB with 13684 errors
error details: read=13684
corrected errors: 2113, uncorrectable errors: 11571, unverified 
errors: 0

Above is the output from a scrub of a damaged disk.  It was formatted with 
default options (dup for metadata and single for data) so obviously the 
corrected errors are all for metadata.

May  1 18:23:51 server kernel: [14216.461759] BTRFS: i/o error at logical 505396924416 on dev /dev/sde, sector 989216904: metadata leaf (level 0) in tree 823
May  1 18:23:51 server kernel: [14216.461762] BTRFS: i/o error at logical 505396924416 on dev /dev/sde, sector 989216904: metadata leaf (level 0) in tree 823
May  1 18:23:54 server kernel: [14219.704819] BTRFS: i/o error at logical 505398464512 on dev /dev/sde, sector 989219912: metadata leaf (level 0) in tree 823
May  1 18:23:54 server kernel: [14219.704825] BTRFS: i/o error at logical 505398464512 on dev /dev/sde, sector 989219912: metadata leaf (level 0) in tree 823
May  1 18:24:30 server kernel: [14255.174994] BTRFS: i/o error at logical 505614372864 on dev /dev/sde, sector 991738760: metadata leaf (level 0) in tree 2
May  1 18:24:30 server kernel: [14255.174998] BTRFS: i/o error at logical 505614372864 on dev /dev/sde, sector 991738760: metadata leaf (level 0) in tree 2

To discover whether there were any metadata errors I grepped for metadata in 
the kernel message log and found lots of lines like the above.  Will all 
errors that involve metadata match a grep for metadata in the kernel message 
log?
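
A minimal sketch of the kind of grep Russell describes (the log file path
varies by distribution and is an assumption here):

# scan the kernel ring buffer / kernel log for btrfs errors mentioning metadata
dmesg | grep -i 'btrfs.*metadata'
grep -i 'btrfs.*metadata' /var/log/kern.log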

I think it would be good to have a scrub count of the number of uncorrectable 
metadata vs data errors.  When there are uncorrectable data errors you know 
the name of the file (it's in the kernel message log) and can recover just 
that file.  When there are uncorrectable metadata errors you don't.

Also, would it be possible to log the names of directories that are affected by
uncorrectable metadata errors?  When BTRFS scales up to systems where a
find / takes days to complete and which run 24*7, there won't be an option to
just restore from backup.  In this case the root of every subvol appears
undamaged, so BTRFS should be able to tell me at least part of the path related
to the metadata corruption.

# btrfs subvol list /mnt/backup/
ID 823 gen 3212 top level 5 path backup
ID 826 gen 1832 top level 5 path backup-2013-05-21

Above is the start of the output of a subvol list; there is no ID 2.  What
does the tree 2 in the above kernel error log mean?

-- 
My Main Blog          http://etbe.coker.com.au/
My Documents Blog     http://doc.coker.com.au/


copies= option

2014-05-03 Thread Russell Coker
Are there any plans for a feature like the ZFS copies= option?

I'd like to be able to set copies= separately for data and metadata.  In most 
cases RAID-1 provides adequate data protection but I'd like to have RAID-1 and 
copies=2 for metadata so that if one disk dies and another has some bad 
sectors during recovery I'm unlikely to lose metadata.
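
For context, the ZFS feature being referenced looks like this (the pool and
dataset names are placeholders):

# store two copies of every block in this dataset, on top of whatever
# redundancy the pool itself already provides
zfs set copies=2 tank/important
zfs get copies tank/important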

-- 
My Main Blog          http://etbe.coker.com.au/
My Documents Blog     http://doc.coker.com.au/


Re: copies= option

2014-05-03 Thread Duncan
Russell Coker posted on Sun, 04 May 2014 12:16:54 +1000 as excerpted:

 Are there any plans for a feature like the ZFS copies= option?
 
 I'd like to be able to set copies= separately for data and metadata.  In
 most cases RAID-1 provides adequate data protection but I'd like to have
 RAID-1 and copies=2 for metadata so that if one disk dies and another
 has some bad sectors during recovery I'm unlikely to lose metadata.

Hugo's the guy with the better info on this one, but until he answers...

The zfs license issues mean it's not an option for me and I'm thus not 
familiar with its options in any detail, but if I understand the question 
correctly, yes.

And of course since btrfs treats data and metadata separately, it's 
extremely unlikely that any sort of copies= option wouldn't be separately 
configurable for each.

There was a discussion of a very nice multi-way-configuration schema that 
I deliberately stayed out of as both a bit above my head and far enough 
in the future that I didn't want to get my hopes up too high about it 
yet.  I already want N-way-mirroring so bad I can taste it, and this was 
that and way more... if/when it ever actually gets coded and committed to 
the mainline kernel btrfs.  As I said, Hugo should have more on it, as he 
was active in that discussion as it seemed to line up perfectly with his 
area of interest.

-- 
Duncan - List replies preferred.   No HTML msgs.
Every nonfree program has a lord, a master --
and if you use the program, he is your master.  Richard Stallman



Copying related snapshots to another server with btrfs send/receive?

2014-05-03 Thread Marc MERLIN
Another question I just came up with.

If I have historical snapshots like so:
backup
backup.sav1
backup.sav2
backup.sav3

If I want to copy them up to another server, can btrfs send/receive
let me copy all of them to another btrfs pool while keeping the
shared block relationship between all of them?
Note that the backup.sav dirs will never change, so I won't need
incremental backups of those, just a one-time send.
I believe this is supposed to work, correct?

The only part I'm not clear about is whether I'm supposed to copy them all at
once in the same send command, or one by one.

And if they had to be copied together, and I later create a new snapshot of
backup, say backup.sav4, and btrfs send it to that same destination, is
btrfs send/receive still able to keep the shared block relationship?
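
For reference, a hedged sketch of the snapshot-by-snapshot approach using -p
parents, which is what keeps blocks shared on the receiving side (the host and
pool paths are placeholders, and the snapshots must be read-only to be sent):

# full send of the oldest snapshot
btrfs send /pool/backup.sav1 | ssh otherserver btrfs receive /pool2
# each later snapshot is sent relative to the previous one
btrfs send -p /pool/backup.sav1 /pool/backup.sav2 | \
    ssh otherserver btrfs receive /pool2
btrfs send -p /pool/backup.sav2 /pool/backup.sav3 | \
    ssh otherserver btrfs receive /pool2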

Thanks,
Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/ | PGP 1024R/763BE901


How does Suse do live filesystem revert with btrfs?

2014-05-03 Thread Marc MERLIN
(more questions I'm asking myself while writing my talk slides)

I know Suse uses btrfs to roll back filesystem changes.

So I understand how you can take a snapshot before making a change, but
not how you revert to that snapshot without rebooting or using rsync.

How do you do a pivot_root-like mountpoint swap to an older snapshot,
especially if you have file handles open on the current snapshot?

Is that what Suse manages, or are they doing something simpler?
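
For reference, a hedged sketch of the reboot-based rollback the question is
contrasting against (the snapshot path and <subvol-id> are placeholders):

# snapshot the running root before making changes
btrfs subvolume snapshot / /.snapshots/pre-update
# to roll back: look up the snapshot's id, make it the default, then reboot
btrfs subvolume list /
btrfs subvolume set-default <subvol-id> /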

Thanks,
Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/ | PGP 1024R/763BE901


Is metadata redundant over more than one drive with raid0 too?

2014-05-03 Thread Marc MERLIN
So, I was thinking. In the past, I've done this:
mkfs.btrfs -d raid0 -m raid1 -L btrfs_raid0 /dev/mapper/raid0d*

My rationale at the time was that if I lose a drive, I'll still have
full metadata for the entire filesystem and will only be missing some files.
If I have raid1 with 2 drives, I should end up with 4 copies of each
file's metadata, right?

But now I have two questions:
1) btrfs keeps two copies of all metadata even on a single drive, correct?
If so, and I have a -d raid0 -m raid0 filesystem, are both copies of the
metadata on the same drive, or is btrfs smart enough to spread out the
metadata copies so that they're not on the same drive?

2) Does btrfs lay out files on raid0 so that files aren't striped across
more than one drive, so that if I lose a drive I only lose whole files,
and not little chunks of all my files, making my entire FS toast?

Thanks,
Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/ | PGP 1024R/763BE901


Using mount -o bind vs mount -o subvol=vol

2014-05-03 Thread Marc MERLIN
Is there any functional difference between 

mount -o subvol=usr /dev/sda1 /usr
and
mount /dev/sda1 /mnt/btrfs_pool
mount -o bind /mnt/btrfs_pool/usr /usr

?
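
For reference, the equivalent /etc/fstab forms of the two approaches (paths
taken from the example above; the options column is illustrative):

# subvol= variant: one entry, no intermediate mount needed
/dev/sda1            /usr             btrfs  subvol=usr  0 0

# bind variant: the pool must be mounted first, then bound into place
/dev/sda1            /mnt/btrfs_pool  btrfs  defaults    0 0
/mnt/btrfs_pool/usr  /usr             none   bind        0 0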

Thanks,
Marc
-- 
A mouse is a device used to point at the xterm you want to type in - A.S.R.
Microsoft is to operating systems 
   what McDonalds is to gourmet cooking
Home page: http://marc.merlins.org/ | PGP 1024R/763BE901