Re: [PATCH] Btrfs: make sure logged extents complete in the current transaction

2014-11-19 Thread Liu Bo
On Tue, Nov 18, 2014 at 05:19:41PM -0500, Josef Bacik wrote: > Liu Bo pointed out that my previous fix would lose the generation update in the scenario I described. It is actually much worse than that: we could lose the entire extent if we lose power right after the transaction commits. ...

[PATCH] btrfs-progs: use system attr instead of attr library

2014-11-19 Thread David Sterba
We use the attr version provided by the system in other places already, so we can now remove the dependency on the separate attr library. Signed-off-by: David Sterba --- props.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/props.c b/props.c index 9fd612f97026..c7c67529fd79 ...
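
For context, a minimal stand-alone sketch of what using the system attr means in practice: calling getxattr() from glibc's <sys/xattr.h> directly, with nothing linked from libattr. The ENOATTR fallback define is an assumption about the compatibility detail such a switch needs; this is not the props.c patch itself.

#include <stdio.h>
#include <errno.h>
#include <sys/xattr.h>   /* system-provided API, no -lattr needed */

#ifndef ENOATTR
#define ENOATTR ENODATA  /* libattr spelled this ENOATTR; glibc uses ENODATA */
#endif

int main(int argc, char **argv)
{
	char value[256];

	if (argc != 3) {
		fprintf(stderr, "usage: %s <file> <xattr-name>\n", argv[0]);
		return 1;
	}

	ssize_t len = getxattr(argv[1], argv[2], value, sizeof(value) - 1);
	if (len < 0) {
		if (errno == ENOATTR)
			fprintf(stderr, "no such attribute\n");
		else
			perror("getxattr");
		return 1;
	}

	value[len] = '\0';
	printf("%s\n", value);
	return 0;
}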

Re: [PATCH] Btrfs: do not move em to modified list when unpinning

2014-11-19 Thread Josef Bacik
On 11/18/2014 10:45 PM, Dave Chinner wrote: On Fri, Nov 14, 2014 at 04:16:30PM -0500, Josef Bacik wrote: We use the modified list to keep track of which extents have been modified so we know which ones are candidates for logging at fsync() time. Newly modified extents are added to the list at ...
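
A user-space analogy of the structure under discussion, with invented names (this toy extent_map is not the kernel's): extents modified since the last log sit on a list that fsync() walks, and the unpin path, which runs at transaction commit, must not put an already-committed extent back on that list.

#include <stdio.h>
#include <stdbool.h>

struct extent_map {
	unsigned long start, len;
	bool on_modified_list;    /* candidate for logging at fsync */
	struct extent_map *next;  /* singly linked "modified" list */
};

static struct extent_map *modified_list;

static void mark_modified(struct extent_map *em)
{
	/* write path: queue the extent for the next fsync log */
	if (em->on_modified_list)
		return;
	em->next = modified_list;
	modified_list = em;
	em->on_modified_list = true;
}

/* The bug being discussed, by analogy: the unpin path also moved the
 * extent onto the modified list, so an already-committed extent got
 * logged again at the next fsync. Correct behaviour: drop the pin
 * and leave the list alone. */
static void unpin_extent(struct extent_map *em)
{
	(void)em;
}

int main(void)
{
	struct extent_map em = { .start = 0, .len = 4096 };

	mark_modified(&em);  /* write path */
	unpin_extent(&em);   /* transaction commit path */
	printf("on modified list: %d\n", em.on_modified_list);
	return 0;
}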

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Phillip Susi
On 11/18/2014 9:40 PM, Chris Murphy wrote: > It’s well known on linux-raid@ that consumer drives have well over 30 second "deep recoveries" when they lack SCT command support. The WDC and Seagate “green” drives are over 2 minutes apparently. ...

Re: BTRFS messes up snapshot LV with origin

2014-11-19 Thread Phillip Susi
On 11/18/2014 9:54 PM, Chris Murphy wrote: > Why is it silly? Btrfs on a thin volume has a practical use case aside from just being thinly provisioned; its snapshots are block device based, not merely that of an fs tree. Umm... because one of the ...

[PATCH] fstests: mark replace tests in btrfs/group

2014-11-19 Thread Eric Sandeen
A couple of tests exercise replace but were not marked as such in the group file. Signed-off-by: Eric Sandeen --- diff --git a/tests/btrfs/group b/tests/btrfs/group index 9adf862..1f23979 100644 --- a/tests/btrfs/group +++ b/tests/btrfs/group @@ -13,7 +13,7 @@ 008 auto quick 009 auto quick 010 ...

[PATCH] Fix lockups from btrfs_clear_path_blocking

2014-11-19 Thread Chris Mason
The fair reader/writer locks mean that btrfs_clear_path_blocking needs to strictly follow lock ordering rules even when we already have blocking locks on a given path. Before we can clear a blocking lock on the path, we need to make sure all of the locks have been converted to blocking. This will ...
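
A rough user-space analogy of strict lock ordering on a path of tree nodes, using POSIX rwlocks instead of the kernel's blocking/spinning locks (all names invented; this illustrates the ordering rule generally, not the patch itself):

#include <pthread.h>
#include <stdio.h>

#define PATH_DEPTH 4

/* one lock per level of the walked path, root to leaf */
static pthread_rwlock_t path_locks[PATH_DEPTH];

static void lock_path_for_read(void)
{
	/* strict top-down order; with fair rwlocks, re-taking a lock
	 * out of order can queue us behind a writer and deadlock */
	for (int level = 0; level < PATH_DEPTH; level++)
		pthread_rwlock_rdlock(&path_locks[level]);
}

static void unlock_path(void)
{
	/* release bottom-up; never reacquire mid-walk */
	for (int level = PATH_DEPTH - 1; level >= 0; level--)
		pthread_rwlock_unlock(&path_locks[level]);
}

int main(void)
{
	for (int i = 0; i < PATH_DEPTH; i++)
		pthread_rwlock_init(&path_locks[i], NULL);

	lock_path_for_read();
	printf("path locked in order, no lock retaken out of order\n");
	unlock_path();

	for (int i = 0; i < PATH_DEPTH; i++)
		pthread_rwlock_destroy(&path_locks[i]);
	return 0;
}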

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Phillip Susi
On 11/18/2014 9:46 PM, Duncan wrote: > I'm not sure about normal operation, but certainly, many drives take longer than 30 seconds to stabilize after power-on, and I routinely see resets during this time. As far as I have seen, typical drive spin ...

btrfs send and an existing backup

2014-11-19 Thread Jakob Schürz
Hi there! I'm new to btrfs, and I like it :) But I have a question. I have an existing backup on an external HDD. It was ext4 before I converted it to btrfs. And I installed my Debian fresh on btrfs with some subvolumes (e.g. home, var, multimedia/Video, multimedia/Audio...). On my backup ...

[PATCH] Btrfs: make sure logged extents complete in the current transaction V2

2014-11-19 Thread Josef Bacik
Liu Bo pointed out that my previous fix would lose the generation update in the scenario I described. It is actually much worse than that: we could lose the entire extent if we lose power right after the transaction commits. Consider the following: write extent 0-4k; log extent in log tree; commit ...
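
The flattened sequence above reads, as a user-space workload, roughly like this sketch (an assumed reconstruction; the commit and power-failure steps happen inside the kernel and appear only as comments):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	memset(buf, 'a', sizeof(buf));

	int fd = open("testfile", O_CREAT | O_RDWR, 0644);
	if (fd < 0) { perror("open"); return 1; }

	/* step 1: write extent 0-4k */
	if (pwrite(fd, buf, sizeof(buf), 0) != (ssize_t)sizeof(buf)) {
		perror("pwrite");
		return 1;
	}

	/* step 2: fsync logs the extent in the log tree */
	if (fsync(fd) < 0) { perror("fsync"); return 1; }

	/* step 3 (in the kernel): the transaction commits. Per the
	 * patch description, if the logged extent is not forced to
	 * complete in the current transaction and power is lost right
	 * here, the entire extent could be lost on log replay. */
	close(fd);
	return 0;
}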

Re: Btrfs on a failing drive

2014-11-19 Thread Phillip Susi
Again, please stop taking this conversation private; keep the mailing list on the Cc. On 11/19/2014 11:37 AM, Fennec Fox wrote: > Well, I've used SpinRite and it's found a few sectors, and they never move, so obviously the drive's firmware isn't dealing ...

Re: BTRFS messes up snapshot LV with origin

2014-11-19 Thread Chris Murphy
On Wed, Nov 19, 2014 at 8:20 AM, Phillip Susi wrote: > On 11/18/2014 9:54 PM, Chris Murphy wrote: >> Why is it silly? Btrfs on a thin volume has a practical use case aside from just being thinly provisioned; its snapshots are block device based ...

Re: BTRFS messes up snapshot LV with origin

2014-11-19 Thread Phillip Susi
On 11/19/2014 1:33 PM, Chris Murphy wrote: > Thin volumes are more efficient. And the user creating them doesn't have to mess around with locating physical devices or possibly partitioning them. Plus in enterprise environments with lots of storage ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Robert White
On 11/19/2014 08:07 AM, Phillip Susi wrote: On 11/18/2014 9:46 PM, Duncan wrote: I'm not sure about normal operation, but certainly, many drives take longer than 30 seconds to stabilize after power-on, and I routinely see resets during this time. As far as I have seen, typical drive spin up ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Phillip Susi
On 11/19/2014 4:05 PM, Robert White wrote: > It's cheaper, and less error prone, and less likely to generate customer returns if the generic controller chips just "send init, wait a fixed delay, then request a status" compared to trying to ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Robert White
Shame you already know everything? On 11/19/2014 01:47 PM, Phillip Susi wrote: On 11/19/2014 4:05 PM, Robert White wrote: One of the reasons that the whole industry has started favoring point-to-point (SATA, SAS) or physical intercessor chaining point-to-point (eSATA) buses is to remove ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Robert White
P.S. On 11/19/2014 01:47 PM, Phillip Susi wrote: Another common cause is having a dedicated hardware RAID controller (Dell likes to put LSI MegaRAID controllers in their boxes, for example); many motherboards have hardware RAID support available through the BIOS, etc. Leaving that feature active ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Duncan
Phillip Susi posted on Wed, 19 Nov 2014 11:07:43 -0500 as excerpted: > On 11/18/2014 9:46 PM, Duncan wrote: >> I'm not sure about normal operation, but certainly, many drives take longer than 30 seconds to stabilize after power-on, and I routinely ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Chris Murphy
On Wed, Nov 19, 2014 at 8:11 AM, Phillip Susi wrote: > On 11/18/2014 9:40 PM, Chris Murphy wrote: >> It’s well known on linux-raid@ that consumer drives have well over 30 second "deep recoveries" when they lack SCT command support. The WDC ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Duncan
Robert White posted on Wed, 19 Nov 2014 13:05:13 -0800 as excerpted: > One of the reasons that the whole industry has started favoring point-to-point (SATA, SAS) or physical intercessor chaining point-to-point (eSATA) buses is to remove a lot of those wait-and-see delays. > That said, ...

Re: scrub implies failing drive - smartctl blissfully unaware

2014-11-19 Thread Robert White
On 11/19/2014 04:25 PM, Duncan wrote: Most often, however, it's at resume, not original startup, which is understandable as state at resume doesn't match state at suspend/hibernate. The irritating thing, as previously discussed, is when one device takes long enough to come back that mdraid or ...

[PATCH] btrfs: remove empty fs_devices to prevent memory runout

2014-11-19 Thread Gui Hecheng
There is a global list, @fs_uuids, that keeps an @fs_devices object for each btrfs filesystem created. But when a btrfs becomes "empty" (all devices belonging to it are gone), its @fs_devices remains on the @fs_uuids list until module exit. If we keep running mkfs.btrfs on the same device again and again, all the empty @fs_devices ...
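
A user-space analogy of the proposed cleanup, with invented names and a plain singly linked list standing in for the kernel structures: entries whose device count has dropped to zero are unlinked and freed rather than left around until module exit.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fs_devices {
	char fsid[37];             /* uuid string, 36 chars + nul */
	int num_devices;
	struct fs_devices *next;
};

static struct fs_devices *fs_uuids;  /* global list, one entry per fs seen */

static void free_empty_fs_devices(void)
{
	struct fs_devices **p = &fs_uuids;

	while (*p) {
		struct fs_devices *cur = *p;
		if (cur->num_devices == 0) {
			*p = cur->next;  /* unlink the empty entry */
			free(cur);
		} else {
			p = &cur->next;
		}
	}
}

int main(void)
{
	/* simulate a second mkfs run on the same device: the first
	 * filesystem's entry ends up empty and must not leak */
	struct fs_devices *old = calloc(1, sizeof(*old));
	strcpy(old->fsid, "11111111-1111-1111-1111-111111111111");
	old->num_devices = 0;  /* its only device was reused */
	old->next = fs_uuids;
	fs_uuids = old;

	free_empty_fs_devices();
	printf("empty entries pruned: list is %s\n",
	       fs_uuids ? "non-empty" : "empty");
	return 0;
}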