Larkin Lowrey posted on Sun, 26 Oct 2014 12:20:45 -0500 as excerpted:

> One unusual property of my setup is I have my fs on top of bcache. More
> specifically, the stack is md raid6  -> bcache -> lvm -> btrfs. When the
> fs mounts it has mount option 'ssd' due to the fact that bcache sets
> /sys/block/bcache0/queue/rotational to 0.
> 
> Is there any reason why either the 'ssd' mount option or being backed by
> bcache could be responsible?

Bcache... Some kernel cycles ago btrfs on bcache had known issues, but I 
don't recall the details.  I /think/ that was fixed, but if you don't 
know what I'm referring to, I'd suggest looking back in the btrfs list 
archives (and, assuming there's a bcache list, there too) to see what it 
was, whether it was fixed, and (presumably on the bcache list) the 
current status.

... Actually, I just did a bcache keyword search in my archive and see 
you on a thread saying it was working fine for you, so never mind; it 
looks like you're aware of that thread and actually know more about the 
status than I do...

I don't believe the ssd mount option /should/ be triggering 
fragmentation; I use it here on a real ssd.  But as I said, I don't have 
that sort of large-internal-write-pattern file to worry about, and I 
have autodefrag set too, plus compress=lzo, so filefrag's reports aren't 
trustworthy here anyway.
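
If you want to confirm what's triggering the auto-detection, it keys off 
the block queue's rotational flag, which you can read directly.  A quick 
sketch (bcache0 per your description, file path is just a placeholder):

    # 0 = non-rotational, which is what makes btrfs auto-add 'ssd'
    cat /sys/block/bcache0/queue/rotational

    # note: with compress=lzo/zlib, filefrag counts each ~128 KiB
    # compression chunk as its own extent, inflating the numbers
    filefrag -v /path/to/suspect/file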

But what I DO know is that there's a nossd mount option available if the 
detection's going whacky and adding the ssd mount option 
inappropriately.  It has been there for a couple of kernel cycles now.  
See the btrfs(5) manpage for the mount options.

So you could try the nossd mount option and see if it makes a difference.
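
If you want to test it without a reboot, a remount should do it; I 
/believe/ ssd/nossd can be toggled at remount time, but check the 
manpage to be sure.  Something like this (mountpoint is obviously a 
placeholder):

    # switch the live mount to nossd and verify it took effect
    mount -o remount,nossd /mnt/point
    grep /mnt/point /proc/mounts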


Meanwhile, that's quite a stack you have there.  Before I switched to 
btrfs and btrfs raid, I was running mdraid here, and for a period ran 
lvm on top of mdraid.  But as an admin I decided that was simply too 
complex a setup for me to be confident in my own ability to properly 
handle disaster recovery.  And because I could feed the appropriate 
root-on-mdraid parameters directly to the kernel and didn't need an 
initr* for it, while I did for lvm, I kept mdraid.  Over time I actually 
had a few chances to practice disaster recovery on mdraid, and became 
quite comfortable with it.
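
For reference, what I mean by feeding root-on-mdraid parameters directly 
to the kernel is something along these lines (device names are examples 
only, and in-kernel autodetection via the 0xfd partition type is the 
other common route):

    # assemble md0 from its component partitions and use it as root,
    # no initr* required; see Documentation/md.txt in the kernel tree
    root=/dev/md0 md=0,/dev/sda2,/dev/sdb2 ro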

But not only do you have that, you have bcache thrown in too.  And in 
place of the traditional reiserfs I was using (and still use on my 
second backups and media partitions on spinning rust, as I've had very 
good results with reiserfs since data=ordered became the default, even 
thru various hardware issues... I'll avoid the stories), you're using 
btrfs, which has its own raid modes, altho I suppose you're not using 
them.
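
(As an aside, if you ever want to check which profiles a btrfs is 
actually using, it'll tell you directly; mountpoint is again a 
placeholder:

    # shows the raid profile per chunk type,
    # e.g. "Data, single" or "Data, RAID1"
    btrfs filesystem df /mnt/point

Expect single/DUP on a typical single-device filesystem, and raid levels 
only if you asked for them at mkfs time or converted later.)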

So that is indeed quite a stack.  If you're comfortable with your 
ability to properly handle disaster recovery at all those levels, wow, 
you definitely have my respect.  Or do you just have it all backed up, 
figuring that if it blows up and disaster recovery isn't trivial, you'll 
simply rebuild and restore from backup?  I guess with btrfs not yet 
fully stable and mature, that's the best idea at its level anyway.  And 
if you have it backed up for that, then you have it backed up for the 
other layers too, and /can/ simply rebuild your stack and restore from 
backup, should you need to.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
