On 05/24/2014 12:44 PM, john terragon wrote:
> Hi.
> 
> I'm playing around with (software) raid0 on SSDs. Since I remember
> reading somewhere that Intel recommends a 128K stripe size for HDD
> arrays but only 16K for SSD arrays, I wanted to see how a smaller
> stripe size would work on my system. Obviously with btrfs on top of
> md-raid I could use whatever stripe size I want. But if I'm not
> mistaken, the stripe size with the native raid0 in btrfs is fixed to
> 64K by BTRFS_STRIPE_LEN (volumes.h).
> So I was wondering if it would be reasonably safe to just change that
> to 16K (and duck and wait for the explosion ;) ).
> 
> Can anyone familiar with the inner workings of the btrfs raid0 code
> confirm whether that would be the right way to proceed? (Obviously
> with absolutely no blame to be placed on anyone other than myself if
> things should go badly :) )
I personally can't render an opinion on whether changing it would break
things or not, but I do know that it would need to be changed in both
the kernel and the userspace tools, and the resulting kernel and tools
would not be entirely compatible with filesystems produced by the stock
tools and kernel, possibly to the point of corrupting any filesystem
they touch.
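
For reference, the change in question would look roughly like the
sketch below (assuming the definition is still the plain "(64 * 1024)"
form; check your tree), and the same edit would have to be made to the
matching BTRFS_STRIPE_LEN definition in btrfs-progs so that mkfs.btrfs
and the kernel agree on the chunk layout:

--- a/fs/btrfs/volumes.h
+++ b/fs/btrfs/volumes.h
@@
-#define BTRFS_STRIPE_LEN	(64 * 1024)
+#define BTRFS_STRIPE_LEN	(16 * 1024)

This is only a sketch; if anything else assumes the 64K value rather
than using the macro, a filesystem made with the modified code could be
misread by a stock kernel, which is the incompatibility mentioned
above.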

As for the 64K default stripe size, that sounds correct, and is
probably because that's the largest block that the Linux I/O schedulers
will dispatch as a single write to the underlying device.
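
For what it's worth, the per-device cap on a single request is visible
in sysfs (queue/max_sectors_kb), so you can check what the block layer
will actually dispatch on your drives. A minimal sketch that reads it
("sda" is just a placeholder device name):

/* Minimal sketch: print the largest single request (in KiB) the block
 * layer will dispatch to a device.  "sda" is only an example name. */
#include <stdio.h>

int main(void)
{
	const char *path = "/sys/block/sda/queue/max_sectors_kb";
	FILE *f = fopen(path, "r");
	unsigned int kb;

	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%u", &kb) == 1)
		printf("max single request: %u KiB\n", kb);
	fclose(f);
	return 0;
}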
