On 05/23/2013 09:37 PM, Chris Mason wrote:
> Quoting Bernd Schubert (2013-05-23 15:33:24)
>> Btw, any chance to generally use chunksize/chunklen instead of stripe,
>> as the md layer does? IMHO it is less confusing to use
>> n-datadisks * chunksize = stripesize.
> 
> Definitely, it will become much more configurable.

Actually I meant in the code itself. I'm going to write a patch over the weekend.
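
Roughly what I have in mind, as a quick sketch only (not the actual
patch, and the names are just placeholders):

/*
 * Sketch only, hypothetical names: md-style terminology, where the
 * per-disk unit is a "chunk" and a full stripe spans all data disks.
 */
#include <stdio.h>

int main(void)
{
	unsigned long chunksize = 64 * 1024;	/* bytes per data disk */
	unsigned int n_datadisks = 8;		/* e.g. an 8+2 RAID6 */
	unsigned long stripesize = n_datadisks * chunksize;

	/* A buffered write only avoids RMW if it covers a full stripe. */
	printf("chunksize=%lu stripesize=%lu\n", chunksize, stripesize);
	return 0;
}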

> 
>>
>>>
>>> Using buffered writes makes it much more likely the VM will break up the
>>> IOs as they go down.  The btrfs writepages code does try to do full
>>> stripe IO, and it also caches stripes as the IO goes down.  But for
>>> buffered IO it is surprisingly hard to get a 100% hit rate on full
>>> stripe IO at larger stripe sizes.
>>
>> I have not found that part yet; somehow it looks as if writepages
>> submits single pages to another layer. I'm going to look into it
>> again over the weekend. I can reserve the hardware that long, but I
>> think we first need to fix striped writes in general.
> 
> The VM calls writepages and btrfs tries to suck down all the pages that
> belong to the same extent.  And we try to allocate the extents on
> boundaries.  There is definitely some bleeding into RMW when I do it
> here, but overall it does well.
> 
> But I was using 8 drives.  I'll try with 12.

Hmm, I already tried with 10 drives (8+2); it doesn't make a difference for
RMW.
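
Just to make sure we mean the same thing, this is roughly the condition
I'm looking at (a simplified sketch, not the actual raid56 code):

/*
 * Simplified sketch, not the actual btrfs raid56 code: a write only
 * bypasses RMW if the range submitted in one go is stripe aligned and
 * covers at least one full stripe.
 */
#include <stdbool.h>

static bool is_full_stripe_write(unsigned long long start,
				 unsigned long long len,
				 unsigned long stripesize)
{
	if (start % stripesize)		/* not aligned to a stripe boundary */
		return false;
	if (len < stripesize)		/* too small to cover a full stripe */
		return false;
	return true;
}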

> 
>> Direct-io works as expected, without any RMW cycles, and it provides
>> more than 40% better performance than the Megasas controller or
>> buffered MD writes (I didn't compare with direct-io MD, as that is
>> very slow).
> 
> You can improve MD performance quite a lot by increasing the size of the
> stripe cache.

I'm already doing that; without a larger stripe cache the performance is
much lower.
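
For reference, I'm bumping it via sysfs before the runs, roughly like
this (sketch only, assuming the array is /dev/md0; the value is the
number of stripe cache entries, one page per member device each):

/* Sketch: raise the md stripe cache before the benchmark runs. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/block/md0/md/stripe_cache_size", "w");

	if (!f) {
		perror("stripe_cache_size");
		return 1;
	}
	fprintf(f, "4096\n");
	return fclose(f) ? 1 : 0;
}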
