On 2015-08-14 15:54, Chris Murphy wrote:
On Fri, Aug 14, 2015 at 1:50 PM, Austin S Hemmelgarn
<ahferro...@gmail.com> wrote:
On 2015-08-14 14:31, Chris Murphy wrote:

On Fri, Aug 14, 2015 at 9:16 AM, Eduardo Bach <hellb...@gmail.com> wrote:

With btrfs the result approaches 3.5GB/s. When using mdadm+xfs the
result reaches 6GB/s, which is the expected value when compared with
parallel dd runs made on the disks.
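For reference, a parallel-dd baseline of the kind mentioned above is typically gathered along these lines (the device names and disk count are placeholders, not from the thread; `iflag=direct` bypasses the page cache so each disk is measured raw):

```shell
# Hypothetical aggregate-bandwidth baseline: one dd per disk, run in
# parallel. Summing the per-disk rates dd reports gives the ceiling the
# array could reach in the ideal case. /dev/sd{b..e} are placeholders.
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    dd if="$dev" of=/dev/null bs=1M count=4096 iflag=direct &
done
wait
```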


mdadm with what chunk (strip) size? The default for mdadm is 512KiB.
On Btrfs it's fixed at 64KiB. While testing XFS on md RAID with a
64KiB chunk might improve its performance relative to Btrfs, at least
it's a more apples-to-apples comparison.
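A minimal sketch of the matched-chunk setup being suggested here (the RAID level, member count, and device names are assumptions for illustration, not details from the thread):

```shell
# Create the md array with a 64KiB chunk to match Btrfs's fixed stripe
# element size; mdadm's default would be 512KiB. --chunk is in KiB.
# Devices below are placeholders.
mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=64 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.xfs /dev/md0
```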

I have a feeling that XFS will still win this.  XFS is one of the slower
filesystems for Linux, but it still beats BTRFS senseless when it comes to
performance as of right now.

Yeah I was suggesting with a 64KiB chunk the XFS case might get even faster.


Ah, misunderstood what you meant. Yeah, that will almost certainly make things faster for XFS.

FWIW, running BTRFS on top of MDRAID actually works very well, especially BTRFS raid1 on top of MD-RAID0 (I get an almost 50% performance increase for this usage over BTRFS raid10, although most of this is probably due to how btrfs dispatches I/Os to disks in multi-disk setups).
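The layered arrangement described above might be built roughly like this (all device names, and the two-disks-per-stripe layout, are assumptions for illustration):

```shell
# Two md RAID0 stripes, each across two placeholder disks.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdd /dev/sde
# Btrfs raid1 for both data (-d) and metadata (-m), mirrored across the
# two stripes; md handles striping, btrfs handles redundancy.
mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1
```

The appeal of this split is that btrfs still sees two independent devices and can self-heal from checksummed mirrors, while md's scheduler handles spreading I/O across the physical disks.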
