On Wed, 2015-06-10 at 15:06 -0700, Ming Lin wrote:
> On Wed, Jun 10, 2015 at 2:46 PM, Mike Snitzer <snit...@redhat.com> wrote:
> > On Wed, Jun 10 2015 at  5:20pm -0400,
> > Ming Lin <m...@kernel.org> wrote:
> >
> >> On Mon, Jun 8, 2015 at 11:09 PM, Ming Lin <m...@kernel.org> wrote:
> >> > On Thu, 2015-06-04 at 17:06 -0400, Mike Snitzer wrote:
> >> >> We need to test on large HW raid setups like a Netapp filer (or even
> >> >> local SAS drives connected via some SAS controller).  Like a 8+2 drive
> >> >> RAID6 or 8+1 RAID5 setup.  Testing with MD raid on JBOD setups with 8
> >> >> devices is also useful.  It is larger RAID setups that will be more
> >> >> sensitive to IO sizes being properly aligned on RAID stripe and/or chunk
> >> >> size boundaries.
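
(As a concrete example of the alignment sensitivity: with 8 data + 2 parity
drives and a 64k chunk, a full data stripe is 8 * 64k = 512k. IO issued in
512k-aligned multiples can complete as full-stripe writes, while IO that
straddles a stripe boundary forces a read-modify-write of the parity blocks,
which is why larger arrays are more sensitive to IO sizes.)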
> >> >
> >> > Here are the test results of xfs/ext4/btrfs read/write on HW RAID6/MD
> >> > RAID6/DM stripe target.
> >> > Each case ran for 0.5 hours, so it took 36 hours to finish all the
> >> > tests on the 4.1-rc4 and 4.1-rc4-patched kernels.
> >> >
> >> > No performance regressions were introduced.
> >> >
> >> > Test server: Dell R730xd (2 sockets / 48 logical CPUs / 264G memory)
> >> > HW RAID6, MD RAID6 and the DM stripe target were each configured with
> >> > 10 HDDs, each 280G.
> >> > Stripe sizes of 64k and 128k were tested.
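
The stripe geometry each layer advertises to the block layer can also be
read back from sysfs as a sanity check (assuming md0 and the striped LV
export their io limits as usual, e.g. via /sys/block/dm-*/queue/ for the LV):

  cat /sys/block/md0/queue/minimum_io_size   # chunk size in bytes
  cat /sys/block/md0/queue/optimal_io_size   # full stripe width in bytes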
> >> >
> >> > devs="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh 
> >> > /dev/sdi /dev/sdj /dev/sdk"
> >> > spare_devs="/dev/sdl /dev/sdm"
> >> > stripe_size=64 (or 128)
> >> >
> >> > MD RAID6 was created by:
> >> > mdadm --create --verbose /dev/md0 --level=6 --raid-devices=10 $devs \
> >> >   --spare-devices=2 $spare_devs -c $stripe_size
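
For anyone reproducing this, the resulting array geometry can be verified
before the runs with the standard md interfaces, e.g.:

  cat /proc/mdstat          # array state, level and chunk size
  mdadm --detail /dev/md0   # raid-devices, spares, chunk size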
> >> >
> >> > DM stripe target was created by:
> >> > pvcreate $devs
> >> > vgcreate striped_vol_group $devs
> >> > lvcreate -i10 -I${stripe_size} -L2T -nstriped_logical_volume \
> >> >   striped_vol_group
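
Likewise, the stripe count and stripe size of the LV can be confirmed with
the standard LVM2 reporting commands, e.g.:

  lvs --segments striped_vol_group
  lvdisplay -m striped_vol_group/striped_logical_volume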
> >
> > DM had a regression relative to merge_bvec that wasn't fixed until
> > recently (the fix isn't in 4.1-rc4); see commit 1c220c69ce0 ("dm: fix
> > casting bug in dm_merge_bvec()").  The regression was introduced in 4.1.
> >
> > So your 4.1-rc4 DM stripe testing may have effectively been with
> > merge_bvec disabled.
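
For anyone checking their own tree, whether the fix is present can be
confirmed with something like:

  git describe --contains 1c220c69ce0

(assuming the abbreviated SHA is unambiguous in your clone).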
> 
> I'll rebase it onto the latest Linus tree and re-run the DM stripe testing.

Here are the results for 4.1-rc7. They also look good.

5. DM: stripe size 64k
                4.1-rc7         4.1-rc7-patched
                -------         ---------------
                (MB/s)          (MB/s)
xfs read:       784.0           783.5  -0.06%
xfs write:      751.8           768.8  +2.26%
ext4 read:      837.0           832.3  -0.56%
ext4 write:     806.8           814.3  +0.92%
btrfs read:     787.5           786.1  -0.17%
btrfs write:    722.8           718.7  -0.56%


6. DM: stripe size 128k
                4.1-rc7         4.1-rc7-patched
                -------         ---------------
                (MB/s)          (MB/s)
xfs read:       1045.5          1068.8  +2.22%
xfs write:      1058.9          1052.7  -0.58%
ext4 read:      1001.8          1020.7  +1.88%
ext4 write:     1049.9          1053.7  +0.36%
btrfs read:     1082.8          1084.8  +0.18%
btrfs write:    948.15          948.74  +0.06%
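
The benchmark invocation isn't spelled out above; for anyone who wants to
run something comparable, a hypothetical fio job for one 0.5-hour sequential
write case might look like the following (the directory, block size, job
count and data size are assumptions, not the parameters behind the numbers
above):

  fio --name=seqwrite --directory=/mnt/test --rw=write --bs=1M \
      --size=16G --numjobs=4 --ioengine=libaio --direct=1 \
      --time_based --runtime=1800 --group_reporting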

