Re: [Lsf-pc] [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion

2017-01-15 Thread James Bottomley
On Mon, 2017-01-16 at 11:33 +0800, Guoqing Jiang wrote:
> 
> On 01/10/2017 12:38 AM, Coly Li wrote:
> > Hi Folks,
> > 
> > I'd like to propose a general md raid discussion; it is quite
> > necessary for most of the active md raid developers to sit together
> > and discuss the current challenges of Linux software raid and its
> > development trends.
> > 
> > In the last few years we have had many development activities in md
> > raid, e.g. raid5 cache, raid1 clustering, partial parity log, fast
> > fail upstreaming, and some effort on raid1 & raid0 performance
> > improvement.
> > 
> > I see some functionality overlap between r5cache (raid5 cache) and
> > PPL (partial parity log), and currently I have no idea where we will
> > go with these two development activities.
> > Also, I have received reports from users that better raid1
> > performance is desired when it is built on NVMe SSDs as a cache
> > (maybe bcache or dm-cache). I am working on some raid1 performance
> > improvements (e.g. a new raid1 I/O barrier and lockless raid1 I/O
> > submit), and have some more ideas to discuss.
> > 
> > Therefore, if md raid developers have a chance to sit together and
> > discuss how to collaborate efficiently in the next year, it will be
> > much more productive than communicating on the mailing list.
> 
> I would like to attend the raid discussion. Besides the above topics,
> I think we can also talk about improving the mdadm test suite to make
> it more robust (I can share the related test suite which is used for
> clustered raid).

Just so you know ... and just in case others are watching.  You're not
going to be getting an invite to LSF/MM unless you send an attend or
topic request in as the CFP asks:

http://marc.info/?l=linux-fsdevel&m=148285919408577

The rationale is simple: it's too difficult to track all the "me too"
reply emails, and even if we could, it's not actually clear what the
intention of the sender is.  So taking the time to compose an official
email, as the CFP requests, allows the programme committee to
distinguish genuine requests.

James



Re: [Lsf-pc] [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion

2017-01-12 Thread Coly Li
On 2017/1/12 11:09 PM, Sagi Grimberg wrote:
> Hey Coly,
> 
>> Also, I have received reports from users that better raid1 performance
>> is desired when it is built on NVMe SSDs as a cache (maybe bcache or
>> dm-cache). I am working on some raid1 performance improvements (e.g. a
>> new raid1 I/O barrier and lockless raid1 I/O submit), and have some
>> more ideas to discuss.
> 
> Do you have some performance measurements to share?
> 
> Mike used null devices to simulate very fast devices which
> led to nice performance enhancements in dm-multipath code.

I have some performance data for raid1 and raid0; the work is still in
progress.

- md raid1
  Current md raid1 read performance is not ideal. On a raid1 built from
2 NVMe SSDs, I only observe 2.6GB/s throughput for multi-job, high
queue depth reads. Most of the time is spent on I/O barrier locking.
Now I am working on a lockless I/O submit patch (the original idea is
from Hannes Reinecke), which improves read throughput to 4.7~5GB/s.
When using md raid1 as a cache device, read performance improvement is
critical.
  On my hardware, the ideal read throughput of the 2 NVMe SSDs is
6GB/s; the current number is 4.7~5GB/s, so there is still some room to
improve. (A rough fio sketch is at the end of this mail.)
- md raid0
  People report on the linux-raid mailing list that DISCARD/TRIM
performance on raid0 is slow. In my reproduction, on a raid0 built from
4x3TB NVMe SSDs, formatting an XFS volume on top of it takes 306
seconds. Most of the time is spent inside the md raid0 code, issuing
DISCARD/TRIM requests in chunk-size ranges. I composed a POC patch to
re-combine a large DISCARD/TRIM command into per-device requests, which
reduces the formatting time to 15 seconds. Now I am simplifying the
patch based on suggestions from the upstream maintainers. (A
reproduction sketch follows below.)
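
For reference, the raid0 DISCARD case can be reproduced roughly like
this (the device names are placeholders for my 4 NVMe SSDs, and the
timings are the ones quoted above, from my own setup):

  # build a 4-disk raid0 and time an XFS format; mkfs.xfs issues
  # discards for the whole device by default (-K would skip them)
  mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
  time mkfs.xfs -f /dev/md0   # ~306s with current raid0 code, ~15s with the POC patch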

For raid1, currently most of the feedback is about read performance.
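
For those asking about measurements: a minimal sketch of the kind of
read test I run is below. The device names and fio parameters are
illustrative rather than my exact job file; the throughput numbers
above are from my own hardware.

  # build a 2-disk raid1; let the initial resync finish (or create the
  # array with --assume-clean) before measuring reads
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/nvme0n1 /dev/nvme1n1
  # multi-job, high queue depth random reads against the array
  fio --name=raid1-read --filename=/dev/md0 --rw=randread --bs=4k \
      --ioengine=libaio --direct=1 --iodepth=64 --numjobs=8 \
      --runtime=60 --time_based --group_reporting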

Coly


Re: [Lsf-pc] [LSF/MM TOPIC] [LSF/MM ATTEND] md raid general discussion

2017-01-12 Thread Sagi Grimberg

Hey Coly,


> Also, I have received reports from users that better raid1 performance
> is desired when it is built on NVMe SSDs as a cache (maybe bcache or
> dm-cache). I am working on some raid1 performance improvements (e.g. a
> new raid1 I/O barrier and lockless raid1 I/O submit), and have some
> more ideas to discuss.


Do you have some performance measurements to share?

Mike used null devices to simulate very fast devices which
led to nice performance enhancements in dm-multipath code.