On 01/24/2014 03:37 AM, Mike Christie wrote:
> On 01/13/2014 05:36 AM, Hannes Reinecke wrote:
>> On 01/10/2014 07:27 PM, Mike Snitzer wrote:
>>> I would like to attend in order to participate in discussions related
>>> to the topics listed in the subject.  As a maintainer of DM I'd be
>>> interested in learning about and discussing areas that should become a
>>> development focus in the months following LSF.
>>>
>> +1
>>
>> I've been thinking of (re-)implementing multipathing on top of
>> blk-mq, and would like to discuss the feasibility of that.
>> There are some design decisions in blk-mq (e.g. statically allocating
>> the number of queues) which do not play well with it.
>>
> 
> I have been thinking about going in a completely different direction.
> 
> The thing about dm-multipath is that the request-based path adds extra
> queue locking, and that of course is bad. In our testing it is a major
> perf issue. We do get things like I/O scheduling out of it, though.
> 
Indeed. And without that we cannot do true load-balancing.
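
Just to illustrate what the request-based path buys us: a path selector
that sees whole requests can weigh paths by outstanding work instead of
blindly rotating through them. A rough, self-contained sketch, with
made-up types rather than the actual dm-path-selector interface:

/* hypothetical structures, for illustration only */
struct mp_path {
	unsigned int	in_flight;	/* requests outstanding on this path */
	bool		usable;		/* path has not failed */
};

/* pick the path with the least outstanding work */
static struct mp_path *select_least_loaded(struct mp_path *paths,
					   unsigned int nr_paths)
{
	struct mp_path *best = NULL;
	unsigned int i;

	for (i = 0; i < nr_paths; i++) {
		if (!paths[i].usable)
			continue;
		if (!best || paths[i].in_flight < best->in_flight)
			best = &paths[i];
	}
	return best;
}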

> If we went back to bio-based multipathing, it turns out that once SCSI
> also supports multiqueue it all works pretty nicely. There is room for
> improvement in general, like making some dm allocations numa/cpu aware,
> but the request_queue locking issues we have go away and it is very
> simple code-wise.
> 
If and when.

The main issue I see with that is that it might take some time (if it
happens at all) for SCSI LLDDs to go fully multiqueue.
In fact, I strongly suspect that only newer LLDDs will ever support
multiqueue; for the older cards the HW interface is too closely tied
to single-queue operation.
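
Not that I dispute the simplicity argument: a bio-based map callback
would boil down to re-pointing the bio at the chosen path, something
along these lines, reusing the existing dm-mpath structures
(hand-waved; choose_path() is a made-up helper and all retry and
error handling is left out):

static int mp_bio_map(struct dm_target *ti, struct bio *bio)
{
	struct multipath *m = ti->private;
	struct pgpath *pgpath = choose_path(m);	/* made-up helper */

	if (!pgpath)
		return DM_MAPIO_REQUEUE;

	/* redirect the bio to the selected path and resubmit it */
	bio->bi_bdev = pgpath->path.dev->bdev;
	return DM_MAPIO_REMAPPED;
}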

> We could go the route of making request-based dm-multipath:
> 
> 1. aware of underlying multiqueue devices. So basically keep what we
> have, more or less, but have dm-multipath build a request that can be
> sent to a multiqueue device and then call blk_mq_insert_request. This
> would all be hidden behind nice interfaces that conceal whether the
> underlying device is multiqueue or not.
> 
> 2. make dm-multipath do multiqueue itself (i.e. implement map_queue,
> queue_rq, etc.) and also make it aware of underlying multiqueue devices.
> 
> #1 just keeps the existing request spin_lock problem, so there is not
> much point in it other than just getting things working.
> 
> #2 is a good deal of work, and what does it end up buying us over just
> making multipath bio-based? We lose iosched support. If we are going to
> build advanced multiqueue ioschedulers that rely on request structs,
> then #2 could be useful.
> 

Obviously we need iosched support when going multiqueue;
I wouldn't dream of dropping the ioschedulers.

So my overall idea here is to move multipath over to blk-mq,
mapping each path onto one hardware queue.
(As mentioned above, currently every FC HBA exposes only a single
HW queue anyway.)
The ioschedulers would be moved into the map_queue function.
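
Very roughly, and assuming the blk_mq_ops->map_queue hook keeps its
current shape, the per-path mapping could look like this (untested
sketch; mp_select_path() and mp_path_to_hctx() are made-up helpers):

static struct blk_mq_hw_ctx *mp_map_queue(struct request_queue *q,
					  const int ctx_index)
{
	struct multipath *m = q->queuedata;

	/*
	 * ctx_index (the submitting CPU's software context) is ignored
	 * in this sketch; instead the path selector decides which
	 * hardware context (= path) the request ends up on.
	 */
	return mp_path_to_hctx(m, mp_select_path(m));
}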

This approach has several issues which I would like to discuss:
- blk-mq ctx allocation currently is static. This doesn't play
  well with multipathing, where paths (= queues) might get configured
  on-the-fly (see the snippet below).
- Queues might be coming from different HBAs; one would need to
  audit the blk-mq code to see whether that is possible at all.
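
On the first point: as far as I can see the number of hardware queues
is fixed when the queue is registered, going by how null_blk sets up
blk-mq today (so the details may be off; the values below are just
placeholders and the ops are elided):

#include <linux/blk-mq.h>

static struct blk_mq_ops mpath_mq_ops;	/* queue_rq etc. elided */

static struct blk_mq_reg mpath_mq_reg = {
	.ops		= &mpath_mq_ops,
	.nr_hw_queues	= 4,	/* == number of paths, fixed at init time */
	.queue_depth	= 64,
	.flags		= BLK_MQ_F_SHOULD_MERGE,
};

static struct request_queue *mpath_alloc_queue(void *multipath_ctx)
{
	/* adding a path later would mean tearing this down and re-registering */
	return blk_mq_init_queue(&mpath_mq_reg, multipath_ctx);
}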

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                   zSeries & Storage
h...@suse.de                          +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)