On 12/15/2017 03:31 PM, Peter Zijlstra wrote:
> On Fri, Dec 15, 2017 at 10:12:50AM +0800, jianchao.wang wrote:
>>> That only makes it a little better:
>>>
>>> Task-A Worker
>>>
>>> write_seqcount_begin()
>>> blk_mq_rq_update_state(rq, IN_FLIGHT)
>>>
Hello, Peter.
On Thu, Dec 14, 2017 at 09:20:42PM +0100, Peter Zijlstra wrote:
> But now that I look at this again, TJ, why can't the below happen?
>
> write_seqcount_begin();
> blk_mq_rq_update_state(rq, IN_FLIGHT);
> blk_add_timer(rq);
>
read_seqcount_begin()
Hello, Bart.
On Thu, Dec 14, 2017 at 09:13:32PM +, Bart Van Assche wrote:
...
> however is called before a every use of a request. Sorry but I'm not
> enthusiast about the code in blk_rq_init() that reinitializes state
> information that should survive request reuse.
If it wasn't clear, me ne
On Fri, Dec 15, 2017 at 10:12:50AM +0800, jianchao.wang wrote:
> > That only makes it a little better:
> >
> > Task-A Worker
> >
> > write_seqcount_begin()
> > blk_mq_rq_update_state(rq, IN_FLIGHT)
> > blk_add_timer(rq)
> >
> > sch
On Thu, 2017-12-14 at 22:54 +0100, Peter Zijlstra wrote:
> On Thu, Dec 14, 2017 at 09:42:48PM +, Bart Van Assche wrote:
>
> > Some time ago the block layer was changed to handle timeouts in thread
> > context
> > instead of interrupt context. See also commit 287922eb0b18 ("block: defer
> > timeouts to a workqueue").
On 12/15/2017 05:54 AM, Peter Zijlstra wrote:
> On Thu, Dec 14, 2017 at 09:42:48PM +, Bart Van Assche wrote:
>> On Thu, 2017-12-14 at 21:20 +0100, Peter Zijlstra wrote:
>>> On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
On Thu, Dec 14, 2017 at 09:42:48PM +, Bart Van Assche wrote:
> On Thu, 2017-12-14 at 21:20 +0100, Peter Zijlstra wrote:
> > On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
> > > On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> > > > + write_seqcount_begin(&rq->gstate_seq);
On Thu, 2017-12-14 at 21:20 +0100, Peter Zijlstra wrote:
> On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
> > On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> > > + write_seqcount_begin(&rq->gstate_seq);
> > > + blk_mq_rq_update_state(rq, MQ_RQ_IN_FLIGHT);
> > > + blk_add_timer(rq);
On Thu, 2017-12-14 at 11:19 -0800, t...@kernel.org wrote:
> On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
> > On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> > > --- a/block/blk-core.c
> > > +++ b/block/blk-core.c
> > > @@ -126,6 +126,8 @@ void blk_rq_init(struct request_queue *q, struct request *rq)
On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
> On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> > + write_seqcount_begin(&rq->gstate_seq);
> > + blk_mq_rq_update_state(rq, MQ_RQ_IN_FLIGHT);
> > + blk_add_timer(rq);
> > + write_seqcount_end(&rq->gstate_seq);
>
> My
Hello,
On Thu, Dec 14, 2017 at 06:51:11PM +, Bart Van Assche wrote:
> On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> > rules. Unfortunatley, it contains quite a few holes.
> ^
> Unfortunately?
>
> > While this change makes REQ_ATOM_COMPLETE synchornization unnecessary
On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> rules. Unfortunatley, it contains quite a few holes.
^
Unfortunately?
> While this change makes REQ_ATOM_COMPLETE synchornization unnecessary
^^^
Hi, Jianchao.
On Wed, Dec 13, 2017 at 01:07:30PM +0800, jianchao.wang wrote:
> Test ok with NVMe
Awesome, thanks for testing!
--
tejun
Hi Tejun
On 12/13/2017 03:01 AM, Tejun Heo wrote:
> Currently, blk-mq timeout path synchronizes against the usual
> issue/completion path using a complex scheme involving atomic
> bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence
> rules. Unfortunatley, it contains quite a few holes.
Hello, Bart.
On Tue, Dec 12, 2017 at 09:37:11PM +, Bart Van Assche wrote:
> Have you considered the following instead of introducing MQ_RQ_IDLE and
> MQ_RQ_IN_FLIGHT? I think this could help to limit the number of new atomic
> operations introduced in the hot path by this patch series.
But no
On Tue, 2017-12-12 at 11:01 -0800, Tejun Heo wrote:
> +/*
> + * Bits for request->gstate. The lower two bits carry MQ_RQ_* state value
> + * and the upper bits the generation number.
> + */
> +enum mq_rq_state {
> + MQ_RQ_IDLE = 0,
> + MQ_RQ_IN_FLIGHT = 1,
> +
> +
Currently, blk-mq timeout path synchronizes against the usual
issue/completion path using a complex scheme involving atomic
bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence
rules. Unfortunatley, it contains quite a few holes.
There's a complex dancing around REQ_ATOM_STARTED and
Hello,
On Tue, Dec 12, 2017 at 12:56:57PM +0100, Peter Zijlstra wrote:
> On Sat, Dec 09, 2017 at 11:25:21AM -0800, Tejun Heo wrote:
> > +static inline void blk_mq_rq_update_state(struct request *rq,
> > + enum mq_rq_state state)
> > +{
> > + u64 new_val = (rq->gstate & ~MQ_RQ_STATE_MASK) | state;
Hello, Peter.
On Tue, Dec 12, 2017 at 11:10:52AM +0100, Peter Zijlstra wrote:
> > + seqcount_t gstate_seqc;
> > + u64 gstate;
>
> We typically name seqcount_t thingies _seq.
Will rename. Thanks.
--
tejun
Hello, Peter.
On Tue, Dec 12, 2017 at 11:09:35AM +0100, Peter Zijlstra wrote:
> > + /*
> > +* ->aborted_gstate is used by the timeout to claim a specific
> > +* recycle instance of this request. See blk_mq_timeout_work().
> > +*/
> > + struct u64_stats_sync aborted_gstate_sync;
>
On Sat, Dec 09, 2017 at 11:25:21AM -0800, Tejun Heo wrote:
> +static inline void blk_mq_rq_update_state(struct request *rq,
> + enum mq_rq_state state)
> +{
> + u64 new_val = (rq->gstate & ~MQ_RQ_STATE_MASK) | state;
> +
> + if (state == MQ_RQ_IN_FLIGHT
On Sat, Dec 09, 2017 at 11:25:21AM -0800, Tejun Heo wrote:
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index 8089ca1..e6cfe4b3 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -228,6 +230,27 @@ struct request {
>
> unsigned short write_hint;
>
I don't yet have a full picture, but let me make a few comments.
On Sat, Dec 09, 2017 at 11:25:21AM -0800, Tejun Heo wrote:
> +static u64 blk_mq_rq_aborted_gstate(struct request *rq)
> +{
> + unsigned int start;
> + u64 aborted_gstate;
> +
> + do {
> + start = u64_stats_fetch_begin(&rq->aborted_gstate_sync);
Currently, blk-mq timeout path synchronizes against the usual
issue/completion path using a complex scheme involving atomic
bitflags, REQ_ATOM_*, memory barriers and subtle memory coherence
rules. Unfortunatley, it contains quite a few holes.
There's a complex dancing around REQ_ATOM_STARTED and