On 2017/7/4 6:51 AM, bca...@lists.ewheeler.net wrote:
> From: Eric Wheeler
>
> Flag for bypass if the IO is for read-ahead or background, unless the
> read-ahead request is for metadata (e.g., from gfs2).
> Bypass if:
> bio->bi_opf &
On Sat, Jul 1, 2017 at 10:18 AM, Brian King wrote:
> On 06/30/2017 06:26 PM, Jens Axboe wrote:
>> On 06/30/2017 05:23 PM, Ming Lei wrote:
>>> Hi Brian,
>>>
>>> On Sat, Jul 1, 2017 at 2:33 AM, Brian King
>>> wrote:
On 06/30/2017 09:08 AM,
On 07/03/2017 06:37 AM, Ming Lei wrote:
> When mq-deadline is used, IOPS of sequential read and
> sequential write is observed to drop more than 20% on SATA (scsi-mq)
> devices, compared with using the 'none' scheduler.
>
> The reason is that the default nr_requests for the scheduler is
> too big for small
From: Eric Wheeler
Flag for bypass if the IO is for read-ahead or background, unless the
read-ahead request is for metadata (e.g., from gfs2).
Bypass if:
(bio->bi_opf & (REQ_RAHEAD|REQ_BACKGROUND)) && !(bio->bi_opf & REQ_META)
Writeback if:
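The preview cuts off before the writeback condition; for the bypass half, a
minimal sketch of how the test would sit in bcache's check_should_bypass()
(an illustration under that assumption, not the verbatim patch):

	/* Sketch: skip the cache for read-ahead and background I/O
	 * unless the bio is flagged as metadata (e.g. gfs2 issues
	 * metadata read-ahead that we still want to cache). */
	if ((bio->bi_opf & (REQ_RAHEAD | REQ_BACKGROUND)) &&
	    !(bio->bi_opf & REQ_META))
		goto skip;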
On 07/03/2017 02:00 AM, Paolo Valente wrote:
> Hi Jens,
> I'm writing this short cover letter to hopefully help you decide what
> to do with this patch, in this late phase of the development
> cycle. This patch fixes a bug causing kernel crashes for at least
> one year. Crashes apparently affect
On Mon, 3 Jul 2017, tang.jun...@zte.com.cn wrote:
> Hello Eric, Coly
>
> > So it usually takes 1.4s, and as much as 7s on our systems. The average
> > frequency is almost once an hour. Can GC just be triggered more
> > frequently? Say, once every 5 min? Is that configurable?
>
> GC is triggered by
On 07/03/2017 11:49 AM, Linus Torvalds wrote:
> On Sun, Jul 2, 2017 at 4:44 PM, Jens Axboe wrote:
>>
>> This is the main pull request for the block layer for 4.13. Not a huge
>> round in terms of features, but there's a lot of churn related to some
>> core cleanups. Note that
On Sun, Jul 2, 2017 at 4:44 PM, Jens Axboe wrote:
>
> This is the main pull request for the block layer for 4.13. Not a huge
> round in terms of features, but there's a lot of churn related to some
> core cleanups. Note that the merge request will throw 3 merge failures for
> you.
On 06/30/2017 01:15 PM, Christoph Hellwig wrote:
> This is based on the old idea and code from Milosz Tanski. With the
> aio nowait code it becomes mostly trivial now.
>
> Signed-off-by: Christoph Hellwig
> ---
>  fs/aio.c        | 6 --
>  fs/btrfs/file.c | 9
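For a sense of what the series enables at the syscall level, a hedged
userspace sketch using preadv2(2) with RWF_NOWAIT (assumes a kernel and libc
that expose the flag; error handling abridged):

	#define _GNU_SOURCE
	#include <errno.h>
	#include <stdio.h>
	#include <sys/uio.h>

	/* Hedged sketch: a read that returns -1/EAGAIN instead of
	 * blocking when the data is not immediately available. */
	static ssize_t read_nowait(int fd, void *buf, size_t len, off_t off)
	{
		struct iovec iov = { .iov_base = buf, .iov_len = len };
		ssize_t n = preadv2(fd, &iov, 1, off, RWF_NOWAIT);

		if (n < 0 && errno == EAGAIN)
			fprintf(stderr, "read would block; deferring\n");
		return n;
	}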
On 07/02/2017 04:45 AM, Max Gurtovoy wrote:
>
>
> On 6/30/2017 8:26 PM, Jens Axboe wrote:
>> Hi Max,
>
> Hi Jens,
>
>>
>> I remembered you reporting this. I think this is a regression introduced
>> with the scheduling, since ->rqs[] isn't static anymore. ->static_rqs[]
>> is, but that's not
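For reference, the two arrays being contrasted (field names as in struct
blk_mq_tags; the comments are my reading of the discussion, not kernel
documentation):

	struct blk_mq_tags {
		/* ... */
		struct request **rqs;		/* written at dispatch time;
						 * with a scheduler attached,
						 * entries may be stale or
						 * NULL for a given tag */
		struct request **static_rqs;	/* preallocated with the tag
						 * set and stable for its
						 * whole lifetime */
		/* ... */
	};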
On Mon, Jul 03, 2017 at 03:46:34PM +0300, Max Gurtovoy wrote:
>
>
> On 7/3/2017 3:03 PM, Ming Lei wrote:
> > On Mon, Jul 03, 2017 at 01:07:44PM +0300, Sagi Grimberg wrote:
> > > Hi Ming,
> > >
> > > > Yeah, the above change is correct; for canceling requests in this
> > > > way we should
So finally:
Tested-by: Johannes Thumshirn
--
Johannes Thumshirn                                          Storage
jthumsh...@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham
4.4-stable review patch. If anyone has any objections, please let me know.
--
From: Roman Pen
commit 39a169b62b415390398291080dafe63aec751e0a upstream.
get_disk() and get_gendisk() calls have a non-explicit side effect: they
increase the reference on
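The side effect in question, sketched from the get_disk() implementation of
that era (abridged; see block/genhd.c for the exact code, including the
error handling this sketch drops):

	/* Besides taking a kobject reference, get_disk() also pins the
	 * module that owns the disk's fops -- the non-obvious side
	 * effect the patch description refers to. */
	struct kobject *get_disk(struct gendisk *disk)
	{
		struct module *owner = disk->fops->owner;

		if (owner && !try_module_get(owner))
			return NULL;
		return kobject_get(&disk_to_dev(disk)->kobj);
	}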
On 7/3/2017 3:03 PM, Ming Lei wrote:
On Mon, Jul 03, 2017 at 01:07:44PM +0300, Sagi Grimberg wrote:
Hi Ming,
Yeah, the above change is correct; for canceling requests in this
way we should use blk_mq_quiesce_queue().
I still don't understand why blk_mq_flush_busy_ctxs should hit a
When mq-deadline is used, IOPS of sequential read and
sequential write is observed to drop more than 20% on SATA (scsi-mq)
devices, compared with using the 'none' scheduler.
The reason is that the default nr_requests for the scheduler is
too big for small queue-depth devices, and latency increases
significantly.
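The fix that followed sizes the scheduler's request pool from the hardware
queue depth; a sketch of the idea (the exact expression in the merged patch
may differ):

	/* Size the scheduler queue depth relative to the device instead
	 * of a large fixed default, so shallow SATA queues are not
	 * flooded with scheduler-allocated requests. */
	q->nr_requests = 2 * min_t(unsigned int, set->queue_depth,
				   BLKDEV_MAX_RQ);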
On Mon, Jul 03, 2017 at 01:07:44PM +0300, Sagi Grimberg wrote:
> Hi Ming,
>
> > Yeah, the above change is correct; for canceling requests in this
> > way we should use blk_mq_quiesce_queue().
>
> I still don't understand why blk_mq_flush_busy_ctxs should hit a NULL
> deref if we don't touch
Hi Ming,
Yeah, the above change is correct; for canceling requests in this
way we should use blk_mq_quiesce_queue().
I still don't understand why blk_mq_flush_busy_ctxs should hit a NULL
deref if we don't touch the tagset...
Also, I'm wondering in what case we shouldn't use
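A hedged sketch of the canceling pattern being agreed on here (the function
names exist in blk-mq and the nvme driver; whether and where to unquiesce
depends on the caller):

	/* Stop new dispatches before walking the tag set to cancel
	 * in-flight requests, so ->queue_rq() cannot run concurrently
	 * with the iteration; resume dispatch afterwards. */
	blk_mq_quiesce_queue(q);
	blk_mq_tagset_busy_iter(ctrl->tagset, nvme_cancel_request, ctrl);
	blk_mq_unquiesce_queue(q);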
On Sun, Jul 02, 2017 at 02:56:56PM +0300, Sagi Grimberg wrote:
>
>
> On 02/07/17 13:45, Max Gurtovoy wrote:
> >
> >
> > On 6/30/2017 8:26 PM, Jens Axboe wrote:
> > > Hi Max,
> >
> > Hi Jens,
> >
> > >
> > > I remembered you reporting this. I think this is a regression introduced
> > > with
On each deactivation or re-scheduling (after being served) of a
bfq_queue, BFQ invokes the function __bfq_entity_update_weight_prio(),
to perform pending updates of ioprio, weight and ioprio class for the
bfq_queue. BFQ also invokes this function on I/O-request dispatches,
to raise or lower
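As a rough illustration of where such updates hook in (a hypothetical caller
for illustration only; the real call sites are in block/bfq-wf2q.c):

	/* Hypothetical caller sketch: pending ioprio/weight/ioprio-class
	 * changes are applied lazily, when the queue's entity is
	 * deactivated or re-scheduled after being served. */
	static void entity_requeued(struct bfq_service_tree *st,
				    struct bfq_entity *entity)
	{
		/* May move the entity to a different service tree if its
		 * weight or ioprio class changed since the last update. */
		st = __bfq_entity_update_weight_prio(st, entity);
	}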
Hi Jens,
I'm writing this short cover letter to hopefully help you decide what
to do with this patch, in this late phase of the development
cycle. This patch fixes a bug causing kernel crashes for at least
one year. Crashes apparently affect only a minority of users, but are
systematic for them (a
The BIO issuing loop in __blkdev_issue_zeroout() was allocating BIOs
with a maximum number of pages equal to
min(nr_sects, (sector_t)BIO_MAX_PAGES)
This works since the BIO will always be limited to the absolute maximum
number of pages, but can be inefficient as too many pages may be
requested
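The natural fix is to convert the remaining sectors into the number of pages
actually needed before capping; a sketch with a hypothetical helper name
(512-byte sectors assumed):

	static unsigned int zeroout_bio_pages(sector_t nr_sects)
	{
		/* Round the remaining sectors up to whole pages, then
		 * cap at the per-BIO maximum. */
		sector_t pages = DIV_ROUND_UP_SECTOR_T(nr_sects,
						       PAGE_SIZE / 512);

		return min(pages, (sector_t)BIO_MAX_PAGES);
	}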