On 2017/7/13 12:12 PM, Eric Wheeler wrote:
> On Tue, 11 Jul 2017, tang.jun...@zte.com.cn wrote:
>
>>> Based on the above implementation, non-dirty space from a flash-only
>>> bcache device will mislead the writeback rate calculation too, so I
>>> suggest subtracting the total bucket size of all flash-only
On Tue, 11 Jul 2017, Coly Li wrote:
> On 2017/7/11 1:39 PM, tang.jun...@zte.com.cn wrote:
> > Compared to bucket depletion, which results in a hang,
> > it is worth spending a little time to update bucket_in_use.
> > If you have a better solution, please show it to us;
> > we should solve
On Tue, 11 Jul 2017, tang.jun...@zte.com.cn wrote:
> > Based on the above implementation, non-dirty space from a flash-only
> > bcache device will mislead the writeback rate calculation too, so I
> > suggest subtracting the total bucket size of all flash-only bcache
> > devices. Then it might be something
On Sun, 2 Jul 2017, Coly Li wrote:
> On 2017/7/1 4:42 AM, bca...@lists.ewheeler.net wrote:
> > From: Tang Junhui
> >
> > Thin flash devices do not initialize stripe_sectors_dirty correctly;
> > this patch fixes the issue.
>
> Hi Junhui,
>
> Could you please explain
On Sun, 2 Jul 2017, Coly Li wrote:
> On 2017/7/1 4:42 AM, bca...@lists.ewheeler.net wrote:
> > From: Tang Junhui
> >
> > Some missed I/Os are not counted in cache_misses; this patch fixes
> > this issue.
>
> Could you please explain more about:
> - which kind of missed
Tang,
Please resend. This patch seems to be malformed.
--
Eric Wheeler
On Thu, 6 Jul 2017, tang.jun...@zte.com.cn wrote:
> From: Tang Junhui
>
> bcache called ida_simple_remove() with a minor that had been multiplied
> by BCACHE_MINORS, which would cause the wrong minor to be released
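A minimal sketch of the mismatch being described, assuming the minor id comes from an ida and is scaled by BCACHE_MINORS to form first_minor (the ida name here is illustrative, not the exact patch):

  /* allocation: the ida id is scaled up to become the disk's first_minor */
  minor = ida_simple_get(&bcache_minor_ida, 0, MINORMASK, GFP_KERNEL);
  d->disk->first_minor = minor * BCACHE_MINORS;

  /* release must undo the scaling; passing first_minor directly would
   * free the wrong ida id */
  ida_simple_remove(&bcache_minor_ida, d->disk->first_minor / BCACHE_MINORS);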
On Thu, Jul 13 2017, Ming Lei wrote:
> On Thu, Jul 13, 2017 at 10:01:33AM +1000, NeilBrown wrote:
>> On Wed, Jul 12 2017, Ming Lei wrote:
>>
>> > We will support multipage bvec soon, so initialize the bvec
>> > table using the standard way instead of writing the
>> > table directly. Otherwise it
On Thu, Jul 13, 2017 at 09:58:41AM +1000, NeilBrown wrote:
> On Wed, Jul 12 2017, Ming Lei wrote:
>
> > bio_add_page() won't fail for a resync bio, and the page index for each
> > bio is the same, so remove it.
> >
> > More importantly, the 'idx' of 'struct resync_pages' is initialized in
> > the mempool
On Wed, Jul 12, 2017 at 02:16:08AM -0700, Christoph Hellwig wrote:
> On Wed, Jul 12, 2017 at 04:29:12PM +0800, Ming Lei wrote:
> > We will support multipage bvec soon, so initialize the bvec
> > table using the standard way instead of writing the
> > table directly. Otherwise it won't work any more
On Thu, Jul 13, 2017 at 10:01:33AM +1000, NeilBrown wrote:
> On Wed, Jul 12 2017, Ming Lei wrote:
>
> > We will support multipage bvec soon, so initialize the bvec
> > table using the standard way instead of writing the
> > table directly. Otherwise it won't work any more once
> > multipage bvec is
On Wed, Jul 12, 2017 at 09:30:50AM -0700, Shaohua Li wrote:
> On Wed, Jul 12, 2017 at 09:40:10AM +0800, Ming Lei wrote:
> > On Tue, Jul 11, 2017 at 7:14 AM, NeilBrown wrote:
> > > On Mon, Jul 10 2017, Shaohua Li wrote:
> > >
> > >> On Mon, Jul 10, 2017 at 03:25:41PM +0800, Ming
On Wed, 12 Jul 2017, Coly Li wrote:
> On 2017/7/12 10:01 AM, tang.jun...@zte.com.cn wrote:
> >> I meant "it is very necessary for database applications, which always
> >> use *writeback* mode and never switch to another mode during their
> >> entire online time." ^_^
> >
> > I know, it is necessary, but
On Wed, Jul 12 2017, Ming Lei wrote:
> We will support multipage bvec soon, so initialize the bvec
> table using the standard way instead of writing the
> table directly. Otherwise it won't work any more once
> multipage bvec is enabled.
>
> Acked-by: Guoqing Jiang
>
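A minimal sketch of the conversion being described (bio_add_page() is the real API; the single-page setup around it is illustrative):

  /* writing the bvec table directly, which assumes one page per bvec
   * and breaks once multipage bvec is enabled: */
  bio->bi_io_vec[0].bv_page = page;
  bio->bi_io_vec[0].bv_len = len;
  bio->bi_io_vec[0].bv_offset = 0;
  bio->bi_vcnt = 1;

  /* the standard way, which keeps the bvec table layout opaque: */
  bio_add_page(bio, page, len, 0);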
On Wed, Jul 12 2017, Ming Lei wrote:
> bio_add_page() won't fail for a resync bio, and the page index for each
> bio is the same, so remove it.
>
> More importantly, the 'idx' of 'struct resync_pages' is initialized in
> the mempool allocator function; this is wrong since the mempool is only
> responsible
On 07/12/2017 12:57 PM, Alex Ivanov wrote:
> It now makes sense to use the elevator boot argument when blk-mq is in use,
> since there are now several schedulers for it (deadline, kyber, bfq, none).
No, that boot option was a mistake, let's not propagate that to mq
scheduling as well.
--
Jens
On 07/12/2017 12:49 PM, Shaohua Li wrote:
> From: Shaohua Li
>
> Hi,
>
> Currently blktrace isn't cgroup aware. blktrace prints out the task name
> of the current context, but the task of the current context isn't always
> in the cgroup where the BIO comes from. We can't use the task name to
It now makes sense to use the elevator boot argument when blk-mq is in use,
since there are now several schedulers for it (deadline, kyber, bfq, none).
Signed-off-by: Alex Ivanov
---
block/elevator.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
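For context, the boot argument in question selects a default I/O scheduler on the kernel command line, e.g.:

  elevator=kyber

With blk-mq, the usual alternative is per-device selection at runtime via /sys/block/<dev>/queue/scheduler.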
From: Shaohua Li
When working on adding exportfs operations in kernfs, I found it's hard
to initialize dentry->d_fsdata in the exportfs operations. It looks like
there is no way to do it without a race condition. Looking at the kernfs
code closely, there is no point in setting dentry->d_fsdata.
From: Shaohua Li
kernfs uses an ida to manage inode numbers. The problem is that we can't
get a kernfs_node from an inode number with an ida. Switch to an idr; the
next patch will add an API to get a kernfs_node from an inode number.
Acked-by: Tejun Heo
Acked-by: Greg Kroah-Hartman
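A minimal sketch of the lookup an idr enables and an ida does not (the idr name is illustrative, not the exact patch):

  #include <linux/idr.h>

  static DEFINE_IDR(kernfs_idr);

  /* allocate: the node pointer is stored under the returned id, which
   * doubles as the inode number */
  int ino = idr_alloc_cyclic(&kernfs_idr, kn, 1, 0, GFP_KERNEL);

  /* look up: map an inode number back to its kernfs_node */
  struct kernfs_node *found = idr_find(&kernfs_idr, ino);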
From: Shaohua Li
By default we output the cgroup id in blktrace. This adds an option to
display the cgroup path. Since getting the cgroup path is a relatively
heavy operation, we don't enable it by default.
With the option enabled, blktrace will output something like this:
dd-1353 [007] d..2
From: Shaohua Li
Set i_generation for kernfs inodes. This is required to implement
exportfs operations. The generation is 32-bit, so it's possible for the
generation to wrap around and for us to find stale files. To reduce the
possibility, we don't reuse inode numbers immediately. When the inode
From: Shaohua Li
Now we have the facilities to implement exportfs operations. The idea is
that a cgroup can export its fhandle info to userspace; userspace then
uses the fhandle to find the cgroup name. Another example: userspace can
get the fhandle for a cgroup and BPF uses the fhandle to
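A minimal userspace sketch of fetching an fhandle for a cgroup directory (the path /sys/fs/cgroup/mygrp is hypothetical):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          int mount_id;
          struct file_handle *fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);

          fh->handle_bytes = MAX_HANDLE_SZ;
          /* hypothetical cgroup directory */
          if (name_to_handle_at(AT_FDCWD, "/sys/fs/cgroup/mygrp",
                                fh, &mount_id, 0) < 0) {
                  perror("name_to_handle_at");
                  return 1;
          }
          printf("got a %u-byte handle\n", fh->handle_bytes);
          free(fh);
          return 0;
  }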
From: Shaohua Li
blkcg_bio_issue_check() already gets the blkcg for a BIO.
bio_associate_blkcg() uses a percpu refcounter, so it's a very cheap
operation. There is no reason not to attach the cgroup info to the bio in
blkcg_bio_issue_check(). This also makes blktrace output the correct cgroup
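A hedged sketch of the idea (bio_blkcg() and bio_associate_blkcg() are the real APIs; the body here is abbreviated, not the exact patch):

  static inline bool blkcg_bio_issue_check(struct request_queue *q,
                                           struct bio *bio)
  {
          struct blkcg *blkcg = bio_blkcg(bio);  /* already looked up here */

          /* percpu refcount, so attaching is cheap enough to always do */
          bio_associate_blkcg(bio, &blkcg->css);

          /* ... throttling and policy checks elided ... */
          return true;
  }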
From: Shaohua Li
An inode number and generation can identify a kernfs node. We are going
to export this identification via exportfs operations, so put the ino and
generation into a separate structure. That is convenient when later
patches use the identification.
Acked-by: Greg Kroah-Hartman
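A minimal sketch of such a combined identifier (the struct name and layout are assumptions, not the exact patch):

  /* identifies a kernfs node; the generation guards against ino reuse */
  struct kernfs_node_id {
          u32 ino;
          u32 generation;
  };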
From: Shaohua Li
Currently cfq/bfq/blk-throttle output cgroup info in traces in their own
ways. Now we have a standard blktrace API for this, so convert them to
use it.
Note, this changes the behavior a little: cgroup info isn't output
by default; we only do this with the 'blk_cgroup'
From: Shaohua Li
Hi,
Currently blktrace isn't cgroup aware. blktrace prints out the task name of the
current context, but the task of the current context isn't always in the cgroup
where the BIO comes from. We can't use the task name to find out the IO cgroup.
For example, writeback BIOs always come
From: Shaohua Li
This is to partially revert commit 9ae3b3f52c62 (block: provide
bio_uninit() free freeing integrity/task associations). With commits
b222dd2 (block: call bio_uninit in bio_endio) and 7c20f11 (bio-integrity:
stop abusing bi_end_io), integrity/cgroup info is freed in
From: Shaohua Li
bio_uninit only calls bio_disassociate_task now. It's meaningless to
have a wrapper.
Cc: Christoph Hellwig
Signed-off-by: Shaohua Li
---
block/bio.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)
diff --git
On Wed, Jul 12, 2017 at 10:57:37AM -0600, Jens Axboe wrote:
> On 07/12/2017 10:54 AM, weiping zhang wrote:
> > A mapping shown as follows:
> >
> > hctx    cpus
> > hctx0   0 1
> > hctx1   2
> > hctx2   3
> > hctx3   4 5
>
>
> tests as a
> testlist. I'm assuming that it creates the correct number or pattern
> of actions for the device. The testlist consists of the following
> lines:
>
> igt@gem_exec_gttfill@basic
> igt@gem_exec_suspend@basic-s3
>
> Kernel option scsi_mod.use_blk_mq=0 hides th
On Wed, Jul 12, 2017 at 09:40:10AM +0800, Ming Lei wrote:
> On Tue, Jul 11, 2017 at 7:14 AM, NeilBrown wrote:
> > On Mon, Jul 10 2017, Shaohua Li wrote:
> >
> >> On Mon, Jul 10, 2017 at 03:25:41PM +0800, Ming Lei wrote:
> >>> On Mon, Jul 10, 2017 at 02:38:19PM +1000, NeilBrown
On Wed, Jul 12, 2017 at 09:25:17AM +0200, Christoph Hellwig wrote:
> On Mon, Jul 10, 2017 at 11:40:17AM -0700, Shaohua Li wrote:
> > bio_free isn't a good place to free cgroup info. There are a
> > lot of cases where a bio is allocated in a special way (for example, on
> > the stack) and never gets called by
On Wed, 2017-07-12 at 10:30 +0800, Ming Lei wrote:
> On Tue, Jul 11, 2017 at 12:25:16PM -0600, Jens Axboe wrote:
> > What happens with fluid congestion boundaries, with shared tags?
>
> The approach in this patch should work, but the threshold may not
> be accurate in this way; one simple method
On Wed, 2017-07-12 at 11:15 +0800, Ming Lei wrote:
> On Tue, Jul 11, 2017 at 07:57:53PM +0000, Bart Van Assche wrote:
> > On Wed, 2017-07-12 at 02:20 +0800, Ming Lei wrote:
> > > Now SCSI won't stop the queue, and it is not necessary to use
> > > blk_mq_start_hw_queues(), so switch to blk_mq_run_hw_queues()
lines:
igt@gem_exec_gttfill@basic
igt@gem_exec_suspend@basic-s3
Kernel option scsi_mod.use_blk_mq=0 hides the issue on testhosts.
Configuration option was copied over on the testhosts and 20170712 was
re-tested; that's why today looks so much greener.
More information including traces and reproduction
> On 12 Jul 2017, at 16:22, Jens Axboe wrote:
>
> On 07/12/2017 03:41 AM, Paolo Valente wrote:
>>
>>> On 11 Jul 2017, at 15:58, Hou Tao wrote:
>>>
>>> There are mq devices (e.g., virtio-blk, nbd and loopback)
On 07/12/2017 03:41 AM, Paolo Valente wrote:
>
>> On 11 Jul 2017, at 15:58, Hou Tao wrote:
>>
>> There are mq devices (e.g., virtio-blk, nbd and loopback) which don't
>> invoke blk_mq_run_hw_queues() after the completion of a request.
>> If bfq is enabled
On Wed, Jul 12, 2017 at 3:10 PM, Greg Kroah-Hartman
wrote:
> On Tue, Jul 11, 2017 at 03:35:15PM -0700, Linus Torvalds wrote:
>> [ Very random list of maintainers and mailing lists, at least
>> partially by number of warnings generated by gcc-7.1.1 that is then
>>
On Wed, Jul 12, 2017 at 5:41 AM, Linus Torvalds
wrote:
>
> We also have about a bazillion
>
> warning: ‘*’ in boolean context, suggest ‘&&’ instead
>
> warnings in drivers/ata/libata-core.c, all due to a single macro that
> uses a pattern that gcc-7.1.1 doesn't
On Tue, 11 Jul 2017 15:35:15 -0700
Linus Torvalds wrote:
> [ Very random list of maintainers and mailing lists, at least
> partially by number of warnings generated by gcc-7.1.1 that is then
> correlated with the get_maintainers script ]
Under drivers/media, I
On 2017/7/12 17:41, Paolo Valente wrote:
>
>> On 11 Jul 2017, at 15:58, Hou Tao wrote:
>>
>> There are mq devices (e.g., virtio-blk, nbd and loopback) which don't
>> invoke blk_mq_run_hw_queues() after the completion of a request.
>> If bfq is enabled
> On 11 Jul 2017, at 15:58, Hou Tao wrote:
>
> There are mq devices (e.g., virtio-blk, nbd and loopback) which don't
> invoke blk_mq_run_hw_queues() after the completion of a request.
> If bfq is enabled on these devices and the slice_idle attribute or
>
On 2017/7/11 11:48 AM, Coly Li wrote:
> On 2017/7/6 11:24 PM, Christoph Hellwig wrote:
>> On Thu, Jul 06, 2017 at 03:35:48PM +0800, Coly Li wrote:
>>> Then does gfs2 break the above rule? In gfs2_metapath_ra() and
>>> gfs2_dir_readahead(), only REQ_META is used in submit_bh(). It seems an
>>> extra
On Wed, Jul 12, 2017 at 04:29:12PM +0800, Ming Lei wrote:
> We will support multipage bvec soon, so initialize the bvec
> table using the standard way instead of writing the
> table directly. Otherwise it won't work any more once
> multipage bvec is enabled.
It seems to me like these callsites also
> On 12 Jul 2017, at 09:25, Hou Tao wrote:
>
> The start time of an eligible entity should be less than or equal to
> the current virtual time, and an entity in the idle tree has a finish
> time greater than the current virtual time.
>
Thanks for
We will support multipage bvec soon, so initialize the bvec
table using the standard way instead of writing the
table directly. Otherwise it won't work any more once
multipage bvec is enabled.
Acked-by: Guoqing Jiang
Signed-off-by: Ming Lei
---
bio_add_page() won't fail for a resync bio, and the page index for each
bio is the same, so remove it.
More importantly, the 'idx' of 'struct resync_pages' is initialized in
the mempool allocator function; this is wrong since the mempool is only
responsible for allocation, so we can't use that for
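A hedged sketch of why per-use state does not belong in a mempool allocator callback (names here are illustrative):

  /* the alloc callback runs only when the pool creates a fresh element;
   * elements recycled via mempool_free()/mempool_alloc() skip it, so any
   * per-use state set here is stale on reuse */
  static void *resync_pages_alloc(gfp_t gfp, void *pool_data)
  {
          struct resync_pages *rp = kmalloc(sizeof(*rp), gfp);

          if (rp)
                  rp->idx = 0;    /* wrong place: not re-run on recycle */
          return rp;
  }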
On Mon, Jul 10, 2017 at 11:40:17AM -0700, Shaohua Li wrote:
> bio_free isn't a good place to free cgroup info. There are a
> lot of cases where a bio is allocated in a special way (for example, on
> the stack) and bio_put, hence bio_free, never gets called, so we are
> leaking memory. This patch moves the free to
The start time of an eligible entity should be less than or equal to
the current virtual time, and an entity in the idle tree has a finish
time greater than the current virtual time.
Signed-off-by: Hou Tao
---
block/bfq-iosched.h | 2 +-
block/bfq-wf2q.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
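A hedged sketch of the invariant being stated, in the spirit of bfq's B-WF2Q+ bookkeeping (the helpers are illustrative, not the patch itself):

  /* an entity is eligible once the virtual clock reaches its start time */
  static bool entity_is_eligible(u64 start, u64 vtime)
  {
          return start <= vtime;
  }

  /* entities whose finish time lies beyond the virtual clock wait in
   * the idle tree */
  static bool entity_in_idle_tree(u64 finish, u64 vtime)
  {
          return finish > vtime;
  }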