[PATCH v2] bcache: move closure debug file into debug directory

2018-03-04 Thread Chengguang Xu
In the current code the closure debug file sits outside the debug directory, and when unloading the module there is no removal operation for it, so reloading the module fails with a creation error. This patch moves the closure debug file into the "bcache" debug directory so that the file can
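A minimal sketch of the idea in kernel C: create the closure debug file under the bcache debugfs directory and remove it on module unload. The names used here (bcache_debug, closure_debug, debug_ops) are assumptions for illustration, not necessarily the ones in the patch.

#include <linux/debugfs.h>
#include <linux/err.h>

/* seq_file operations assumed to be defined elsewhere in closure.c */
extern const struct file_operations debug_ops;

static struct dentry *closure_debug;

/* Create "closures" inside the bcache debugfs directory instead of at
 * the debugfs root, so the file lives under the module's own directory. */
void closure_debug_init(struct dentry *bcache_debug)
{
        if (!IS_ERR_OR_NULL(bcache_debug))
                closure_debug = debugfs_create_file("closures", 0400,
                                                    bcache_debug, NULL,
                                                    &debug_ops);
}

/* Remove the file on unload so a later modprobe does not fail when it
 * tries to create the same entry again. */
void closure_debug_exit(void)
{
        if (!IS_ERR_OR_NULL(closure_debug))
                debugfs_remove(closure_debug);
}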

RE: [PATCH V3 1/8] scsi: hpsa: fix selection of reply queue

2018-03-04 Thread Kashyap Desai
> -Original Message- > From: Laurence Oberman [mailto:lober...@redhat.com] > Sent: Saturday, March 3, 2018 3:23 AM > To: Don Brace; Ming Lei > Cc: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike > Snitzer; > linux-s...@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sando

Re: [PATCH] bcache: remove closure debug file when unloading module

2018-03-04 Thread Chengguang Xu
> On Mar 2, 2018, at 2:34 PM, tang.jun...@zte.com.cn wrote: > > From: Tang Junhui > > Hello Chengguang > >> When unloading the bcache module there is no removal >> operation for the closure debug file, so it causes a >> creation error when trying to reload the module. >> > > Yes, this issue is real. > Act

[PATCH V2 2/5] genirq/affinity: mark 'node_to_cpumask' as const for get_nodes_in_cpumask()

2018-03-04 Thread Ming Lei
Inside irq_create_affinity_masks(), once 'node_to_cpumask' is created, it is accessed read-only, so mark it as const for get_nodes_in_cpumask(). Cc: Thomas Gleixner Cc: Christoph Hellwig Signed-off-by: Ming Lei --- kernel/irq/affinity.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) di
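For illustration, a hedged sketch of what a const-qualified lookup helper of this shape can look like; the body is simplified and the real function in kernel/irq/affinity.c differs in detail.

#include <linux/cpumask.h>
#include <linux/nodemask.h>

/* 'node_to_cpumask' is only read here, so it can be const-qualified. */
static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
                                const struct cpumask *mask,
                                nodemask_t *nodemsk)
{
        int n, nodes = 0;

        /* Count the NUMA nodes whose CPUs intersect the supplied mask */
        for_each_node(n) {
                if (cpumask_intersects(mask, node_to_cpumask[n])) {
                        node_set(n, *nodemsk);
                        nodes++;
                }
        }
        return nodes;
}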

[PATCH V2 3/5] genirq/affinity: move actual irq vector spread into one helper

2018-03-04 Thread Ming Lei
No functional change, just preparation for converting to the 2-stage irq vector spread. Cc: Thomas Gleixner Reviewed-by: Christoph Hellwig Signed-off-by: Ming Lei --- kernel/irq/affinity.c | 97 +-- 1 file changed, 55 insertions(+), 42 deletions(-) diff
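A hedged sketch of the refactoring pattern described, assuming an extracted helper named irq_build_affinity_masks() with the start_vec/affv form that patch 4/5 adds (a possible body for it is sketched under patch 4/5 below); the real diff differs in detail.

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/slab.h>

/* Helper sketched under patch 4/5 below */
static int irq_build_affinity_masks(const struct irq_affinity *affd,
                                    int start_vec, int affv,
                                    const struct cpumask *cpu_mask,
                                    struct cpumask *masks);

/* After the refactor, irq_create_affinity_masks() only allocates the
 * masks array and delegates all of the spreading to one helper, which a
 * later patch can then call once per CPU set. */
struct cpumask *irq_create_affinity_masks_sketch(int nvecs,
                                                 const struct irq_affinity *affd)
{
        int affv = nvecs - affd->pre_vectors - affd->post_vectors;
        struct cpumask *masks;

        masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
        if (!masks)
                return NULL;

        irq_build_affinity_masks(affd, affd->pre_vectors, affv,
                                 cpu_possible_mask, masks);
        return masks;
}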

[PATCH V2 4/5] genirq/affinity: support to do irq vectors spread starting from any vector

2018-03-04 Thread Ming Lei
Two parameters (start_vec, affv) are introduced to irq_build_affinity_masks(), so this helper can build the affinity of each irq vector starting from the vector 'start_vec' and handle at most 'affv' vectors. This is required to do the 2-stage irq vector spread among all possible CP
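A hedged sketch of what such a parameterized helper can look like; the real function also groups CPUs per NUMA node, which is omitted here, and the names follow the changelog rather than the final code.

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>

/* Round-robin the CPUs in 'cpu_mask' over the 'affv' vectors starting
 * at 'start_vec', wrapping back to 'start_vec' at the end of the range.
 * Returns how many vectors received at least one CPU. */
static int irq_build_affinity_masks(const struct irq_affinity *affd,
                                    int start_vec, int affv,
                                    const struct cpumask *cpu_mask,
                                    struct cpumask *masks)
{
        int curvec = start_vec, last_vec = start_vec + affv;
        int cpu, assigned = 0;

        for_each_cpu(cpu, cpu_mask) {
                cpumask_set_cpu(cpu, &masks[curvec]);
                assigned++;
                if (++curvec == last_vec)
                        curvec = start_vec;     /* wrap around */
        }
        return min(assigned, affv);
}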

[PATCH V2 5/5] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-04 Thread Ming Lei
84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs") may cause irq vector assigned to all offline CPUs, and this kind of assignment may cause much less irq vectors mapped to online CPUs, and performance may get hurt. For example, in a 8 cores system, 0~3 online, 4~8 offline/not pres

[PATCH V2 0/5] genirq/affinity: irq vector spread among online CPUs as far as possible

2018-03-04 Thread Ming Lei
Hi, This patchset tries to spread irq vectors among online CPUs as far as possible, so that we avoid ending up with too few irq vectors mapped to online CPUs. For example, in an 8-core system where 4 CPU cores (4~7) are offline/not present, on a device with 4 queues: 1) before this patchset irq 39, cpu
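A hedged sketch of the 2-stage idea, reusing the hypothetical irq_build_affinity_masks() sketched above: stage 1 covers the online CPUs first, stage 2 spreads whatever is left over the remaining possible CPUs.

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/interrupt.h>

/* Helper as sketched under patch 4/5 above */
static int irq_build_affinity_masks(const struct irq_affinity *affd,
                                    int start_vec, int affv,
                                    const struct cpumask *cpu_mask,
                                    struct cpumask *masks);

static void two_stage_spread_sketch(const struct irq_affinity *affd,
                                    int first_vec, int affv,
                                    struct cpumask *masks)
{
        cpumask_var_t remaining;
        int nr_done;

        if (!zalloc_cpumask_var(&remaining, GFP_KERNEL))
                return;

        /* Stage 1: online CPUs get vectors first */
        nr_done = irq_build_affinity_masks(affd, first_vec, affv,
                                           cpu_online_mask, masks);

        /* Stage 2: spread the leftover vectors over the CPUs that are
         * possible but not currently online */
        cpumask_andnot(remaining, cpu_possible_mask, cpu_online_mask);
        if (nr_done < affv)
                irq_build_affinity_masks(affd, first_vec + nr_done,
                                         affv - nr_done, remaining, masks);

        free_cpumask_var(remaining);
}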

[PATCH V2 1/5] genirq/affinity: rename *node_to_possible_cpumask as *node_to_cpumask

2018-03-04 Thread Ming Lei
The following patches will introduce two-stage irq spread for improving irq spread on all possible CPUs. No functional change. Cc: Thomas Gleixner Reviewed-by: Christoph Hellwig Signed-off-by: Ming Lei --- kernel/irq/affinity.c | 26 +- 1 file changed, 13 insertions(+),

Re: [PATCH V3 1/8] scsi: hpsa: fix selection of reply queue

2018-03-04 Thread Ming Lei
On Fri, Mar 02, 2018 at 04:53:21PM -0500, Laurence Oberman wrote: > On Fri, 2018-03-02 at 15:03 +, Don Brace wrote: > > > -Original Message- > > > From: Laurence Oberman [mailto:lober...@redhat.com] > > > Sent: Friday, March 02, 2018 8:09 AM > > > To: Ming Lei > > > Cc: Don Brace ; Jen

Re: [PATCH v2 07/10] nvme-pci: Use PCI p2pmem subsystem to manage the CMB

2018-03-04 Thread Oliver
On Thu, Mar 1, 2018 at 10:40 AM, Logan Gunthorpe wrote: > Register the CMB buffer as p2pmem and use the appropriate allocation > functions to create and destroy the IO SQ. > > If the CMB supports WDS and RDS, publish it for use as p2p memory > by other devices. > > Signed-off-by: Logan Gunthorpe
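A rough sketch of the registration flow described in the quoted changelog; the function names below follow the p2pdma API as it was eventually merged and may differ from this v2 posting, and bar/size/offset stand in for the CMB's actual BAR parameters.

#include <linux/pci-p2pdma.h>

static void nvme_map_cmb_sketch(struct pci_dev *pdev, int bar, size_t size,
                                u64 offset, bool cmb_supports_wds_rds)
{
        /* Expose the CMB BAR region as peer-to-peer (p2p) memory */
        if (pci_p2pdma_add_resource(pdev, bar, size, offset))
                return;

        /* SQ entries can then be carved out of the CMB with the p2pmem
         * allocator, e.g. pci_alloc_p2pmem(pdev, sq_size). */

        /* If the CMB supports WDS and RDS, publish it so other devices
         * may use it as p2p memory as well. */
        if (cmb_supports_wds_rds)
                pci_p2pmem_publish(pdev, true);
}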

Re: [PATCH v2] blk-throttle: fix race between blkcg_bio_issue_check and cgroup_rmdir

2018-03-04 Thread Joseph Qi
On 18/3/5 04:23, Tejun Heo wrote: > Hello, Joseph. > > Sorry about the late reply. > > On Wed, Feb 28, 2018 at 02:52:10PM +0800, Joseph Qi wrote: >> In the current code, I'm afraid pd_offline_fn() as well as the rest of >> the destruction has to be called together under the same blkcg->lock and >> q->queue_l

Re: [PATCH] bcache: don't attach backing with duplicate UUID

2018-03-04 Thread tang . junhui
Hello Mike, I sent the email from my personal mailbox (110950...@qq.com), but it may have failed, so I am resending this email from my office mailbox. Below is the mail content I sent previously. I am Tang Junhui (tang.jun...@zte.com.cn). This email come

Re: [PATCH v2] blk-throttle: fix race between blkcg_bio_issue_check and cgroup_rmdir

2018-03-04 Thread Tejun Heo
Hello, Joseph. Sorry about the late reply. On Wed, Feb 28, 2018 at 02:52:10PM +0800, Joseph Qi wrote: > In the current code, I'm afraid pd_offline_fn() as well as the rest of > the destruction has to be called together under the same blkcg->lock and > q->queue_lock. > For example, if we split the pd_offline_fn
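For context, a simplified sketch of the locking pattern under discussion, loosely based on block/blk-cgroup.c of that era (not the exact code): pd_offline_fn() is invoked while both q->queue_lock and blkcg->lock are held, while the actual freeing is deferred past an RCU grace period.

#include <linux/blk-cgroup.h>

static void blkg_destroy_sketch(struct blkcg_gq *blkg)
{
        struct blkcg *blkcg = blkg->blkcg;
        int i;

        lockdep_assert_held(blkg->q->queue_lock);
        lockdep_assert_held(&blkcg->lock);

        /* Tell every registered policy that this group is going offline */
        for (i = 0; i < BLKCG_MAX_POLS; i++) {
                struct blkcg_policy *pol = blkcg_policy[i];

                if (blkg->pd[i] && pol && pol->pd_offline_fn)
                        pol->pd_offline_fn(blkg->pd[i]);
        }

        /* The pd_free_fn() side runs later, after an RCU grace period */
        call_rcu(&blkg->rcu_head, __blkg_release_rcu);
}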

Re: vgdisplay hang on iSCSI session

2018-03-04 Thread Bart Van Assche
On Sun, 2018-03-04 at 20:01 +0100, Jean-Louis Dupond wrote: > I'm indeed running CentOS 6 with the Virt SIG kernels. Already updated > to 4.9.75, but recently hit the problem again. > > The first PID that was in D-state (root 27157 0.0 0.0 127664 5196 > ? D 06:19 0:00 \_ vgdi

Re: vgdisplay hang on iSCSI session

2018-03-04 Thread Jean-Louis Dupond
Hi Bart, Thanks for your answer. I'm indeed running CentOS 6 with the Virt SIG kernels. Already updated to 4.9.75, but recently hit the problem again. The first PID that was in D-state (root 27157 0.0 0.0 127664 5196 ? D 06:19 0:00 \_ vgdisplay -c --ignorelockingfailure),