In the current code the closure debug file is created outside of the
debug directory, and when unloading the module there is no removal
operation for the closure debug file, so recreating it fails when the
module is reloaded.
This patch moves the closure debug file into the "bcache" debug
directory so that the file can be removed along with the directory
when the module is unloaded.
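A minimal sketch of the intended fix, assuming hypothetical names for
bcache's debugfs dentries (the real patch wires this into bcache's
existing init/exit paths):

#include <linux/debugfs.h>

/* hypothetical names for the two debugfs dentries */
static struct dentry *bcache_debug;	/* the "bcache" debugfs directory */
static struct dentry *closure_debug;	/* the closure debug file */

static void closure_debug_init(void)
{
	bcache_debug = debugfs_create_dir("bcache", NULL);
	/* create "closures" inside the bcache dir, not at the debugfs root */
	closure_debug = debugfs_create_file("closures", 0400, bcache_debug,
					    NULL, NULL /* real fops here */);
}

static void closure_debug_exit(void)
{
	/*
	 * Removing the directory recursively also removes the closure
	 * file, so a later module reload can recreate it without error.
	 */
	debugfs_remove_recursive(bcache_debug);
}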
> -Original Message-
> From: Laurence Oberman [mailto:lober...@redhat.com]
> Sent: Saturday, March 3, 2018 3:23 AM
> To: Don Brace; Ming Lei
> Cc: Jens Axboe; linux-block@vger.kernel.org; Christoph Hellwig; Mike Snitzer;
> linux-s...@vger.kernel.org; Hannes Reinecke; Arun Easi; Omar Sando
> On Mar 2, 2018, at 2:34 PM, tang.jun...@zte.com.cn wrote:
>
> From: Tang Junhui
>
> Hello Chengguang
>
>> When unloading the bcache module there is no removal operation
>> for the closure debug file, so recreating it fails when the
>> module is reloaded.
>>
>
> Yes, this is a real issue.
> Act
Inside irq_create_affinity_masks(), once 'node_to_cpumask' is created,
it is accessed read-only, so mark it as const for
get_nodes_in_cpumask().
Cc: Thomas Gleixner
Cc: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
di
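A sketch of the change being described, based on the upstream shape of
get_nodes_in_cpumask() (simplified; the surrounding code is omitted):

#include <linux/cpumask.h>
#include <linux/nodemask.h>

/*
 * 'node_to_cpumask' is only read here, so taking it as pointer-to-const
 * lets the compiler enforce the read-only access the changelog describes.
 */
static int get_nodes_in_cpumask(const cpumask_var_t *node_to_cpumask,
				const struct cpumask *mask,
				nodemask_t *nodemsk)
{
	int n, nodes = 0;

	/* count the NUMA nodes whose CPUs intersect the supplied mask */
	for_each_node(n) {
		if (cpumask_intersects(mask, node_to_cpumask[n])) {
			node_set(n, *nodemsk);
			nodes++;
		}
	}
	return nodes;
}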
No functional change, just preparation for converting to the 2-stage
irq vector spread.
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 97 +--
1 file changed, 55 insertions(+), 42 deletions(-)
diff
Two parameters (start_vec, affv) are introduced to
irq_build_affinity_masks(), so that this helper can build the affinity
of each irq vector starting from the vector 'start_vec' and handle at
most 'affv' vectors.
This is required for doing the 2-stage irq vector spread among all
possible CPUs.
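A much-simplified sketch of the described contract (illustrative only:
the real helper spreads per NUMA node, while here each vector just gets
one CPU from the given mask):

#include <linux/cpumask.h>

/*
 * Build masks for at most 'affv' vectors starting at masks[start_vec],
 * using only CPUs in 'cpu_mask'; returns how many vectors were handled.
 */
static unsigned int irq_build_affinity_masks(unsigned int start_vec,
					     unsigned int affv,
					     const struct cpumask *cpu_mask,
					     struct cpumask *masks)
{
	unsigned int curvec = start_vec, done = 0;
	int cpu;

	for_each_cpu(cpu, cpu_mask) {
		cpumask_set_cpu(cpu, &masks[curvec++]);
		if (++done == affv)
			break;
	}
	return done;
}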
84676c1f21 ("genirq/affinity: assign vectors to all possible CPUs")
may cause irq vectors to be assigned to offline CPUs, and this kind of
assignment may leave far fewer irq vectors mapped to online CPUs, so
performance may get hurt.
For example, in an 8-core system, 0~3 online, 4~7 offline/not present
Hi,
This patchset tries to spread irq vectors among online CPUs as far as
possible, so that we can avoid allocating too few irq vectors that have
online CPUs mapped.
For example, in an 8-core system where 4 CPU cores (4~7) are
offline/not present, on a device with 4 queues:
1) before this patchset
irq 39, cpu
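The irq/CPU listing is cut off in the archive, but the two-stage idea
can be sketched as below, reusing the simplified
irq_build_affinity_masks() from the earlier sketch (helper and names
are illustrative, not the exact patch):

#include <linux/cpumask.h>
#include <linux/slab.h>

static int irq_spread_two_stage(unsigned int nvecs, struct cpumask *masks)
{
	cpumask_var_t non_online;
	unsigned int done;

	if (!zalloc_cpumask_var(&non_online, GFP_KERNEL))
		return -ENOMEM;

	/* stage 1: spread vectors on online CPUs first */
	done = irq_build_affinity_masks(0, nvecs, cpu_online_mask, masks);

	/* stage 2: spread the rest on possible-but-not-online CPUs */
	if (done < nvecs) {
		cpumask_andnot(non_online, cpu_possible_mask,
			       cpu_online_mask);
		irq_build_affinity_masks(done, nvecs - done,
					 non_online, masks);
	}

	free_cpumask_var(non_online);
	return 0;
}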
The following patches will introduce a two-stage irq spread for
improving the irq spread on all possible CPUs.
No functional change.
Cc: Thomas Gleixner
Reviewed-by: Christoph Hellwig
Signed-off-by: Ming Lei
---
kernel/irq/affinity.c | 26 +-
1 file changed, 13 insertions(+),
On Fri, Mar 02, 2018 at 04:53:21PM -0500, Laurence Oberman wrote:
> On Fri, 2018-03-02 at 15:03 +, Don Brace wrote:
> > > -Original Message-
> > > From: Laurence Oberman [mailto:lober...@redhat.com]
> > > Sent: Friday, March 02, 2018 8:09 AM
> > > To: Ming Lei
> > > Cc: Don Brace ; Jen
On Thu, Mar 1, 2018 at 10:40 AM, Logan Gunthorpe wrote:
> Register the CMB buffer as p2pmem and use the appropriate allocation
> functions to create and destroy the IO SQ.
>
> If the CMB supports WDS and RDS, publish it for use as p2p memory
> by other devices.
>
> Signed-off-by: Logan Gunthorpe
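A hedged sketch of the pattern the changelog describes, using the
pci_p2pdma API from the same patch series (the bar/size/offset values
are placeholders and the setup helper's name is hypothetical):

#include <linux/pci.h>
#include <linux/pci-p2pdma.h>

/* hypothetical helper: expose a controller memory buffer as p2pmem */
static int cmb_p2p_setup(struct pci_dev *pdev, int bar, size_t size,
			 u64 offset, bool supports_wds_rds)
{
	int rc;

	/* register the CMB BAR range as p2p memory */
	rc = pci_p2pdma_add_resource(pdev, bar, size, offset);
	if (rc)
		return rc;

	/* publish it for use by other devices only if WDS and RDS work */
	if (supports_wds_rds)
		pci_p2pmem_publish(pdev, true);

	return 0;
}

The IO SQ itself would then be allocated from that region with
pci_alloc_p2pmem(pdev, sq_size) and returned with pci_free_p2pmem()
on teardown.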
On 18/3/5 04:23, Tejun Heo wrote:
> Hello, Joseph.
>
> Sorry about late reply.
>
> On Wed, Feb 28, 2018 at 02:52:10PM +0800, Joseph Qi wrote:
>> In the current code, I'm afraid pd_offline_fn() as well as the rest
>> of the destruction have to be called together under the same
>> blkcg->lock and q->queue_lock.
Hello Mike
I sent this email from my personal mailbox (110950...@qq.com), but it may
have failed, so I am resending it from my office mailbox. Below is the
mail content I sent previously.
I am Tang Junhui (tang.jun...@zte.com.cn). This email come
Hello, Joseph.
Sorry about late reply.
On Wed, Feb 28, 2018 at 02:52:10PM +0800, Joseph Qi wrote:
> In the current code, I'm afraid pd_offline_fn() as well as the rest
> of the destruction have to be called together under the same
> blkcg->lock and q->queue_lock.
> For example, if we split the pd_offline_fn
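A minimal sketch of the locking constraint under discussion, with
simplified stand-in structures (loosely modeled on blkg destruction,
not the exact kernel code):

#include <linux/spinlock.h>

/* simplified stand-ins for the kernel structures in the discussion */
struct blkcg { spinlock_t lock; };
struct request_queue { spinlock_t queue_lock; };

/*
 * pd_offline_fn() and the rest of the destruction run back to back
 * under the same q->queue_lock and blkcg->lock; splitting them apart
 * would open a window where another path can see half-destroyed
 * policy data.
 */
static void blkg_destroy_locked(struct blkcg *blkcg,
				struct request_queue *q)
{
	spin_lock_irq(&q->queue_lock);
	spin_lock(&blkcg->lock);

	/* pd_offline_fn(pd); ...then the remaining teardown... */

	spin_unlock(&blkcg->lock);
	spin_unlock_irq(&q->queue_lock);
}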
On Sun, 2018-03-04 at 20:01 +0100, Jean-Louis Dupond wrote:
> I'm indeed running CentOS 6 with the Virt SIG kernels. Already updated
> to 4.9.75, but recently hit the problem again.
>
> The first PID that was in D-state (root 27157 0.0 0.0 127664 5196
> ? D 06:19 0:00 \_ vgdi
Hi Bart,
Thanks for your answer.
I'm indeed running CentOS 6 with the Virt SIG kernels. Already updated
to 4.9.75, but recently hit the problem again.
The first PID that was in D-state (root 27157 0.0 0.0 127664 5196
? D 06:19 0:00 \_ vgdisplay -c --ignorelockingfailure),