Re: 4.4-final: 28 bioset threads on small notebook

2016-02-23 Thread Kent Overstreet
On Wed, Feb 24, 2016 at 10:48:10AM +0800, Ming Lei wrote:
> On Tue, Feb 23, 2016 at 10:54 PM, Mike Snitzer  wrote:
> > On Mon, Feb 22 2016 at  9:55pm -0500,
> > Ming Lei  wrote:
> >
> >> On Tue, Feb 23, 2016 at 6:58 AM, Kent Overstreet
> >>  wrote:
> >> > On Sun, Feb 21, 2016 at 05:40:59PM +0800, Ming Lei wrote:
> >> >> On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  
> >> >> wrote:
> >> >> >>-Original Message-
> >> >> >
> >> >> > So it's almost already "per request_queue"
> >> >>
> >> >> Yes, that is because of the following line:
> >> >>
> >> >> q->bio_split = bioset_create(BIO_POOL_SIZE, 0);
> >> >>
> >> >> in blk_alloc_queue_node().
> >> >>
> >> >> Looks like this bio_set doesn't need to be per-request_queue, and
> >> >> now it is only used for fast-cloning bio for splitting, and one global
> >> >> split bio_set should be enough.
> >> >
> >> > It does have to be per request queue for stacking block devices (which 
> >> > includes
> >> > loopback).
> >>
> >> In commit df2cb6daa4(block: Avoid deadlocks with bio allocation by
> >> stacking drivers), deadlock in this situation has been avoided already.
> >> Or are there other issues with global bio_set? I appreciate if you may
> >> explain it a bit if there are.
> >
> > Even with commit df2cb6daa4 there is still risk of deadlocks (even
> > without low memory condition), see:
> > https://patchwork.kernel.org/patch/7398411/
> 
> That is definitely another problem, one which isn't related to low memory,
> and I guess Kent means there might be a deadlock risk in the case of a
> shared bio_set.
> 
> >
> > (you may recall you blocked this patch with concerns about performance,
> > context switches, plug merging being compromised, etc.. to which I never
> > circled back to verify your concerns)
> 
> I still remember that problem:
> 
> 1) Process A
>  - two bios (a, b) are split in dm's make_request function
>  - bio(a) is submitted via generic_make_request(), so it is staged
>    in current->bio_list
>  - time t1
>  - before bio(b) is submitted, down_write(>lock) is run and
>    never returns
> 
> 2) Process B:
>  - just at time t1, waits for completion of bio(a) while holding the
>    lock taken via down_write(>lock)
> 
> Then Process A waits for the lock, which was acquired by B first, and the
> two bios (a, b) can't reach the driver/device at all.
> 
> It looks like current->bio_list is fragile to locks taken from a
> make_request function, and moving the lock into workqueue context should
> be helpful.
> 
> And I am happy to continue to discuss this issue further.
> 
> >
> > But it illustrates the type of problems that can occur when your rescue
> > infrastructure is shared across devices (in the context of df2cb6daa4,
> > current->bio_list contains bios from multiple devices).
> >
> > If a single splitting bio_set were shared across devices there would be
> > no guarantee of forward progress with complex stacked devices (one or
> > more devices could exhaust the reserve and starve out other devices in
> > the stack).  So keeping the bio_set per request_queue isn't prone to
> > failure like a shared bio_set might be.
> 
> Setting aside the dm lock problem: from Kent's commit (df2cb6daa4) log and
> the patch, it looks like forward progress can be guaranteed for stacked
> devices sharing the same bio_set, but it would be better to get Kent's
> clarification.
> 
> If forward progress can be guaranteed, a per-CPU mempool might avoid easy
> exhaustion, because it is reasonable to assume that one CPU can only
> provide a certain amount of bandwidth wrt. block transfer.

Generally speaking, with potential deadlocks like this I don't bother to work
out the specific scenario; it's enough to know that there's a shared resource
and multiple users that depend on each other... if you've got that, you'll have
a deadlock.

But, if you're curious: say we've got block devices a and b, where a bio
submitted to a gets passed down to b.

For the bioset itself: if a bio gets split when submitted to a, and then needs
to be split again when it's submitted to b, you're allocating twice from the
same mempool - and the first allocation can't be freed until the original bio
completes. Deadlock.

With the rescuer threads it's more subtle, but you just need a scenario where
the rescuer is required twice in a row. I'm not going to bother trying to work
out the details, but it's the same principle - you can end up in a situation
where you're blocked and need the rescuer thread to make forward progress
(or you'd deadlock - that's why it exists, right?). Well, what happens if that
happens twice in a row, and the second time you're running out of the rescuer
thread? Oops.


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-23 Thread Ming Lei
On Tue, Feb 23, 2016 at 10:54 PM, Mike Snitzer  wrote:
> On Mon, Feb 22 2016 at  9:55pm -0500,
> Ming Lei  wrote:
>
>> On Tue, Feb 23, 2016 at 6:58 AM, Kent Overstreet
>>  wrote:
>> > On Sun, Feb 21, 2016 at 05:40:59PM +0800, Ming Lei wrote:
>> >> On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  
>> >> wrote:
>> >> >>-Original Message-
>> >> >
>> >> > So it's almost already "per request_queue"
>> >>
>> >> Yes, that is because of the following line:
>> >>
>> >> q->bio_split = bioset_create(BIO_POOL_SIZE, 0);
>> >>
>> >> in blk_alloc_queue_node().
>> >>
>> >> Looks like this bio_set doesn't need to be per-request_queue, and
>> >> now it is only used for fast-cloning bio for splitting, and one global
>> >> split bio_set should be enough.
>> >
>> > It does have to be per request queue for stacking block devices (which 
>> > includes
>> > loopback).
>>
>> In commit df2cb6daa4(block: Avoid deadlocks with bio allocation by
>> stacking drivers), deadlock in this situation has been avoided already.
>> Or are there other issues with global bio_set? I appreciate if you may
>> explain it a bit if there are.
>
> Even with commit df2cb6daa4 there is still risk of deadlocks (even
> without low memory condition), see:
> https://patchwork.kernel.org/patch/7398411/

That is definitely another problem, one which isn't related to low memory,
and I guess Kent means there might be a deadlock risk in the case of a
shared bio_set.

>
> (you may recall you blocked this patch with concerns about performance,
> context switches, plug merging being compromised, etc.. to which I never
> circled back to verify your concerns)

I still remember that problem:

1) Process A
 - two bios (a, b) are split in dm's make_request function
 - bio(a) is submitted via generic_make_request(), so it is staged
   in current->bio_list
 - time t1
 - before bio(b) is submitted, down_write(>lock) is run and
   never returns

2) Process B:
 - just at time t1, waits for completion of bio(a) while holding the
   lock taken via down_write(>lock)

Then Process A waits for the lock, which was acquired by B first, and the
two bios (a, b) can't reach the driver/device at all.

It looks like current->bio_list is fragile to locks taken from a
make_request function, and moving the lock into workqueue context should
be helpful.

And I am happy to continue to discuss this issue further.

>
> But it illustrates the type of problems that can occur when your rescue
> infrastructure is shared across devices (in the context of df2cb6daa4,
> current->bio_list contains bios from multiple devices).
>
> If a single splitting bio_set were shared across devices there would be
> no guarantee of forward progress with complex stacked devices (one or
> more devices could exhaust the reserve and starve out other devices in
> the stack).  So keeping the bio_set per request_queue isn't prone to
> failure like a shared bio_set might be.

Setting aside the dm lock problem: from Kent's commit (df2cb6daa4) log and
the patch, it looks like forward progress can be guaranteed for stacked
devices sharing the same bio_set, but it would be better to get Kent's
clarification.

If forward progress can be guaranteed, a per-CPU mempool might avoid easy
exhaustion, because it is reasonable to assume that one CPU can only
provide a certain amount of bandwidth wrt. block transfer.

Thanks
Ming


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-23 Thread Pavel Machek
On Mon 2016-02-22 13:58:18, Kent Overstreet wrote:
> On Sun, Feb 21, 2016 at 05:40:59PM +0800, Ming Lei wrote:
> > On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  
> > wrote:
> > >>-Original Message-
> > >
> > > So it's almost already "per request_queue"
> > 
> > Yes, that is because of the following line:
> > 
> > q->bio_split = bioset_create(BIO_POOL_SIZE, 0);
> > 
> > in blk_alloc_queue_node().
> > 
> > Looks like this bio_set doesn't need to be per-request_queue, and
> > now it is only used for fast-cloning bio for splitting, and one global
> > split bio_set should be enough.
> 
> It does have to be per request queue for stacking block devices (which 
> includes
> loopback).

Could we avoid allocating request queues for devices that are not even
opened? I have these in my system:

loop0  loop2  loop4  loop6  md0   nbd1   nbd11  nbd13  nbd15  nbd3 nbd5  nbd7   
 nbd9  sda1  sda3
loop1  loop3  loop5  loop7  nbd0  nbd10  nbd12  nbd14  nbd2   nbd4 nbd6  nbd8   
 sda   sda2  sda4

...but nbd is never used, loop1+ is never used, and loop0 is only used
once in a blue moon. Each thread takes 8K+...

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-23 Thread Mike Snitzer
On Mon, Feb 22 2016 at  9:55pm -0500,
Ming Lei  wrote:

> On Tue, Feb 23, 2016 at 6:58 AM, Kent Overstreet
>  wrote:
> > On Sun, Feb 21, 2016 at 05:40:59PM +0800, Ming Lei wrote:
> >> On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  
> >> wrote:
> >> >>-Original Message-
> >> >
> >> > So it's almost already "per request_queue"
> >>
> >> Yes, that is because of the following line:
> >>
> >> q->bio_split = bioset_create(BIO_POOL_SIZE, 0);
> >>
> >> in blk_alloc_queue_node().
> >>
> >> Looks like this bio_set doesn't need to be per-request_queue, and
> >> now it is only used for fast-cloning bio for splitting, and one global
> >> split bio_set should be enough.
> >
> > It does have to be per request queue for stacking block devices (which 
> > includes
> > loopback).
> 
> In commit df2cb6daa4 ("block: Avoid deadlocks with bio allocation by
> stacking drivers"), the deadlock in this situation has already been avoided.
> Or are there other issues with a global bio_set? I would appreciate it if
> you could explain a bit if there are.

Even with commit df2cb6daa4 there is still risk of deadlocks (even
without low memory condition), see:
https://patchwork.kernel.org/patch/7398411/

(you may recall you blocked this patch with concerns about performance,
context switches, plug merging being compromised, etc., to which I never
circled back to verify your concerns)

But it illustrates the type of problems that can occur when your rescue
infrastructure is shared across devices (in the context of df2cb6daa4,
current->bio_list contains bios from multiple devices). 

If a single splitting bio_set were shared across devices there would be
no guarantee of forward progress with complex stacked devices (one or
more devices could exhaust the reserve and starve out other devices in
the stack).  So keeping the bio_set per request_queue isn't prone to
failure like a shared bio_set might be.

Mike


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-22 Thread Ming Lei
On Tue, Feb 23, 2016 at 6:58 AM, Kent Overstreet
 wrote:
> On Sun, Feb 21, 2016 at 05:40:59PM +0800, Ming Lei wrote:
>> On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  wrote:
>> >>-Original Message-
>> >
>> > So it's almost already "per request_queue"
>>
>> Yes, that is because of the following line:
>>
>> q->bio_split = bioset_create(BIO_POOL_SIZE, 0);
>>
>> in blk_alloc_queue_node().
>>
>> Looks like this bio_set doesn't need to be per-request_queue, and
>> now it is only used for fast-cloning bio for splitting, and one global
>> split bio_set should be enough.
>
> It does have to be per request queue for stacking block devices (which 
> includes
> loopback).

In commit df2cb6daa4 ("block: Avoid deadlocks with bio allocation by
stacking drivers"), the deadlock in this situation has already been avoided.
Or are there other issues with a global bio_set? I would appreciate it if
you could explain a bit if there are.

Thanks,
Ming Lei


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-22 Thread Kent Overstreet
On Sun, Feb 21, 2016 at 05:40:59PM +0800, Ming Lei wrote:
> On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  wrote:
> >>-Original Message-
> >
> > So it's almost already "per request_queue"
> 
> Yes, that is because of the following line:
> 
> q->bio_split = bioset_create(BIO_POOL_SIZE, 0);
> 
> in blk_alloc_queue_node().
> 
> Looks like this bio_set doesn't need to be per-request_queue, and
> now it is only used for fast-cloning bio for splitting, and one global
> split bio_set should be enough.

It does have to be per request queue for stacking block devices (which includes
loopback).


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-21 Thread Ming Lei
On Sun, Feb 21, 2016 at 2:43 PM, Ming Lin-SSI  wrote:
>>-Original Message-
>
> So it's almost already "per request_queue"

Yes, that is because of the following line:

q->bio_split = bioset_create(BIO_POOL_SIZE, 0);

in blk_alloc_queue_node().

Looks like this bio_set doesn't need to be per-request_queue, and
now it is only used for fast-cloning bio for splitting, and one global
split bio_set should be enough.


thanks,
Ming


RE: 4.4-final: 28 bioset threads on small notebook

2016-02-21 Thread Ming Lin-SSI
>-Original Message-
>From: Kent Overstreet [mailto:kent.overstr...@gmail.com]
>
>On Sat, Feb 20, 2016 at 09:55:19PM +0100, Pavel Machek wrote:
>> Hi!
>>
>> > > > You're directing this concern to the wrong person.
>> > > >
>> > > > I already told you DM is _not_ contributing any extra "bioset" threads
>> > > > (ever since commit dbba42d8a).
>> > >
>> > > Well, sorry about that. Note that l-k is on the cc list, so hopefully
>> > > the right person sees it too.
>> > >
>> > > Ok, let me check... it seems that
>> > > 54efd50bfd873e2dbf784e0b21a8027ba4299a3e is responsible, thus Kent
>> > > Overstreet  is to blame.
>> > >
>> > > Um, and you acked the patch, so you are partly responsible.
>> >
>> > You still haven't shown you even understand the patch so don't try to
>> > blame me for one aspect you don't like.
>>
>> Well, I don't have to understand the patch to argue its wrong.
>>
>> > > > But in general, these "bioset" threads are a side-effect of the
>> > > > late-bio-splitting support.  So is your position on it: "I don't like
>> > > > that feature if it comes at the expense of adding resources I can _see_
>> > > > for something I (naively?) view as useless"?
>> > >
>> > > > Just seems... naive... but you could be trying to say something else
>> > > > entirely.
>> > >
>> > > > Anyway, if you don't like something: understand why it is there and
>> > > > then try to fix it to your liking (without compromising why it was
>> > > > there to begin with).
>> > >
>> > > Well, 28 kernel threads on a notebook is a bug, plain and simple. Do
>> > > you argue it is not?
>> >
>> > Just implies you have 28 request_queues right?  You clearly have
>> > something else going on on your notebook than the average notebook
>> > user.
>>
>> I'm not using the modules, but otherwise I'm not doing anything
>> special. How many request_queues should I expect? How many do you have
>> on your notebook?
>
>It's one rescuer thread per bio_set, not one per request queue, so 28 is more
>than I'd expect but there's lots of random bio_sets so it's not entirely
>unexpected.
>
>It'd be better to have the rescuers be per request_queue, just someone is
>going to have to write the code.

I booted a VM and it also has 28 bioset threads.

That's because I have 27 block devices.

root@wheezy:~# ls /sys/block/
loop0  loop2  loop4  loop6  ram0  ram10  ram12  ram14  ram2  ram4  ram6  ram8  
sr0  vdb
loop1  loop3  loop5  loop7  ram1  ram11  ram13  ram15  ram3  ram5  ram7  ram9  
vda

And the additional one comes from init_bio

[0.329627] Call Trace:
[0.329970]  [] dump_stack+0x63/0x87
[0.330531]  [] __bioset_create+0x29e/0x2b0
[0.331127]  [] ? ca_keys_setup+0xa6/0xa6
[0.331735]  [] init_bio+0xa1/0xd1
[0.332284]  [] do_one_initcall+0xcd/0x1f0
[0.332883]  [] ? parse_args+0x296/0x480
[0.333460]  [] kernel_init_freeable+0x16f/0x1fa
[0.334131]  [] ? initcall_blacklist+0xba/0xba
[0.334747]  [] ? rest_init+0x80/0x80
[0.335301]  [] kernel_init+0xe/0xf0
[0.335842]  [] ret_from_fork+0x3f/0x70
[0.336371]  [] ? rest_init+0x80/0x80

So it's almost already "per request_queue"
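A rough way to repeat this check on another machine (a sketch: thread names and counts vary by kernel version, and newer kernels may show no "bioset" threads at all):

```shell
# Compare the number of block devices with the number of "bioset"
# rescuer threads; on the VM above these were 27 and 28, the extra
# one being the global fs_bio_set created in init_bio.
devices=$(ls /sys/block | wc -l)
biosets=$(ps -e -o comm= | grep -c '^bioset' || true)
echo "block devices: $devices, bioset threads: $biosets"
```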


RE: 4.4-final: 28 bioset threads on small notebook

2016-02-21 Thread Ming Lin-SSI
>-Original Message-
>From: Kent Overstreet [mailto:kent.overstr...@gmail.com]
>
>On Sat, Feb 20, 2016 at 09:55:19PM +0100, Pavel Machek wrote:
>> Hi!
>>
>> > > > You're directing this concern to the wrong person.
>> > > >
>> > > > I already told you DM is _not_ contributing any extra "bioset" threads
>> > > > (ever since commit dbba42d8a).
>> > >
>> > > Well, sorry about that. Note that l-k is on the cc list, so hopefully
>> > > the right person sees it too.
>> > >
>> > > Ok, let me check... it seems that
>> > > 54efd50bfd873e2dbf784e0b21a8027ba4299a3e is responsible, thus Kent
>> > > Overstreet  is to blame.
>> > >
>> > > Um, and you acked the patch, so you are partly responsible.
>> >
>> > You still haven't shown you even understand the patch so don't try to
>> > blame me for one aspect you don't like.
>>
>> Well, I don't have to understand the patch to argue its wrong.
>>
>> > > > But in general, these "bioset" threads are a side-effect of the
>> > > > late-bio-splitting support.  So is your position on it: "I don't like
>> > > > that feature if it comes at the expense of adding resources I can _see_
>> > > > for something I (naively?) view as useless"?
>> > >
>> > > > Just seems... naive... but you could be trying to say something else
>> > > > entirely.
>> > >
>> > > > Anyway, if you don't like something: understand why it is there and
>then
>> > > > try to fix it to your liking (without compromising why it was there to
>> > > > begin with).
>> > >
>> > > Well, 28 kernel threads on a notebook is a bug, plain and simple. Do
>> > > you argue it is not?
>> >
>> > Just implies you have 28 request_queues right?  You clearly have
>> > something else going on on your notebook than the average notebook
>> > user.
>>
>> I'm not using the modules, but otherwise I'm not doing anything
>> special. How many request_queues should I expect? How many do you have
>> on your notebook?
>
>It's one rescuer thread per bio_set, not one per request queue, so 28 is more
>than I'd expect but there's lots of random bio_sets so it's not entirely
>unexpected.
>
>It'd be better to have the rescuers be per request_queue, just someone is going
>to have to write the code.

I boot a VM and it also has 28 bioset threads.

That's because I have 27 block devices.

root@wheezy:~# ls /sys/block/
loop0  loop2  loop4  loop6  ram0  ram10  ram12  ram14  ram2  ram4  ram6  ram8  
sr0  vdb
loop1  loop3  loop5  loop7  ram1  ram11  ram13  ram15  ram3  ram5  ram7  ram9  
vda

And the additional one comes from init_bio

[0.329627] Call Trace:
[0.329970]  [] dump_stack+0x63/0x87
[0.330531]  [] __bioset_create+0x29e/0x2b0
[0.331127]  [] ? ca_keys_setup+0xa6/0xa6
[0.331735]  [] init_bio+0xa1/0xd1
[0.332284]  [] do_one_initcall+0xcd/0x1f0
[0.332883]  [] ? parse_args+0x296/0x480
[0.333460]  [] kernel_init_freeable+0x16f/0x1fa
[0.334131]  [] ? initcall_blacklist+0xba/0xba
[0.334747]  [] ? rest_init+0x80/0x80
[0.335301]  [] kernel_init+0xe/0xf0
[0.335842]  [] ret_from_fork+0x3f/0x70
[0.336371]  [] ? rest_init+0x80/0x80

So it's almost already "per request_queue"
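The accounting in the message above can be checked with a little arithmetic (an illustrative user-space sketch, not kernel code; the device names are taken from the `ls /sys/block/` listing, and the "+1" is the global fs_bio_set that init_bio creates at boot):

```python
# Devices from the `ls /sys/block/` listing above: each request_queue
# gets its own bio_split bio_set (allocated in blk_alloc_queue_node()),
# and init_bio() adds the one global fs_bio_set at boot.
loops = [f"loop{i}" for i in range(8)]    # loop0..loop7
rams = [f"ram{i}" for i in range(16)]     # ram0..ram15
other = ["sr0", "vda", "vdb"]

per_queue_biosets = len(loops) + len(rams) + len(other)
print(per_queue_biosets)      # 27 block devices -> 27 per-queue biosets
print(per_queue_biosets + 1)  # + fs_bio_set from init_bio = 28 threads
```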


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-21 Thread Kent Overstreet
On Sat, Feb 20, 2016 at 09:55:19PM +0100, Pavel Machek wrote:
> Hi!
> 
> > > > You're directing this concern to the wrong person.
> > > > 
> > > > I already told you DM is _not_ contributing any extra "bioset" threads
> > > > (ever since commit dbba42d8a).
> > > 
> > > Well, sorry about that. Note that l-k is on the cc list, so hopefully
> > > the right person sees it too.
> > > 
> > > Ok, let me check... it seems that 
> > > 54efd50bfd873e2dbf784e0b21a8027ba4299a3e is responsible, thus Kent
> > > Overstreet  is to blame.
> > > 
> > > Um, and you acked the patch, so you are partly responsible.
> > 
> > You still haven't shown you even understand the patch so don't try to
> > blame me for one aspect you don't like.
> 
> Well, I don't have to understand the patch to argue it's wrong.
> 
> > > > But in general, these "bioset" threads are a side-effect of the
> > > > late-bio-splitting support.  So is your position on it: "I don't like
> > > > that feature if it comes at the expense of adding resources I can _see_
> > > > for something I (naively?) view as useless"?
> > > 
> > > > Just seems... naive... but you could be trying to say something else
> > > > entirely.
> > > 
> > > > Anyway, if you don't like something: understand why it is there and then
> > > > try to fix it to your liking (without compromising why it was there to
> > > > begin with).
> > > 
> > > Well, 28 kernel threads on a notebook is a bug, plain and simple. Do
> > > you argue it is not?
> > 
> > Just implies you have 28 request_queues right?  You clearly have
> > something else going on on your notebook than the average notebook
> > user.
> 
> I'm not using the modules, but otherwise I'm not doing anything
> special. How many request_queues should I expect? How many do you have
> on your notebook?

It's one rescuer thread per bio_set, not one per request queue, so 28 is more
than I'd expect but there's lots of random bio_sets so it's not entirely
unexpected.

It'd be better to have the rescuers be per request_queue, just someone is going
to have to write the code.
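Kent's point that the rescuer is tied to the bio_set rather than the request_queue can be illustrated with a toy user-space model (a hypothetical simplification, not kernel code: `BioSet` here just pairs a work queue with a dedicated rescuer thread, mirroring how bioset_create() spawns one rescuer per bio_set, so the thread count scales with the number of bio_sets):

```python
import queue
import threading

class BioSet:
    """Toy stand-in for a kernel bio_set: each instance owns its own
    rescuer thread, so the thread count scales with the number of
    bio_sets, not with the number of request queues."""
    def __init__(self, name):
        self.work = queue.Queue()
        self.rescuer = threading.Thread(
            target=self._rescue, name=f"bioset/{name}", daemon=True)
        self.rescuer.start()

    def _rescue(self):
        # Drain stalled submissions so forward progress is guaranteed
        # even when the submitting task itself would block.
        while (item := self.work.get()) is not None:
            item()

    def close(self):
        self.work.put(None)  # shutdown sentinel
        self.rescuer.join()

# 28 bio_sets -> 28 rescuer threads, matching the ps output in the thread.
pools = [BioSet(f"q{i}") for i in range(28)]
rescuers = [t for t in threading.enumerate()
            if t.name.startswith("bioset/")]
print(len(rescuers))  # 28
for p in pools:
    p.close()
```

Making the rescuers per request_queue, as suggested above, would mean one such thread per queue regardless of how many bio_sets each queue stacks on top.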


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-20 Thread Pavel Machek
Hi!

> > > You're directing this concern to the wrong person.
> > > 
> > > I already told you DM is _not_ contributing any extra "bioset" threads
> > > (ever since commit dbba42d8a).
> > 
> > Well, sorry about that. Note that l-k is on the cc list, so hopefully
> > the right person sees it too.
> > 
> > Ok, let me check... it seems that 
> > 54efd50bfd873e2dbf784e0b21a8027ba4299a3e is responsible, thus Kent
> > Overstreet  is to blame.
> > 
> > Um, and you acked the patch, so you are partly responsible.
> 
> You still haven't shown you even understand the patch so don't try to
> blame me for one aspect you don't like.

Well, I don't have to understand the patch to argue it's wrong.

> > > But in general, these "bioset" threads are a side-effect of the
> > > late-bio-splitting support.  So is your position on it: "I don't like
> > > that feature if it comes at the expense of adding resources I can _see_
> > > for something I (naively?) view as useless"?
> > 
> > > Just seems... naive... but you could be trying to say something else
> > > entirely.
> > 
> > > Anyway, if you don't like something: understand why it is there and then
> > > try to fix it to your liking (without compromising why it was there to
> > > begin with).
> > 
> > Well, 28 kernel threads on a notebook is a bug, plain and simple. Do
> > you argue it is not?
> 
> Just implies you have 28 request_queues right?  You clearly have
> something else going on on your notebook than the average notebook
> user.

I'm not using the modules, but otherwise I'm not doing anything
special. How many request_queues should I expect? How many do you have
on your notebook?

Thanks,
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-20 Thread Mike Snitzer
On Sat, Feb 20 2016 at  3:04pm -0500,
Pavel Machek  wrote:

> Hi!
> 
> > > > > > I know it is normal to spawn 8 threads for every single function,
> > > > > ...
> > > > > > but 28 threads?
> > > > > > 
> > > > > > root   974  0.0  0.0  0 0 ?S<   Dec08   0:00 
> > > > > > [bioset]
> > > > > ...
> > > > > 
> > > > > How many physical block devices do you have?
> > > > > 
> > > > > DM is doing its part to not contribute to this:
> > > > > dbba42d8a ("dm: eliminate unused "bioset" process for each bio-based 
> > > > > DM device")
> > > > > 
> > > > > (but yeah, all these extra 'bioset' threads aren't ideal)
> > > > 
> > > > Still there in 4.4-final.
> > > 
> > > ...and still there in 4.5-rc4 :-(.
> > 
> > You're directing this concern to the wrong person.
> > 
> > I already told you DM is _not_ contributing any extra "bioset" threads
> > (ever since commit dbba42d8a).
> 
> Well, sorry about that. Note that l-k is on the cc list, so hopefully
> the right person sees it too.
> 
> Ok, let me check... it seems that 
> 54efd50bfd873e2dbf784e0b21a8027ba4299a3e is responsible, thus Kent
> Overstreet  is to blame.
> 
> Um, and you acked the patch, so you are partly responsible.

You still haven't shown you even understand the patch so don't try to
blame me for one aspect you don't like.
 
> > But in general, these "bioset" threads are a side-effect of the
> > late-bio-splitting support.  So is your position on it: "I don't like
> > that feature if it comes at the expense of adding resources I can _see_
> > for something I (naively?) view as useless"?
> 
> > Just seems... naive... but you could be trying to say something else
> > entirely.
> 
> > Anyway, if you don't like something: understand why it is there and then
> > try to fix it to your liking (without compromising why it was there to
> > begin with).
> 
> Well, 28 kernel threads on a notebook is a bug, plain and simple. Do
> you argue it is not?

Just implies you have 28 request_queues right?  You clearly have
something else going on on your notebook than the average notebook user.


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-20 Thread Pavel Machek
Hi!

> > > > > I know it is normal to spawn 8 threads for every single function,
> > > > ...
> > > > > but 28 threads?
> > > > > 
> > > > > root   974  0.0  0.0  0 0 ?S<   Dec08   0:00 
> > > > > [bioset]
> > > > ...
> > > > 
> > > > How many physical block devices do you have?
> > > > 
> > > > DM is doing its part to not contribute to this:
> > > > dbba42d8a ("dm: eliminate unused "bioset" process for each bio-based DM 
> > > > device")
> > > > 
> > > > (but yeah, all these extra 'bioset' threads aren't ideal)
> > > 
> > > Still there in 4.4-final.
> > 
> > ...and still there in 4.5-rc4 :-(.
> 
> You're directing this concern to the wrong person.
> 
> I already told you DM is _not_ contributing any extra "bioset" threads
> (ever since commit dbba42d8a).

Well, sorry about that. Note that l-k is on the cc list, so hopefully
the right person sees it too.

Ok, let me check... it seems that 
54efd50bfd873e2dbf784e0b21a8027ba4299a3e is responsible, thus Kent
Overstreet  is to blame.

Um, and you acked the patch, so you are partly responsible.

> But in general, these "bioset" threads are a side-effect of the
> late-bio-splitting support.  So is your position on it: "I don't like
> that feature if it comes at the expense of adding resources I can _see_
> for something I (naively?) view as useless"?

> Just seems... naive... but you could be trying to say something else
> entirely.

> Anyway, if you don't like something: understand why it is there and then
> try to fix it to your liking (without compromising why it was there to
> begin with).

Well, 28 kernel threads on a notebook is a bug, plain and simple. Do
you argue it is not?

Best regards,
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-20 Thread Mike Snitzer
On Sat, Feb 20 2016 at  1:42pm -0500,
Pavel Machek  wrote:

> On Sat 2016-02-20 18:40:35, Pavel Machek wrote:
> > 
> > On Fri 2015-12-11 09:08:41, Mike Snitzer wrote:
> > > On Fri, Dec 11 2015 at  5:49am -0500,
> > > Pavel Machek  wrote:
> > > 
> > > > Hi!
> > > > 
> > > > I know it is normal to spawn 8 threads for every single function,
> > > ...
> > > > but 28 threads?
> > > > 
> > > > root   974  0.0  0.0  0 0 ?S<   Dec08   0:00 
> > > > [bioset]
> > > ...
> > > 
> > > How many physical block devices do you have?
> > > 
> > > DM is doing its part to not contribute to this:
> > > dbba42d8a ("dm: eliminate unused "bioset" process for each bio-based DM 
> > > device")
> > > 
> > > (but yeah, all these extra 'bioset' threads aren't ideal)
> > 
> > Still there in 4.4-final.
> 
> ...and still there in 4.5-rc4 :-(.
>   Pavel

You're directing this concern to the wrong person.

I already told you DM is _not_ contributing any extra "bioset" threads
(ever since commit dbba42d8a).

But in general, these "bioset" threads are a side-effect of the
late-bio-splitting support.  So is your position on it: "I don't like
that feature if it comes at the expense of adding resources I can _see_
for something I (naively?) view as useless"?

Just seems... naive... but you could be trying to say something else
entirely.

Anyway, if you don't like something: understand why it is there and then
try to fix it to your liking (without compromising why it was there to
begin with).


Re: 4.4-final: 28 bioset threads on small notebook

2016-02-20 Thread Pavel Machek
On Sat 2016-02-20 18:40:35, Pavel Machek wrote:
> 
> On Fri 2015-12-11 09:08:41, Mike Snitzer wrote:
> > On Fri, Dec 11 2015 at  5:49am -0500,
> > Pavel Machek  wrote:
> > 
> > > Hi!
> > > 
> > > I know it is normal to spawn 8 threads for every single function,
> > ...
> > > but 28 threads?
> > > 
> > > root   974  0.0  0.0  0 0 ?S<   Dec08   0:00 [bioset]
> > ...
> > 
> > How many physical block devices do you have?
> > 
> > DM is doing its part to not contribute to this:
> > dbba42d8a ("dm: eliminate unused "bioset" process for each bio-based DM 
> > device")
> > 
> > (but yeah, all these extra 'bioset' threads aren't ideal)
> 
> Still there in 4.4-final.

...and still there in 4.5-rc4 :-(.
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

