Re: [PATCH] sched/fair: remove the spin_lock operations

2020-11-02 Thread Hui Su
On Mon, Nov 02, 2020 at 08:53:41AM -0500, Phil Auld wrote:
> On Fri, Oct 30, 2020 at 10:16:29PM +0000, David Laight wrote:
> > From: Benjamin Segall
> > > Sent: 30 October 2020 18:48
> > > 
> > > Hui Su  writes:
> > > 
> > > > Since 'ab93a4bc955b ("sched/fair: Remove
> > > > distribute_running from CFS bandwidth")', there is
> > > > nothing to protect between raw_spin_lock_irqsave() and
> > > > raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> > > >
> > > > So remove it.
> > > 
> > > Reviewed-by: Ben Segall 
> > > 
> > > (I might nitpick the subject to be clear that it should be trivial
> > > because the lock area is empty, or call them dead or something, but it's
> > > not all that important)
> > 
> > I don't know about this case, but a lock+unlock can be used
> > to ensure that nothing else holds the lock when acquiring
> > the lock requires another lock be held.
> > 
> > So if the normal sequence is:
> > lock(table)
> > # lookup item
> > lock(item)
> > unlock(table)
> > 
> > unlock(item)
> > 
> > Then it can make sense to do:
> > lock(table)
> > lock(item)
> > unlock(item)
> > 
> > unlock(table)
> > 
> > although that ought to deserve a comment.
> >
> 
> Nah, this one used to be like this :
> 
> raw_spin_lock_irqsave(&cfs_b->lock, flags);
> lsub_positive(&cfs_b->runtime, runtime);
> cfs_b->distribute_running = 0;
> raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
> 
> It's just a leftover. I agree that if it was there for some other
> purpose that it would really need a comment. In this case, it's an
> artifact of patch-based development I think.
> 
> 
> Cheers,
> Phil
> 

Yeah, thanks for your explanation, Phil.

It is just a leftover.



Re: [PATCH] sched/fair: remove the spin_lock operations

2020-11-02 Thread Phil Auld
On Fri, Oct 30, 2020 at 10:16:29PM +0000, David Laight wrote:
> From: Benjamin Segall
> > Sent: 30 October 2020 18:48
> > 
> > Hui Su  writes:
> > 
> > > Since 'ab93a4bc955b ("sched/fair: Remove
> > > distribute_running from CFS bandwidth")', there is
> > > nothing to protect between raw_spin_lock_irqsave() and
> > > raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> > >
> > > So remove it.
> > 
> > Reviewed-by: Ben Segall 
> > 
> > (I might nitpick the subject to be clear that it should be trivial
> > because the lock area is empty, or call them dead or something, but it's
> > not all that important)
> 
> I don't know about this case, but a lock+unlock can be used
> to ensure that nothing else holds the lock when acquiring
> the lock requires another lock be held.
> 
> So if the normal sequence is:
>   lock(table)
>   # lookup item
>   lock(item)
>   unlock(table)
>   
>   unlock(item)
> 
> Then it can make sense to do:
>   lock(table)
>   lock(item)
>   unlock(item)
>   
>   unlock(table)
> 
> although that ought to deserve a comment.
>

Nah, this one used to be like this :

raw_spin_lock_irqsave(&cfs_b->lock, flags);
lsub_positive(&cfs_b->runtime, runtime);
cfs_b->distribute_running = 0;
raw_spin_unlock_irqrestore(&cfs_b->lock, flags);

It's just a leftover. I agree that if it was there for some other
purpose that it would really need a comment. In this case, it's an
artifact of patch-based development I think.


Cheers,
Phil


>   David
> 
> 

-- 



RE: [PATCH] sched/fair: remove the spin_lock operations

2020-10-30 Thread David Laight
From: Benjamin Segall
> Sent: 30 October 2020 18:48
> 
> Hui Su  writes:
> 
> > Since 'ab93a4bc955b ("sched/fair: Remove
> > distribute_running from CFS bandwidth")', there is
> > nothing to protect between raw_spin_lock_irqsave() and
> > raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> >
> > So remove it.
> 
> Reviewed-by: Ben Segall 
> 
> (I might nitpick the subject to be clear that it should be trivial
> because the lock area is empty, or call them dead or something, but it's
> not all that important)

I don't know about this case, but a lock+unlock can be used
to ensure that nothing else holds the lock when acquiring
the lock requires another lock be held.

So if the normal sequence is:
lock(table)
# lookup item
lock(item)
unlock(table)

unlock(item)

Then it can make sense to do:
lock(table)
lock(item)
unlock(item)

unlock(table)

although that ought to deserve a comment.

David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



Re: [PATCH] sched/fair: remove the spin_lock operations

2020-10-30 Thread Benjamin Segall
Hui Su  writes:

> Since 'ab93a4bc955b ("sched/fair: Remove
> distribute_running from CFS bandwidth")', there is
> nothing to protect between raw_spin_lock_irqsave() and
> raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
>
> So remove it.

Reviewed-by: Ben Segall 

(I might nitpick the subject to be clear that it should be trivial
because the lock area is empty, or call them dead or something, but it's
not all that important)

>
> Signed-off-by: Hui Su 
> ---
>  kernel/sched/fair.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..5ecbf5e63198 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct 
> cfs_bandwidth *cfs_b)
>   return;
>  
>   distribute_cfs_runtime(cfs_b);
> -
> > - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> > - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
>  }
>  
>  /*


Re: [PATCH] sched/fair: remove the spin_lock operations

2020-10-30 Thread Phil Auld
On Fri, Oct 30, 2020 at 10:46:21PM +0800 Hui Su wrote:
> Since 'ab93a4bc955b ("sched/fair: Remove
> distribute_running from CFS bandwidth")', there is
> nothing to protect between raw_spin_lock_irqsave() and
> raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().
> 
> So remove it.
> 
> Signed-off-by: Hui Su 
> ---
>  kernel/sched/fair.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 290f9e38378c..5ecbf5e63198 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
>   return;
>  
>   distribute_cfs_runtime(cfs_b);
> -
> - raw_spin_lock_irqsave(&cfs_b->lock, flags);
> - raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
>  }
>  
>  /*
> -- 
> 2.29.0
> 
> 

Nice :)

Reviewed-by: Phil Auld 
-- 



[PATCH] sched/fair: remove the spin_lock operations

2020-10-30 Thread Hui Su
Since 'ab93a4bc955b ("sched/fair: Remove
distribute_running from CFS bandwidth")', there is
nothing to protect between raw_spin_lock_irqsave() and
raw_spin_unlock_irqrestore() in do_sched_cfs_slack_timer().

So remove it.

Signed-off-by: Hui Su 
---
 kernel/sched/fair.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 290f9e38378c..5ecbf5e63198 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5105,9 +5105,6 @@ static void do_sched_cfs_slack_timer(struct cfs_bandwidth *cfs_b)
return;
 
distribute_cfs_runtime(cfs_b);
-
-   raw_spin_lock_irqsave(&cfs_b->lock, flags);
-   raw_spin_unlock_irqrestore(&cfs_b->lock, flags);
 }
 
 /*
-- 
2.29.0