On Wed, Jan 31, 2018 at 10:39:53PM +0300, Ildar Ismagilov wrote:
> Signed-off-by: Ildar Ismagilov <devi...@gmail.com>
I did apply this for testing and review, thank you and good eyes!
However, I had to:

1.	Hand-edit it to make it work against the rcu/dev branch of my
	-rcu tree:
	git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git

2.	Fill out the commit logs.

Please see below for the updated patch, and please let me know if I
messed something up.

In the future, could you please apply your patches against the rcu/dev
branch of -rcu and provide more complete commit logs?  The commit log
should describe the problem and how that problem is fixed.

Also, are you able to test these using rcutorture?  If not, let's get
you set up for that, as I am prone to testing backlogs just now.  Plus
I am writing this while 34,000 feet over Western Australia.  So it
would mean fewer delays for you if you could test your own patches. ;-)

							Thanx, Paul

------------------------------------------------------------------------

commit 753abb606ba61012c8838c73e2c888aea20efe13
Author: Ildar Ismagilov <devi...@gmail.com>
Date:   Wed Jan 31 22:39:53 2018 +0300

    srcu: Prevent sdp->srcu_gp_seq_needed_exp counter wrap

    SRCU checks each srcu_data structure's grace-period number for
    counter wrap four times per cycle by default.  This frequency
    guarantees that normal comparisons will detect potential wrap.
    However, the expedited grace-period number is not checked.  The
    consequences are not too horrible (a failure to expedite a grace
    period when requested), but it would be good to avoid such things.
    This commit therefore adds this check to the expedited grace-period
    number.

    Signed-off-by: Ildar Ismagilov <devi...@gmail.com>
    Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>

diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index 2878387d4189..ad3e1aa5e6ea 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -579,6 +579,9 @@ static void srcu_gp_end(struct srcu_struct *sp)
 		if (ULONG_CMP_GE(gpseq, sdp->srcu_gp_seq_needed + 100))
 			sdp->srcu_gp_seq_needed = gpseq;
+		if (ULONG_CMP_GE(gpseq,
+				 sdp->srcu_gp_seq_needed_exp + 100))
+			sdp->srcu_gp_seq_needed_exp = gpseq;
 		spin_unlock_irqrestore_rcu_node(sdp, flags);
 	}
 }