Hello Joel, Peter,

On Mon, Jan 12, 2026 at 02:37:14PM +0000, Joel Fernandes wrote:
> 
> 
> > On Jan 12, 2026, at 9:24 AM, Peter Zijlstra <[email protected]> wrote:
> > 
> > On Mon, Jan 12, 2026 at 02:20:44PM +0000, Joel Fernandes wrote:
> >> 
> >> 
> >>>> On Jan 12, 2026, at 9:03 AM, Joel Fernandes <[email protected]> 
> >>>> wrote:
> >>> 
> >>> 
> >>> 
> >>>> On Jan 12, 2026, at 4:44 AM, Vishal Chourasia <[email protected]> 
> >>>> wrote:
> >>>> 
> >>>> Bulk CPU hotplug operations—such as switching SMT modes across all
> >>>> cores—require hotplugging multiple CPUs in rapid succession. On large
> >>>> systems, this process takes significant time, increasing as the number
> >>>> of CPUs grows, leading to substantial delays on high-core-count
> >>>> machines. Analysis [1] reveals that the majority of this time is spent
> >>>> waiting for synchronize_rcu().
> >>>> 
> >>>> Expedite synchronize_rcu() during the hotplug path to accelerate the
> >>>> operation. Since CPU hotplug is a user-initiated administrative task,
> >>>> it should complete as quickly as possible.
> >>> 
> >>> When does the user initiate this in your system?
Workloads differ in how sensitive they are to the SMT level, so
administrators switch SMT modes at runtime (e.g. between SMT8 and SMT1)
to match the current workload.

> >>> 
> >>> Hotplug should not be happening that often to begin with, it is a slow 
> >>> path that
> >>> depends on the disruptive stop-machine mechanism.
Yes, it doesn't happen often, but when it does, it takes more than
20 minutes to finish on large machines (>= 1920 CPUs).

> >>> 
> >>>> 
> >>>> Performance data on a PPC64 system with 400 CPUs:
> >>>> 
> >>>> + ppc64_cpu --smt=1 (SMT8 to SMT1)
> >>>> Before: real 1m14.792s
> >>>> After:  real 0m03.205s  # ~23x improvement
> >>>> 
> >>>> + ppc64_cpu --smt=8 (SMT1 to SMT8)
> >>>> Before: real 2m27.695s
> >>>> After:  real 0m02.510s  # ~58x improvement
> >>> 
> >>> This does look compelling but, Could you provide more information about 
> >>> how this was tested - what does the ppc binary do (how many hot plugs , 
> >>> how does the performance change with cycle count etc)?
The ppc64_cpu utility generates a list of target CPUs based on the
requested SMT state and writes to their corresponding sysfs online
entries.
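
Roughly, for each CPU thread that has to change state it does the
equivalent of the following (a minimal userspace sketch, not the actual
powerpc-utils code; set_cpu_online() is a made-up helper name):

#include <stdio.h>

/* Illustrative only: the real tool parses the topology and handles
 * errors.  This just shows the sysfs write that ends up in
 * online_store() -> device_online() -> cpu_up()/cpu_down() in the
 * kernel (see the stacks further down).
 */
static int set_cpu_online(int cpu, int online)
{
        char path[64];
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/online", cpu);
        f = fopen(path, "w");
        if (!f)
                return -1;
        /* One write per CPU thread, i.e. one full hotplug operation
         * (with its synchronize_rcu() waits) per thread, which is why
         * the total time grows with the CPU count.
         */
        fprintf(f, "%d\n", online);
        return fclose(f);
}

So, for example, going from SMT8 to SMT1 on the 400-CPU box above means
350 such writes, one per secondary thread.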

Sorry, I didn't follow your second question about how the performance
changes with the cycle count; could you elaborate?
> >>> 
> >>> Can you also run rcutorture testing? Some of the scenarios like TREE03 
> >>> stress hotplug.
Sure, I will run rcutorture (including the TREE03 scenario) and get
back with the results.

> >> 
> >> Also, why not just use the expedite api at the callsite that is slow
> >> than blanket expediting everything between hotplug lock and unlock.
> >> That is more specific fix than this fix which applies more broadly to
> >> all operations. It appears the report you provided does provide the
> >> culprit callsite.
I initially attempted to replace synchronize_rcu() with
synchronize_rcu_expedited() at specific callsites. However, the primary
bottleneck is the synchronize_rcu() issued from rcu_sync_enter() inside
percpu_down_write(), reached via _cpu_up() and try_online_node(); please
refer to the call stacks below (and the sketch that follows them). Since
percpu_down_write() is shared infrastructure used throughout the kernel,
modifying it directly would force expedited grace periods on unrelated
subsystems.

@[
    synchronize_rcu+12
    rcu_sync_enter+260
    percpu_down_write+76
    _cpu_up+140
    cpu_up+440
    cpu_subsys_online+128
    device_online+176
    online_store+220
    dev_attr_store+52
    sysfs_kf_write+120
    kernfs_fop_write_iter+456
    vfs_write+952
    ksys_write+132
    system_call_exception+292
    system_call_vectored_common+348
]: 350
@[
    synchronize_rcu+12
    rcu_sync_enter+260
    percpu_down_write+76
    try_online_node+64
    cpu_up+120
    cpu_subsys_online+128
    device_online+176
    online_store+220
    dev_attr_store+52
    sysfs_kf_write+120
    kernfs_fop_write_iter+456
    vfs_write+952
    ksys_write+132
    system_call_exception+292
    system_call_vectored_common+348
]: 350
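
To make the trade-off concrete, here is a rough sketch of what blanket
expediting between hotplug lock and unlock amounts to (simplified, not
the posted diff): rcu_expedite_gp() and rcu_unexpedite_gp() are the
existing counters that make subsequent synchronize_rcu() calls behave
as expedited, and cpus_write_lock()/cpus_write_unlock() are the hotplug
write-side helpers in kernel/cpu.c.

#include <linux/cpu.h>
#include <linux/percpu-rwsem.h>
#include <linux/rcupdate.h>

/* Simplified sketch (not the posted diff): expedite grace periods for
 * the duration of the hotplug write-side section, so the
 * synchronize_rcu() reached via rcu_sync_enter() in the stacks above
 * behaves like synchronize_rcu_expedited(), without touching
 * percpu_down_write() itself.
 */
void cpus_write_lock(void)
{
        rcu_expedite_gp();
        percpu_down_write(&cpu_hotplug_lock);
}

void cpus_write_unlock(void)
{
        percpu_up_write(&cpu_hotplug_lock);
        rcu_unexpedite_gp();
}

The obvious downside, as you point out, is that while the lock is held
this also expedites unrelated synchronize_rcu() callers elsewhere in
the system.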


> > 
> > Because hotplug is not a fast path; there is no expectation of
> > performance here.
True.

> 
> Agreed, I was just wondering if it was incredibly slow or something. Looking 
> forward to more justification from Vishal on usecase,
> 
>  - Joel
> 
> 
> > 

- vishalc
