On Sun, Apr 12, 2026 at 03:48:58PM -0700, Alexei Starovoitov wrote:
> On Fri, Apr 3, 2026 at 9:28 AM Samuel Wu <[email protected]> wrote:
> >
> > On Fri, Apr 3, 2026 at 3:04 AM Greg Kroah-Hartman
> > <[email protected]> wrote:
> > >
> > > On Thu, Apr 02, 2026 at 12:37:12PM -0700, Samuel Wu wrote:
> > > > On Wed, Apr 1, 2026 at 9:06 PM Greg Kroah-Hartman
> > > > <[email protected]> wrote:
> > > > >
> > > > > On Wed, Apr 01, 2026 at 12:07:12PM -0700, Samuel Wu wrote:
> > > > > > On Wed, Apr 1, 2026 at 2:15 AM Greg Kroah-Hartman
> > > > > > <[email protected]> wrote:
> > > > > > >
> > > > > > > On Tue, Mar 31, 2026 at 08:34:09AM -0700, Samuel Wu wrote:
> >
> > [ ... ]
> >
> > > > The data is fundamental for debugging and improving power at scale.
> > > > The original discussion and patch [1] provide more context of the
> > > > intent. To summarize the history, debugfs was unstable and insecure,
> > > > leading to the current sysfs implementation. However, sysfs has the
> > > > constraint of one attribute per node, requiring 10 sysfs accesses per
> > > > wakeup source.
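
[ Editorial illustration of the access pattern described above: with one
attribute per sysfs node, a userspace collector must open, read, and
close a separate file for every attribute of every wakeup source.  The
directory layout and attribute names below are illustrative stand-ins
modeled on /sys/class/wakeup/wakeup<N>/, not an authoritative listing;
the sketch builds a fake tree in a temp directory so the per-snapshot
file-access count is easy to see. ]

```python
import os
import tempfile

# Illustrative attribute names only -- the real set lives under
# /sys/class/wakeup/wakeup<N>/ and may differ.
ATTRS = ["name", "active_count", "event_count", "wakeup_count",
         "expire_count", "active_time_ms", "total_time_ms",
         "max_time_ms", "last_change_ms", "prevent_suspend_time_ms"]

def make_fake_sysfs(root, n_sources):
    """Create a fake /sys/class/wakeup-like tree for the demo."""
    for i in range(n_sources):
        d = os.path.join(root, f"wakeup{i}")
        os.makedirs(d)
        for attr in ATTRS:
            with open(os.path.join(d, attr), "w") as f:
                f.write("0\n")

def collect(root):
    """Take one snapshot: read every attribute of every source,
    counting one file access per attribute read."""
    reads = 0
    stats = {}
    for src in sorted(os.listdir(root)):
        d = os.path.join(root, src)
        stats[src] = {}
        for attr in ATTRS:
            with open(os.path.join(d, attr)) as f:
                stats[src][attr] = f.read().strip()
            reads += 1
    return stats, reads

with tempfile.TemporaryDirectory() as root:
    make_fake_sysfs(root, 150)
    stats, reads = collect(root)
    print(reads)  # 150 sources x 10 attributes = 1500 file accesses
```

One full snapshot of 150 sources already costs 1500 open/read/close
round trips; a single in-kernel traversal (e.g. via a BPF iterator)
avoids that per-attribute syscall overhead entirely.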
> > >
> > > Ok, as the sysfs api doesn't work for your use case anymore, why do
> > > we need to keep it around at all?
> > >
> > > > That said, I completely agree that reading 1500+ sysfs files at once
> > > > is unreasonable. Perhaps the sysfs approach was manageable at the time
> > > > of [1], but moving forward we need a more scalable solution. This is
> > > > the main motivator and makes BPF the sane approach, as it improves
> > > > traversal in nearly every aspect (e.g. cycles, memory, simplicity,
> > > > scalability).
> > >
> > > I'm all for making this more scalable and working for your systems
> > > now, but consider: if you could drop the sysfs api entirely, would
> > > you want a different type of api altogether instead of having to
> > > plumb through all of this using ebpf?
> >
> > Almost all use cases want all this data at once, so AFAICT BPF offers
> > the best performance for that. But of course, open to discussion if
> > there is an alternative API that matches BPF's performance for this
> > use case.
> >
> > I'm not opposed to dropping the sysfs approach, and I attempted to do
> > so in the v1 patch [1]. I'm not sure who else currently uses those
> > sysfs nodes, but a config flag should remove friction and could be a
> > stepping stone toward deprecation/removal.
> >
> > [1]: https://lore.kernel.org/all/[email protected]/
> 
> The patches make sense to me.
> 
> Patch 2 adds a bpf selftest and corresponding:
> +CONFIG_DIBS_LO=y
> +CONFIG_PM_WAKELOCKS=y
> 
> and almost green in BPF CI.
> 
> Except s390 that fails with:
> 
> Error: #682/1 wakeup_source/iterate_and_verify_times
> Error: #682/1 wakeup_source/iterate_and_verify_times
> libbpf: extern (func ksym) 'bpf_wakeup_sources_get_head': not found in
> kernel or module BTFs
> libbpf: failed to load BPF skeleton 'test_wakeup_source': -EINVAL
> test_wakeup_source:FAIL:skel_open_and_load unexpected error: -22
> 
> We can still land it into bpf-next for this merge window.
> 
> Greg,
> any objection ?

Yes, it is too late for 7.1-rc1, sorry; it will not have had any time in
linux-next.  Let's revisit it after -rc1 is out.  And again, I feel that
"walk all sysfs devices in bpf" is not the correct solution for the
system-wide snapshot interface you want to have, especially as you feel
the one you previously added is now obsolete.

thanks,

greg k-h
