On 2025-12-14 15:49, Boqun Feng wrote:
> [...]
> A few issues in the prototype:


See [1] for the updated commit.

> * hazptr_load_try_protect_percpu() needs to either be called in
>   a preemption-disabled context or disable preemption itself.

Good point, fixed. I added a "guard(preempt)()" scoped to the body of
the retry loop, so preemption is disabled for each attempt, but not
across attempts.
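
The shape is roughly as follows (a sketch only; guard(preempt)() is
the existing cleanup.h-based preempt guard, the other names are made
up and not the actual code from [1]):

        for (;;) {
                guard(preempt)();

                /*
                 * Preemption is disabled for the duration of this
                 * attempt only: the guard's scope ends with the loop
                 * body, so preemption is re-enabled between attempts.
                 */
                if (hazptr_load_try_protect_percpu(ctx, addr))
                        break;
        }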


> * In __hazptr_load_try_protect(), it seems to me that we shouldn't
>   return after hazptr_chain_backup_slot() is called, but rather
>   continue, otherwise we would miss the updater changes.

Good point, fixed.
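
In other words, after falling back to the backup slot, the protect
loop now keeps re-validating instead of returning early. A rough
sketch of the shape (hypothetical names and types, not the actual
code from [1]):

        slot = hazptr_chain_backup_slot(ctx);
        for (;;) {
                struct obj *p = READ_ONCE(*addr);

                /* Publish the protected value in the backup slot. */
                WRITE_ONCE(slot->addr, p);
                smp_mb();

                /*
                 * Re-check the source: if an updater changed it after
                 * we published, we must not miss that change.
                 */
                if (READ_ONCE(*addr) == p)
                        return p;       /* p is protected. */
                /* Raced with an updater: retry with the new value. */
        }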


> * As I mentioned above, since we allow preemption during hazptr
>   reader sections, when we have multiple users it's likely that the
>   per-CPU slot is in use most of the time, and other try_protect()s
>   would always go into the slow path, which downgrades us to normal
>   locking.

Fair point. I've modified my implementation to have 8 slots per CPU
(rather than one); the slot array fits within a single cache line.
That should allow a few preempted tasks to keep hold of a hazard
pointer without requiring use of the backup slot.
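
Concretely, something along these lines (a sketch; names approximate,
see [1] for the actual layout):

#define HAZPTR_NR_PERCPU_SLOTS  8

/*
 * 8 pointer-sized slots = 64 bytes on 64-bit, i.e. a single cache
 * line, so a few preempted readers can each hold a slot without
 * pushing later readers into the backup-slot slow path.
 */
struct hazptr_percpu_slots {
        void *slots[HAZPTR_NR_PERCPU_SLOTS];
} ____cacheline_aligned;

static DEFINE_PER_CPU(struct hazptr_percpu_slots, hazptr_slots);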

As for the overflow list, I've made the following changes:

- It now uses per-CPU overflow lists, so the fallback relies on
  per-CPU locking rather than on a global lock.
- List add/del is now protected by a raw spinlock rather than a mutex.
- Each overflow list now carries a generation counter (a 64-bit
  integer), allowing piecewise iteration of the list by synchronize.
  The generation counter is read when list iteration begins. Each
  time the raw spinlock is released and re-taken to continue the
  iteration, the generation counter is checked, and the entire list
  traversal is retried if it no longer matches the value read at the
  beginning of the traversal (see the sketch below).
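
A sketch of the synchronize-side piecewise iteration (hypothetical
names; the actual code is in [1]):

struct hazptr_backup_slot {
        struct list_head        node;
        void                    *addr;
};

struct hazptr_overflow_list {
        raw_spinlock_t          lock;
        struct list_head        head;   /* Backup slots in use. */
        u64                     gen;    /* Bumped on every add/del. */
};

/* Returns true if @ptr is still protected by a backup slot. */
static bool hazptr_scan_overflow(struct hazptr_overflow_list *ol,
                                 void *ptr)
{
        struct hazptr_backup_slot *slot;
        u64 snap;

retry:
        raw_spin_lock(&ol->lock);
        snap = ol->gen;
        list_for_each_entry(slot, &ol->head, node) {
                if (READ_ONCE(slot->addr) == ptr) {
                        raw_spin_unlock(&ol->lock);
                        return true;
                }
                if (!need_resched())
                        continue;
                /* Bound lock hold time: drop the lock, reschedule... */
                raw_spin_unlock(&ol->lock);
                cond_resched();
                raw_spin_lock(&ol->lock);
                /* ...and restart the traversal if the list changed. */
                if (ol->gen != snap) {
                        raw_spin_unlock(&ol->lock);
                        goto retry;
                }
        }
        raw_spin_unlock(&ol->lock);
        return false;
}

If the generation is unchanged, the list cannot have been modified
(add/del bump the counter under the lock), so resuming from the
current node is safe; otherwise the whole traversal restarts, which
is the retry-forever risk discussed below.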

Strictly speaking, the main downside of the generation counter
approach is that a steady stream of list modifications could cause a
synchronize to retry endlessly. I'm open to ideas on how to improve
this.

Feedback is welcome!

[1] https://github.com/compudj/linux-dev/commit/730616cd2989b710b585144ac10fc34b4fd641ea

Thanks,

Mathieu


--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com
