Hi, Akihiko,
I noticed that you replied to me but forgot to cc all, so I have added
everyone back.
On Sun, Nov 09, 2025 at 06:05:00PM +0900, Akihiko Odaki wrote:
> On 2025/11/08 10:45, Akihiko Odaki wrote:
> > On 2025/11/07 6:52, Peter Xu wrote:
> > > When I was having a closer look, I found some other issues; I'll
> > > list them all here.
> > >
> > > 1. I found that rcu_gp_event was initialized as "true". I'm not sure
> > >    whether it should be false. This has nothing to do with your series
> > >    either; it's only an observation of mine.
> > >
> > >        qemu_event_init(&rcu_gp_event, true);
> >
> > Indeed it makes more sense to initialize it with false.
> >
> > rcu_read_unlock() sets the event only in grace periods, and
> > wait_for_readers() always resets it when ending those grace periods. By
> > initializing it with false, the event will be false whenever we are
> > outside grace periods, improving consistency.
Thanks for confirming this. AFAIU this isn't a huge deal, as
wait_for_readers() should be tolerant of false positives. So I'll leave it
to you whether to fix it together with this series.
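
In case you do, here is a minimal sketch of what I have in mind (I'm
assuming the call site is rcu_init_complete() in util/rcu.c; the exact
context may differ):

    /* Initialize to false: the event is only set inside grace periods,
     * and wait_for_readers() resets it when the grace period ends, so
     * "false" matches the state outside grace periods. */
    qemu_event_init(&rcu_gp_event, false);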
> >
> > >
> > > 2. It looks to me like your patch here checked the wrong retval of
> > >    qemu_event_timedwait():
> > >
> > >     } else if (qatomic_read(&rcu_call_count) >= RCU_CALL_MIN_SIZE ||
> > >                !sleeps || qemu_event_timedwait(&sync_event, 10)) {
> > >                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > >
> > >    Should it be "!qemu_event_timedwait(&sync_event, 10)"? Side note:
> > >    maybe we should clean up all the _timedwait() variants for QEMU's
> > >    eventfd, sem, cond, ... because they don't return the same retval.
> > >    But if you think sem is good, then we don't need eventfd's
> > >    timedwait() in this series (your initial two patches).
> >
> > You are right here too. This is a stupid mistake.
>
> No, the original patch is right. qemu_event_timedwait(&sync_event, 10)
> returning true means that the forced variable was set before timing out,
> hence we need to enter the force quiescent state.
You're correct. I was misled by the comment here:
    /*
     * Now one of the following heuristical conditions is satisfied:
     * - A decent number of callbacks piled up.
     * - It timed out.                 <--------------------
     * - force_rcu() was called.
     *
     * Force a quiescent state.
     */
I'm guessing you were misled too when I raised the question and when you
were looking at it. Maybe it would be good to change that line of the
comment to:

     * - It timed out continuously 5 times (causing sleeps==0)
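
On the side note about the retvals: here is my understanding of the
current conventions (worth double-checking, I may have some of these
wrong), which shows how easy they are to mix up:

    /* Returns 0 on success, -1 on timeout. */
    int qemu_sem_timedwait(QemuSemaphore *sem, int ms);
    /* Returns false on timeout, true otherwise. */
    bool qemu_cond_timedwait(QemuCond *cond, QemuMutex *mutex, int ms);
    /* Returns true if the event was set before the timeout (per this
     * series). */
    bool qemu_event_timedwait(QemuEvent *ev, int ms);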
>
> >
> > >
> > > 3. I suspect the malloc_trim() feature is broken with your current
> > >    patchset, because now the loop looks like:
> > >
> > >     for (;;) {
> > >         qemu_event_reset(&sync_event);
> > >         n = qatomic_read(&rcu_call_count);
> > >         if (n) {
> > >             break;
> > >         }
> > > #if defined(CONFIG_MALLOC_TRIM)
> > >         malloc_trim(4 * 1024 * 1024);
> > > #endif
> > >         qemu_event_wait(&sync_event);
> > >     }
> > >
> > >    I don't know if n can be zero here at all... if you use the sem
> > >    change this might trigger, but it's not designed for it. When using
> > >    sem we may want to periodically trim. But I'd like to know what you
> > >    think in general about the sem idea first (e.g. do we need to be
> > >    prepared for high-freq rcu frees).
> >
> > malloc_trim() is triggered when the RCU thread is idle, and that is not
> > changed with this patch.
You're right, I read it wrong. Please ignore this comment.
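For the record, my corrected reading of the loop, with annotations added
by me, in case it helps other readers:

    for (;;) {
        qemu_event_reset(&sync_event);
        n = qatomic_read(&rcu_call_count);
        if (n) {
            /* Callbacks are pending: go process them. */
            break;
        }
#if defined(CONFIG_MALLOC_TRIM)
        /* No callbacks pending, i.e. the RCU thread is idle: trim
         * before blocking, same as before this patchset. */
        malloc_trim(4 * 1024 * 1024);
#endif
        /* Sleep until woken up (e.g. by a new callback). */
        qemu_event_wait(&sync_event);
    }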
Thanks,
--
Peter Xu