On Tue, 5 Sep 2023 12:37:28 +0100
Jon Turney wrote:
> On 05/09/2023 10:28, Takashi Yano wrote:
> > The previous wait time of 100msec is too long if the application
> > specifies a smaller buffer. With this patch, the wait time is reduced to 1msec.
> 
> I don't really have the context to understand this change, but it seems 
> to me the obvious questions to ask are:
> 
> Are there negative consequences of making this wait much smaller (i.e. 
> lots more CPU spent busy-waiting)?
> 
> Your comment seems to imply that the wait time should be proportional to 
> the buffer size and sample rate?
> 
> > ---
> >   winsup/cygwin/fhandler/dsp.cc | 4 ++--
> >   1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > diff --git a/winsup/cygwin/fhandler/dsp.cc b/winsup/cygwin/fhandler/dsp.cc
> > index e872aa08c..00f2bab69 100644
> > --- a/winsup/cygwin/fhandler/dsp.cc
> > +++ b/winsup/cygwin/fhandler/dsp.cc
> > @@ -931,8 +931,8 @@ fhandler_dev_dsp::Audio_in::waitfordata ()
> >       set_errno (EAGAIN);
> >       return false;
> >     }
> > -      debug_printf ("100ms");
> > -      switch (cygwait (100))
> > +      debug_printf ("1ms");
> > +      switch (cygwait (1))
> >     {
> >     case WAIT_SIGNALED:
> >       if (!_my_tls.call_signal_handler ())

The code around the modification is as follows.

  while (!Qisr2app_->recv (&pHdr))
    {
      if (fh->is_nonblocking ())
        {
          set_errno (EAGAIN);
          return false;
        }
      debug_printf ("1ms");
      switch (cygwait (1))
        {
        case WAIT_SIGNALED:
          if (!_my_tls.call_signal_handler ())
            {
              set_errno (EINTR);
              return false;
            }
          break;
        case WAIT_CANCELED:
          pthread::static_cancel_self ();
          /*NOTREACHED*/
        default:
          break;
        }
    }

The body of the while loop is very short, so almost all of the time
spent in the loop is consumed by cygwait(), even with a wait time of 1msec.

Theoretically, the CPU load becomes 100 times higher, but it is still
far too small to matter.
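
To put rough numbers on it, here is a back-of-the-envelope sketch (the
format parameters below are assumed example values, not figures taken
from the patch or from any report) of how long one small audio fragment
lasts compared with the old 100msec wait and the new 1msec wait:

  /* Hypothetical illustration, not part of the patch: estimate the
     duration of one small fragment for an assumed audio format.  */
  #include <cstdio>

  int
  main ()
  {
    /* Assumed example format: 48 kHz, 16-bit, stereo, 512-byte fragment.  */
    const int rate = 48000;            /* samples per second */
    const int channels = 2;
    const int bytes_per_sample = 2;
    const int fragment_bytes = 512;

    double fragment_ms = 1000.0 * fragment_bytes
                         / (rate * channels * bytes_per_sample);
    std::printf ("one fragment lasts about %.2f msec\n", fragment_ms);
    /* Prints roughly 2.67 msec: a 100 msec wait sleeps through many
       fragments, while a 1 msec poll wakes up about once per fragment.  */
    return 0;
  }

Under these assumptions the 1msec poll adds on the order of a thousand
wakeups per second at most, which is typically negligible, whereas the
old 100msec wait could add up to 100msec of extra latency per read when
the application uses small buffers.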

-- 
Takashi Yano <takashi.y...@nifty.ne.jp>
