> > I was assuming that using poll (snd_pcm_wait), with the
> > alsa-lib/test/latency.c program would have snd_pcm_read
> > return period size samples at a time, but this is not
> > the case.
> > [..SNIP..]

> It's intended. It looks better in my eyes if we can process some data
> ahead in a low-latency application that would otherwise wait for the
> whole batch. Of course, applications optimized for CPU usage might
> work differently.

I see... However, I can imagine many situations in which you are
processing with a fixed buffer size (doing an FFT, for example),
and I still think it is reasonable to expect the default behaviour
to be that readbuf always returns the same number of samples (a
sketch of what I mean follows below). Would it not be possible to
add a command-line argument to specify the behaviour?

-p,--poll      use poll (wait for event - reduces CPU usage)
               events are generated when a new period of frames is available
-q,--quickpoll use poll (wait for event - reduces CPU usage)
               events are generated when new frames are available

or something like that. As the user has to specify one or the other,
he/she will be aware of what is happening.

Apart from that, I think it is a bit aggressive that by default the
latency test runs in non-blocking mode, which makes it consume all
available CPU time and, at first sight, really looks as if your
machine has locked up. Would it be a good idea to make blocking the
default, and change the argument

-b,--block     block mode

for
-n,--nonblock  non-block mode

?
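
Something like this is roughly what I would expect the default open to
look like (open_pcm and nonblock_mode are my own names, not anything
in latency.c, which of course opens both a playback and a capture
handle; I am only showing one stream here):

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Sketch of the proposed default: open the PCM in blocking mode
 * unless a (hypothetical) -n/--nonblock option set `nonblock_mode`.
 * Just the idea, not a patch against latency.c. */
static snd_pcm_t *open_pcm(const char *name, int nonblock_mode)
{
        snd_pcm_t *handle;
        int err;

        err = snd_pcm_open(&handle, name, SND_PCM_STREAM_CAPTURE,
                           nonblock_mode ? SND_PCM_NONBLOCK : 0);
        if (err < 0) {
                fprintf(stderr, "open %s: %s\n", name, snd_strerror(err));
                return NULL;
        }
        return handle;
}

The mode could also be flipped after opening with snd_pcm_nonblock(),
but setting it at open time keeps the option handling in one place.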

Maarten


