EDO 0.6 contains an experimental menu (Settings > Advanced > Digital
Output > Buffer Tuning) to allow some simple output buffer tuning to be
performed without needing to install TT3 and log into the Touch.  I make
no claim that buffer tuning will change sound quality, but I did want to
speculate on what changes it creates.  I'm posting this here as I don't
want to distract the EDO thread with speculative discussion related to
sound quality tweaks...

Some background to the touch software:

The Touch runs a real time version of the linux kernel, which means it's
possible to define some processes as real time (they run with priority
as soon as they want access to the cpu) while others run on a normal
process-fairness basis.  The purpose of this is to be able to run the
back end audio process at real time so that it can always provide data
to the output device, avoiding dropouts due to lack of data.
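As background, making a thread real time on linux essentially means
asking for the SCHED_FIFO scheduling policy.  A minimal sketch of the
idea (just an illustration, not the actual squeezeplay code) looks
something like this:

#include <stdio.h>
#include <string.h>
#include <pthread.h>
#include <sched.h>

/* Ask the current thread for real time (SCHED_FIFO) scheduling.  A thread
 * scheduled this way preempts normal (SCHED_OTHER) threads whenever it
 * becomes runnable - which is what lets a real time audio thread keep the
 * output device fed even when the decoder or UI is busy. */
static int make_realtime(int priority)
{
    struct sched_param param;
    int err;

    memset(&param, 0, sizeof(param));
    param.sched_priority = priority;   /* 1..99, higher = more urgent */

    err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (err) {
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
        return -1;
    }
    return 0;
}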

There are 3 main threads and 3 buffers relevant here, running within the
squeezeplay software:

TCP ---> jive main thread -> [streambuf (3M)] -> jive decode thread ->
[output buf (10sec at 44.1)] -> jive_alsa (RT) -> [alsa buffer]

Of these, the only process which runs at real time is jive_alsa.  This
means it normally runs as soon as it is able to, and when it runs it
moves samples from the output buffer to the alsa buffer, applying volume
gain and format conversion as it does so.  The frequency at which
jive_alsa wakes up is directly determined by the alsa period_time, which
is derived from the alsa buffer time and the sample rate.  Thus tuning
the alsa buffer (aka "buffer tuning") changes how frequently jive_alsa
wakes up and does something.  The timing of the other processes is
really dependent on other things and to a first order has no relation to
the output buffer size.  For instance the decode process wakes up every
few hundred ms, decodes an equivalent chunk of audio data and then
sleeps (once the buffers are full).
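To make the relationship concrete, here is a rough sketch of how a
player can ask alsa for a given buffer time and period count and then
read back the period time it actually got.  The buffer_time and periods
parameters are alsa's own; the format and access settings are just
placeholders, error checking is mostly omitted, and this is not the real
jive_alsa code:

#include <stdio.h>
#include <alsa/asoundlib.h>

/* Ask alsa for a target buffer time (in us) split into 'periods' periods,
 * then report the period time we actually got.  The driver is free to
 * round these values and will cap them at its own internal limit. */
static int configure_buffer(snd_pcm_t *pcm, unsigned int rate,
                            unsigned int buffer_time, unsigned int periods)
{
    snd_pcm_hw_params_t *hw;
    unsigned int period_time;
    int dir = 0;

    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);

    /* placeholder stream settings for the sketch */
    snd_pcm_hw_params_set_access(pcm, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
    snd_pcm_hw_params_set_format(pcm, hw, SND_PCM_FORMAT_S16_LE);
    snd_pcm_hw_params_set_channels(pcm, hw, 2);
    snd_pcm_hw_params_set_rate(pcm, hw, rate, 0);

    /* these two are the "buffer tuning" knobs */
    snd_pcm_hw_params_set_buffer_time_near(pcm, hw, &buffer_time, &dir);
    snd_pcm_hw_params_set_periods_near(pcm, hw, &periods, &dir);

    if (snd_pcm_hw_params(pcm, hw) < 0)
        return -1;

    /* period_time is how often the writer thread will be woken */
    snd_pcm_hw_params_get_period_time(hw, &period_time, &dir);
    printf("buffer %uus, %u periods -> period_time %uus\n",
           buffer_time, periods, period_time);
    return 0;
}

The important point is that the application only requests a buffer time
and a period count; alsa and the device driver round these to what the
hardware supports, and period_time (the wake-up interval) falls out of
that.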

So my contention is that the primary impact of buffer tuning is to
change how frequently the jive_alsa process runs and how much data it
moves each time.  As the process is RT it actually runs like a
metronome, with a very constant interval between the times it starts.
The interval at which jive_alsa runs is the period_time, which is
determined by buffer_time / period_count, with the exception that the
period size can't exceed the internal alsa buffer size set by the kernel
device driver.

Setting the buffer time too small will result in dropouts, as even with
real time processing the cpu can't keep the output device dma buffer
filled with data.  Setting it too high was also avoided by Logitech, as
it adds more output latency, which probably leads to worse
synchronisation.

So what does this mean:
- the default settings are 20ms buffer time and a period count of 2 -
this results in a period time of 10ms for most cases, but at 192k sample
rates the period time becomes 5.3ms as it is limited by the alsa buffer
size if using the spdif output.
- increasing the settings to 100ms buffer time and a period count of 4
results in a period time of 23ms for 44.1k, 10.6ms for 96k and 5.3ms for
192k, again limited by the device driver buffer size at higher rates
(see the rough calculation sketched below).
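As a rough sanity check on those numbers, they are reproduced if you
assume the device driver caps the period at around 1024 frames - that
cap is my inference from the figures above, not something I've confirmed
in the driver source:

#include <stdio.h>

/* Rough model of how the period time is arrived at: the requested
 * buffer_time is split into period_count periods, and the period is
 * then capped by the driver's own limit (assumed here to be ~1024
 * frames - an inference from the numbers above, not a verified value). */
static double period_time_ms(double buffer_time_ms, int period_count,
                             double rate, double max_period_frames)
{
    double frames = (buffer_time_ms / 1000.0) * rate / period_count;

    if (frames > max_period_frames)
        frames = max_period_frames;

    return 1000.0 * frames / rate;
}

int main(void)
{
    /* default: 20ms buffer, 2 periods */
    printf("%.1f ms\n", period_time_ms(20, 2, 44100, 1024));   /* 10.0 */
    printf("%.1f ms\n", period_time_ms(20, 2, 192000, 1024));  /*  5.3 */

    /* large: 100ms buffer, 4 periods */
    printf("%.1f ms\n", period_time_ms(100, 4, 44100, 1024));  /* 23.2 */
    printf("%.1f ms\n", period_time_ms(100, 4, 96000, 1024));  /* ~10.7 */
    printf("%.1f ms\n", period_time_ms(100, 4, 192000, 1024)); /*  5.3 */
    return 0;
}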

A period count of 2 is the minimum which can usefully work and minimises
the number of times jive_alsa needs to run for a given buffer time.  The
only real reason I can see for making the period count larger is that it
allows larger effective buffers once the period size becomes limited by
the device driver buffer size - i.e. we can have 4 or more periods, each
of the maximum size the device driver allows.

So what might cause an impact on sound quality (if there is one)?  Well
I speculate that it could be the frequency at which jive_alsa runs -
hence changing the buffer size impacts this.  The cpu still needs to
move the same number of bytes of data, but the process which is running
at real time, and hence like a metronome, runs on a less frequent basis
for larger buffer sizes (at lower sample rates).  I don't know why this
would actually impact sound quality (if it does) - it could be due to
cpu related noise, or noise from DRAM access coupling to other parts of
the circuit, or something completely different...  However there is
something which definitely changes, which is the frequency at which a
burst of activity occurs. [note the size of each burst changes with the
frequency too, as the total amount of data moved is constant]

EDO 0.6 includes a simple way to try the default buffer settings plus
large (buffer time 100ms, period count 4) and small (buffer time 4ms,
period count 2) via a simple menu option - the Touch will then reboot
when one is selected.  It also includes an option called "large +
randomise cpu" - this still runs the jive_alsa process at real time, but
attempts to randomise when jive_alsa wakes up.  This is a total
experiment to see if it has any impact...  What it does is schedule the
next wake-up time to be the time until the output buffer can accept a
period's worth of data, plus between 0 and 90% of an additional period's
delay.  This means the interval between jive_alsa wake-ups has
randomness applied.  It still aims to shift the same amount of data, but
the bursts of cpu/dram activity occur on a less well defined basis -
i.e. it no longer acts like a metronome.  Whether this is positive or
negative or completely undetectable as far as sound quality is concerned
I don't know...  However it may help to understand what it is that
people are able to detect.  I will publish the code differences once
I've agreed whether they should be here or elsewhere, so you can comment
on the implementation.
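For anyone curious what the randomisation looks like before I post the
diff, the scheduling logic is roughly as follows.  This is only a sketch
with made-up helper names (period_time_us, time_until_space_us), not the
real jive_alsa change:

#include <stdlib.h>
#include <time.h>

/* Sketch of the randomised wake-up: instead of sleeping for exactly one
 * period, sleep until the output buffer can take a period's worth of
 * frames plus a random 0-90% of an extra period.  The same amount of
 * data still gets moved overall; only the timing of the bursts varies. */
static void sleep_until_next_write(unsigned int period_time_us,
                                   unsigned int time_until_space_us)
{
    /* random extra delay: 0 .. 90% of one period */
    unsigned int jitter_us =
        (unsigned int)((double)rand() / RAND_MAX * 0.9 * period_time_us);

    struct timespec ts = {
        .tv_sec  = (time_until_space_us + jitter_us) / 1000000,
        .tv_nsec = ((time_until_space_us + jitter_us) % 1000000) * 1000L,
    };

    /* still a real time thread - it just no longer wakes like a metronome */
    nanosleep(&ts, NULL);
}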

