TerryS;675513 Wrote: 
> I'm not sure, but I think we strayed from the point.
> Let's look at it from the playback side since I think this is easier to
> visualize.  Let's say the correct output value of the DAC at a
> particular instant in time is 9.5 but due to the resolution limits of
> the DAC, it can only be either 9 or 10.  This means you will have an
> error of 0.5 out of 9.5 which is almost 5% distortion.  
> There is nothing that can be done about it.  The DAC can only output 9
> or 10, never the correct value of 9.5.  Dithering cannot fix this.
> If the signal was steady for long enough for several samples to be
> taken of the value, then dithering does help.  The random noise added
> during recording that causes the dithering will result in the signal
> being recorded as a random set of 9s and 10s.  Then during playback,
> this random set of 9s and 10s would average out (after the low pass
> filter) to 9.5.  The distortion is fixed.
> But how does this work when the value isn't steady enough to get
> multiple readings of the value before it changes?  The DAC simply
> cannot output the correct value.  The input value to the DAC can only
> be 9 or 10.  It cannot be the correct value of 9.5.  That number
> doesn't exist in this DAC's numbering system.
> Imagine the stream of values that should be representing the music
> waveform.  How can this ever be anything more than a constant stream of
> wrong (approximately right) values?  They will on average be wrong by
> one half the minimum step size of the DAC resolution.  I don't see how
> dithering can fix this unless we are talking about a repetitious signal
> so that we can average the errors over multiple samples.
> 
> Terry
Terry, we have had this discussion before and it always goes round and
round in circles because
a. you simply refuse to accept the distinction between noise and
distortion;
b. dithering does not work simply by averaging, although some concept of
averaging does come into it.
c. the point is that once you have got the time domain information
right with only random error (i.e. noise), the Fourier transform of that
time domain information will have the spurious signal (noise) spread
out throughout the frequency range (and therefore more than 93 dB down
in each relevant frequency band), and the true signal (e.g. the 1 kHz
tone) will be discernible even though that signal is at less than 1 LSB
in level.
d. you are wrong that this process only works with a constant sample
value; it will work if the signal is changing. This will be true,
incidentally, if there is a 1 kHz tone in a 44 kHz system, where there
will be about 44 samples per cycle of that tone. It will also be the
case if you have a 5 kHz tone that comes in and out every half a second
(and so on with increasing complexity).
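
If it helps to see points c and d in action, here's a quick Python
sketch (my own toy, all the parameters made up by me): a 1 kHz tone at
half an LSB of amplitude vanishes completely when quantized straight,
but survives quantization with TPDF dither and stands clearly above the
spread-out noise floor in the FFT.

import numpy as np

fs = 44100
n = 1 << 16                                   # about 1.5 seconds of samples
t = np.arange(n) / fs
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)     # 1 kHz tone, 0.5 LSB amplitude

plain = np.round(tone)                        # undithered: rounds to all zeros
tpdf = (np.random.uniform(-0.5, 0.5, n)
        + np.random.uniform(-0.5, 0.5, n))    # triangular (TPDF) dither
dithered = np.round(tone + tpdf)

for name, x in (("undithered", plain), ("dithered", dithered)):
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    k = round(1000 * n / fs)                  # FFT bin nearest 1 kHz
    print(f"{name}: 1 kHz bin = {spec[k]:.1f}, median bin = {np.median(spec):.3f}")

No single dithered sample is ever the "correct" value, yet the tone is
right there in the spectrum; the error has been turned into broadband
noise instead of distortion.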

e. perhaps one way to imagine this is as a line of best fit through
data with values of limited accuracy. If the error is made random, your
line of best fit will get closer to the real value than if there are
systematic errors one way rather than the other. This applies for data
values which are different, and will be true of curves as well as
straight lines (before you start...). You don't need the value to stay
the same for several samples, just for there to be a "trend" which
continues for several samples. If there were no such "trend" you
would not have any signal, let alone any music, because you would have
(by definition) noise.
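
A crude picture of the best-fit idea (again, my own toy numbers): fit a
straight line to samples of y = 2x + 1 whose values carry either random
error or a systematic one-way error, and see which fit lands closer to
the truth.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
true_y = 2 * x + 1

noisy = true_y + rng.uniform(-0.5, 0.5, x.size)   # random, zero-mean error
biased = np.floor(true_y)                         # systematic error, always low

for name, y in (("random error", noisy), ("systematic error", biased)):
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{name}: slope = {slope:.3f}, intercept = {intercept:.3f} (true: 2, 1)")

The random-error fit recovers the intercept to within a few hundredths;
the systematic one sits stuck about half a step low. That is the
noise-versus-distortion distinction of point a in miniature.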
[Digital sampling rests on the principle (discovered by Fourier?) that
any squiggly line can be made by taking a load of sine waves and
superimposing them. If your signal is suitably band limited (i.e. only
sine waves up to a certain frequency are allowed), then you can deduce
the formula for the squiggly line by deriving from the sample data the
frequency, phase and amplitude of all the sine waves. The squiggly
line must therefore have a "trend" in the sense that it follows a
mathematical progression.]
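
To make that concrete, here's a small sketch (frequencies and
amplitudes picked arbitrarily by me) that builds a squiggly line out of
three sine waves and then recovers each one's frequency, amplitude and
phase from nothing but the sampled data:

import numpy as np

fs, n = 1000, 1000                      # 1 second at 1 kHz, so 1 Hz per FFT bin
t = np.arange(n) / fs
squiggle = (1.0 * np.cos(2 * np.pi * 50 * t)
            + 0.5 * np.cos(2 * np.pi * 120 * t + 0.7)
            + 0.2 * np.cos(2 * np.pi * 333 * t + 1.9))

spec = np.fft.rfft(squiggle) / (n / 2)  # scale so |bin| equals sine amplitude
for f in (50, 120, 333):
    print(f"{f} Hz: amplitude = {abs(spec[f]):.3f}, phase = {np.angle(spec[f]):.3f}")

Three frequencies, three amplitudes, three phases: that's the whole
squiggle, which is the sense in which the samples follow a mathematical
progression.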

Now the sample values have limited accuracy, but if the error is
random (noise) you can still deduce the underlying squiggly line more
accurately than you can if the error is systematic (distortion).

f. there is a connection between how fast the signal changes and the
number of samples per second. The number of samples tells you the
highest frequency you are allowed (Nyquist, anyone?); and the highest
frequency determines the steepest slope you can have on the time domain
representation of the signal, i.e. how quickly it can change.
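
Back-of-envelope version of that (numbers are mine): for a component
A*sin(2*pi*f*t) the steepest slope is 2*pi*f*A, so the highest frequency
the sample rate permits caps how fast the waveform can move.

import numpy as np

A, f, fs = 1.0, 20000, 44100   # full-scale 20 kHz tone in a 44.1 kHz system
max_slope = 2 * np.pi * f * A  # steepest instantaneous slope, full-scale/second
print(f"{max_slope:.0f} FS/s, about {max_slope / fs:.2f} FS per sample period")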
g. musical signals are not random. When you make a musical note, the
same basic tones are played (albeit with decay) for some time. In
effect you have a tone and its harmonics superimposed which continue
for quite a long time RELATIVE TO THE SAMPLE RATE. Although new events
do occur (e.g. a new cymbal strike as opposed to the decay of one strike
over time), they don't occur all that often from the perspective of
44 kHz. Musical signals are therefore somewhat "repetitious" relative to
the sampling rate.
h. you have to bear in mind that bit depth and sample rate are somewhat
interchangeable (think DSD, think delta-sigma DACs). The trick is that
you have to increase the sample rate to compensate for the decline in
bit depth.
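
Here's a toy first-order delta-sigma loop to show the trade (a bare
sketch of the principle, not how any real DSD or delta-sigma DAC is
built): a 1-bit stream at 64x the rate, averaged back down, reproduces a
multi-bit value.

import numpy as np

def delta_sigma(x):
    # First-order loop: integrate the error between input and the
    # previous 1-bit output, then quantize the integrator's sign.
    out = np.empty_like(x)
    integ = 0.0
    for i, v in enumerate(x):
        integ += v - (out[i - 1] if i else 0.0)
        out[i] = 1.0 if integ >= 0 else -1.0
    return out

osr = 64                                      # oversampling ratio
bits = delta_sigma(np.full(64 * osr, 0.37))   # steady input, 0.37 of full scale
print(bits.reshape(-1, osr).mean(axis=1)[:4]) # each block averages out to ~0.37

The quantization error isn't gone; it's been pushed up to frequencies
that the low-pass averaging throws away. Bit depth traded for sample
rate.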

I promised myself I wasn't going to waste lots of time trying to
explain something I only just about grasp to someone who either doesn't
really care, or already understands and is just pretending to be
persistently obtuse.
But I have.


-- 
adamdea