On 2021-11-20, robert bristow-johnson wrote:

> The *only* operational difference between additive and subtractive dither is that the latter has the *possibility* of subtracting the added dither, leaving only the uncorrelated (to the 2nd moment) quantization noise.

Subtractive has the possibility of regaining any and all added dither signals. It is nice to have the dither signal be TPDF, because then the undecoded signal behaves like an additively dithered one, and the second statistical moment of the error is decoupled from the signal (no noise pumping, even at low levels).
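A quick numerical check of that decoupling, as a throwaway numpy sketch (the levels and the seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
for amp in (0.1, 0.45, 3.0):                    # signal amplitude in LSBs
    x = amp * np.sin(2 * np.pi * 0.01 * np.arange(n))
    d = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)  # TPDF, +-1 LSB
    e_none = np.round(x) - x                    # undithered
    e_add  = np.round(x + d) - x                # additive TPDF
    e_sub  = np.round(x + d) - d - x            # subtractive: receiver removes d
    print(amp, np.var(e_none), np.var(e_add), np.var(e_sub))

e_add sits at $3\Delta^2/12 = 0.25$ and e_sub at $\Delta^2/12 \approx 0.083$ LSB$^2$ at every level, while the undithered error pumps with the signal.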

But there *is* an operational difference: with subtractive dither, anything at or above RPDF (the optimal minimum) does the job, and anything beyond that works just as well. Whereas with additive dither, no amount short of infinite power does the job fully.

> The point is to regain that 4.77 dB lost when adding TPDF dither to the signal immediately before quantizing to a shorter word length.
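(For reference, where that figure comes from: undithered quantization error has variance $\Delta^2/12$; TPDF dither is the sum of two independent RPDF words, so it adds another $2\Delta^2/12$, for $3\Delta^2/12$ in total. $10\log_{10}3 \approx 4.77\,\mathrm{dB}$.)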

In subtractive dither, all of it is subtracted out. This is also rather interesting when you chain devices: if you use additive dither, every interface out and in will add another 4.77 dB worth of noise (relative to the quantization floor of the bitwidth you're using). If on the other hand you do subtractive on every interface, each stage only adds half an LSB of fully uncorrelated error, so that successive stages in the processing chain ought to add to each other in power, by root two, stage by stage.
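To put numbers on it: after $N$ interfaces the accumulated error variance is $N \cdot 3\Delta^2/12$ additive against $N \cdot \Delta^2/12$ subtractive. In both cases the uncorrelated per-stage errors add in power, so the RMS floor grows as $\sqrt{N}$; subtractive just starts 4.77 dB lower at every stage.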

> The net kernel of this Gerzon insight is that you can use the LSB of each of the previous 128 samples, put those 128 noisy bits into a big messy logic mess to scramble them bits up even more. From those scrambled bits, two uniform-p.d.f. pseudo-random words are derived that are added to become the TPDF dither word, which is added to the longer-wordlength signal immediately before quantization.
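In code, roughly how I read that scheme; a minimal sketch, where the splitmix64 finalizer is my stand-in for the unspecified "big messy logic mess":

import math

MASK64 = (1 << 64) - 1
MASK128 = (1 << 128) - 1

def mix64(x):
    # splitmix64 finalizer: a well-known 64-bit avalanche mixer, here only
    # a placeholder for whatever bit-scrambler Gerzon actually used.
    x = ((x ^ (x >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    x = ((x ^ (x >> 27)) * 0x94D049BB133111EB) & MASK64
    return x ^ (x >> 31)

def tpdf(history):
    # Two RPDF words from the 128 stored LSBs, summed into TPDF on (-1, 1) LSB.
    u1 = mix64(history & MASK64) / 2.0**64
    u2 = mix64((history >> 64) ^ 0x9E3779B97F4A7C15) / 2.0**64
    return u1 + u2 - 1.0

def encode(x):
    # x: floats in units of the target LSB; returns the quantized integers.
    history, out = 0, []
    for s in x:
        d = tpdf(history)                       # dither from the past 128 LSBs
        q = math.floor(s + d + 0.5)             # quantize
        out.append(q)
        history = ((history << 1) | (q & 1)) & MASK128
    return out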

Please give me a reference. I could use it. As the maintainer of the Ambisonic Motherlode, I already thought I had seen everything from Michael-dearest, but apparently not.

> Adding that adds 4.77 dB to the noise floor.

If you add information to a data stream, of course that information will add noise. Come now, I too came into these circles by reading, at fifteen or so, Storer's Text Compression, or whatever it was. I even worked through rate-distortion theory by hand.

> At the receiver end, that triangular-p.d.f. dither word can be re-derived from the same 128 LSBs of the quantized data sent over the stream, and subtracted from the quantized signal, which should recover that 4.77 dB of lost noise floor.
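And the receiving end of the sketch above, which sees the very same 128 past LSBs and so re-derives the very same dither word; it self-synchronises within 128 samples of joining the stream:

def decode(q):
    history, out = 0, []
    for s in q:
        d = tpdf(history)                       # identical derivation
        out.append(s - d)                       # subtract: only Q noise remains
        history = ((history << 1) | (s & 1)) & MASK128
    return out

Round-tripping bears the 4.77 dB out (again with my stand-in scrambler):

import random
random.seed(1)
x = [10.0 * math.sin(0.02 * i) + random.gauss(0.0, 2.0) for i in range(100_000)]
q = encode(x)
msq = lambda e: sum(v * v for v in e) / len(e)
print(msq([a - b for a, b in zip(q, x)]))          # ~0.25, the additive floor
print(msq([a - b for a, b in zip(decode(q), x)]))  # ~0.083, dither removed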

I had thought about much the same. Only block-wise, not running, and not self-synchronising like I'd want it to be.

> Even if there is noise-shaping in the quantization, this dither can be identically filtered and subtracted at the receiving end, and I would expect some theoretical gain, if not the full 4.77 dB.
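One way to see it, assuming an error-feedback shaper, which may or may not be the structure meant above: the encoder emits $q_n = x_n + d_n + e_n - (h * e)_n$, with $e_n$ the raw quantizer error and $h$ the shaping filter. A receiver that re-derives $d_n$ and subtracts it is left with error $(1 - H(z))\,E(z)$: the shaped quantization noise alone, the full dither power gone. So in this structure at least, the whole 4.77 dB comes back.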

As said, once you have something synchronised, you can subtract anything at all. It'd always be better to make the undecoded signal something nice, like TPDF to start with, or then something like Sony's Super Bit Mapping II, while being able to just subtract all of it out when synchronised.

The fun thing about that is that it's actually a much more difficult problem. Straight subtraction of any combination of RPDF dithers derived from a deterministic pseudorandom number generator is easy. But something like SBM is not, because you cannot guarantee that its signal stays continuously above norm one half. (That's the limit for subtractive dither. If you really want to push it, it could be complex norm one, but norm one is still the ultimate limit.)

> You're no worse off with the additive dither than you would be otherwise.

But you are.

> The dither generated this way is still useful dither.

There you are right. But it isn't optimal, like subtractive is. ;)
--
Sampo Syreeni, aka decoy - de...@iki.fi, http://decoy.iki.fi/front
+358-40-3751464, 025E D175 ABE5 027C 9494 EEB0 E090 8BA9 0509 85C2
