The only way to guarantee precision is to use enough bits for the intermediate results. Given your running-sum formulation, the worst-case quantization error for any N is

0.5*Pi + 0.25*Pm*(N+2)(N-1)/N

where Pi is the precision of the inputs (the summed signals) and Pm is the precision of the partial sum (note that these are precisions, not errors). In theory you want to keep this below 1/2 of the desired output precision.
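For instance (illustrative numbers, not from your post): with 16-bit inputs Pi = 2^-15, a 32-bit partial sum Pm = 2^-31, and N = 1000 signals, the bound is about 0.5*2^-15 + 0.25*2^-31*1001 ≈ 1.5e-5, still dominated by the input term; the partial-sum term grows roughly linearly with N and only takes over once N approaches 2*Pi/Pm = 131072.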

How you do it is highly platform-dependent, so there is no universal solution. Usually we do not like a lot of multiplications and divisions, for speed rather than precision reasons (again, that is platform-dependent too). Personally I prefer storing the sum and the count separately, then doing the division when the result is read. But this is not going to help with precision if the intermediate result (the sum, in this case) is given the same number of bits.
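A minimal sketch of that sum-and-count idea in C (my own illustration, with a double accumulator standing in for "enough bits"; whether double is enough depends on N and the input precision, per the bound above):

#include <stddef.h>

/* running mix: wide per-sample sums plus a signal count */
typedef struct {
    double  *sum;   /* caller-allocated, len slots, zero-initialized */
    size_t   len;   /* samples per signal */
    unsigned count; /* signals mixed so far */
} mix_t;

/* add one normalized (-1,1) signal to the running sum */
void mix_add(mix_t *m, const float *sig)
{
    for (size_t i = 0; i < m->len; i++)
        m->sum[i] += sig[i];
    m->count++;
}

/* read the current mix; the division happens only here (count > 0 assumed) */
void mix_read(const mix_t *m, float *out)
{
    for (size_t i = 0; i < m->len; i++)
        out[i] = (float)(m->sum[i] / (double)m->count);
}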

xue

-----Original Message----- From: Alessandro Saccoia
Sent: Monday, December 10, 2012 9:41 AM
To: A discussion list for music-related DSP
Subject: Re: [music-dsp] Precision issues when mixing a large number of signals


> I don't think you have been clear about what you are trying to achieve.
>
> Are you trying to compute the sum of many signals for each time point? Or are you trying to compute the running sum of a single signal over many time points?

Hello, thanks for helping. I want to sum prerecorded signals progressively. Each time a new recording is added to the system, its signal is added to the running mix and then discarded, so the original source is lost. At each instant it should be possible to retrieve the mix accumulated up to that moment.


> What are the signals? Are they of nominally equal amplitude?


normalized to (-1, 1)

> Your original formula looks like you are looking for a recursive solution to a normalized running sum of a single signal over many time points.

Nope. I meant summing many signals, without knowing all of them beforehand, and needing to know all the intermediate results.


>> I could relax this requirement and force all the signals to be of a
>> given size, but I can't see how a sample-by-sample summation, where
>> there are M sums (M being the forced length of the signals), could
>> profit from a running compensation.

> It doesn't really matter whether the sum is across samples of a single signal or across signals; you can always use error compensation when computing the sum. It's just a way of increasing the precision of an accumulator.


I have looked at the Wikipedia entry again, and yes, it makes total sense now; last night it was really late!
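For reference, a minimal C sketch of the compensated accumulator that entry (presumably the Kahan summation algorithm) describes; identifiers are mine:

/* Kahan summation: carry the rounding error of each addition forward */
typedef struct {
    float sum; /* running sum */
    float c;   /* compensation: the low-order bits lost so far */
} ksum_t;

void ksum_add(ksum_t *k, float x)
{
    float y = x - k->c;      /* re-inject the previously lost bits */
    float t = k->sum + y;    /* big + small: low bits of y are lost here */
    k->c = (t - k->sum) - y; /* algebraically recover what was lost */
    k->sum = t;              /* note: breaks under -ffast-math */
}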


>> Also, with a non-linear operation, I fear introducing discontinuities
>> that could sound even worse than the white noise I expect from the
>> simple approach.

> Using floating point is a non-linear operation. Your simple approach also has quite a lot of nonlinearity (accumulated error due to the recursive division and re-rounding at each step).
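To make the re-rounding point concrete, a sketch of the two update forms in C (assuming the "simple approach" is the usual recursive running mean; names are illustrative):

/* recursive form: the stored mix is rescaled and re-rounded at every step */
float mix_recursive(float mix, float x, unsigned n) /* n = new count, n >= 1 */
{
    return mix * ((float)(n - 1) / (float)n) + x / (float)n;
}

/* deferred form: the wide sum stays exact much longer; one rounding at read */
float mix_deferred(double sum, unsigned n)
{
    return (float)(sum / (double)n);
}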

I see. cheers

alessandro


> Ross

--
dupswapdrop -- the music-dsp mailing list and website:
subscription info, FAQ, source code archive, list archive, book reviews, dsp links
http://music.columbia.edu/cmc/music-dsp
http://music.columbia.edu/mailman/listinfo/music-dsp
