Re: [music-dsp] efficient running max algorithm

2016-09-04 Thread Ethan Fenn
Wow, my very own hyphenated algorithm. :o)

Thanks for sharing this, Evan, it's impressive how concise the code ends up
being.

I'd echo the general feeling (which you also express in your Future Work
section) that for DSP work it really wants to be implemented using a ring
buffer. Neither std::vector nor std::deque is very good for the purpose...
vector because popping an element off the front is going to be slow, and
deque because it's probably going to be allocating and deallocating memory
all the time for no good reason.

Sort of an aside, but I think an STL-style ring buffer, possibly using
positive indices to look up things from the front and negative to look from
the back, would be awesome. Maybe I'll try putting one together that's
compatible with the code you've written, if I manage to find some
time. That would be a useful thing to have in general.
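
Something like this minimal sketch, say (names and details are mine
and untested; just to illustrate the indexing idea):

    #include <cstddef>
    #include <vector>

    // Fixed-capacity ring buffer sketch. The caller must not push
    // past capacity; a real version would assert or grow.
    template <typename T>
    class RingBuffer {
    public:
        explicit RingBuffer(std::size_t capacity)
            : data_(capacity), head_(0), size_(0) {}

        void push_back(const T& v) {
            data_[(head_ + size_) % data_.size()] = v;
            ++size_;
        }
        void pop_front() { head_ = (head_ + 1) % data_.size(); --size_; }
        void pop_back()  { --size_; }

        // i >= 0 indexes from the front; i < 0 indexes from the back,
        // so buf[-1] is the most recently pushed element.
        T& operator[](std::ptrdiff_t i) {
            std::ptrdiff_t off = (i >= 0) ? i : (std::ptrdiff_t)size_ + i;
            return data_[(head_ + (std::size_t)off) % data_.size()];
        }

        std::size_t size() const { return size_; }
        bool empty() const { return size_ == 0; }

    private:
        std::vector<T> data_;
        std::size_t head_, size_;
    };

No allocation after construction, and both ends are O(1), which is
exactly what the wedge wants.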

Robert:

> well, this is where i think the rubber meets the road.  i think the worst
> case ain't gonna be too much better than O(log2(windowSize)) per sample
> even with amortization over the block.

It's a good question, but I'm not sure what kind of answer you're hoping to
get. I think the Lemire algorithm is not just theoretically interesting but
practical, for two reasons: 1) algorithms that store data in sorted arrays
rather than trees tend to be really fast, because they're cache coherent
and the CPU doesn't have to spend cycles following pointers around; and 2)
one of the basic insights behind the Lemire algorithm is that with a wedge
structure, half of the time all you have to do is push a sample on the end
of your array and carry on; this is obviously preferable to having to
update some kind of tree structure for every sample coming in.
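
To make that concrete, here's roughly what the per-sample wedge update
looks like for a running max (a sketch of my own, using std::deque
purely for brevity; see the ring buffer point above):

    #include <cstddef>
    #include <deque>
    #include <vector>

    // Running max over a sliding window, in the spirit of Lemire's
    // algorithm: 'wedge' holds indices of samples whose values are in
    // decreasing order, so the front is always the current max.
    std::vector<float> runningMax(const std::vector<float>& x,
                                  std::size_t window)
    {
        std::vector<float> out(x.size());
        std::deque<std::size_t> wedge;
        for (std::size_t i = 0; i < x.size(); ++i) {
            // Prune dominated samples from the back. Every sample
            // removed here is gone for good -- this is where the
            // amortized O(1) behavior comes from.
            while (!wedge.empty() && x[wedge.back()] <= x[i])
                wedge.pop_back();
            wedge.push_back(i);
            // Expire the front once it falls out of the window.
            if (wedge.front() + window <= i)
                wedge.pop_front();
            out[i] = x[wedge.front()];
        }
        return out;
    }

On decreasing input the while loop fails immediately and we just
append; on increasing input it pops a single element and then appends.
The common case really is a couple of comparisons and a push.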

That said, it's really not possible to say which algorithm will have the
better worst case per block on given hardware, with a given window size
and block size, without trying it out. If you want a definitive answer,
benchmark both and see what happens.

-Ethan




On Sun, Sep 4, 2016 at 2:42 AM, Evan Balster  wrote:

> I think the fact that binary searches play into both the Lemire-Fenn
> algorithm and the binary-search algorithm is causing some confusion.
> Remember that the original Lemire algorithm, with its simple linear search
> and worst-case O(windowSize) per sample time, *still* manages O(1) per
> sample over any block of size larger than windowSize, and will thus
> outperform the binary search algorithm in terms of total complexity.
>
> Understanding why that is the case --- i.e., why the O(windowSize) wedge
> search can outperform the O(log2(windowSize)) heap search --- is strictly
> prerequisite to understanding why the Lemire-Fenn algorithm is more
> scalable than the binary search.
>
> It boils down to the fact that the actual number of comparisons performed
> in the wedge search is proportional to how many items are being removed
> from the wedge --- and every item removed now is work we will never have
> to repeat on a later sample.
>
> I believe the worst case for processing N samples with windowSize R is
> roughly O(N+R) for the original Lemire algorithm---because in the worst
> case we'll have to prune that many samples.  As N approaches infinity,
> (N+R)/N approaches our amortized complexity: 1.
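>
> (Concretely: with R = 1024 and a block of N = 65536 samples, that
> worst case is on the order of N + R = 66560 comparisons, i.e. about
> 1.02 comparisons per sample.)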
>
>
> The trick of the Lemire-Fenn algorithm is that it behaves as the Lemire
> algorithm, but caps the per-sample cost by switching over to binary
> search once the linear search exceeds roughly log2(windowSize) steps.
> This change reduces the worst-case complexity class for
> sample-by-sample processing without increasing the cost in any case.
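>
> A sketch of that pruning step (my own code, storing values only; a
> real windowed version would also store indices or ages so the front
> of the wedge can expire):
>
>     #include <cmath>
>     #include <cstddef>
>     #include <deque>
>
>     // Push x onto a decreasing wedge, pruning dominated elements:
>     // linear scan first, then binary search once the scan exceeds
>     // its logarithmic budget. No case gets worse, but the per-sample
>     // worst case drops to O(log2(windowSize)).
>     void wedgePush(std::deque<float>& wedge, float x,
>                    std::size_t windowSize)
>     {
>         std::size_t budget =
>             (std::size_t)std::log2((double)windowSize) + 1;
>         while (budget > 0 && !wedge.empty() && wedge.back() <= x) {
>             wedge.pop_back();
>             --budget;
>         }
>         if (!wedge.empty() && wedge.back() <= x) {
>             // Binary search for the first element <= x; everything
>             // from there to the back is dominated by x.
>             std::size_t lo = 0, hi = wedge.size();
>             while (lo < hi) {
>                 std::size_t mid = (lo + hi) / 2;
>                 if (wedge[mid] <= x) hi = mid;
>                 else                 lo = mid + 1;
>             }
>             wedge.erase(wedge.begin() + (std::ptrdiff_t)lo,
>                         wedge.end());
>         }
>         wedge.push_back(x);
>     }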
>
> – Evan Balster
> creator of imitone 
>
> On Sat, Sep 3, 2016 at 11:05 PM, Ross Bencina 
> wrote:
>
>> On 4/09/2016 1:42 PM, robert bristow-johnson wrote:
>>
>>> i think the worst case ain't gonna be too much better than
>>> O(log2(windowSize)) per sample even with amortization over the block.
>>>
>>
>> You can think that if you like, but I don't think the worst case is that
>> bad. I have given examples. If you would like to provide a counter-example,
>> feel free. But first you probably need to understand the algorithm well
>> enough to implement it.
>>
>> I'm exiting this discussion now. Talking in vague terms isn't getting us
>> anywhere. Much as I would like to, I don't have time to write a new
>> implementation and a paper to convince you.
>>
>> Sorry,
>>
>>
>> Ross.

Re: [music-dsp] BW limited peak computation?

2016-09-04 Thread Russell McClellan
Hi, I missed this thread until now.  Last year, I published an article
on this topic here:
https://techblog.izotope.com/2015/08/24/true-peak-detection/

It includes a proof that the true peak can be unboundedly higher than
the sample peak.
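
(The classic small example: a sine at a quarter of the sample rate,
sampled 45 degrees away from its peaks, has a sample peak of 1/sqrt(2)
~= 0.707 but a true peak of 1.0, about 3 dB hotter.)

For anyone who wants to experiment, here is a rough brute-force sketch
of the usual oversampling estimate -- my own code, not the article's
method -- using Hann-windowed sinc interpolation at 4x:

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Estimate the true (inter-sample) peak by evaluating a
    // windowed-sinc interpolator at 'oversample' points per sample.
    // Crude and O(N * taps * oversample); a real detector would use
    // a polyphase FIR, but this shows the idea.
    float truePeakEstimate(const std::vector<float>& x,
                           int oversample = 4, int taps = 32)
    {
        const double PI = 3.14159265358979323846;
        float peak = 0.0f;
        for (std::size_t n = 0; n < x.size(); ++n)
            for (int k = 0; k < oversample; ++k) {
                double t = n + (double)k / oversample;
                double y = 0.0;
                for (int m = -taps; m <= taps; ++m) {
                    std::ptrdiff_t idx = (std::ptrdiff_t)n + m;
                    if (idx < 0 || idx >= (std::ptrdiff_t)x.size())
                        continue;
                    double d = t - (double)idx;
                    double sinc = (d == 0.0)
                        ? 1.0 : std::sin(PI * d) / (PI * d);
                    double hann = 0.5 + 0.5 * std::cos(PI * d / taps);
                    y += x[idx] * sinc * hann;
                }
                peak = std::max(peak, (float)std::fabs(y));
            }
        return peak;
    }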

Thanks,
-Russell

On Mon, Aug 1, 2016 at 5:05 PM, Theo Verelst  wrote:
> Paul Stoffregen wrote:
>>
>> Does anyone have any suggestions or references for an efficient algorithm
>> to find the peak
>> of a bandwidth limited signal?
>>
>
> Hi,
>
> I think, without getting lost in quadratic algebra or an endless search
> for a holy grail that doesn't exist, you've answered the main theoretical
> question yourself: to know the signal between the samples, you need to
> perfectly reconstruct the actual signal, and then analyze that.
>
> Of course, like the "Fast Lookahead Limiter" from LADSPA or LV2 that I
> use regularly, you could up-sample to a reasonably high sampling
> frequency with the best tools you've got. That plugin up-samples by
> another factor of 8 (IIRC); if your input signals are decent, without
> much energy mirrored around the Nyquist frequency, a tool like that
> will do a reasonable flattening job.
>
> Of course it's possible that a peak in your signal falls at an
> irrational position between two samples (say 1/4*PI), such that no
> rational inter-sample position you compute will ever locate it with
> infinite accuracy. In most practical cases, however, you can
> pre-condition the situation: you know which up-samplers are likely to
> be used on your digital signal, such as wide-window sinc, standard
> short interpolation, and a couple of other methods (FIR/IIR
> approximations, wave-shape approximations, multi-band approaches, and,
> for the pros, frequency-component methods computed in the 96 kHz or
> higher sampling domain). If you know what the customers will use as an
> up-sampler, and you give the final product a reasonable-quality
> wide-window sinc up-sampled test run to catch any special cases, you
> can work with that, unless people enjoy endless (but not particularly
> useful) discussions on heuristics.
>
> Theo V.
>