Of course, for processing n samples, something that is O(n) is going to
eventually beat something that's O(n*log(w)), for big enough w.
FWIW if it's important to have O(log(w)) worst case per sample, I think you
can adapt the method of the paper to achieve this while keeping the O(1)
average.
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ethan Fenn"
Date: Wed, July 20, 2016 10:27 am
To: music-dsp@music.co
On Wed, Jul 20, 2016 at 11:04 AM, robert bristow-johnson <
r...@audioimagination.com> wrote:
> ...the kind of magic used in realtime hash-table migration and make that discard step run
> in constant time per sample---but that's a little more tango than I feel
> like dancing today!
>
> – Evan Balster
> creator of imitone <http://imitone.com>
Regarding the Lemire paper his code is provided here:
https://github.com/lemire/runningmaxmin
It's been a while since I've read the paper, but IIRC it reports a fast
average case, not a fast worst case. Specifically, it's good if the signal
has few changes in direction, but the worst case is no better.
Ethan ---
If we use only a binary search in the discard step, isn't the amortized
complexity still O(N)? I suppose not... We'd be doing a log2(w) search
every sample in the worst case where a monotonic decrease occurs.
I'll have to look over the paper to get a better understanding.
Yeah, with a binary search, you're doing O(log(w)) work, but you might not
end up discarding any samples. With the paper's linear search, you might do
O(w) work, but only if you end up discarding O(w) samples. So, it turns out
to be worth it, as long as you can tolerate the spike.
The trick is to count the total number of operations across the whole
stream, not per incoming sample as the window moves.
The algorithm maintains a buffer that pushes at the back and pops at both
front and back. Each sample is pushed onto the buffer and popped out of it
exactly once. If R samples are popped from the buffer in one step, those R
pops were already paid for by the R pushes that put them there.
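To make that counting argument concrete, here's a minimal sketch of the wedge as a deque of (index, value) pairs. This is my own reconstruction of the idea, not code from the paper's repository; the name `running_max` and the pair layout are my choices. The back-popping loop is the linear discard step: a single step may pop many entries, but each entry is pushed and popped at most once, so N samples cost O(N) total.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Rolling maximum over the w most recent samples.
// The deque ("wedge") holds (index, value) pairs in strictly
// decreasing value order; the front is always the window maximum.
std::vector<double> running_max(const std::vector<double>& x, std::size_t w)
{
    std::deque<std::pair<std::size_t, double>> wedge;
    std::vector<double> y;
    y.reserve(x.size());
    for (std::size_t n = 0; n < x.size(); ++n) {
        // Discard step (linear search from the back): pop every entry
        // dominated by the new sample. Using <= means that of two equal
        // samples, the older one is forgotten.
        while (!wedge.empty() && wedge.back().second <= x[n])
            wedge.pop_back();
        wedge.emplace_back(n, x[n]);
        // Expire the front entry once it falls outside the window;
        // at most one entry can expire per incoming sample.
        if (wedge.front().first + w <= n)
            wedge.pop_front();
        y.push_back(wedge.front().second);
    }
    return y;
}
```

Note that even though the inner `while` can run up to w times on one sample, the total number of `pop_back` calls over the whole stream cannot exceed the number of `emplace_back` calls, which is exactly N.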
Ahh, smart! Thanks for the insight; I understand now.
It occurs to me that you could slightly tighten the bound on per-sample
work, because it's not necessary to include the elements of the linear
search in the binary one. The worst case would then be O(J) per sample,
where J = log2(w-J).
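For reference, here is a sketch of the binary-search variant of the discard step being discussed (the "Lemire-Fenn" modification mentioned later in the thread). Again this is my own illustration under the same (index, value)-pair layout as the linear version, not anyone's published code. Because the wedge is sorted in decreasing value order, `std::lower_bound` can locate the first dominated entry in O(log2(w)); the erase that follows is still amortized O(1) per sample, since each entry is erased at most once.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

// Rolling maximum with a binary-search discard step:
// O(log2(w)) worst case per sample, O(1) amortized.
std::vector<double> running_max_bsearch(const std::vector<double>& x,
                                        std::size_t w)
{
    std::deque<std::pair<std::size_t, double>> wedge;
    std::vector<double> y;
    y.reserve(x.size());
    for (std::size_t n = 0; n < x.size(); ++n) {
        // Find the first wedge entry whose value is <= x[n]; everything
        // from there to the back is dominated by the new sample.
        auto it = std::lower_bound(
            wedge.begin(), wedge.end(), x[n],
            [](const std::pair<std::size_t, double>& p, double v) {
                return p.second > v;
            });
        wedge.erase(it, wedge.end());  // amortized O(1): each entry dies once
        wedge.emplace_back(n, x[n]);
        if (wedge.front().first + w <= n)
            wedge.pop_front();
        y.push_back(wedge.front().second);
    }
    return y;
}
```

Both variants produce identical output; the binary search only changes how the discard point is located, trading the linear variant's occasional O(w) spike for a flat O(log2(w)) ceiling.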
Hello, all ---
Reviving this topic to mention I've created an STL-compatible header
implementing what I'm calling the "Lemire-Fenn" algorithm for rolling
minimum and maximum:
https://github.com/EvanBalster/STL_mono_wedge
This was a little pet project I undertook today.
-- Original Message --------
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster"
Date: Fri, September 2, 2016 12:12 am
To: music-dsp@music.columbia.edu
> i wanna know, most specifically, what is the expectation of gain of
> speed (or reduction of execution time) of this Lemire alg over the simpler
> binary tree alg. it doesn't seem to me to be a lot better for memory
> requirements.
>
>
> --
>
> r b-j r...@audioimagination.com
>
> "Imagination is more important than knowledge."
--
r b-j r...@audioimagination.com
"Imagination is more important than knowledge."
Original Message ------------
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster"
On 2/09/2016 4:37 PM, Ross Bencina wrote:
When the first difference is positive, the history is trimmed. This is
the only time any kind of O(N) or O(log2(N)) operation is performed.
First difference positive implies that a new local maximum is achieved:
in this case, all of the most recent history below the new sample is
discarded.
Just a few clarifications:
- Local maxima and first difference don't really matter. The maximum wedge
describes global maxima for all intervals [t-x, t], where x=[R-1..0].
- If two equal samples are encountered, the algorithm can forget about the
older of them.
It has been very helpful for my purposes.
Right, aren't monotonic signals the worst case here? Or maybe not, since
they're worst for one wedge but best for the other?
Ethan D
For one wedge, the worst case is a monotonic signal, which will saturate
the wedge with R samples.
For two wedges, the worst case is the situation where the absolute distance
from previous samples to the latest sample is monotonically decreasing. In
this case the wedges will contain R+1 samples between them.
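The two-wedge case can be sketched by running a max wedge and a min wedge side by side (my own illustration, not code from the thread; the names `running_minmax` and `MinMax` are mine). It also shows why the wedges trade off: a monotonically rising input collapses the max wedge to a single entry while saturating the min wedge with R entries, so the combined occupancy stays near R+1 rather than 2R.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <utility>
#include <vector>

struct MinMax { double mn, mx; };

// Rolling minimum and maximum over the R most recent samples,
// maintained with two wedges: max wedge decreasing, min wedge increasing.
std::vector<MinMax> running_minmax(const std::vector<double>& x, std::size_t R)
{
    std::deque<std::pair<std::size_t, double>> maxw, minw;
    std::vector<MinMax> out;
    out.reserve(x.size());
    for (std::size_t n = 0; n < x.size(); ++n) {
        // Each new sample trims one wedge or the other (or both, on ties).
        while (!maxw.empty() && maxw.back().second <= x[n]) maxw.pop_back();
        while (!minw.empty() && minw.back().second >= x[n]) minw.pop_back();
        maxw.emplace_back(n, x[n]);
        minw.emplace_back(n, x[n]);
        // Expire entries that have left the window.
        if (maxw.front().first + R <= n) maxw.pop_front();
        if (minw.front().first + R <= n) minw.pop_front();
        out.push_back({minw.front().second, maxw.front().second});
    }
    return out;
}
```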
On 3/09/2016 3:12 AM, Evan Balster wrote:
> Just a few clarifications:
> - Local maxima and first difference don't really matter. The maximum
> wedge describes global maxima for all intervals [t-x, t], where x=[R-1..0].
I think it's interesting to look at the algorithm from different
perspectives.
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ross Bencina"
Date: Fri, September 2, 2016 10:19 pm
To: music-dsp@music.co
On 3/09/2016 2:14 PM, robert bristow-johnson wrote:
and in discussing iterators, says nothing about push_back()
and the like.
push_back(), push_front(), pop_back(), pop_front() are methods generally
available on queue-like containers.
from online i can get an idea, but it seems to me to be
Ross' explanation is solid.
Robert: R refers to range ("delay line" size, one could say) and N refers
to signal length.
Clarifying about the weird linear/binary search: it's necessary to
maintain O(N) complexity for processing N samples. This took me a while to
grasp: if we do a backward linear search, the work done in the discard step
is proportional to the number of samples discarded, and each sample can
only be discarded once.
On 3/09/2016 5:00 PM, Evan Balster wrote:
Robert: R refers to range ("delay line" size, one could say) and N
refers to signal length.
In that case, R is what I've been calling windowSize, and when I say
O(1) I mean Evan's O(N).
Ross.
earlier i had been under the impression that you're processing R samples at
a time, like processing samples in blocks of R samples. now i have no idea.
r b-j
Original Message --------
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ross Bencina"
Date: Sat, Septe
okay, so it appears, all of this time, that i have mixed up the roles of N and
R.
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster"
Date: Sat, September 3, 2016 1:40 pm
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Evan Balster"
Date: Sat, September 3, 2016 3:24 pm
To: "robert bristow-johnson"
music-ds
On 4/09/2016 2:49 AM, robert bristow-johnson wrote:
sorry to have to get to the basics, but there are no *two* length
parameters to this alg. there is only one.
define the streaming real-time input sequence as x[n]. the length of
this signal is infinite.
output of running max alg is y[n], the max of the most recent R input
samples.
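As a reference point for the definitions above, here's the brute-force version of the quantity being computed: for each n, y[n] is the maximum of the most recent R input samples (fewer near the start of the stream). This O(N*R) loop is my own baseline sketch (`running_max_naive` is a name I made up); the wedge algorithms compute the same y in O(N) total.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Brute-force rolling maximum: y[n] = max{ x[n-R+1], ..., x[n] },
// clamped at the start of the stream. O(N*R) -- for testing only.
std::vector<double> running_max_naive(const std::vector<double>& x,
                                      std::size_t R)
{
    std::vector<double> y(x.size());
    for (std::size_t n = 0; n < x.size(); ++n) {
        std::size_t start = (n + 1 >= R) ? n + 1 - R : 0;
        y[n] = *std::max_element(x.begin() + start, x.begin() + n + 1);
    }
    return y;
}
```

There is only the one length parameter R here; N is just however many samples of the infinite stream have been processed so far.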
On 4/09/2016 6:27 AM, robert bristow-johnson wrote:
if i were to test this out myself, i need to understand it enough to
write C code (or asm code) to do it.
The paper provides all of the information that you need for the basic
implementation (which I recommend to start with). If there is some
Original Message
Subject: Re: [music-dsp] efficient running max algorithm
From: "Ross Bencina"
Date: Sat, September 3, 2016 10:16 pm
To: r...@audioimagination.com
music-dsp@music.co
On 4/09/2016 1:42 PM, robert bristow-johnson wrote:
i think the worst case ain't gonna be too much better than
O(log2(windowSize)) per sample even with amortization over the block.
You can think that if you like, but I don't think the worst case is that
bad. I have given examples. If you would
I think the fact that binary searches play into both the Lemire-Fenn
algorithm and the binary-search algorithm is causing some confusion.
Remember that the original Lemire algorithm, with its simple linear search
and worst-case O(windowSize) per sample time, *still* manages O(1) per
sample over any input, amortized.
Wow, my very own hyphenated algorithm. :o)
Thanks for sharing this, Evan, it's impressive how concise the code ends up
being.
I'd echo the general feeling (which you also express in your Future Work
section) that for DSP work it really wants to be implemented using a ring
buffer. Neither std::vector nor std::deque quite fits that bill.
Hi, OP here.
Unfortunately I don't understand all you are saying, due to my lack of C
and Z-transform skills.
This is what I understood, please correct me if I'm wrong:
- My algo is called a binary max-tree. (When used with max as an
operator)
- The other algo discussed is faster on average,
Hey, Bart ---
The complexity of the rolling-max algorithm we discussed is always O(N) to
process N samples. Using the algorithm from the paper Ross linked, the
worst case for a single sample is O(w); with Ethan Fenn's binary-search
modification, the worst case complexity for one sample is O(log2(w)).
> - IIUC, none of the other algos mentioned can cheaply vary the window size
> at run-time, right?
I don't think what I've been calling "the Tito method" can cheaply vary the
window size, unless I've missed something clever. Or unless you're willing
to accept a "crossover period" during which the re